Meta Generative AI • 2023
Bringing Generative AI to Advertisers with Background Generation
Background Generation is a creative optimization introduced to the Ads Manager experience to help businesses create visually captivating, performant ads with less effort.

Project Overview
Background Generation simplifies ad creation by giving advertisers the ability to generate background variations for product ads across Instagram and Facebook. This project had two workstreams: the Ads Manager experience, covering how advertisers would use this new feature, and Model Improvements, covering how the outputs of these generations would function. This case study outlines the process and strategy for model improvements: generating better outcomes to increase advertiser adoption.
My contributions
I led the end-to-end design and strategy for the model improvements of Background Generation. Specifically, I strategized with engineers on how to improve the model to generate better outputs, implemented design standards and principles focused on quality and scalability, and collaborated with designers, engineers, and leadership across the Ads Manager experience and other Generative AI initiatives.

Hypothesis
By improving the quality of background generation, more outputs will meet advertisers' needs and align with customers' expectations, leading to advertiser adoption and an increase in ad performance.
Problem space
In early 2023, AI Sandbox launched as a "testing playground" for advertisers to try out new generative AI-powered ad tools like text generation, image outcropping, and background generation. Background generation received the lowest adoption, due to quality and relevance issues.
- Advertisers perceive product ad creatives as generic and boring, but have limited options to customize them.
- Advertisers struggle to create performant ad creatives at scale.

Current product ad on FB Feed

Current product ad on IG Stories
ADVERTISER ADOPTION
Quality improvements
In order to improve the model to generate higher-quality outputs, we first needed to define what a high-quality output looks like. Through extensive research, including talking to customers and creative professionals and investigating performant ad creatives, a team of designers across GenAI and I developed a framework for defining and working toward high-quality generations.

ADVERTISER ADOPTION
Model improvements
Once we had a foundation and principles to abide by for quality, the next step was to dive into the technical structure of how the model worked. This gave us a sense of where in the model structure improvements could be made and what would be feasible. The two opportunity areas were correctness and relevance.

How might we improve the correctness?
Define 'good creatives,' add them to the model, and improve the feedback loop so the model trains more quickly. The more usage and feedback the model gets, the better it will become.

IMPROVE CORRECTNESS
Image scoring and quality questionnaires
Set a standard approach to rate quality that can feed back into the model, looking at things like color contrast, segmentation, drop shadows, and lighting to determine quality scores. With each rating, quality will improve over time.
Background Generation saw limited usage while it lived in AI Sandbox's "testing playground," so with the help of annotators, more and more generations could be assessed. Asking specific questions tied to the quality framework, answered by people with relevant expertise, could provide more valuable feedback to the model.
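
To make this concrete, here is a minimal sketch of how per-dimension annotator ratings could be aggregated into a single quality score. The four dimensions come from the framework above, but the 1-5 scale, field names, and scoring logic are hypothetical illustrations, not Meta's actual rubric.

```python
from dataclasses import dataclass

# Hypothetical rubric: each dimension is rated 1-5 by an annotator.
DIMENSIONS = ("color_contrast", "segmentation", "drop_shadow", "lighting")

@dataclass
class QualityRating:
    """One annotator's ratings for a single generated background."""
    color_contrast: int
    segmentation: int
    drop_shadow: int
    lighting: int

    def score(self) -> float:
        """Average the per-dimension ratings into a 0-1 quality score."""
        total = sum(getattr(self, d) for d in DIMENSIONS)
        return total / (5 * len(DIMENSIONS))

# Example: a generation with clean segmentation but flat lighting.
rating = QualityRating(color_contrast=4, segmentation=5, drop_shadow=3, lighting=2)
print(f"quality score: {rating.score():.2f}")  # quality score: 0.70
```

Scores like this could then be paired with the questionnaire answers to give the model structured, comparable feedback rather than a single thumbs-up or thumbs-down.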


How might we improve the relevance?
Improve prompt generation to effectively communicate what should be generated based on categories and themes.

IMPROVE RELEVANCE
Expand themes
Abstract and Lifestyle themes determined the types of backgrounds that were generated. By expanding themes to include other components like environments, surfaces, and styles, we can train the model to deliver more variants that are relevant, contextual, and on brand. More specific prompts tend to generate higher-quality outputs.
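
As an illustration, an expanded theme could be modeled as a set of composable components; every component value below is a hypothetical example, not the shipped taxonomy.

```python
# Hypothetical taxonomy: the original "Abstract" and "Lifestyle" themes
# expanded into environments, surfaces, and styles that can be combined.
THEMES = {
    "lifestyle": {
        "environments": ["sunlit kitchen", "beach at sunset", "city loft"],
        "surfaces": ["marble counter", "reclaimed wood", "linen cloth"],
        "styles": ["warm natural light", "soft focus", "editorial"],
    },
    "abstract": {
        "environments": ["gradient backdrop", "geometric shapes"],
        "surfaces": ["matte acrylic", "brushed metal"],
        "styles": ["bold color blocking", "minimalist"],
    },
}
```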

IMPROVE RELEVANCE
Introduce dynamic prompts
There were only three fixed prompts per product category, and the product category was the only dynamic element. This meant the environments and styles referenced were pre-defined, limiting the variations that were generated and their relevance.
The dynamic approach introduces placeholders for things like the surface the product is placed on, the environment it's in, or the style of the scene. This increases diversity, relevance, and overall quality.
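
A minimal sketch of the before/after contrast, assuming simple string templates: the prompt wording, placeholder names, and build_prompt helper are hypothetical illustrations of the idea, not the production system.

```python
import random

# Fixed approach (before): only the product category was dynamic.
FIXED_PROMPT = "A product photo of {category} on a white table in a bright room"

# Dynamic approach (after): placeholders for surface, environment,
# and style, filled from the expanded theme components.
DYNAMIC_PROMPT = "A product photo of {category} on a {surface} in a {environment}, {style}"

def build_prompt(category: str, theme: dict) -> str:
    """Sample one component per slot to produce a varied, relevant prompt."""
    return DYNAMIC_PROMPT.format(
        category=category,
        surface=random.choice(theme["surfaces"]),
        environment=random.choice(theme["environments"]),
        style=random.choice(theme["styles"]),
    )

theme = {
    "surfaces": ["marble counter", "reclaimed wood"],
    "environments": ["sunlit kitchen", "minimal studio"],
    "styles": ["warm natural light", "editorial photography"],
}
print(build_prompt("ceramic mug", theme))
# e.g. "A product photo of ceramic mug on a marble counter in a sunlit kitchen, warm natural light"
```

Because each slot is sampled independently, even a small component library yields far more distinct prompts than three fixed templates per category.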

IMPROVE RELEVANCE
Incorporate brand characteristics
Introducing components will create more variations and opportunities for higher-quality, relevant outputs, but there is a risk that these won't align with an advertiser's brand, or will look similar to competitors' outputs.
Instead, we can reference an advertiser's past ads to learn which styles can be incorporated. These styles would be specific to that advertiser, creating more personalized outputs that fit the advertiser's brand.
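
One way to sketch this, under the assumption that past ads can be tagged with style attributes: bias the style placeholder toward an advertiser's most frequent styles. The brand_styles helper and the tags below are hypothetical.

```python
from collections import Counter

def brand_styles(past_ad_tags: list[list[str]], top_k: int = 3) -> list[str]:
    """Pick the most frequent style tags from an advertiser's past ads."""
    counts = Counter(tag for ad in past_ad_tags for tag in ad)
    return [tag for tag, _ in counts.most_common(top_k)]

# Example: style tags extracted (hypothetically) from three past ads.
past_ads = [
    ["warm natural light", "earth tones", "minimalist"],
    ["earth tones", "soft focus"],
    ["warm natural light", "earth tones"],
]
print(brand_styles(past_ads))  # ['earth tones', 'warm natural light', ...]
# These brand-specific styles would fill the style placeholder in dynamic
# prompts for this advertiser, instead of generic sampled styles.
```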


Background Generation began rolling out to all advertisers in October 2023. The model improvements continue to be updated to improve quality and increase advertiser adoption.
