A short intro to Google Shopping Ads (GSA): Differences between search and shopping ads
Google Shopping, formerly Google Product Search, is a Google service which allows users to search for products on online shopping websites and compare prices between different vendors. In 2012, the service shifted to a paid advertising model where retailers had to pay to be featured in the Google Shopping search results. From that point forward, Google Shopping became a “branch” of Google AdWords – yet another way for retailers and e-commerce businesses to advertise their physical products on Google.
You may have seen Google Shopping Ads (hereinafter referred to as GSA) before. When you search for specific products, small squares appear at the top or right side of the results page, displaying various information including a product picture and pricing. Unlike a text ad, a GSA shows users a photo of your product, plus a title, price, store name, and more. These ads give users a strong sense of the product you’re selling before they click, which brings merchants more qualified leads.
Why GSA is good for consumers (compared to search ads)
For consumers, GSA directly shows the product pictures and prices from each merchant. By the time shoppers click the ad, they already have a good sense of the product and its cost.
Why GSA is good for advertisers (compared to search ads)
For advertisers, GSA has several advantages over traditional search ads:
- Manage product catalogs only. You can place ads for a large number of products without having to set up additional keywords or write ad copy.
- Attract likely-to-buy users with precise needs. Consumers already know the product details (brand, price, picture, specifications, etc.) before they click through to the website, so merchants have the opportunity to reach the right users without wasting advertising budget.
- Display products at the top of the search results. Shopping ads usually appear at the top of the first page (up to 30 items above, up to 9 items on the right). The same advertiser’s shopping and search ads may appear at the same time, maximizing the exposure area.
Challenges: Requirements and tips for utilizing shopping ads
To make the most of GSA, there are some requirements and tips you should be aware of:
1. Set up a clear goal and corresponding metrics.
Depending on your goal, different metrics apply (profit volume, ROAS, CPC, etc.).
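As a concrete illustration, metrics like these can be computed from a campaign’s aggregate numbers. The figures and function below are purely hypothetical:

```python
# Minimal sketch: computing common Shopping-campaign metrics from
# hypothetical aggregate numbers (all figures are illustrative).
def campaign_metrics(ad_spend, clicks, revenue, cost_of_goods):
    """Return CPC, ROAS, and profit for one campaign period."""
    cpc = ad_spend / clicks                      # cost per click
    roas = revenue / ad_spend                    # return on ad spend
    profit = revenue - cost_of_goods - ad_spend  # profit after ad cost
    return {"cpc": round(cpc, 2), "roas": round(roas, 2), "profit": profit}

print(campaign_metrics(ad_spend=500.0, clicks=1000,
                       revenue=2000.0, cost_of_goods=1200.0))
# {'cpc': 0.5, 'roas': 4.0, 'profit': 300.0}
```

Whichever metric you pick as the goal determines how you judge a campaign: a campaign with high ROAS but negative profit, for example, may still be a losing one.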
2. Keep your product catalog data well organized, formatted, and updated.
Instead of keywords, Shopping ads use the product attributes you define in your Merchant Center data feed to show your ads on relevant searches. So it is crucial to keep your product catalog data well organized, correctly formatted, and up to date.
This is a big challenge, especially when you have a large amount of product catalog data. How do you transform the data into a valid format? How do you map your product catalog hierarchy to the one GSA defines? How do you keep both sides in sync, and, most importantly, how do you automate all of this without hassle?
BigQuery is an enterprise data warehouse that supports super-fast SQL queries over massive datasets, which makes it well suited to managing product catalog data. You can leverage the robust ETL services Google provides, such as Cloud Dataprep or Cloud Dataflow, to transform your original data. Once your product catalog data is ready for Merchant Center, you can import it into Merchant Center via Cloud Storage, and this data processing pipeline can be automated with Cloud Composer. With this architecture, you have a flexible pipeline that transforms your product catalog data to fulfill GSA requirements and keeps the data updated automatically.
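To make the transform step concrete, here is a minimal sketch of mapping a raw catalog export to Merchant Center-style feed columns. The raw column names (`sku`, `name`, `price`, `stock`, `img`) are assumptions about a hypothetical catalog; in practice Dataprep or Dataflow would do this at scale:

```python
import csv
import io

# Hypothetical raw catalog rows -> Merchant Center-style tab-separated feed.
# The output column names follow common Merchant Center feed attributes.
FEED_COLUMNS = ["id", "title", "price", "availability", "image_link"]

def to_feed_rows(raw_rows):
    """Map raw catalog dicts to feed dicts, one product per row."""
    for row in raw_rows:
        yield {
            "id": row["sku"],
            "title": row["name"][:150],      # keep titles within length limits
            "price": f'{row["price"]} USD',  # price must include a currency
            "availability": "in stock" if int(row["stock"]) > 0 else "out of stock",
            "image_link": row["img"],
        }

raw = [{"sku": "A1", "name": "Ceramic mug", "price": "12.50",
        "stock": "3", "img": "https://example.com/a1.jpg"}]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FEED_COLUMNS, delimiter="\t")
writer.writeheader()
writer.writerows(to_feed_rows(raw))
print(buf.getvalue())
```

The same mapping logic, expressed as SQL or a Dataflow transform, slots into the automated pipeline described above.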
3. Prepare high-quality product images.
Visuals shape shoppers’ first impressions of your product. Good images are also essential for Google to better understand your products and match them to the right searches. Below are some guidelines to ensure the best performance with high-quality images:
- Show the product you are selling
- Show the product by itself
- Use a simple background
- Don’t put anything distracting in the photo
- Use the highest-resolution photo you have
- Include photos of all your product options
The most challenging of the above guidelines are 2, 3, and 4. Usually you don’t have clean images of all your products, but you do have access to some from your product catalog; these may include watermarks, promotional sale overlays, and so on. Manually removing these human artifacts from product images is time-consuming, especially when you have thousands of products.
Next, we will discuss how to turn this manual work into a fully automatic process by introducing AI-based image processing.
Technical intro for image segmentation and inpainting
Removing human artifacts from a product image can be divided into two parts. First, identify where the artifacts are, segment them, and remove them. Second, reconstruct the missing parts of the image (the areas where artifacts were removed) so that observers cannot tell those regions have undergone restoration. These correspond to two critical image processing problems: image segmentation and image inpainting.
In computer vision, image segmentation is the process of partitioning a digital image into multiple segments (sets of pixels). The goal of segmentation is to simplify and/or change the representation of an image into something more meaningful and easier to analyze.
Object detection builds a bounding box around each object in the image. However, it tells us nothing about the shape of the object; we only get a set of bounding box coordinates. Often we want more information than that.
Image segmentation creates a pixel-wise mask for each object in the image. This technique gives us a far more granular understanding of the object(s) in the image. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain characteristics.
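As a toy illustration of what a pixel-wise mask looks like, the snippet below labels bright pixels as "object" with a simple intensity threshold. This only shows the mask data structure; real segmentation models learn far richer criteria than a threshold:

```python
# Toy pixel-wise mask: label every pixel whose intensity exceeds a
# threshold as object (1) and the rest as background (0). This illustrates
# the output format of segmentation, not a real segmentation algorithm.
def threshold_mask(image, threshold):
    return [[1 if px > threshold else 0 for px in row] for row in image]

image = [
    [10,  12, 200, 210],
    [11, 190, 220,  15],
    [ 9,  14,  13,  12],
]
mask = threshold_mask(image, threshold=128)
for row in mask:
    print(row)
# [0, 0, 1, 1]
# [0, 1, 1, 0]
# [0, 0, 0, 0]
```

Every pixel sharing the label 1 belongs to the same segment, which is exactly the "pixels with the same label share certain characteristics" idea, just with a trivially simple characteristic.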
Image inpainting is the process of reconstructing lost or deteriorated parts of images. It applies sophisticated algorithms to replace lost or corrupted parts of the image data. This technique is often used to remove unwanted objects from an image or to restore damaged portions of old photos.
Introducing how big data beats complex rules: Deep learning on image segmentation and inpainting
“Machine Learning changes the way you think about a problem. The focus shifts from a mathematical science to natural science, running experiments and using statistics, not logic, to analyze its results.” – Peter Norvig – Google Research Director
Normally, applications are programmed to make particular decisions based on predefined rules. These rules encode human experience of frequently occurring scenarios. However, as the number of scenarios grows significantly, it demands massive investment to define rules that address all of them accurately, and either efficiency or accuracy gets sacrificed.
Traditionally, image segmentation is addressed using region-based segmentation, edge detection segmentation, clustering-based segmentation, etc. Image inpainting is addressed using diffusion-based approaches that propagate local structures into the unknown parts, or exemplar-based approaches that construct the missing piece one pixel (or patch) at a time while maintaining consistency with the neighboring pixels.
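The diffusion-based idea can be sketched in a few lines: repeatedly replace each masked pixel with the average of its neighbors, so surrounding values propagate into the hole. This is illustrative only; production inpainting uses far more sophisticated (often learned) models:

```python
# Minimal diffusion-style inpainting sketch. Masked pixels are iteratively
# replaced by the average of their 4-neighbors, propagating local structure
# from the known region into the hole.
def inpaint(image, mask, iterations=50):
    h, w = len(image), len(image[0])
    img = [row[:] for row in image]
    for _ in range(iterations):
        nxt = [row[:] for row in img]
        for y in range(h):
            for x in range(w):
                if mask[y][x]:  # only rewrite pixels inside the hole
                    neighbors = [img[ny][nx]
                                 for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                                 if 0 <= ny < h and 0 <= nx < w]
                    nxt[y][x] = sum(neighbors) / len(neighbors)
        img = nxt
    return img

# A flat gray image with a one-pixel "hole" (value 0) in the middle:
image = [[100.0] * 5 for _ in range(5)]
image[2][2] = 0.0
mask = [[0] * 5 for _ in range(5)]
mask[2][2] = 1
result = inpaint(image, mask)
print(round(result[2][2]))  # prints 100
```

On flat regions this converges to the surrounding value; on textured regions it produces blur, which is precisely why exemplar-based and learned approaches exist.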
Machine learning is an algorithm or model that learns patterns in data and then predicts similar patterns in new-coming data. For example, if you want to classify children’s books, instead of setting up precise rules for what constitutes a children’s book, developers can feed the computer hundreds of examples of children’s books. The computer finds the patterns in these books and uses that pattern to identify future books in that category.
With the rapid development of machine learning, image segmentation and inpainting have made great progress compared to the previous methods. This is exactly the core engine driving Picaas.
Picaas ML model lifecycle
Picaas utilizes Google Cloud Platform (GCP) to effectively process a massive number of product pictures simultaneously. This satisfies the need for e-commerce platforms to deliver product advertisements in bulk with one click of the mouse.
The traditional method of manually editing Google Shopping Ads images took an average of fifteen minutes to process one picture. With Picaas, in best-case scenarios, a picture can be automatically edited in 2.2 seconds, which is four hundred times faster than humans. Currently, the minimum price per picture is around 0.1 US dollars, which is 50% to 95% cheaper than the traditional method of using human labor to edit pictures.
A healthy lifecycle for a production-ready ML service is very important. The typical ML lifecycle, as shown below, can be divided briefly into three parts: data labeling, model training, and model serving.
For data labeling, the main focus should be efficiency, because machine learning performance depends on a large amount of correctly and precisely labeled data. If you have already collected labeled data, great; if not, you need a way to generate labeled data efficiently. You can leverage the Google AI Platform Data Labeling Service: provide a large set of product images that include watermarks, promotional sale overlays, etc., and within a short time you get back labeled results marking those areas.
For model training and serving, Picaas leverages Cloud ML Engine, a managed service that lets developers and data scientists build and run superior machine learning models in production. With Cloud ML Engine, developers can focus on training logic instead of provisioning the machine learning technology stack, and training resources can be scaled dynamically as required.
When we deploy models to production and expect to track errors, we assume that newly generated data will be similar to the existing data; specifically, that the distributions of the features and targets will remain relatively constant. In practice, this assumption often proves wrong, so model deployment should be treated as a continuous process. Don’t deploy a model once and move on to the next. Developers need to retrain their models when they find that the data distributions have deviated significantly from those of the original training set.
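A minimal sketch of such a drift check: compare a feature’s mean between the training set and newly collected data, measured in training standard deviations. The tolerance and the sample values are illustrative assumptions, not Picaas’s actual monitoring logic:

```python
# Simple drift check: flag retraining when a feature's mean in new data
# shifts more than z_tolerance training standard deviations away from the
# training mean. Thresholds and data are illustrative only.
from statistics import mean, stdev

def needs_retraining(train_values, new_values, z_tolerance=3.0):
    mu, sigma = mean(train_values), stdev(train_values)
    shift = abs(mean(new_values) - mu) / sigma  # shift in training std-devs
    return shift > z_tolerance

train = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05]
stable = [1.0, 0.98, 1.02]
drifted = [2.0, 2.1, 1.9]
print(needs_retraining(train, stable))   # False
print(needs_retraining(train, drifted))  # True
```

A real pipeline would run checks like this per feature on a schedule and trigger the retraining job automatically when any of them fires.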
In Picaas, we developed a tool that checks whether a product image passes validation, so we can use it to measure model performance and feed the failing cases into the next training iteration.
The trend of applying ML to ads
The rise of ML/AI in digital marketing and advertising opens up a new world for marketers to focus on what brings the most value to their customers. By saving time on tedious, repetitive work, marketers can really engage in creativity. If you use Google Ads or Google Analytics 360, you should go further and integrate BigQuery as your marketing data warehouse, which will enable the machine learning journey for your business.
Now we have a sense of how GSA works and what your business needs to enable it. The challenges lie in integration: How do you integrate your product catalog data with Google Merchant Center and keep it updated? How do you make your product images acceptable for GSA, and high quality for better performance? How do you evaluate ad performance after deploying GSA and then make data-driven decisions to adjust your ad strategy?
Leverage platform (GCP) and SaaS (Picaas) to speed up modernization
Leverage the AI technology Picaas offers and integrate it into your ad pipeline as a SaaS service; it saves you a lot of time and effort in making sure your ad graphics meet all the Google Shopping Ads requirements. Put the marketing data warehouse concept into your ad strategy planning and make good use of the big data infrastructure provided by GCP.
For more on data-driven marketing strategy and infrastructure architecture planning, please visit GCP.expert.