It’s been a rollercoaster couple of years for brick-and-mortar retailers. Being forced to shutter overnight, and then to reopen and close in fits and starts, all while navigating a global supply chain crisis and, now, an overly stretched employment market, has forced retailers to reimagine their businesses. 

One significant reimagining is taking the form of self-checkout systems, which reduce, and in many cases eliminate, the friction caused by long checkout lines. At the lowest level of self-checkout is the self-checkout kiosk, which shoppers use to scan items and check out themselves. At the next level is the smart cart, which uses computer vision to identify and weigh items within it. At the highest level is the Grab-and-Go autonomous checkout system, which empowers shoppers to simply place items in a bag and walk out.   

The higher levels of self-checkout systems not only save retailers and consumers the cost and frustration associated with checkout; they also help retailers quickly realize additional benefits, such as an understanding of shopping trends and ideal product placement, instant inventory analysis, and reduced labor costs. 

Another factor leading more retailers to take advantage of self-checkout technology is that the cost of implementing it is falling dramatically. Case in point: One leading retailer reduced the cost of operating a Grab-and-Go autonomous system by a staggering 96%. Retailers can also quickly and affordably implement smart cart technology from providers like Trigo or Zippin with few, if any, modifications to stores. 

What’s behind this new wave of technology? The answer: Computer vision.

Computer Vision for Autonomous Checkout: How it Works

Despite all the advances, completely autonomous checkout is still in its early days. Computer vision — the field of AI that enables systems to derive meaningful information from digital images and videos — is the primary technology powering in-store, autonomous checkout. 

But what powers computer vision? People and tools. Computer vision requires the labor-intensive process of labeling or annotating data in a way that produces the outcome you want your machine learning model to predict. You are marking — labeling, tagging, transcribing, structuring, or processing — a dataset to give it the features you want your machine learning system to learn to recognize. 
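To make that concrete, here is a minimal sketch of what one annotated video frame might look like. The field names, class labels, and box format below are illustrative assumptions in a COCO-like style, not the schema of any particular annotation tool:

```python
# A hypothetical annotation record for one frame of in-store video.
# Labels and field names are made up for illustration.
annotation = {
    "image_id": "store42_cam07_frame001923",
    "objects": [
        {"label": "cereal_box", "bbox": [412, 188, 96, 140]},   # [x, y, width, height] in pixels
        {"label": "shopper_hand", "bbox": [355, 160, 80, 64]},
    ],
}

# The set of labels across the whole dataset defines what the model
# can learn to recognize at checkout.
labels = sorted({obj["label"] for obj in annotation["objects"]})
print(labels)  # ['cereal_box', 'shopper_hand']
```

Multiply this by millions of frames, each needing boxes tight enough for the model to learn from, and the scale of the human effort behind "autonomous" checkout becomes clear.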

Although AI-based tools certainly facilitate the labeling process, those tools are meant to be used by humans in the loop. But choosing which humans will be in your loop isn’t an easy decision. 

The Backfire of Poor Quality Data Annotation 

As a retail business leader, you might be concerned about whether a managed workforce can deliver the same level of quality as an internal team. Or whether they’ll be able to start small and scale projects fast, or adapt to your capacity and elasticity requirements. But the top concern we hear about from retail AI innovators revolves around quality — because your machine learning model will live or die based on the quality of annotations you feed it.  

Low-quality annotated data can backfire twice: First, during computer vision model training, and second in production when your model uses the annotated data to make predictions. Even after your model is in production, you’ll need to periodically retrain, maintain, and optimize it to account for real-world changes in the environment where the model is deployed. You’ll also need to deal with edge cases and exceptions, and the unexpected and inexplicable changes they can cause in your model. AI teams are familiar with these outliers; they understand that properly controlling and handling such phenomena is essential to a shippable model. 

In other words, to build, test, and maintain high-performing computer vision models in production, you must consistently train those models using trusted, reliable, well-annotated data.
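One common way teams quantify that reliability is to score each annotator's bounding boxes against a trusted "gold" set using intersection over union (IoU). A minimal sketch — the boxes and corner-point format here are made up for illustration:

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Compare an annotator's box to the gold-standard box for the same item.
gold = (100, 100, 200, 200)
annotated = (110, 110, 210, 210)
score = iou(gold, annotated)
print(round(score, 3))  # 0.681 — a box shifted by just 10px already loses ~32% overlap
```

Tracking scores like this per annotator and per class is one way to catch quality drift before it reaches model training, rather than after a model in production starts misidentifying items.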

Don’t Skimp on Quality for Your Machine Learning Training Data

The quality of retail video and image annotation is paramount for autonomous checkout, which, after all, is replacing traditional systems in which every item is scanned before shoppers complete the checkout process. If smart cart and Grab-and-Go systems can’t properly identify the items consumers are buying, retail profits might be walking right out the door. 

Consider grocery items like produce, too. Not only must your machine learning algorithm know which grapes the consumer selected—Red Globe? Concord? Dominga?—it must also determine how much the bunch of grapes weighs. In a fully autonomous checkout scenario, your model must also process swaps, which happen when a shopper in a camera’s view switches positions and isn’t consistently identified by different cameras as they move about the store.  
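For weighed produce, the checkout logic reduces to a classification plus a price lookup. A toy sketch, with made-up labels and per-kilogram prices:

```python
# Hypothetical per-kilogram prices; a real system would pull these from a product catalog.
PRICE_PER_KG = {
    "red_globe_grapes": 4.50,
    "concord_grapes": 5.20,
    "dominga_grapes": 6.00,
}

def price_item(predicted_label: str, weight_kg: float) -> float:
    """Price a weighed item from the vision model's predicted class and the scale reading."""
    return round(PRICE_PER_KG[predicted_label] * weight_kg, 2)

print(price_item("concord_grapes", 0.75))  # 3.9
```

Note that a single misclassification — Concord grapes priced as Red Globe — silently changes what the shopper is charged, which is exactly why annotation quality matters so much here.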

Underperforming models can also assign a product purchase to the wrong person. If that happens once, no big deal. But when a poorly trained algorithm is deployed across a chain of thousands of stores, the impact compounds into a severe hit to the bottom line.  

Questions for the Road: How to Focus on Quality 

Whether you decide to outsource your data labeling or to keep the work in-house, keeping a tight grip on quality is non-negotiable. If you do decide to outsource, keep these quality questions in mind when evaluating potential partners: 

  • Is the workforce managed? Is it trained in all computer vision annotation techniques for video and images? Does the workforce provider understand the tooling landscape? Are they flexible enough to have a tooling-agnostic approach?
  • Can the potential partner provide references? Do they demonstrate high customer satisfaction and the ability to meet or exceed pixel-level accuracy goals?
  • Can the company choose the right QA methods to match your project's acceptable margin of error? Quality must always come first.
  • Is the company certified? Some certifications can assure you of high levels of quality data annotation and management.
  • Will your partner provide you with a single point of contact who understands your business rules and requirements and is responsible for meeting or exceeding quality goals? Accountability and communication are vital.

Finally, ask yourself whether the automation partner you’re considering leaves you feeling confident about the quality of the work they’ll provide. The time to focus on quality is NOW — before you begin your project. Otherwise, you might find yourself scrambling after a faulty model has wreaked havoc — on you, on your teams, on your customers, and on your bottom line. 

Learn more about our retail AI expertise.