Picking and sorting items for shipment in a retail warehouse might sound straightforward, if tedious and time-consuming, but it's a notoriously difficult process to automate.

Why is automating the picking and sorting process so tough?

Because robots designed for sorting items need to do more than pick them up and put them down; they also need to identify each item correctly. To be a viable investment, they must do the job at least as well as people can.

Warehouse sorting robots are often paired with conveyor systems and use robotic arms to grasp items. These arms typically carry high-resolution cameras that 'see' each item; the captured images run through a computer vision model, which compares them against labeled reference images to identify the items correctly.
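As an illustration only, here's a minimal sketch of that identification step, assuming a trained PyTorch image classifier; the model file, class names, and frame path are hypothetical placeholders rather than a reference to any particular system:

```python
# Minimal sketch: classify a single camera frame with a previously trained model.
# "item_classifier.pt", the class list, and "camera_frame.jpg" are placeholders.
import torch
from torchvision import transforms
from PIL import Image

class_names = ["small_box", "bottle", "bagged_item"]  # example item classes

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

model = torch.jit.load("item_classifier.pt")  # hypothetical trained classifier
model.eval()

frame = Image.open("camera_frame.jpg").convert("RGB")
batch = preprocess(frame).unsqueeze(0)  # shape: (1, 3, 224, 224)

with torch.no_grad():
    logits = model(batch)
    predicted = class_names[logits.argmax(dim=1).item()]

print(f"Identified item: {predicted}")
```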

Why should retail warehouses automate picking and sorting?

One good reason is that retailers are facing a growing struggle to hire enough warehouse workers to keep up with consumer demand.

A 2022 study found that 73% of warehouse operators can’t attract and retain enough labor to support normal staffing levels, let alone seasonal influxes.

The sector is known for long hours and poor working conditions, which have led to significant staff shortages. Manual picking and sorting is also tedious, repetitive work that keeps people on their feet for many hours.

Amazon is perhaps the most frequently cited example of these labor pressures, which is one reason the world's largest online retailer invests heavily in warehouse robotics and artificial intelligence.

Integrating advanced robotics into retail warehousing can streamline operations and take a significant burden off existing employees. As consumer demand continues to grow, especially during busier periods of the year, retail robotics help scale operations, reduce costs, and lower the risk of employee burnout.

Why is retail picking and sorting challenging to automate?

Because training algorithms isn’t easy.

While computer vision technology makes it easy to train a robotic arm to pick and sort uniformly sized boxes or choose between packages of different colors, training an algorithm to sift through thousands of assorted items, recognize specific pieces, and sort them into the right bins is far more complicated. For example, robots may need to handle unique line setups and package configurations and check items for flaws.

Because of this, computer vision models for retail picking and sorting robots must be trained on large volumes of properly prepared data, especially when there are many variables and exceptions. In this use case, that can mean thousands of correctly labeled images, with humans kept in the loop to continually refresh and update the models.
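For illustration, a single labeled training example might be stored as a simple record that pairs an image with its annotations; the field names below are hypothetical, not a specific labeling tool's schema:

```python
# Illustrative only: one labeled training record pairing an image with its
# annotations. Field names are hypothetical, not any particular tool's format.
labeled_example = {
    "image_path": "frames/line3/000412.jpg",
    "annotations": [
        {"label": "shampoo_bottle", "bbox_xywh": [312, 148, 96, 210]},
        {"label": "damaged_packaging", "bbox_xywh": [540, 90, 130, 115]},
    ],
    "reviewed_by_human": True,  # flagged for human-in-the-loop quality checks
}
```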

3-D cameras serve as the eyes of robotic picking and sorting arms. They combine a standard 2-D camera with an infrared sensor that records depth data, producing a 3-D image. Such images naturally require more complex annotations: items must be labeled correctly so that background elements, such as the conveyor belt itself, can be excluded.
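As a rough sketch of what that separation involves, the snippet below combines an RGB frame with an aligned depth map and masks out everything at belt depth, leaving only the item pixels; the array shapes and depth threshold are illustrative assumptions:

```python
# Rough sketch: combine an RGB frame with aligned depth data and mask out
# the conveyor belt so only item pixels remain. Shapes and the depth
# threshold are illustrative assumptions.
import numpy as np

rgb = np.zeros((480, 640, 3), dtype=np.uint8)          # stand-in for a camera frame
depth_mm = np.full((480, 640), 900, dtype=np.uint16)   # stand-in for a depth map
depth_mm[200:300, 250:400] = 650                       # an item sitting above the belt

BELT_DEPTH_MM = 900                         # approximate distance to the belt surface
item_mask = depth_mm < BELT_DEPTH_MM - 50   # pixels meaningfully closer than the belt

item_only = rgb.copy()
item_only[~item_mask] = 0                   # zero out background (belt) pixels

rgbd = np.dstack([rgb, depth_mm[..., None]])  # 4-channel RGB-D array
print("Item pixels:", int(item_mask.sum()))
```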

Robotic sorting arms also need to know where and how to grip items, especially fragile ones, which may be damaged if they are picked up incorrectly or turned over. Moreover, as products and their packaging evolve, the image recognition models must be refreshed and updated accordingly. This is why humans in the loop (HITL) are needed to handle exceptions and continually label data to retrain the models.
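A simplified sketch of how such a human-in-the-loop exception path might work is shown below; the confidence threshold and queues are illustrative assumptions rather than any particular product's workflow:

```python
# Simplified sketch of a human-in-the-loop exception path: low-confidence
# predictions are routed to human annotators, whose labels feed the next
# retraining cycle. Names and threshold are illustrative assumptions.
CONFIDENCE_THRESHOLD = 0.85

def route_prediction(item_id, predicted_label, confidence, review_queue, auto_queue):
    """Send uncertain predictions to humans; accept confident ones automatically."""
    if confidence < CONFIDENCE_THRESHOLD:
        review_queue.append({"item": item_id, "model_guess": predicted_label})
    else:
        auto_queue.append({"item": item_id, "label": predicted_label})

review_queue, auto_queue = [], []
route_prediction("SKU-0412", "glass_jar", 0.62, review_queue, auto_queue)
route_prediction("SKU-0099", "small_box", 0.97, review_queue, auto_queue)

# Human-corrected labels from review_queue would later be added back to the
# training set so the model can be retrained on the refreshed data.
print(len(review_queue), "items sent for human review")
```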

The challenges of bringing robotic picking and sorting arms to life also vary from one industry to the next. For example, robots used in packing plants that handle fruits and vegetables need to recognize the sometimes subtle differences between things found in nature. Produce rarely comes in uniform shapes and sizes, and algorithms must be able to identify items from different angles. In grocery packing, robots must identify thousands of items, many of which vary inherently in appearance.

Robotics used in retail sectors that deal with extremely heavy or fragile items require especially accurate and high-quality training to minimize the risk of costly damage or injury to employees on-site. In any retail sector, robots should also be able to identify things like damaged products or packaging and askew labels. Moreover, some use cases, such as automotive parts, require close attention to sorting and prioritization in order to accelerate and optimize production lines.

Overcoming the challenges of retail automation with quality data preparation

Data preparation accounts for the bulk of the time spent on any AI project. In the case of retail picking and sorting robotics, this involves going through thousands of 3-D images to annotate them in preparation for training the model. Partnering with a managed workforce at this stage can greatly ease the burden on your development team, freeing up time for data scientists to focus on important business innovations.

CloudFactory’s managed workforce serves as an extension of your team by tackling computer vision training data annotation at scale so you can focus on building the next generation of AI-powered solutions for your business. Read our retail AI guide to learn more about how smart technology is transforming the retail experience.
