How to Accelerate AI Deployment in Machine Vision Applications
Machine learning at the edge addresses applications too complex for rule-based vision but too simple to warrant investment in a full deep learning solution.
Traditional machine vision relies on analytical, rule-based algorithms to detect and parameterize defects that can be mathematically defined. In such applications, highly skilled systems developers and engineers evaluate each problem, apply a series of rules that can accomplish the task, and then program the system. To streamline the process, many vendors offer low-code and no-code solutions that help ease the process of tuning a set of analytical pattern matching, blob, edge, caliper, or other machine vision tools to meet application requirements. Despite these advances, rule-based solutions reach their limit when defects are difficult to define numerically or their appearance varies significantly.
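To make the rule-based approach concrete, here is a minimal sketch of a caliper-style edge tool, one of the analytical tools mentioned above. The function name and parameterization are illustrative, not any vendor's actual API: the tool scans a 1-D intensity profile and reports the first gradient step whose contrast exceeds a hand-tuned threshold.

```python
import numpy as np

def caliper_edge(profile, contrast_threshold):
    """Return the index of the last pixel before the first intensity
    step whose gradient magnitude exceeds contrast_threshold,
    or None if no such edge is found."""
    grad = np.diff(profile.astype(float))          # pixel-to-pixel change
    hits = np.flatnonzero(np.abs(grad) >= contrast_threshold)
    return int(hits[0]) if hits.size else None

# Synthetic profile: dark background, then a bright part edge.
profile = np.array([10, 12, 11, 10, 11, 200, 205, 203])
print(caliper_edge(profile, contrast_threshold=50))  # -> 4 (step between pixels 4 and 5)
```

The threshold is exactly the kind of hand-set rule the article describes: it works only as long as the defect or feature can be defined numerically, and it must be re-tuned whenever lighting or part appearance changes.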
As a result, ongoing development and maintenance of rule-based machine vision algorithms remains a challenge. It’s often required due to part and process changes. Part changes can be caused by shrinking product lifecycles or component obsolescence, for example. Process changes may be required due to raw material or component variations from different suppliers, keeping up with technological advancements, or lighting changes in the production environment. This level of machine vision system maintenance relies on hard-to-find and expensive engineers with machine vision experience and skills.
Enter Deep Learning
A decade ago, deep learning was available only to specialized professionals with big budgets. However, advancements in theory, computer hardware such as GPUs, and data availability have recently led to its emergence in industrial machine vision applications. Deep learning excels in two areas: situations where subjective decisions need to be made, such as those requiring human inspectors, and confusing scenes where identifying specific features in the image is difficult due to high complexity or extreme variability. Scenes with significant background noise, such as textured leather products, are a good fit for deep learning.
In contrast to rule-based machine vision, which relies on experts to develop new algorithms, deep learning relies on operators, line managers, and other subject matter experts to label images as good or bad and classify the types of defects present in an image. This approach eliminates the need for highly skilled machine vision specialists and reduces the size of the engineering crew required to deploy and maintain machine vision solutions. When something changes, anyone who knows what the defect looks like can retrain the model by recording and labeling new images.
Deep Learning Challenges
Deep learning toolkits enable people to deploy learning-based machine vision systems more easily, but obstacles remain. For example, most successful deep learning projects still require large budgets and specialized expertise from vision engineers and data scientists to initially set up the system. However, not all projects deliver enough value to the operation to justify such a significant investment, which limits deep learning’s ability to meet requirements in those applications.
As with any machine vision application, image acquisition hardware plays a critical role in the success of a deep learning solution. A well-designed imaging system is required for image acquisition and collection, and reliable, repeatable imaging techniques must be able to clearly distinguish the features or objects of interest.
Part presentation, illumination techniques, and image resolution play an important role in identifying the subtleties differentiating various classifications. And processing used for image analysis must be robust and powerful enough to handle typical production rates and algorithmic demands.
On the software side, model development can take a long time and require tagging of hundreds or thousands of images. Furthermore, obtaining images of defects can be challenging, particularly for prototype production lines that run small numbers of parts, as well as for consumer electronics and mobile devices that have very short production runs lasting a year or less. Such situations require frequent iteration. Moreover, highly automated production lines typically produce good parts with few defects. Consequently, it may take several months of running the line to obtain a sample size large enough to generate a reliable model.
Edge Learning Minds the Gap
Considering all these challenges, many machine vision applications are too complex for a rule-based solution. Still, they don’t warrant the time and resources required to develop a full-blown deep learning solution. To address this gap in machine vision application coverage between traditional rule-based and full deep learning solutions, hardware manufacturers have developed edge AI that runs on their embedded smart camera platforms.
Dubbed “edge learning,” this type of deep learning utilizes a collection of preexisting algorithms that facilitate model training and subsequent image analysis directly on the device. Edge learning is a machine learning approach specifically tailored for industrial automation. It is trained in two steps: pretraining and specific use case training.
The first step is done by the edge learning supplier on a large dataset optimized for industrial automation. The pretrained tool is then embedded in a smart camera and shipped to the customer, who completes the second part of the training for their specific use case. This approach allows for faster training, requiring only a few images, and does not require a computer or GPU.
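The two-step training described above can be sketched in miniature. Everything here is an illustrative stand-in, assumed for the example only: a fixed random projection plays the role of the supplier's pretrained backbone, and a nearest-class-mean classifier plays the role of on-device, few-shot use case training. Real edge learning products embed far more capable models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1 (supplier side): a pretrained feature extractor.
# A fixed random projection stands in for the real backbone
# trained on a large industrial dataset.
W = rng.standard_normal((64, 16))

def extract_features(image):
    """Map a flattened 8x8 'image' to a 16-D feature vector."""
    return image.reshape(-1) @ W

# Step 2 (customer side, on-device): few-shot training.
# Only a handful of labeled images per class are needed, because
# the classifier just stores the mean feature vector of each class.
def train(labeled_images):
    return {label: np.mean([extract_features(img) for img in imgs], axis=0)
            for label, imgs in labeled_images.items()}

def classify(model, image):
    f = extract_features(image)
    return min(model, key=lambda lbl: np.linalg.norm(f - model[lbl]))

# Synthetic data: 'good' parts are dark, 'defect' parts are bright.
good = [rng.normal(0.2, 0.05, (8, 8)) for _ in range(3)]
bad  = [rng.normal(0.8, 0.05, (8, 8)) for _ in range(3)]
model = train({"good": good, "defect": bad})
print(classify(model, rng.normal(0.25, 0.05, (8, 8))))  # classifies as "good"
```

Note how step 2 needs only three images per class and no GPU: the heavy lifting happened in the supplier's pretraining, which is the core economic argument for edge learning.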
Use Case Training
Image setup and acquisition also take less time because smart camera platforms combine multiple elements, such as sensor, optics, processor, and sometimes even illumination. This approach reduces hardware integration problems such as cabling to a PC and incorporating the inference engine, which can be time-consuming and increase the complexity of a machine learning system.
Edge Learning Benefits
Edge learning offers several benefits. It’s much less costly to deploy than rule-based machine vision and deep learning solutions. It requires fewer images and takes less time to train and compute. It allows for faster production ramp-ups and product changeovers because training and production occur in the same place, on the same device.
It should be noted, however, that the many benefits these edge learning-embedded smart cameras offer come at a cost in capability. Edge learning is not suitable for the most complex problems, but it can address a large portion of applications right out of the box.
Shorter Optimization Loop
Compared to deep learning, edge learning has a much shorter optimization loop and eliminates the need to send images to another device for labeling and retraining. Additionally, it optimizes workforce utilization and reduces the long-term maintenance required for collecting and managing data.
Furthermore, edge learning is a viable option for automation as it doesn’t require any prior knowledge of machine vision. Instead of relying on experts, edge learning allows operators and line engineers to label images for retraining of the system when part or process changes arise.
By enabling beginners and experts to quickly automate inspection tasks, edge learning benefits original equipment manufacturers (OEMs), machine builders, and end users alike.
Using edge learning, OEMs can more easily tackle challenging machine vision problems and empower their end-user customers. Edge learning enables end users to quickly address issues and add new products, which minimizes the need to go back to the OEM and reduces the financial impact of after-sales support and service costs.
Meanwhile, system integrators can use edge learning to increase revenue by performing more feasibility studies in less time. Edge learning allows system integrators to reduce time spent on tasks such as image acquisition setup and machine vision tool selection, so that they can quickly determine the feasibility of an application and win more business while taking on more projects.
End-users can benefit from edge learning by automating many manual optical inspections or automation tasks that don’t justify the investment of developing a sophisticated machine vision or deep learning system. Edge learning helps manufacturers more easily deal with part and process changes as they arise and iterate without developing new algorithms for each new generation of product.
Edge learning also can simplify existing rule-based machine vision applications and reduce costs associated with expensive image acquisition components, such as telecentric optics, illumination, or part handling systems. By simplifying or eliminating these costly components, a lower cost setup can often be achieved, with savings on image formation, fixturing, or complex image processing requirements.
Edge learning on embedded smart camera platforms offers a unique solution for many applications that are too challenging for conventional rule-based machine vision yet don’t warrant the expense of investing in a full deep learning solution. Edge learning has proven to be more capable than traditional machine vision analytical tools in situations where human inspectors need to make difficult subjective decisions, or where identifying specific features in an image is hard due to high complexity or extreme variability.
At the same time, edge learning is more cost-effective and user-friendly than traditional deep learning solutions, allowing more applications to be addressed economically. Edge learning tools can be trained using just a few images per class.
Ultimately, edge learning is another tool in the toolbox that can improve workforce utilization for OEMs, machine builders, system integrators, and end users.