Deep learning technology is inspired by the way the human brain works, using trained artificial neural networks to perform recognition and decision-making tasks. A convolutional neural network (CNN) is a type of deep artificial neural network built on the multilayer perceptron, a foundational machine-learning architecture, and is most commonly applied to analyzing images. Neural networks are good at classifying data, which is the central challenge in many imaging applications.
Cracking Open the Layers
A CNN includes input and output layers, as well as multiple hidden layers between them. The hidden layers include the convolutional layers; it is there that the learning occurs, and it is the depth of these stacked layers that gives deep learning its name. The CNN is taught, or learns, how to filter data, thereby eliminating the painstaking task of figuring out how to reliably extract the features relevant to the required analysis. [Figure 1]
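The filtering performed by a convolutional layer can be illustrated with a minimal sketch in plain Python. This is an illustrative example, not code from the article or from any Matrox product: a small kernel slides across the image, and the resulting feature map responds strongly wherever the local pixel pattern matches the kernel.

```python
def convolve2d(image, kernel):
    """Valid-mode 2D convolution (no padding, stride 1) over nested lists."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    output = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        output.append(row)
    return output

# A vertical-edge kernel applied to an image whose right half is bright:
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
edge_kernel = [
    [-1, 1],
    [-1, 1],
]
feature_map = convolve2d(image, edge_kernel)
# The feature map peaks along the middle column, where the edge sits.
```

In a real CNN the kernel values are not hand-designed as above; they are exactly the weights the network learns during training.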
One of the key challenges with any image analysis is identifying the essential features upon which to base the analysis, and these can vary greatly from one application to the next. To supervise the training of a neural network from scratch, there must first be a large collection of labeled data, for example, a vast series of images of items already categorized. This data set becomes the input. The hidden layers perform mathematical computations on the inputs, establishing connections between neurons, each of which is ascribed a weight. The weight designates the importance of the input value. When training commences, there is no way to know what weight should be ascribed to each connection, so the computer is tasked with randomly assigning initial weights throughout the hidden layers.
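The roles of weights and random initialization can be sketched with a single simplified neuron. The function names, the ReLU activation, and the initialization range are illustrative assumptions, not details from the article:

```python
import random

def init_weights(n_inputs, seed=None):
    """Randomly assign initial weights, since good values are unknown at first."""
    rng = random.Random(seed)
    return [rng.uniform(-0.5, 0.5) for _ in range(n_inputs)]

def neuron_output(inputs, weights, bias=0.0):
    """Each weight scales (designates the importance of) one input value;
    the weighted sum then passes through a simple ReLU activation."""
    total = bias + sum(x * w for x, w in zip(inputs, weights))
    return max(0.0, total)

weights = init_weights(3, seed=42)
y = neuron_output([0.2, 0.8, 0.5], weights)
```

Training then consists of nudging these randomly chosen weights toward values that produce the desired outputs.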
The training metric is predetermined, and the training itself works via backpropagation: a process of repeatedly changing the weights in the hidden layers by small increments after each data-set iteration to minimize the difference between the actual output and the desired output. In time, and with repeated exposure, the CNN learns to identify the desired features within the hidden layers, bringing the network's output closer to the desired output each subsequent time it is run. The hidden layers thus identify, sort, and present information in a way that makes it easy for the rest of the neural network to classify the problem or pattern. Once training is completed, a CNN is ready to deploy for “inference” — the process of classifying data to infer a result.
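The backpropagation loop described above can be sketched at toy scale: a network shrunk to one hidden layer of two neurons learns a simple presence/absence-style mapping by repeatedly nudging its weights in small increments (the learning rate) to reduce the squared error between actual and desired output. All names, sizes, and data here are illustrative assumptions, far smaller than any practical CNN:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, w_hidden, w_out):
    """One hidden layer of two sigmoid neurons feeding one output neuron.
    Each weight row is [w_input1, w_input2, bias]."""
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hidden]
    y = sigmoid(w_out[0] * h[0] + w_out[1] * h[1] + w_out[2])
    return h, y

def train(data, epochs=2000, lr=0.5, seed=0):
    rng = random.Random(seed)
    # Weights start random; training will adjust them incrementally.
    w_hidden = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
    w_out = [rng.uniform(-1, 1) for _ in range(3)]
    for _ in range(epochs):
        for x, target in data:
            h, y = forward(x, w_hidden, w_out)
            # Output error, scaled by the sigmoid's slope.
            delta_out = (y - target) * y * (1 - y)
            for i in range(2):
                # Propagate the error back to each hidden neuron.
                delta_h = delta_out * w_out[i] * h[i] * (1 - h[i])
                w_hidden[i][0] -= lr * delta_h * x[0]
                w_hidden[i][1] -= lr * delta_h * x[1]
                w_hidden[i][2] -= lr * delta_h
            w_out[0] -= lr * delta_out * h[0]
            w_out[1] -= lr * delta_out * h[1]
            w_out[2] -= lr * delta_out
    return w_hidden, w_out

# Toy labeled data: output 1 when either input signal is present.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w_hidden, w_out = train(data)
```

With each pass, the output drifts closer to the desired labels, which is the essence of the training process the article describes.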
It is worth noting that a fair amount of work is needed to collect and prepare the hundreds of images required to effectively train and validate a CNN. The training process itself is iterative, producing an initial network that must then be maintained to accommodate changing application requirements and preserve system accuracy. With such a complex process, requiring both knowledge and experience, training a CNN can be tricky and is therefore often best left in the hands of deep learning specialists.
Brain Drains and Brain Gains
Deep learning is proving to be an excellent way to categorize images or regions within images. Neural networks are, in general, very good at inference, which can be understood as recognizing, reading, and detecting. Deep learning is therefore ideally suited for applications such as presence/absence detection, provided sufficient variations are taught.
CNN technology is adept at classifying images or image regions with challenging content, such as highly textured, naturally varying, or acceptably deformed goods like produce. Its underlying inference capabilities make it a ready fit for categorizing data into pre-established classes for identification and detection; a trained network can thus readily differentiate between varieties of goods, or between acceptable and defective parts.
Deep learning and CNNs are less adept at performing measuring tasks or high-accuracy locating tasks, particularly when the object can appear at varying scales or rotations.
The use of neural networks is certainly poised to accelerate and democratize the development of industrial imaging solutions. The technology is uniquely suited to handling inspection scenarios too complex to solve via traditional machine vision algorithms. It is imperative to note, however, that deep learning poses significant challenges to users and providers of the technology. First and foremost, an extensive data set of images is necessary to effectively train and validate a neural network. Another challenge lies in the physical sourcing of image data: it is far easier to obtain images of acceptable goods, for example, than of defective goods, especially when defective goods tend to vary extensively in their “defectiveness.” Machine vision developers will need to account for the process of acquiring and labelling images, not only for initial development but also for future project adaptations. It therefore often makes sense to work with providers of deep learning technology to streamline the process of establishing the parameters of a successful neural network.
To the Future
The machine vision sector is undergoing major shifts, and deep learning is the latest manifestation of its trajectory. Deep learning is a powerful technology that enhances existing computer vision techniques and can help lessen labor costs, increase production speeds, and reduce product defects in manufacturing applications. In spite of the buzz, however, leading authorities suggest that the technology is not yet smart enough to serve as a comprehensive replacement for human knowledge. Moreover, training CNNs remains as much an art as a science: the initial training process requires tremendous amounts of clearly labelled data, as well as additional data that reflects changes to the process over time.
In the coming years, deep learning gives every indication of being a major industry disruptor, and the deployment of CNN-based technology is likely to have tremendous impact on imaging software. With a growing ability to leverage artificial neural networks to imitate recognition and decision-making tasks performed by the human brain, deep learning is but part of the evolution of the Industrial Internet of Things (IIoT), a sea change we are only just beginning to see take shape.
This article was written by Arnaud Lina, Director, Research and Innovation; Pierantonio Boriero, Director, Product Management; and Katia Ostrowski, Marketing Communications, Matrox Imaging (Dorval, Quebec, Canada). For more information, contact Ms. Ostrowski at