Although machine learning and artificial intelligence (AI) are terms that are often used interchangeably, they are quite different. That difference becomes more important as applications for these technologies become more prevalent. Tech Briefs posed questions to machine learning/AI industry executives to get their views on issues such as machine learning platform selection, interpreting data created by these platforms, and pros and cons of implementing machine learning.

Our participants are Dr. Florian Baumann, Chief Technology Officer - Automotive & AI at Dell Technologies; Mario Bergeron, Technical Marketing Engineer at Avnet, Inc.; Zach Mayer, Vice President of Data Science at DataRobot; George Rendell, Senior Director of NX Design at Siemens Digital Industries Software; and Rajesh Ramachandran, Chief Digital Officer - Industrial Automation at ABB Inc.

Tech Briefs: Machine learning is a term that has confused many people, partly because its definition has taken on multiple forms. How do you define machine learning and how do you see it being used in manufacturing, medical, transportation, or other industrial applications?

Rajesh Ramachandran: Machine learning and AI are used interchangeably by many, and that makes the definition sometimes confusing and complex. While AI is the overarching science of all the aspects of making machines and physical systems smarter by embedding “artificial” intelligence into them, machine learning is a subset of AI, defined as systems that gain knowledge through “self-learning” to become smarter and more predictable over time without human intervention.

Machine learning can be defined as a set of algorithms from a diverse set of fields like statistics, mathematics, and control systems. In manufacturing and industrial applications, machine learning is used widely for both predictive analysis and optimization of various key elements such as improving efficiency, quality, predictive maintenance, lifecycle management, safety, and sustainability.

Dr. Florian Baumann: Machine learning is the process of extracting valuable information from data; it is the tool side of AI that helps pull characteristics and information out of data. The machine learning process is divided into training, test and validation, and inferencing. Machine learning is used in a vast number of different verticals, such as manufacturing, to improve worker safety, avoid downtime, or support predictive analytics and quality. In the transportation industry, machine learning will enable cars to drive autonomously.
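
As a rough illustration of the training, test/validation, and inferencing stages Baumann describes, the short sketch below splits a dataset, fits a model, checks it on held-out data, and then runs inference on a new sample. The dataset, the scikit-learn model, and the split sizes are illustrative assumptions, not a description of any particular workflow.

```python
# Illustrative sketch of the train / validate-test / inference stages.
# The dataset, model, and split sizes are arbitrary assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Hold out 30% of the data, then split that holdout into validation and test halves.
X_train, X_hold, y_train, y_hold = train_test_split(X, y, test_size=0.3, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_hold, y_hold, test_size=0.5, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)        # training
print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))  # validation
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))      # test
print("inference on one new sample:", model.predict(X_test[:1]))            # inferencing
```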

Zach Mayer: My favorite definition of machine learning is this: machine learning = representation + evaluation + optimization. First, you need to represent your data in terms that a computer can understand, which really boils down to “everything has to be a number.” Evaluation is where a lot of machine learning efforts go wrong. Up front, you need to define a way to evaluate your model and define the criteria by which you separate good models from bad models. Optimization is the fun part. Once you’ve represented your problem and defined your success criteria in terms a computer can understand, you can turn all that computing power loose to find a model that maximizes (or minimizes) your evaluation criteria.
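
Mayer’s “representation + evaluation + optimization” framing can be made concrete in a few lines. In the hedged sketch below (the data, encoder, metric, and parameter grid are all invented for illustration), categorical text is first turned into numbers, a scoring criterion is fixed up front, and a grid search is then left to find the model settings that maximize it.

```python
# Sketch of "representation + evaluation + optimization" with scikit-learn;
# the data, encoder, metric, and search grid are illustrative assumptions.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

data = pd.DataFrame({
    "color": ["red", "blue", "red", "green", "blue", "green"] * 20,
    "size": [1.0, 2.5, 1.5, 3.0, 2.0, 2.8] * 20,
    "label": [0, 1, 0, 1, 1, 1] * 20,
})

# Representation: everything has to be a number, so one-hot encode the categorical column.
represent = ColumnTransformer([("onehot", OneHotEncoder(), ["color"])], remainder="passthrough")
pipeline = Pipeline([("represent", represent), ("model", LogisticRegression(max_iter=1000))])

# Evaluation: decide up front how good models are separated from bad ones
# (here, cross-validated accuracy).
# Optimization: let the computer search for the settings that maximize that criterion.
search = GridSearchCV(pipeline, {"model__C": [0.01, 0.1, 1.0, 10.0]}, scoring="accuracy", cv=5)
search.fit(data[["color", "size"]], data["label"])
print("best score:", search.best_score_, "best params:", search.best_params_)
```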

George Rendell: Machine learning is the ability to guide a user to an optimal product design solution by intelligently recognizing design approaches and solutions, learning from extracted engineering knowledge, and gaining business insight from that knowledge. Machine learning is playing a role in the manufacturing industry all the way from the product design stage to optimizing production operations and automating quality testing.

Mario Bergeron: Today, we are observing an explosion in machine learning applied to real-world examples. This is due in part to the computing power that allows these algorithms to be implemented in real time. One thing is clear: when it comes to machine learning algorithms, it all starts with the data. In manufacturing and industrial applications, long-term monitoring of images and/or sensor data, correlated with events such as defects and maintenance, could be leveraged to implement algorithms that identify defects and schedule preventative maintenance. In medicine, X-ray images, correlated with the symptoms of various illnesses, are being used to create systems that can assist in delivering more precise and earlier diagnoses. In transportation, the accumulation of data on human driving patterns is being used to create new systems capable of assisting or even replacing the human driver.

Tech Briefs: The number and complexity of machine learning platforms grows on what seems to be a weekly basis. What are some of the most important criteria to be considered when choosing the right machine learning platform?

Mayer: I would frame this in terms of my answer to the first question. How flexible is the data representation? Do you have to do a lot of data cleaning yourself to make a nice, neat matrix of numbers, or can you upload an Excel spreadsheet? Can the platform handle missing values, categorical data, text data, image data, and geospatial data? As for evaluation, how flexible is it? Can you define a cost/benefit matrix and maximize benefit while minimizing cost? For optimization, how many different models does the platform support? Ask how much handholding you get vs. how much code you’ll have to write yourself to use these models.
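
One way to read the cost/benefit point is to score a classifier by expected business value rather than raw accuracy and pick the decision threshold that maximizes it. A minimal sketch follows; the payoff values and the synthetic dataset are entirely made-up assumptions.

```python
# Sketch: choose a decision threshold that maximizes net benefit under a
# hypothetical cost/benefit matrix (all payoff values are made-up assumptions).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

PAYOFF = {"tp": 100.0, "fp": -20.0, "fn": -80.0, "tn": 0.0}  # assumed business values

X, y = make_classification(n_samples=2000, weights=[0.8, 0.2], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
proba = LogisticRegression(max_iter=1000).fit(X_train, y_train).predict_proba(X_test)[:, 1]

def net_benefit(threshold):
    pred = proba >= threshold
    tp = np.sum(pred & (y_test == 1)); fp = np.sum(pred & (y_test == 0))
    fn = np.sum(~pred & (y_test == 1)); tn = np.sum(~pred & (y_test == 0))
    return tp * PAYOFF["tp"] + fp * PAYOFF["fp"] + fn * PAYOFF["fn"] + tn * PAYOFF["tn"]

thresholds = np.linspace(0.05, 0.95, 19)
best = max(thresholds, key=net_benefit)
print(f"best threshold {best:.2f}, net benefit {net_benefit(best):.0f}")
```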

Baumann: Machine learning platforms should be scalable (compute, network, storage), flexible, and easy to use. Resources (compute, storage) should be easily managed and able to be shared across clusters. They should burst workload seamlessly into the public cloud and allow developers to connect their own applications using different APIs to the platform. They also must be based on a micro-service architecture. Finally, platforms must allow usage of third-party tools, and integration of existing data and metadata should be possible.

Rendell: Machine learning has definitely become the catchphrase in software today, with various interpretations and claims to having it. This question implies that end users select a separate “platform” to provide machine learning.

I also see some software vendors having to work to connect their existing CAD software to another “platform” in order to deliver end-user value from machine learning technology. The technology is only as good as the value it brings to users in completing their work and the experience it provides in getting that work done.

Ramachandran: The power of a machine learning platform, and especially an industrial AI modeling platform, is at its best when it is able to develop a model on contextualized data — data coming from multiple source systems within an enterprise — but contextualized based on the business requirement. Key criteria in choosing the right platform include data access, ingestion, filtering, and manipulation to integrate data from disparate sources and types, and to transform and prepare data for modeling.

The platform needs components that can provide descriptive statistical analysis of details on the quality of data; components to generate and manage models that predict behavior or estimate unknown outcomes; components that use a suite of mathematical algorithms to choose the “best” alternative(s) that meet specified objectives and constraints; and the ability to integrate with enterprise-grade applications. A full-fledged platform should cater to effective management of offline and online training and tuning of models.
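
The “objectives and constraints” component Ramachandran mentions is, at its core, mathematical optimization. As a toy illustration only, the linear program below picks a production mix that maximizes profit within capacity limits; the products, profits, and machine-hour numbers are invented and do not describe any particular platform.

```python
# Toy production-mix optimization; all profit and capacity numbers are invented.
from scipy.optimize import linprog

# Maximize 40*x1 + 30*x2 (linprog minimizes, so negate the objective).
c = [-40.0, -30.0]
# Constraints: 2*x1 + 1*x2 <= 100 machine-hours, 1*x1 + 2*x2 <= 80 labor-hours.
A_ub = [[2.0, 1.0], [1.0, 2.0]]
b_ub = [100.0, 80.0]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("optimal mix:", result.x, "maximum profit:", -result.fun)
```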

Bergeron: The first criterion is whether to deploy your algorithm in the cloud or at the edge. If deploying to the cloud, no hardware platform may be necessary and the solution will consist of a service that is paid for on a monthly or yearly basis. If deploying at the edge, then things get a little more involved. Many criteria, such as performance and power, will dictate the ideal platform for your application.

When considering performance, two of the many metrics to consider are throughput and latency. Throughput indicates the peak workload that can be achieved when batches of input data are fed into the algorithm in a pipelined fashion, while latency indicates how much time it takes to get a response after presenting a single input to the algorithm. Since an edge platform is often a battery-powered embedded system, power consumption may be an equally important criterion. Real-world applications typically also include some form of pre-processing and post-processing in addition to the machine learning algorithms.
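
Throughput and latency can be measured directly on a candidate platform. The sketch below times single-input latency and batched throughput for a stand-in model; the model, batch size, and sample counts are arbitrary assumptions rather than a benchmark of any real edge device.

```python
# Rough sketch of measuring single-input latency vs. batched throughput;
# the model and data sizes stand in for a real edge inference workload.
import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=5000, n_features=32, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Latency: average time to get a response for a single input.
start = time.perf_counter()
for i in range(100):
    model.predict(X[i:i + 1])
latency_ms = (time.perf_counter() - start) / 100 * 1000

# Throughput: inputs processed per second when fed in one large batch.
start = time.perf_counter()
model.predict(X)
throughput = len(X) / (time.perf_counter() - start)

print(f"latency ~{latency_ms:.2f} ms/input, throughput ~{throughput:.0f} inputs/s")
```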

Tech Briefs: What are some of the challenges in accurately interpreting results generated by machine learning algorithms?

Mayer: I think a big one here is bias and fairness. A good machine learning platform should give you tools to make sure your model isn’t biased against specific groups of people by gender or another protected class (sexuality, disability). This is a hard problem to solve, but there are good automated approaches to help you think about it, such as statistical techniques and tools to assess bias, model confidence, and robustness across multiple dimensions and phases of the machine learning lifecycle.
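
One of the simpler statistical checks alluded to here is to compare a model’s positive-prediction rate and accuracy across groups. The sketch below does this on synthetic data with a synthetic group attribute; it is a minimal illustration, not a description of any vendor’s fairness tooling.

```python
# Minimal sketch of a per-group bias check (synthetic data and group labels).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
group = np.random.default_rng(0).integers(0, 2, size=len(y))  # synthetic protected attribute

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=0)
pred = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict(X_te)

for g in (0, 1):
    mask = g_te == g
    rate = pred[mask].mean()                 # selection (positive-prediction) rate
    acc = (pred[mask] == y_te[mask]).mean()  # per-group accuracy
    print(f"group {g}: selection rate {rate:.2f}, accuracy {acc:.2f}")
print("demographic parity gap:", abs(pred[g_te == 0].mean() - pred[g_te == 1].mean()))
```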

Ramachandran: The fundamental challenge of using machine learning algorithms to solve industrial problems is interpreting the results. One definition of interpretability is the degree to which a human can understand the cause of a decision. In the industrial domain, engineers and designers are used to understanding causality of decisions. Machine learning models are trained mathematically to discover relationships between input and output parameters that are difficult to deduce from plain observation. In practice, machine learning models are good enough if they can predict the outcomes accurately with consistency and with an acceptable level of uncertainty.
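
One common, if partial, aid to this interpretability problem is to ask how strongly each input parameter drives the model’s predictions, for example with permutation importance. The sketch below uses synthetic data and is only an illustration; importance scores indicate association, not the causality engineers are used to reasoning about.

```python
# Sketch: probing input/output relationships with permutation importance
# (synthetic data; a partial interpretability aid, not full causal insight).
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=5, n_informative=2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)

for i, score in enumerate(result.importances_mean):
    print(f"input parameter {i}: importance {score:.3f}")
```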

Rendell: Simply put, the main challenge is making it real and making it work successfully for the user. The large volume of data used for processing and learning may include engineering parameters that are redundant or irrelevant. Therefore, it is important that the framework has the ability to select and analyze a subset of data to reduce machine learning training time and to generate a simplified model. Also, algorithms may execute differently and give different results, depending on an unexpected variable. As a result, we are constantly looking at ways in which we can learn from a diverse set of data across various industries. If we fail to extract important features and cover all kinds of best practices from different industries to train the machine learning model, we will end up with a biased interpretation of results that is good only for certain design situations.

Baumann: The challenges are visually debugging algorithms and managing the data and metadata. Lifecycle management of vast amounts of data — such as datasets, metadata sets, and algorithms — is crucial. Presenting results visually in front ends allows them to be interpreted accurately.

Bergeron: For machine learning algorithms that perform classification to identify whether or not an input data point belongs to a class, the effectiveness of the “classification function” can be measured based upon hit/miss rates. Once those measurements have been evaluated for an algorithm against a validation dataset, further calculations of accuracy, precision, and recall can help us determine whether an algorithm has good enough sensitivity and specificity for a particular application.
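
Those hit/miss counts form a confusion matrix, and the metrics Bergeron lists follow directly from it. A minimal sketch with made-up counts:

```python
# Accuracy, precision, recall/sensitivity, and specificity from a confusion
# matrix; the counts below are made-up numbers for illustration.
tp, fp, fn, tn = 85, 10, 15, 890  # hits, false alarms, misses, correct rejections

accuracy    = (tp + tn) / (tp + tn + fp + fn)
precision   = tp / (tp + fp)
recall      = tp / (tp + fn)      # also called sensitivity or hit rate
specificity = tn / (tn + fp)      # true-negative rate

print(f"accuracy={accuracy:.3f} precision={precision:.3f} "
      f"recall/sensitivity={recall:.3f} specificity={specificity:.3f}")
```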

Tech Briefs: Advantages of machine learning are obvious: no human intervention (automation), continuous improvement (or learning), and a wide array of applications. But what are some of the disadvantages?

Ramachandran: The quality of a machine learning model depends upon the dataset on which it has been trained. Typically, a large dataset encompassing multiple patterns provides a viable output. Second, machine learning should always be combined with the design specifications and the operational procedures of an industrial automation system. Every output of the machine learning model should be validated against domain know-how and sometimes against physics-based model outputs. This helps to avoid conditions caused by bad data or unseen model outputs. Third, interpretation of the output of machine learning models should be based entirely on the business context. If a machine learning model’s output achieves appropriate accuracy but cannot be interpreted in terms of the operational and asset specifications, then the output of the model can be completely misjudged.

Rendell: The biggest challenge is misaligned expectations from users relative to what the software really accomplishes. The next challenge is that engineering results are the responsibility of the professional engineer. Machine learning should improve user productivity, improve the designs created, and make engineering changes more efficient; the resulting design still needs to be understood and accepted by the user. It is not always about training the user in new technology — rather, it is about training the new technology to truly aid the user.

Baumann: More and more, machine learning is used as a black box — just throw in the data and check the output. The traditional way of developing — thinking about how to solve your problem — disappears. ROIs (return on investment) are not defined properly. Machine learning is not magic. Models become better with time — this is often forgotten. Throwing in data and expecting superior results won’t work.

Mayer: Automated machine learning (AutoML) is complementary to your human data scientists. AutoML allows you to “power up” your human data scientists and make them 10X more productive. Humans are good at thinking creatively; computers are good at rote, repetitive tasks, so you should focus your time accordingly. Humans are really good at thinking of novel representations for data and defining business objectives for evaluating models. Computers, on the other hand, are good at trying thousands of different modeling recipes and evaluating them according to the human-defined criteria. You can use AutoML as a filter to identify the really hard business problems that are most in need of human attention.

Bergeron: When deploying machine learning algorithms to the cloud, one of the greatest concerns is privacy. If the algorithms contain proprietary IP or if the input data contain sensitive information, deploying to a cloud solution may pose some security concerns along with many questions about who the owner of the data is and who will have access to it. Choosing a solution at the edge, on the other hand, inherently solves this concern, since the data is never sent to a cloud or shared outside of the embedded platform. By processing the data locally and sending only a digested message of the result, an edge solution is inherently secure and can protect user privacy.

Resources

  1. ABB
  2. Avnet, Inc.
  3. DataRobot
  4. Dell Technologies
  5. Siemens Digital Industries Software