A new concept uses Google Glass for operating machinery, bringing the benefits of wearable computing to an industrial environment. With Google’s Web-enabled glasses, status or dialog messages can be projected via a head-up display directly into a person’s field of vision. Online information and communication are also possible with this innovative device, and error messages can be acknowledged using a touchpad.

Google Glass is equipped with a head-up display, camera, microphone, and bone conduction audio transducer serving as a loudspeaker. It also has vibration sensors and a touchpad integrated in the sidepieces of the headset. With these features, Google Glass is well suited for visualization, diagnostics, and service purposes, as well as for technical interventions and person-to-person communication.

Unlike conventional control concepts, this device belongs to a class of innovative technology known as “wearable computing” — a group of devices that can be worn daily by users to dramatically increase their connectedness. In fact, many of these devices today already have all of the functions of a modern smartphone. What’s more, Google Glass-type devices are convenient to use thanks to the overall ergonomics of a semi-transparent visor and the largely hands-free operation. The great advantage here — also with an eye toward industrial automation — is that existing mobile computing technology can be used without any limitations to our sensory perception or our physical movements, and there are no wires or cables to contend with.

Google Glass can be easily integrated with control technology using TwinCAT automation software from Beckhoff. The glasses communicate with a Web server that provides the status of the machine controlled by TwinCAT. The glasses receive this status information and present it in the form of signals or error messages, perhaps even indicating the exact location of a problem. Confirmation and resetting of the machine status can also be done on the spot with Google Glass.
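To illustrate this pattern, the following is a minimal sketch in plain Java of a client that polls a TwinCAT-side Web server for machine status and acknowledges an error. The endpoint paths (/api/machine/status, /api/machine/ack), the host name plc.local, and the JSON payload shape are assumptions for illustration; the real interface depends entirely on how the Web server is configured.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// Sketch: poll a TwinCAT-side Web server for machine status, as a
// Glass-side client might. Endpoints and payload shape are hypothetical.
public class GlassStatusPoller {
    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(2))
                .build();

        HttpRequest statusRequest = HttpRequest.newBuilder()
                .uri(URI.create("http://plc.local/api/machine/status")) // hypothetical endpoint
                .GET()
                .build();

        while (true) {
            HttpResponse<String> response =
                    http.send(statusRequest, HttpResponse.BodyHandlers.ofString());
            // e.g. {"state":"RUNNING","error":null} -- payload shape is an assumption
            System.out.println("Machine status: " + response.body());
            Thread.sleep(1000); // poll once per second
        }
    }

    // Confirm/reset an error, e.g. triggered by a tap on the touchpad
    // (endpoint is hypothetical).
    static void acknowledgeError(HttpClient http) throws Exception {
        HttpRequest ack = HttpRequest.newBuilder()
                .uri(URI.create("http://plc.local/api/machine/ack"))
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
        http.send(ack, HttpResponse.BodyHandlers.discarding());
    }
}
```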

Using the Technology in Everyday Situations

Potential application scenarios can be classified as either “direct” or “indirect.” For example, an operator can use the glasses to “directly” monitor the machine, or even take action to change or correct the machine status, without always having to be on site. With large machines and production facilities, the operator needs to walk around the equipment and check the process status values at specific critical points while watching how the machine is functioning at the same time. If necessary, the operator can take manual action because both hands are free.

Both static and dynamic machine status information can be projected onto the head-up display of Google Glass.
The “indirect” options are related to gathering and saving information that is not fully dependent on the processes being run. These options include, for example, studying the manufacturer’s documentation about specific machine components, searching for information on the Internet, and engaging in person-to-person interaction through e-mails and chats with video support. However, the combination of direct and indirect applications is also possible. Even while a machine is running, the operator can contact an expert for advice about a specific problem by using the glasses to send a video of the machine in action. The expert can then give the operator support — in the form of a video or voice message — so that corrective measures can be taken. This is an example of how the IT concept known as “What You See Is What I See” (WYSIWIS) can also be applied in an industrial environment.

There are many different approaches to implementation. A service engineer, for example, could use the integrated camera to capture the QR code of a motor or limit switch and retrieve information about its features, history, or current status. Another option would be browsing the Web pages served by a machine’s control software — at a resolution of 640 × 360 pixels and without the need for a mouse. This would be possible if the source material is formatted for use with the augmented-reality glasses. The operator could look up machine settings defined by the manufacturer and then take appropriate action.
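As a sketch of the QR-code approach, the following example decodes a captured camera frame with the open-source ZXing library and maps the result to a documentation link. The payload format (a plain component ID) and the intranet lookup URL are assumptions for illustration; a real deployment would define its own coding scheme.

```java
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

import com.google.zxing.BinaryBitmap;
import com.google.zxing.MultiFormatReader;
import com.google.zxing.Result;
import com.google.zxing.client.j2se.BufferedImageLuminanceSource;
import com.google.zxing.common.HybridBinarizer;

// Sketch: decode a QR code from a camera frame and derive a component
// information URL. Uses ZXing (core + javase modules); the payload format
// and lookup URL are hypothetical.
public class ComponentQrLookup {
    public static void main(String[] args) throws Exception {
        BufferedImage frame = ImageIO.read(new File(args[0])); // captured camera frame

        BinaryBitmap bitmap = new BinaryBitmap(
                new HybridBinarizer(new BufferedImageLuminanceSource(frame)));
        Result result = new MultiFormatReader().decode(bitmap);

        String componentId = result.getText(); // e.g. "motor-0815" (assumed format)
        // Hypothetical documentation service keyed by component ID
        System.out.println("Docs: http://intranet.local/components/" + componentId);
    }
}
```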

It would also be feasible to program special applications that run locally on Google Glass and communicate with the machine’s control computer via WLAN, using protocols such as OPC UA or TwinCAT ADS. As part of its Google Glass technology study, Beckhoff provided a local application for the real-time visualization of binary and analog variables. In all of these cases, status data (variables, errors) can be displayed. In addition, the operator can use Google Glass to change machine settings or cycles, such as starting or stopping production steps.
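A minimal sketch of the OPC UA variant follows, assuming the open-source Eclipse Milo client library. The endpoint URL, the namespace index, and the PLC symbol names (MAIN.bMotorRunning, MAIN.fSpindleTemp) are assumptions; the actual node IDs depend on the TwinCAT project and the OPC UA server configuration.

```java
import org.eclipse.milo.opcua.sdk.client.OpcUaClient;
import org.eclipse.milo.opcua.stack.core.types.builtin.DataValue;
import org.eclipse.milo.opcua.stack.core.types.builtin.NodeId;
import org.eclipse.milo.opcua.stack.core.types.enumerated.TimestampsToReturn;

// Sketch: cyclically read one binary and one analog PLC variable over
// OPC UA, as a Glass-side visualization app might. Endpoint and symbol
// names are hypothetical.
public class GlassOpcUaReader {
    public static void main(String[] args) throws Exception {
        OpcUaClient client = OpcUaClient.create("opc.tcp://plc.local:4840"); // hypothetical endpoint
        client.connect().get();

        NodeId motorRunning = new NodeId(4, "MAIN.bMotorRunning"); // hypothetical BOOL symbol
        NodeId spindleTemp  = new NodeId(4, "MAIN.fSpindleTemp");  // hypothetical REAL symbol

        while (true) {
            DataValue running = client.readValue(0.0, TimestampsToReturn.Neither, motorRunning).get();
            DataValue temp    = client.readValue(0.0, TimestampsToReturn.Neither, spindleTemp).get();
            System.out.printf("motor=%s  temperature=%s%n",
                    running.getValue().getValue(), temp.getValue().getValue());
            Thread.sleep(500); // 2 Hz refresh for the head-up display
        }
    }
}
```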

Google Glass vs. Traditional Control

With its many options, Google Glass is an outstanding development for enhancing operations and control concepts, but it is not intended to serve as a complete substitute device for controlling machines or manufacturing facilities. The conventional machine control terminal, with a touchscreen or display and keyboard, cannot be replaced entirely because of its higher resolution, better readability, and the electromechanical integration of critical control elements, such as an emergency switch or joystick. The same can be said regarding the vision of realizing complete “no-touch control” based on Google Glass.

If an error occurs, Google Glass gently vibrates to draw the operator’s attention to the error message. Once the error is corrected, the message can be reset directly via the glasses.
In real-world scenarios of the future, it is most likely that we will find a mix of traditional touch and innovative no-touch approaches. Remember that touching the side of the augmented-reality glasses is much faster than giving a voice command. Of course, the glasses do function in hands-free mode: they are turned on with a simple upward nod of the head. Browsing through menu items is possible by gently nodding the head up and down, and a particular item can be activated with a voice command. Launching functions shown on the so-called “slides” would also be possible through voice control. However, to achieve this, special programming is required, and the ergonomics of the software must be precisely designed to accommodate voice control.
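As an illustration of the touch side of this mix, here is a sketch of how a tap on the sidepiece could acknowledge an error in a Glass application, based on the GestureDetector from the Glass Development Kit (GDK). The acknowledgeError() helper is hypothetical; it would forward the confirmation to the controller, for example over the Web interface sketched earlier.

```java
import android.app.Activity;
import android.os.Bundle;
import android.view.MotionEvent;

import com.google.android.glass.touchpad.Gesture;
import com.google.android.glass.touchpad.GestureDetector;

// Sketch of touchpad handling in a Glass GDK Activity: a single tap on
// the sidepiece acknowledges the current error message.
public class ErrorCardActivity extends Activity {

    private GestureDetector gestureDetector;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        gestureDetector = new GestureDetector(this);
        gestureDetector.setBaseListener(new GestureDetector.BaseListener() {
            @Override
            public boolean onGesture(Gesture gesture) {
                if (gesture == Gesture.TAP) {
                    acknowledgeError(); // confirm/reset the machine error
                    return true;
                }
                return false;
            }
        });
    }

    @Override
    public boolean onGenericMotionEvent(MotionEvent event) {
        // Route raw touchpad events to the GDK gesture detector
        return gestureDetector.onMotionEvent(event);
    }

    private void acknowledgeError() {
        // Hypothetical: notify the machine controller (e.g. HTTP POST)
    }
}
```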

Ensuring Machine Security

There is a general misconception that Google as a business enterprise or the Google Cloud is always party to all of the communication conducted with the augmented-reality glasses. This is not the case. Google Glass can be easily encapsulated and embedded in the WLAN intranet of a business enterprise, where it is safeguarded by the standard IT procedures in effect there. And when it comes to machine operation, the functions of Google Glass are, in principle, the same ones found on the machine’s control panel or display screen. What’s more, any actions that would prove dangerous must be prevented using effective and approved safety technologies (such as an emergency stop concept).

According to data protection experts, Google Glass is just as safe as any cellphone: both devices can be used to take photos of machinery and people. This issue is not new; it has accompanied the mass proliferation of smartphones. Furthermore, wearing the augmented-reality glasses can hardly go unnoticed, so it is highly unlikely that anyone would use them secretly for unlawful purposes. In sum, Google Glass is subject to the same rules and codes of conduct that apply to smartphones, and, like them, the device will not be approved for use in highly sensitive areas of a business.

Mainstream Use in Manufacturing

Today, it is already clear that augmented-reality glasses like Google Glass are becoming a trend in the commercial market. However, it is difficult to forecast when these devices will become mainstream items. On the one hand, vendors such as Meta (with its Meta Pro glasses), Samsung, and Epson have already announced that they are developing similar hardware. On the other hand, just like smartphones, augmented-reality glasses will undergo a continuing development process as more sensors and higher-performance processors are integrated into the devices.

Beckhoff is studying the acceptance of these devices in industrial environments on the basis of existing software that offers real benefits to users, and field tests are being conducted in cooperation with interested users. In any case, machines and production facilities equipped with Beckhoff controllers and TwinCAT software already offer all of the communication interfaces required to use Google Glass or other augmented-reality glasses effectively today.

This article was written by Andreas Thome, Product Manager of PC Control at Beckhoff Automation, Verl, Germany.