Traditional robot applications limit operator access to hazards through hard-guarding and protective devices that either detect and stop the hazard or prevent access into the safeguarded space until the hazard no longer exists. The introduction of power- and force-limited robots used in collaborative applications changes this environment. Reduced or nonexistent hard-guarding, along with continuous motion and interaction between the robot and the operator, makes the environment inherently dynamic and uncertain. Methods to reduce risks to a tolerable level include limiting forces and speed, but these measures can yield unacceptable production rates.


Traditional robotic safeguarding stops hazardous system motion regardless of the operator's intent or task. A collaborative robot application limits motion to a level where inherent safety designs have time to respond and stop motion if contact with an operator is made. Currently, a collaborative robot application's speed is limited by its most hazardous task, even though the risks of its various tasks may differ. To maintain a safe environment for the operator while optimizing a robot's speed, size, and capability, machine safeguarding must transition from traditional preventive procedures to emerging predictive concepts.

Predicting Human Behaviors

Safety practices that account for human behavior are based on previous experience and codified in appropriate standards. The preferred method to reduce interaction risk is to design a system so it is inherently safe. Safeguarding is then only required to keep the operator away from hazards when the design cannot reduce the risks to a tolerable level. Administrative controls such as warning signs, barriers, and training make operators aware of the hazards, but they also rely on the operator's willingness to follow the guidelines. With this methodology, the robot's reaction is primarily based on the operator's current behavior.

Power- and force-limited robots may have inherently safe design through features such as low-inertia servomotors, elastic actuators, and collision detection. These features may reduce the need for additional safeguarding in collaborative applications. While administrative controls make the operator aware of expected robot paths and shared workspaces, risks remain.

Robot applications are programmed to complete a predetermined path or make an adjustment based on information from sources such as sensors, barcode readers, and vision systems. Rigid programming stops the robot's motion while an operator is in its path. This behavior can encourage an operator to bypass safety to meet production expectations.

Production could potentially be increased if the robot were able to work around the operator. Applying concepts from other sectors, along with data collection, may provide solutions for optimizing safety and enhancing production. An example is automated intelligent vehicles (AIVs) that adapt to their environment. They scan the mapped area and adjust their direction if an operator is in the path, or slow down when an operator is in close proximity. This gives them the flexibility to adjust to the operator's movement within a dynamic environment.
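The AIV behavior described above, slowing near an operator and stopping when too close, can be sketched as a simple speed-scaling rule. This is a minimal illustration; the zone thresholds and the linear ramp are assumptions, not values from any safety standard.

```python
# Proximity-based speed scaling, as an AIV might apply it.
# STOP_ZONE and SLOW_ZONE distances are illustrative assumptions.
def adjust_speed(distance_to_operator_m: float, max_speed: float) -> float:
    """Return a commanded speed based on operator proximity."""
    STOP_ZONE = 0.5   # stop if the operator is closer than this (m)
    SLOW_ZONE = 2.0   # scale speed down inside this radius (m)
    if distance_to_operator_m <= STOP_ZONE:
        return 0.0
    if distance_to_operator_m < SLOW_ZONE:
        # linear ramp between the stop and slow boundaries
        frac = (distance_to_operator_m - STOP_ZONE) / (SLOW_ZONE - STOP_ZONE)
        return max_speed * frac
    return max_speed
```

A real AIV would combine this with path re-planning around the operator rather than speed scaling alone.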

Figure 2. Detecting intrusion using stereo imaging.

Reducing Risk by Design

Traditional robot structures are designed to withstand harsh environments and accidental collisions with other machinery. During normal operation, the operator is not exposed to hazards associated with the robot system that may include fixtures, parts, and end-effectors; however, these risks must be evaluated and changes implemented when an operator will be exposed to hazards in a collaborative application. Risks can be minimized in a collaborative application by “softening” the potential impact areas. This approach includes using softer and more compliant materials for the structure; for example, padding or spring-based protective covers can absorb some of the force, edges and corners can be smoothed and rounded, and wider surface areas can be used to reduce impact effects.

The operator's perceived safety is important when designing a collaborative system. Traditional robot systems normally have a detectable boundary due to hard-guarding and other visible protective devices. In a collaborative application, safeguarding may be part of the inherent design and not visible to the operator. If operators do not trust the safety of the system and cannot visualize its boundaries, they may adjust their tasks to fit their own concept of what is safe and how it should be implemented.

Safeguarding boundaries must be defined for all robotic systems to reduce risks to a tolerable level. This process is done with a risk assessment that evaluates the probability of an occurrence and the severity of harm if the operator comes into contact with a hazard. Direct safeguarding methods create a physical separation between the operator and the robot. They are inefficient in terms of time, floor space, and resources, and place limits on the types of tasks that can be performed. Indirect safeguarding methods detect an intrusion and initiate a stop when a boundary is violated. While they allow the operator more convenient access into the safeguarded system when hazards are not present, a stop may be triggered by an unknown object, and the source may be difficult to evaluate.
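A risk assessment that combines occurrence probability and severity of harm can be sketched as a simple scoring function. The 1-to-3 rating scale and the tolerability threshold below are illustrative assumptions, not values prescribed by any risk-assessment standard.

```python
# Minimal risk-scoring sketch: risk as the product of occurrence
# probability and severity of harm, each rated 1 (low) to 3 (high).
# The tolerable threshold is an illustrative assumption.
def risk_score(probability: int, severity: int) -> int:
    """Combine probability and severity ratings into a single score."""
    return probability * severity

def is_tolerable(probability: int, severity: int, threshold: int = 3) -> bool:
    """A risk at or below the threshold is treated as tolerable."""
    return risk_score(probability, severity) <= threshold
```

In practice, any risk above the threshold would drive a design change or additional safeguarding before the application is released.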

Zones permit operators to access limited areas of the robot's workspace when no hazards are present, while the robot operates in another area. One design method enables maintenance and operator tasks to be completed in one area without stopping the robot. One difficulty with multiple zones is designing the safeguarding so it efficiently accounts for operator transition between zones without sacrificing the cycle time of the process. Events such as an operator's sudden change of movement to quickly re-enter a zone he or she just exited need to be accounted for in the design.
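The zone-transition concern above, including an operator's sudden re-entry into a zone just exited, can be sketched as a monitor that treats a recently vacated zone as still occupied for a short hold-off period. The zone model and hold-off duration are illustrative assumptions.

```python
import time

class ZoneMonitor:
    """Tracks operator zone occupancy; a short hold-off after an exit
    guards against a sudden re-entry into the zone just left."""
    HOLD_OFF_S = 2.0  # illustrative hold-off period (s)

    def __init__(self):
        self.operator_zone = None
        self.last_exit = {}  # zone name -> exit timestamp

    def operator_moved(self, zone, now=None):
        """Record the operator leaving the old zone and entering a new one."""
        now = time.monotonic() if now is None else now
        if self.operator_zone is not None:
            self.last_exit[self.operator_zone] = now
        self.operator_zone = zone

    def robot_may_enter(self, zone, now=None):
        """The robot avoids the occupied zone and any recently vacated zone."""
        now = time.monotonic() if now is None else now
        if zone == self.operator_zone:
            return False
        return now - self.last_exit.get(zone, -float("inf")) >= self.HOLD_OFF_S
```

The hold-off is a design choice that trades a small amount of cycle time for robustness against abrupt operator movement between zones.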

Sensors detect system changes and provide status information. For a safety system, activating a protective device changes its state so a signal is sent and the system can respond before the operator enters the hazardous area. Commonly used intrusion-detection sensors for robotic applications include light curtains, single-beam safety sensors, safety area scanners, and safety mats (Figure 1). Historically, collecting data about which device was activated and where was only practical with safety area scanners, which helped isolate the intrusion that triggered the event.

In the past, the costs of additional sensors for collecting data provided minimal benefit. But sensors are getting better, smaller, cheaper, and easier to integrate. Computing resources for analyzing sensor data also are becoming more effective and affordable. With these changes, sensor systems are now capable of storing and managing detailed data that can be used in future applications for predictive collaborative robotic safeguarding. Without sensors or vision systems, robots cannot adapt to unknown or unpredictable environments.

Most collaborative robot applications define the robot's path based on the required task. The environment could change and not affect the robot's path, but the change could affect how the operator interacts within the collaborative system. Future robot applications will need a way to adapt path planning so they can avoid collisions. One feature that could make collision avoidance easier is adding a seventh axis that would increase flexibility, allow the robot a wider range of motion, and facilitate movement around an operator.

When to Predict the Future

Current protective safety devices detect the moment an operator enters the hazard zone, but they cannot determine what part of the operator's body triggered the protective stop. With a safety area scanner, a slight intrusion by a foot could trigger an unintentional stop, since the boundaries are not clearly visible. A control station placed too close to a safety light curtain could also trigger unintentional stops, but on a more consistent basis. When triggered stops follow a regular pattern, it could be a sign of an upcoming maintenance issue, a discrepancy between the design's intended function and its actual use, or a problem within the process, such as parts jamming on a sharp edge of a conveyor.

Vision-based methods using cameras work reasonably well when people are well separated, minimally occluded, and in neutral poses. Pose estimation methods can detect when people are bending over or reaching out. The background and the robot are explicitly modeled, which enables the detection of people, even in changing environments. For people to work safely in the proximity of industrial robots, their positions within the system must be constantly monitored, regardless of what they are wearing or doing. Since people are not predictable, estimating their detailed body poses is a challenging problem.

It is possible to use a vision-based protective device (VBPD) employing stereovision techniques (VBPDST) to monitor user-configured 3D volumes. Stereo imaging detects how, and at what height, an operator's action triggers a stop. It uses two cameras to capture two images of a scene from two viewpoints. Given the locations and optical parameters of each camera, triangulation determines the 3D position of corresponding pixels in the two images. The relative depth of each point is inversely proportional to its disparity, the difference in position between the corresponding points in the two images.
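For a rectified stereo pair under the standard pinhole model, the inverse relationship between depth and disparity reduces to Z = f * B / d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity. A minimal sketch, with illustrative camera parameters:

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Depth (m) of a point from its disparity between two rectified
    cameras: Z = f * B / d. Depth is inversely proportional to disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Illustrative values: 700 px focal length, 10 cm baseline, 35 px disparity
# give a depth of 2.0 m.
```

This per-point depth is what lets the system report the height and location of an intrusion, not just its occurrence.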

Figure 2 shows a system using camera images to capture the event that triggered a stop. Since stereo imaging can compute the depth of each point, the system could also provide height and location information, which is useful in differentiating between actual and false activations. Safeguarding is sometimes bypassed when it impedes productivity. If operators have different perceptions about how the system is designed to work and it produces varied results, training could become an effective tool to establish consistency and improve quality.

When a Robot Makes its Own Decisions

An industrial robot requires extensive safeguarding because it does exactly what it is programmed to do. Any capability to make a decision depends on information given to the robot by another source, such as a switch, vision system, or sensor. The robot cannot distinguish between correct and incorrect data, which is why a safety system is required to monitor and shut down the system when an operator enters the hazard zone.

A power- and force-limited robot changes this scenario by using detection and sophisticated algorithms to make the robot inherently safe. Safeguarding can be reduced if there are no other hazards in the area; however, to avoid impacts strong enough to cause operator injury, the speed and payload of a power- and force-limited robot must be limited.

After inherently safe design, the next step is to give the robot artificial intelligence (AI) or machine intelligence so it can distinguish between good and bad data and formulate its action. Types of AI proven to be useful with sensor systems may include knowledge-based systems, fuzzy logic, automatic knowledge acquisition, neural networks, genetic algorithms, case-based reasoning, and ambient intelligence. AI combines a wide variety of advanced technologies to give machines the ability to learn, adapt, make decisions, and display new behaviors.

Knowledge-based systems are computer programs that facilitate problem-solving related to a specific domain. Knowledge is expressed as a combination of if/then rules, factual statements, frames, objects, procedures, and cases. The systems receive and send input and output signals through external connections to the outside world. They may be useful in collaborative applications where data is known and parameters can be defined.
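A knowledge-based system of the kind described above can be sketched as a small set of if/then rules evaluated against known facts. The facts, rules, and limits below are illustrative assumptions, not values from any collaborative-robot standard.

```python
# Minimal if/then rule sketch for a knowledge-based safeguarding check.
# Fact values and rule thresholds are illustrative assumptions.
facts = {"operator_present": True, "speed_mm_s": 400, "payload_kg": 3}

# Each rule pairs a condition over the facts with a recommended action.
rules = [
    (lambda f: f["operator_present"] and f["speed_mm_s"] > 250,
     "reduce speed for collaborative operation"),
    (lambda f: f["payload_kg"] > 5,
     "payload exceeds collaborative limit"),
]

def fired_actions(facts):
    """Return the actions of every rule whose condition holds."""
    return [action for condition, action in rules if condition(facts)]
```

Because the knowledge is explicit, such a system suits collaborative applications where the data is known and the parameters can be defined up front, as the text notes.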

Fuzzy logic applies graded, human-like reasoning instead of strict true or false values. This approach may be used in applications where the operator does not know the process and the sequence can change, such as using indicator lights to tell an operator the next step in a process. Neural networks obtain implicit knowledge through training. They can be trained by being presented with typical input patterns and the corresponding expected output patterns. This approach may be suitable for applications such as a process where similarly sized parts are arranged and assembled in the same order.
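The difference between fuzzy and binary reasoning can be shown with operator distance: instead of a hard near/far boundary, distance has a graded membership in "near", and the commanded speed blends slow and fast setpoints accordingly. The membership shape and speed values are illustrative assumptions.

```python
# Fuzzy-logic sketch: graded membership replaces a hard boundary.
# Distances and speed setpoints are illustrative assumptions.
def membership_near(distance_m: float) -> float:
    """Degree to which the operator is 'near': 1.0 at <= 0.5 m,
    0.0 at >= 2.0 m, linear in between."""
    if distance_m <= 0.5:
        return 1.0
    if distance_m >= 2.0:
        return 0.0
    return (2.0 - distance_m) / 1.5

def fuzzy_speed(distance_m: float, slow: float = 50.0,
                fast: float = 500.0) -> float:
    """Blend slow and fast setpoints by the 'near' membership degree."""
    near = membership_near(distance_m)
    return near * slow + (1.0 - near) * fast
```

At intermediate distances the output varies smoothly, which avoids the abrupt stop/go behavior a binary rule would produce.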

In a facility where solutions for known problems are defined, case-based reasoning may be applicable. It adapts solutions from previous problems into solutions for current ones. The stored solutions represent the experience of human specialists, held in a database. When a new problem occurs, the system compares it to previous cases and selects the one closest to the current problem.
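The retrieve-the-closest-case step can be sketched as a nearest-neighbor lookup over stored cases. The case features, distance measure, and solutions below are illustrative assumptions.

```python
# Case-based reasoning sketch: retrieve the stored case closest to a
# new problem and reuse its solution. Features and solutions are
# illustrative assumptions.
import math

cases = [
    ({"part_size_mm": 20, "weight_g": 100}, "gripper A, slow approach"),
    ({"part_size_mm": 80, "weight_g": 900}, "gripper B, two-hand hold"),
]

def case_distance(a: dict, b: dict) -> float:
    """Euclidean distance over the two illustrative case features."""
    return math.hypot(a["part_size_mm"] - b["part_size_mm"],
                      a["weight_g"] - b["weight_g"])

def retrieve(problem: dict) -> str:
    """Return the solution of the nearest stored case."""
    best = min(cases, key=lambda c: case_distance(c[0], problem))
    return best[1]
```

A full case-based system would also adapt the retrieved solution to the new situation and store the outcome as a new case.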

Ambient intelligence gathers information and knowledge from sensors within the environment to optimize processes. It creates a seamless interaction between people and sensor systems to meet actual and anticipated needs. This method could be used in a packaging facility where the rate and type of product change and automated intelligent vehicles may be deployed to different stations.

Conclusion

Power- and force-limited robots have paved the way for machines and humans to work together, but safely implementing a collaborative application currently limits productivity due to reduced speed and payload. To address these issues, new methods of predicting an operator's approach, speed, and direction need to be further developed.

Zone and sensor data, such as entry location, time within the collaborative workspace, and exit location, can be used to design systems, change the operator's existing tasks, or modify the robot's path planning so potential collisions are minimized. This data can also indicate potential maintenance issues by triggering alerts when access into the collaborative workspace becomes more frequent, and by monitoring usage and wear of components.

On a dynamically changing system, intelligent robotic safeguarding can be used to predict the operator's actions and adjust the path of a power- and force-limited robot. This method would improve productivity by allowing the robot to anticipate and avoid collisions while operating at higher speeds and payload levels.

This article was adapted from SAE Technical Paper 2017-01-0293 authored by Tina Hull of Omron, Hoffman Estates, IL. To obtain the full technical paper and access more than 200,000 resources for the aerospace, automotive, and commercial vehicle industries, visit the SAE MOBILUS site.