As early as 25 years ago, industrial system integrators saw the great potential that the Windows operating system brought to PCs. They saw the possibility of using the advanced graphics capabilities that Windows offered in place of the relatively primitive human interfaces of DOS-based applications and other proprietary OSes. Windows enabled the development of controllers with advanced human-machine interfaces (HMIs) that provide a whole new level of functionality and make machines easier to use and maintain.

Figure 1. Loop speed versus processing speed.
The problem with Windows, however, is that it isn't deterministic. Factory automation applications typically involve motion control systems, which rely on timely reading of hardware position sensors to provide feedback on the position of motion axes. The Windows OS is not designed to respond to outside stimuli in a predictable amount of time and therefore, by itself, cannot be used to control applications involving multiple rapid events that must occur at specific times. So most early industrial PCs were limited to serving as operator interfaces, or were interfaced to a second computer that ran a real-time operating system (RTOS). In due time, multi-workload environments made it possible to run Windows and an RTOS side by side, giving machine builders a single system that hosts both environments.

Recently, industrial PCs have begun to be fitted with multicore processors that offer enormous processing capacity and many enhanced computing features, enabling them to perform functions that were once exclusive to application-specific processors such as digital signal processors (DSPs). OEMs seeking to decrease system costs are looking to leverage multicore processors to consolidate control functions that have traditionally been implemented on several separate pieces of computing hardware. But adding more real-time tasks, plus human interface tasks, and distributing those among multiple processor cores while preserving the real-time responsiveness of the overall system is a significant challenge.

Fundamentals of Control Applications

To better understand the challenge of running several real-time functional blocks, such as motion control operations, on the same computing platform, it is useful to review the fundamentals of what a real-time control system consists of from a software perspective. Machines are controlled by control loops. A control system sends a command to a motion system on the machine and then periodically samples the result, making corrections until execution of the command is complete. The faster and more complex the action, the faster the periodic sampling and correction (called the control loop) must be.
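To make that cycle concrete, here is a minimal, self-contained sketch in C of the command-sample-correct cycle. The simulated axis, the helper names (read_position, apply_correction), and the gain and tolerance values are illustrative assumptions rather than any vendor's API; a real controller would read hardware position sensors and command real drives.

    /* Minimal sketch of the command-sample-correct cycle. The simulated
     * axis and helper names are illustrative assumptions only. */
    #include <stdio.h>

    #define LOOP_PERIOD_S 0.001            /* assume a 1 ms control period */

    static double position = 0.0;          /* simulated axis position */

    static double read_position(void)      /* stands in for a sensor read */
    {
        return position;
    }

    static void apply_correction(double drive)  /* stands in for a drive command */
    {
        position += drive * LOOP_PERIOD_S; /* crude plant: drive sets velocity */
    }

    int main(void)
    {
        const double target = 10.0, tolerance = 0.01, gain = 50.0;

        /* The control loop: each period, sample the position, compute the
         * error, issue a correction, and repeat until the move completes. */
        for (int tick = 0; ; ++tick) {
            double error = target - read_position();
            if (error > -tolerance && error < tolerance) {
                printf("move complete after %d loop periods\n", tick);
                break;
            }
            apply_correction(gain * error);    /* proportional correction */
        }
        return 0;
    }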

From a software point of view, a control loop as run by an RTOS consists of a high-priority sampling thread that is triggered by an event such as an internal clock, which interrupts background processing by the computer. This thread reads data from machine sensors and re-enables interrupts when complete. The RTOS then goes off to run the next-highest-priority thread. Frequently, the task that is resumed is the thread where the acquired data is processed and the result is used to make corrections to the system as required. Or the resumed task could be updating a Windows HMI application running alongside the RTOS control loop. If nothing needs to be done, the processor may simply stay in an idle state until the control loop starts again.
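That structure can be sketched with standard POSIX primitives (a monotonic clock, SCHED_FIFO priorities, a semaphore) standing in for an RTOS's own timer and scheduling APIs. The 1 ms period, the priority values, and the read_sensor/write_actuator placeholders are assumptions for illustration only; fixed-priority scheduling typically also requires elevated privileges.

    /* Sketch of a periodic sampling thread handing data to a lower-priority
     * processing thread. POSIX stands in for an RTOS; link with -lpthread. */
    #define _POSIX_C_SOURCE 200809L
    #include <pthread.h>
    #include <sched.h>
    #include <semaphore.h>
    #include <time.h>

    #define PERIOD_NS 1000000L                 /* 1 ms sampling period */

    static sem_t  data_ready;                  /* sampling -> processing handoff */
    static double latest_sample;

    static double read_sensor(void)        { return 0.0; }  /* placeholder */
    static void   write_actuator(double v) { (void)v; }     /* placeholder */

    /* Highest priority: woken by the clock, reads the sensors, then yields. */
    static void *sampling_thread(void *arg)
    {
        struct timespec next;
        (void)arg;
        clock_gettime(CLOCK_MONOTONIC, &next);
        for (;;) {
            next.tv_nsec += PERIOD_NS;         /* compute the next clock tick */
            if (next.tv_nsec >= 1000000000L) { next.tv_sec++; next.tv_nsec -= 1000000000L; }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
            latest_sample = read_sensor();     /* keep this as short as possible */
            sem_post(&data_ready);             /* wake the processing thread */
        }
        return NULL;
    }

    /* Lower priority: does the heavier correction math between samples. */
    static void *processing_thread(void *arg)
    {
        (void)arg;
        for (;;) {
            sem_wait(&data_ready);
            write_actuator(-0.5 * latest_sample);   /* placeholder correction */
        }
        return NULL;
    }

    static void spawn(void *(*fn)(void *), int priority)
    {
        pthread_t tid;
        pthread_attr_t attr;
        struct sched_param sp = { .sched_priority = priority };
        pthread_attr_init(&attr);
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
        pthread_attr_setschedpolicy(&attr, SCHED_FIFO);  /* fixed-priority, preemptive */
        pthread_attr_setschedparam(&attr, &sp);
        pthread_create(&tid, &attr, fn, NULL);
    }

    int main(void)
    {
        sem_init(&data_ready, 0, 0);
        spawn(sampling_thread, 90);       /* sampling outranks processing */
        spawn(processing_thread, 80);
        pthread_exit(NULL);               /* keep the worker threads running */
    }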

From a graphical point of view, an application that performs periodic monitoring of an event can be depicted as a loop, as shown in Figure 1. Increasing the processing capacity without decreasing the loop time leaves more processor idle time, as shown in Figure 1 (A → A-1), while speeding up the loop decreases the overall time that the loop takes to complete as well as the idle time (A → A-2 and A-1 → A-3). Reducing loop times is sometimes necessary when more precise control is required.
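As a rough worked example (with assumed numbers, not values taken from the figure): a 1 ms loop that spends 600 µs sampling and processing is 40 percent idle; doubling the processor's throughput cuts the busy time to about 300 µs, leaving the same 1 ms loop 70 percent idle (A → A-1); tightening that faster work into a 0.5 ms loop instead brings the idle time back down to 40 percent while doubling the control rate (A-1 → A-3). In each case the idle fraction is simply one minus the busy time divided by the loop period.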

Figure 2. Shown are two control loops, Loop A (a 1 ms loop) and Loop B (a 3 ms loop). When combined as shown in “3x Loop A + Loop B,” with Loop A, the fastest of the loops, having priority over Loop B, Loop A’s sampling threads interrupt Loop B’s processing thread, and Loop A’s processing thread preempts it as well. Note the effect that the loops interrupting each other has on the response time of the control correction of each loop.
It gets interesting when one tries to integrate two time-critical workloads, such as two independent control loops, on the same processor, as depicted in Figure 2. For example, consider a machine that performs motion control and interfaces to remote motor drives via an Ethernet-based control bus such as EtherCAT or PROFINET. Both functions, the motion control and the Ethernet-based control bus, must be serviced at specific time intervals that are typically asynchronous to each other. In situations such as this, certain things need to be understood and accounted for, including:

1. The data acquisition typically needs to happen at a prescribed time, or at least within the loop time. This means that the sampling threads must have priority over all other threads, and the fastest control loop will probably be given priority over the slower one. So, as shown in Figure 3, the priorities might be: sampling thread of Loop A, sampling thread of Loop B, processing thread of Loop A, processing thread of Loop B, and then any other threads (a sketch of this ordering appears after this list). In the diagram, because Loop A has to acquire its data at a certain time, it has to interrupt Loop B during Loop B’s data-processing cycle to acquire its data, and then return control to Loop B until Loop B has completed its data processing, before Loop A’s data processing can start.
2. Every time a thread is interrupted, context switching takes place (context switching refers to saving all of the information that describes the previous state of the machine so that processing of the interrupted task can be resumed after the interrupt is handled). This burns precious time, especially in cases where several fast control loops are running at the same time. It is costly even in the example above, where only two control loops are running on the same processor.
3. Since both loops are running asynchronously to each other, it is possible for their sampling threads to coincide. When this happens, because Loop A’s sampling thread is higher priority than Loop B’s, handling Loop A’s interrupt will take precedence. Good programming practice requires that interrupts be re-enabled as soon as the data has been read by the sampling thread. Any delay will affect the data acquisition performed by Loop B’s sampling thread, which is initiated by the clock event that causes the Loop B interrupt.
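The priority ordering from point 1 might be laid out as follows. This is a sketch using POSIX SCHED_FIFO as a stand-in for an RTOS scheduler, with placeholder thread bodies and illustrative period and priority numbers; the only point being made is the relative ordering, with Loop A sampling above Loop B sampling, above Loop A processing, above Loop B processing, and everything else below.

    /* Priority table for the two-loop example: sampling outranks processing,
     * and the faster loop (A, 1 ms) outranks the slower one (B, 3 ms).
     * Priority values are illustrative; an RTOS defines its own range.
     * Link with -lpthread. */
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    struct loop_thread {
        const char *name;
        long        period_ns;
        int         priority;            /* SCHED_FIFO: higher preempts lower */
        void      *(*entry)(void *);
    };

    static void *sample_a(void *a)  { (void)a; return NULL; } /* read Loop A sensors */
    static void *sample_b(void *a)  { (void)a; return NULL; } /* read Loop B sensors */
    static void *process_a(void *a) { (void)a; return NULL; } /* Loop A corrections  */
    static void *process_b(void *a) { (void)a; return NULL; } /* Loop B corrections  */

    static const struct loop_thread table[] = {
        { "Loop A sampling",   1000000L, 90, sample_a  },
        { "Loop B sampling",   3000000L, 85, sample_b  },
        { "Loop A processing", 1000000L, 80, process_a },
        { "Loop B processing", 3000000L, 75, process_b },
    };

    int main(void)
    {
        for (unsigned i = 0; i < sizeof table / sizeof table[0]; ++i) {
            pthread_attr_t attr;
            struct sched_param sp = { .sched_priority = table[i].priority };
            pthread_t tid;
            pthread_attr_init(&attr);
            pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
            pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
            pthread_attr_setschedparam(&attr, &sp);
            pthread_create(&tid, &attr, table[i].entry, NULL);
            printf("%-18s priority %d, period %ld ns\n",
                   table[i].name, table[i].priority, table[i].period_ns);
        }
        return 0;
    }

Keeping the sampling bodies short, as point 3 recommends, limits both the context-switching overhead from point 2 and the delay imposed on the other loop’s data acquisition.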

