“Smart grid” is an umbrella term for the new technologies that aim to address today’s electrical power grid challenges. At a high level, these technologies address challenges associated with grid reliability and reactive maintenance, renewables integration, and disturbance detection. One way to help meet these challenges is to push decision-making and intelligence closer to the grid, embedded within flexible instrumentation, to achieve faster response times, better bandwidth utilization, and field-upgradable functionality that keeps field instruments current with the latest algorithms and methodologies for monitoring and protecting the grid.

Critical Components

Figure 1: Graphical FPGA design translated to independent parts of an FPGA.
There is no silver bullet when it comes to smart grid implementation; it is likely to be an ongoing global effort for years to come, requiring multiple iterations against constantly evolving requirements. On one side, standalone traditional instruments such as reclosers, power-quality meters, transient recorders, and phasor measurement units (PMUs) are robust, standards-based, and embedded, but they are designed to perform one or more specific, fixed tasks defined by the vendor; the user generally cannot extend or customize them. In addition, special technologies and costly components must be developed to build these instruments, making them expensive and slow to adapt. On the other side, the rapid adoption of the PC over the past 30 years catalyzed a revolution in instrumentation for test, measurement, and automation. Computers are powerful, open, I/O-expandable, and programmable, but they are neither robust nor embedded enough for field deployment.

One major development resulting from the ubiquity of the PC is the concept of virtual instrumentation, which offers several benefits to engineers and scientists who require increased productivity, accuracy, and performance. Virtual instrumentation bridges traditional instrumentation with computers, offering the best of both worlds: measurement quality, embedded processing power, reliability and robustness, open programmability, and field upgradability.

Virtual instrumentation is the foundation for smart-grid-ready instrumentation. Engineers and scientists working on smart grid applications where needs and requirements change very quickly need flexibility to create their own solutions. Virtual instruments, by virtue of being PC-based, inherently take advantage of the benefits from the latest technology incorporated into off-the-shelf PCs, and they can be adapted via software and plug-in hardware to meet particular application needs without having to replace the entire device.

Figure 2: Sequential vs. parallel implementation of a tap filter utilizing an FPGA with 2,016 DSP slices at 600 million samples per second (MSPS).
While software tools provide the programming environment to customize the functionality of a smart-grid-ready instrument, there is a need for an added layer of robustness and reliability that a standard off-the-shelf PC cannot offer. One of the most empowering technologies that adds this required level of reliability, robustness, and performance is the Field Programmable Gate Array (FPGA).

FPGAs

At the highest level, FPGAs are reprogrammable silicon chips. Using prebuilt logic blocks and programmable routing resources, you can configure these chips to implement custom hardware functionality without ever having to pick up a breadboard or soldering iron. You develop digital computing tasks in software and compile them down to a configuration file or bitstream that contains information on how the components should be wired together. In addition, FPGAs are completely reconfigurable and instantly take on a brand new “personality” when you recompile a different configuration of circuitry. In the past, FPGA technology was only available to engineers with a deep understanding of digital hardware design. The rise of high-level design tools, however, is changing the rules of FPGA programming, with new technologies that convert graphical block diagrams or even C code into digital hardware circuitry (Figure 1).
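As a simple illustration of the kind of logic that such high-level tools can translate into circuitry, the following C sketch implements an overcurrent trip function of the sort a recloser or protection relay might run. The function name, threshold, and holdoff count are hypothetical, and the example is generic rather than tied to any particular vendor tool.

```c
/*
 * Minimal sketch of logic that a high-level design tool could compile
 * into FPGA circuitry: a simple overcurrent detector that trips after
 * the measured current stays above a threshold for a set number of
 * consecutive samples. (Generic illustration; names and values are
 * hypothetical.)
 */
#include <stdbool.h>
#include <stdint.h>

#define TRIP_THRESHOLD  5000   /* raw ADC counts, hypothetical */
#define TRIP_HOLDOFF      16   /* consecutive samples before tripping */

bool overcurrent_trip(int32_t current_sample)
{
    static uint32_t over_count = 0;    /* becomes a counter register in hardware */

    if (current_sample > TRIP_THRESHOLD || current_sample < -TRIP_THRESHOLD)
        over_count++;
    else
        over_count = 0;

    return over_count >= TRIP_HOLDOFF; /* assert the trip output */
}
```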

FPGA chip adoption across all industries is driven by the fact that FPGAs combine the best parts of ASICs and processor-based systems. FPGAs provide hardware-timed speed and reliability, but they do not require high volumes to justify the large upfront expense of custom ASIC design. Reprogrammable silicon also has the same flexibility as software running on a processor-based system, but it is not limited by the number of processing cores available. Unlike processors, FPGAs are truly parallel in nature, so different processing operations do not have to compete for the same resources. Each independent processing task is assigned to a dedicated section of the chip and can function autonomously, without any influence from other logic blocks. As a result, the performance of one part of the application is not affected when additional processing is added (Figure 2).
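The contrast shown in Figure 2 can be sketched in C for clarity. On a processor, the loop form below executes one multiply-accumulate per iteration; on an FPGA, a high-level synthesis tool can unroll it so that every tap gets its own dedicated multiplier (DSP slice) and a new result is produced every clock cycle. This is a four-tap sketch for illustration only, not the full design referenced in Figure 2.

```c
/* Illustrative contrast between sequential and parallel filter forms. */
#define TAPS 4

/* Sequential form: TAPS iterations, one multiply-accumulate at a time. */
int fir_sequential(const int x[TAPS], const int h[TAPS])
{
    int y = 0;
    for (int i = 0; i < TAPS; i++)
        y += h[i] * x[i];
    return y;
}

/* Fully unrolled form: all four products can be computed in parallel
   when mapped to dedicated hardware multipliers (DSP slices). */
int fir_parallel(const int x[TAPS], const int h[TAPS])
{
    return h[0] * x[0]
         + h[1] * x[1]
         + h[2] * x[2]
         + h[3] * x[3];
}
```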

Figure 3: Moore’s Law comparing FPGA and CPU performance.
FPGA circuitry is truly a “hard” implementation of program execution. Processor-based systems often involve several layers of abstraction to help schedule tasks and share resources among multiple processes. The driver layer controls hardware resources and the operating system manages memory and processor bandwidth. For any given processor core, only one instruction can execute at a time, and processor-based systems are continually at risk of time-critical tasks pre-empting one another. FPGAs, which do not use operating systems, minimize reliability concerns with true parallel execution and deterministic hardware dedicated to every task. Taking advantage of hardware parallelism, FPGAs exceed the computing power of computer processors and digital signal processors (DSPs) by breaking the paradigm of sequential execution and accomplishing more per clock cycle.

Figure 4: Processor + FPGA combined architecture.
Moore’s law has driven the processing capabilities of microprocessors, and multicore architectures continue to push this curve higher (Figure 3). BDTI, a noted analyst and benchmarking firm, has released benchmarks showing that FPGAs can deliver many times the processing power per dollar of a DSP solution in some applications. Controlling inputs and outputs (I/O) at the hardware level also provides faster response times and specialized functionality that closely matches application requirements.

The inherent parallel processing of FPGAs has enabled them to scale at a similar rate while being optimized for different types of calculations. The best architectures take advantage of both technologies, pairing a processor with an FPGA (Figure 4).

As mentioned earlier, FPGA chips are field-upgradable and do not require the time and expense involved with ASIC redesign. Digital communication protocols, for example, have specifications that can change over time, and ASIC-based interfaces may cause maintenance and forward compatibility challenges. Being reconfigurable, FPGA chips are able to keep up with future modifications that might be necessary. As a product or system matures, you can make functional enhancements without spending time redesigning hardware or modifying the board layout.

Solution Implementation

Distributed systems, such as networks of PMUs, and distributed intelligence are not novel concepts. For mathematicians, a distributed system may mean farming out computing tasks to a computer grid. Facilities managers may imagine wireless sensor networks monitoring the health of a building. These examples share a fundamental theme: a distributed system is any system that uses multiple processors/nodes to solve a problem. Because of the tremendous cost and performance improvements in FPGA technology, and its application to building smart-grid-ready instrumentation, power engineers are finding more effective ways to meet smart grid challenges by adding more computing engines/nodes to smart grid systems.

Figure 5: Dataflow programming example.
Distributed intelligence promotes optimum network response times and bandwidth utilization, allows unprecedented amounts of data and grid control operations to be seamlessly managed through the system without clogging wireless networks, and enhances reliability through decentralized coordination instead of through the imposition of hierarchical control via a central SCADA system. However, designing multiple computing engines into a smart grid control system, and later managing those systems, has not been as easy as engineers might hope.

Developing distributed systems introduces an entirely new set of programming challenges that traditional tools do not properly address. For instance, in a sensor network, wireless sensors are self-organizing units that organically connect to other sensors in the vicinity to build a communication fabric. In another example, grid monitoring systems feature remotely distributed headless reclosers, power quality meters, circuit breakers, and PMUs that monitor and control different grid conditions while logging data to SCADA databases. The challenges engineers and scientists face in developing distributed systems include: (1) programming applications that take advantage of multiple processors/nodes based on the same or mixed architectures; (2) sharing data efficiently among multiple processors/nodes that are either directly connected on a single PCB or box, or remotely connected on a network; (3) coordinating all nodes as a single system, including the timing and synchronization between nodes; (4) integrating different types of I/O such as high-speed digital, analog waveforms, and phasor measurements; and (5) incorporating additional services to the data shared between nodes such as logging, alarming, remote viewing, and integration with enterprise SCADA systems.
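As a concrete illustration of challenge (2), the sketch below shows a node publishing a timestamped phasor record to a data concentrator over UDP. The record layout, port, and function names are hypothetical and deliberately simplified; this is not the IEEE C37.118 synchrophasor frame format or any specific SCADA protocol.

```c
/*
 * Simplified sketch of sharing a timestamped measurement between nodes.
 * A phasor record is sent over UDP to a data concentrator. Structure
 * layout and names are hypothetical (no byte-order or padding handling).
 */
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>
#include <stdint.h>

struct phasor_sample {
    uint64_t timestamp_ns;   /* time of measurement, e.g. GPS-disciplined */
    float    magnitude;      /* phasor magnitude, volts */
    float    angle_rad;      /* phasor angle, radians */
    uint16_t node_id;        /* which field device produced the sample */
};

int publish_sample(const struct phasor_sample *s, const char *host, uint16_t port)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0)
        return -1;

    struct sockaddr_in dest = {0};
    dest.sin_family = AF_INET;
    dest.sin_port   = htons(port);
    inet_pton(AF_INET, host, &dest.sin_addr);

    ssize_t n = sendto(sock, s, sizeof(*s), 0,
                       (struct sockaddr *)&dest, sizeof(dest));
    close(sock);
    return (n == (ssize_t)sizeof(*s)) ? 0 : -1;
}
```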

Graphical System Design

The graphical system design approach addresses these programming challenges by providing the tools to program dissimilar nodes from a single development environment, using a block diagram approach that engineers and scientists are familiar with (Figure 5). Engineers can then develop code to run on computing devices ranging from desktop PCs and embedded controllers to FPGAs and DSPs, all within the same development environment. The ability of one tool to transcend the boundaries of node functionality dramatically reduces the complexity and increases the efficiency of distributed application development.

Communication and Data Transfer

Distributed systems also require various forms of communication and data sharing. Addressing communication needs between functionally different nodes is challenging. While various standards and protocols exist for communication, one protocol usually cannot meet all of an engineer’s needs, and each protocol has a different API. This forces engineers designing distributed systems to use multiple communication protocols to complete the entire system. For deterministic data transfer between nodes, engineers are often forced to use complex and sometimes expensive solutions. In addition, any communication protocol or system an engineer uses must also integrate with existing enterprise SCADA systems. One way to address these often competing needs is to abstract away the specific transport layer and protocol. By doing this, engineers can use multiple protocols under the hood, unify the code development, and dramatically reduce development time.
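One way to picture such an abstraction is a thin interface that hides the protocol behind a common set of calls, so application code is written once and the transport is chosen at deployment time. The sketch below uses hypothetical names and is not drawn from any particular library.

```c
/*
 * Sketch of abstracting the transport layer behind one API. Application
 * code calls the interface and does not care whether the bytes travel
 * over TCP, a serial SCADA link, or shared memory. All names are
 * hypothetical.
 */
#include <stddef.h>

struct transport {
    int  (*open)(void *ctx, const char *endpoint);
    int  (*send)(void *ctx, const void *buf, size_t len);
    int  (*recv)(void *ctx, void *buf, size_t len);
    void (*close)(void *ctx);
    void *ctx;   /* protocol-specific state */
};

/* Application code is written once against the abstract interface. */
int publish_reading(struct transport *t, const void *reading, size_t len)
{
    return t->send(t->ctx, reading, len);
}

/* At deployment time, 't' might be bound to a TCP implementation on the
   control network and to a serial implementation on a legacy RTU link,
   without changing publish_reading() or the code that calls it. */
```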

Synchronizing Across Multiple Nodes

Another important component of many distributed systems is coordination and synchronization across the intelligent nodes of a network. For many grid control systems, the interface to the external system is through I/O: sensors, actuators, or direct electronic signals. A traditional instrument connected to a computer through GPIB, USB, or Ethernet can be considered a node in a distributed system because it provides in-box processing and analysis using a processor. However, the system developer may not have direct access to the inner workings of a traditional instrument, making it difficult to optimize the instrument’s performance within the context of an entire system.

Through virtual instrumentation platforms, engineers have more options for synchronization and control. FPGA-based reconfigurable I/O devices include dedicated circuitry that synchronizes multiple devices so they act as one for distributed and high-channel-count applications.
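As a small illustration of what node-level coordination involves, the sketch below aligns sample streams from two nodes that timestamp their data against a common reference clock (for example, GPS or IEEE 1588). The structure and function names are hypothetical; in practice this alignment is typically handled by the instrument hardware or driver rather than by application code.

```c
/* Sketch: find the first pair of samples from two nodes whose common-clock
   timestamps agree within a tolerance window. Names are hypothetical. */
#include <stddef.h>
#include <stdint.h>

struct timed_sample {
    uint64_t timestamp_ns;   /* referenced to a shared clock */
    double   value;
};

/* Returns 0 and sets (*ia, *ib) to the first aligned pair, or -1 if none. */
int align_streams(const struct timed_sample *a, size_t na,
                  const struct timed_sample *b, size_t nb,
                  uint64_t tolerance_ns, size_t *ia, size_t *ib)
{
    size_t i = 0, j = 0;
    while (i < na && j < nb) {
        uint64_t ta = a[i].timestamp_ns, tb = b[j].timestamp_ns;
        uint64_t diff = (ta > tb) ? ta - tb : tb - ta;
        if (diff <= tolerance_ns) {
            *ia = i;
            *ib = j;
            return 0;
        }
        if (ta < tb) i++; else j++;   /* advance the stream that lags */
    }
    return -1;
}
```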

This article was contributed by National Instruments, Austin, TX.