“Smart grid” is an umbrella term for the new technologies that aim to address today’s electrical power grid challenges. At a high level, these technologies target grid reliability and reactive maintenance, renewables integration, and disturbance detection. One way to address these challenges is to push decision-making and intelligence closer to the grid, embedded within flexible instrumentation, to achieve faster response times, better bandwidth utilization, and in-field functionality upgrades that keep instruments current with the latest algorithms and methodologies for monitoring and protecting the grid.

Critical Components

Figure 1: Graphical FPGA design translated to independent parts of an FPGA.
There is no silver bullet for smart grid implementation; it is likely to remain an ongoing global effort for years to come, requiring multiple iterations against constantly evolving requirements. On one hand, standalone traditional instruments such as reclosers, power-quality meters, transient recorders, and phasor measurement units (PMUs) are robust, standards-based, and embedded, but they are designed to perform one or more fixed tasks defined by the vendor, so the user generally cannot extend or customize them. In addition, special technologies and costly components must be developed to build these instruments, making them expensive and slow to adapt. On the other hand, the rapid adoption of the PC over the past 30 years catalyzed a revolution in instrumentation for test, measurement, and automation. Computers are powerful, open, I/O-expandable, and programmable, but they are neither robust nor embedded enough for field deployment.

One major development resulting from the ubiquity of the PC is the concept of virtual instrumentation, which offers several benefits to engineers and scientists who require increased productivity, accuracy, and performance. Virtual instrumentation bridges traditional instrumentation with computers, offering the best of both worlds: measurement quality, embedded processing power, reliability and robustness, open programmability, and field upgradability.

Virtual instrumentation is the foundation for smart-grid-ready instrumentation. Engineers and scientists working on smart grid applications, where needs and requirements change quickly, need the flexibility to create their own solutions. Because virtual instruments are PC-based, they inherently benefit from the latest technology incorporated into off-the-shelf PCs, and they can be adapted through software and plug-in hardware to meet particular application needs without replacing the entire device.

Figure 2: Sequential vs. parallel implementation of a multi-tap filter on an FPGA with 2,016 DSP slices running at 600 million samples per second (MSPS).
While software tools provide the programming environment to customize the functionality of a smart-grid-ready instrument, an added layer of robustness and reliability is needed that a standard off-the-shelf PC cannot offer. One of the most empowering technologies that delivers this required level of reliability, robustness, and performance is the field-programmable gate array (FPGA).

FPGAs

At the highest level, FPGAs are reprogrammable silicon chips. Using prebuilt logic blocks and programmable routing resources, you can configure these chips to implement custom hardware functionality without ever having to pick up a breadboard or soldering iron. You develop digital computing tasks in software and compile them down to a configuration file or bitstream that contains information on how the components should be wired together. In addition, FPGAs are completely reconfigurable and instantly take on a brand new “personality” when you recompile a different configuration of circuitry. In the past, FPGA technology was only available to engineers with a deep understanding of digital hardware design. The rise of high-level design tools, however, is changing the rules of FPGA programming, with new technologies that convert graphical block diagrams or even C code into digital hardware circuitry (Figure 1).
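To make that flow concrete, the sketch below shows the kind of plain C routine a high-level synthesis tool can turn into dedicated FPGA logic. It is purely illustrative: the protection-style task, the threshold values, and the function name are assumptions for this example, not anything prescribed by a particular toolchain, and a graphical design flow would express the same behavior as a block diagram instead.

```c
#include <stdint.h>
#include <stdbool.h>

#define TRIP_THRESHOLD  5000u   /* current limit in ADC counts (illustrative value)  */
#define TRIP_CYCLES     16u     /* consecutive over-limit samples required to trip   */

/* Hypothetical overcurrent check: returns true once the measured current
 * has exceeded the threshold for TRIP_CYCLES consecutive samples.
 * Synthesized to hardware, the comparison, counter, and trip flag become
 * dedicated logic that evaluates every sample with fixed latency.        */
bool overcurrent_trip(uint16_t current_sample, uint32_t *above_count)
{
    if (current_sample > TRIP_THRESHOLD)
        (*above_count)++;            /* accumulate consecutive over-limit samples */
    else
        *above_count = 0;            /* any in-range sample resets the counter    */

    return *above_count >= TRIP_CYCLES;
}
```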

FPGA chip adoption across all industries is driven by the fact that FPGAs combine the best parts of ASICs and processor-based systems. FPGAs provide hardware-timed speed and reliability, but they do not require high volumes to justify the large upfront expense of custom ASIC design. Reprogrammable silicon also has the same flexibility as software running on a processor-based system, but it is not limited by the number of processing cores available. Unlike processors, FPGAs are truly parallel in nature, so different processing operations do not have to compete for the same resources. Each independent processing task is assigned to a dedicated section of the chip and can function autonomously without any influence from other logic blocks. As a result, the performance of one part of the application is not affected when additional processing is added (Figure 2).
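The contrast in Figure 2 can be sketched in plain C (an illustrative example, not code from the article; the 8-tap size is an assumption). A processor executes the sequential loop roughly one multiply-accumulate at a time, while an FPGA can implement the unrolled form with a dedicated multiplier and adder for every tap, so all taps are computed in the same clock cycle.

```c
#define TAPS 8   /* illustrative tap count, not taken from the article */

/* Sequential view: a processor core walks the loop and performs
 * roughly one multiply-accumulate per iteration.                  */
int fir_sequential(const int x[TAPS], const int h[TAPS])
{
    int y = 0;
    for (int i = 0; i < TAPS; i++)
        y += x[i] * h[i];
    return y;
}

/* Parallel view: the unrolled data path an FPGA can implement, with
 * every product formed concurrently in its own DSP slice and the
 * results combined in an adder tree.                                */
int fir_parallel(const int x[TAPS], const int h[TAPS])
{
    int p0 = x[0] * h[0], p1 = x[1] * h[1], p2 = x[2] * h[2], p3 = x[3] * h[3];
    int p4 = x[4] * h[4], p5 = x[5] * h[5], p6 = x[6] * h[6], p7 = x[7] * h[7];
    return ((p0 + p1) + (p2 + p3)) + ((p4 + p5) + (p6 + p7));
}
```

Applying the same idea to the numbers in Figure 2, 2,016 DSP slices clocked at 600 MHz would, in the ideal fully parallel case, sustain on the order of 2,016 × 600 million ≈ 1.2 tera multiply-accumulates per second; treat that as an illustrative upper bound rather than a benchmark.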

Figure 3: FPGA and CPU performance trends compared against Moore’s Law.
FPGA circuitry is truly a “hard” implementation of program execution. Processor-based systems often involve several layers of abstraction to help schedule tasks and share resources among multiple processes. The driver layer controls hardware resources and the operating system manages memory and processor bandwidth. For any given processor core, only one instruction can execute at a time, and processor-based systems are continually at risk of time-critical tasks pre-empting one another. FPGAs, which do not use operating systems, minimize reliability concerns with true parallel execution and deterministic hardware dedicated to every task. Taking advantage of hardware parallelism, FPGAs exceed the computing power of computer processors and digital signal processors (DSPs) by breaking the paradigm of sequential execution and accomplishing more per clock cycle.
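As a rough back-of-the-envelope comparison (the tap count below is an assumption, not a figure from this article), consider an N-tap filter clocked at the same rate f_clk in both technologies:

$$
R_{\text{processor}} \approx \frac{f_{\text{clk}}}{N} \;\;\text{(one multiply-accumulate per cycle)}
\qquad
R_{\text{FPGA}} \approx f_{\text{clk}} \;\;\text{(all } N \text{ taps per cycle)}
$$

For a 128-tap filter, the fully parallel data path delivers one result every clock cycle instead of one every 128 cycles, roughly a 128× gain at the same clock rate, which is why an FPGA running at a few hundred megahertz can outpace a processor clocked several times faster on this class of workload.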