Engineers have one very keen sense in life: ferreting out usefulness from perceived buzz. Trained as such, unless we are reading the weather report, the word “cloud” sends our buzz-o-meters into an alarm-sounding red zone. Nevertheless, let’s take a look at how the cloud might apply to us. Clearly, it has taken hold in the consumer world, and it offers potentially compelling uses in many engineering domains.

Figure 1. Architecture of an onsite compile farm with one server and multiple workers.

Cloud computing was not simply invented; it evolved rapidly from a primordial ooze of improving compute performance, expanding network bandwidth, and maturing networking infrastructure. The early mainframes started off as single-program, high-horsepower computing machines. As the compute capacity of mainframes improved, machine administrators faced the practical problem of keeping multiple machines running at capacity, which drove innovation in job scheduling, machine virtualization, and individual user terminals.

With this technology, multiple users could share the mainframe with the appearance that each had a private computing environment, running their programs in a virtualized sandbox. As mainframes began to look more and more like a bank of server PCs stitched together through networking, LAN infrastructure and the Internet became widely available. The cloud is simply the next step in this evolution: many physically co-located PCs are stitched together into a larger computing unit that is then virtualized to give many users, regardless of location or the power of their terminals, the illusion that they have many computers, or one giant computer, at their disposal.

What the Cloud Is, and Isn’t

What is the difference between the cloud and the Internet? Is it the same as a network-connected mainframe? The National Institute of Standards and Technology (NIST) defines cloud computing as a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.1 Let’s dissect this definition a little to understand the defining characteristics of cloud computing.

“On-demand network access” means that the cloud can be quickly accessed from anywhere, from any device with a network data connection. Of course, a computer with a wired or Wi-Fi connection is an obvious access point. It becomes more interesting when you think of other data connections abstracted to look like a simple Internet connection, like a smartphone on the cell network or a WiMAX mobile networking device.

“Shared pool,” also called multi-tenancy, works like any other pool of shared resources. Tenants can draw fewer resources during slow times and ramp up during peak times, perhaps while other tenants are ramping down. The pool supports the ebbs and flows of the various tenants more efficiently than if each user provisioned an environment that supports only their own needs. It is like the ultimate commune of computing.

“Computing resources” does not just mean “How many boxes do you want?” Resources in the cloud are modular and highly configurable. Users can configure their cloud environments with different processors, RAM sizes, operating systems, applications, storage levels, download/upload bandwidths, infrastructure layers, and services.

“Rapidly provisioned and released” is the key component for the scalability of cloud computing, allowing the pool of resources to quickly reconfigure based on the needs and loads of the users. Many self-hosted operations report that their servers run at 10 to 20% utilization under normal operating conditions, leaving headroom for periodic extraordinary usage patterns. Quick provisioning dynamically adjusts capacity for more efficient usage and, in many circumstances, significant cost savings.
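The economics behind that claim can be sketched with some back-of-the-envelope arithmetic. The load profile and hourly rate below are illustrative assumptions, not vendor pricing: a fleet statically sized for its worst-case hour sits mostly idle, while dynamically provisioned capacity pays only for what each hour actually needs.

```python
# Back-of-the-envelope comparison of static vs. dynamic provisioning.
# All numbers are illustrative assumptions, not real vendor pricing.

HOURLY_RATE = 0.10    # assumed cost per server-hour
PEAK_SERVERS = 100    # static fleet sized for the worst-case hour

# Hypothetical 24-hour load profile: servers actually needed each hour,
# mostly quiet with one midday spike.
load = [5, 4, 4, 4, 5, 6, 8, 10, 12, 15, 100, 80,
        40, 20, 12, 10, 9, 8, 7, 6, 6, 5, 5, 4]

static_cost = PEAK_SERVERS * len(load) * HOURLY_RATE   # pay for peak all day
dynamic_cost = sum(load) * HOURLY_RATE                 # pay only for demand

utilization = sum(load) / (PEAK_SERVERS * len(load))
print(f"average utilization of static fleet: {utilization:.0%}")
print(f"static cost:  ${static_cost:.2f} per day")
print(f"dynamic cost: ${dynamic_cost:.2f} per day")
```

With this assumed profile the static fleet averages roughly the 10 to 20% utilization the article cites, and dynamic provisioning cuts the daily bill by a large factor; real savings depend on how spiky the workload is and how fast provisioning reacts.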

“Minimal management” means that cloud providers take on much of the management burden: maintaining and upgrading the hardware, backing up the system, and providing APIs for programming, provisioning, and running applications. Management and maintenance become easier for users and programmers because updates to the cloud reach every user automatically, and, depending on the layer of cloud technology in use, providers offload many of the IT, redundancy, and uptime issues. Additionally, ongoing innovation is allowing the cloud to manage itself in many cases through automatic routines and procedures.

Using the Cloud

Figure 2. NI LabVIEW FPGA Cloud Compile Service main administration console shown receiving and scheduling compile jobs on dynamically configurable worker computers.

Now that we understand a little about what the cloud is in the academic sense, let’s explore its uses in engineering. Given the cloud’s mainframe lineage, one would be correct in assuming that it can tackle computationally complex problems by exploiting multiprocessing techniques. A high-fidelity simulation that might tie up a single PC for days, for instance, can finish fairly quickly in a correctly configured cloud setup.

On-demand storage is another popular use for the cloud. If you have virtually unlimited data storage and extreme computational power, why not just run entire applications in the cloud and access them from anywhere in the world? In some cases, security concerns and lingering technical details are still limiting factors, but they are also sources of innovation.

The promise of cloud computing is the virtually unlimited scalability of the processor cycles one has at one’s disposal. Of course, not all intensive tasks can be parallelized, but in theory, the cloud represents infinite open lanes of instruction crunching.
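The caveat that not everything parallelizes is worth making concrete. Amdahl’s law says that if a fraction p of a task can run in parallel and the remaining (1 - p) is inherently serial, the speedup on n processors is 1 / ((1 - p) + p/n). A quick calculation shows why even “infinite open lanes” have a ceiling:

```python
# Amdahl's law: speedup of a task on n processors when a fraction p of
# the work parallelizes perfectly and (1 - p) must still run serially.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# A task that is 95% parallel: a 5% serial portion caps the speedup
# at 1 / 0.05 = 20x no matter how many processors the cloud supplies.
for n in (4, 64, 1024):
    print(f"{n:>5} processors: {amdahl_speedup(0.95, n):5.1f}x speedup")
```

The lesson for cloud users is that the serial fraction, not the size of the machine pool, ultimately bounds how fast an intensive task can go, so profiling before provisioning pays off.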

In practice, companies are already seeing benefits. National Instruments is moving field-programmable gate array (FPGA) compilations into the cloud (Figure 1). In fact, the current beta program already allows users to simply select the NI Hosted Services compilation server and then use the LabVIEW FPGA tools in the same familiar way (Figure 2). The only difference is that compilation happens on an optimized, high-RAM dedicated computer in the cloud rather than on locally maintained servers or, worse, on the bogged-down development computer. The transfer of files and statuses is handled in the background over high-security Web service connections to a set of other cloud machines that handle authentication, license checking, scheduling, and, of course, the specialized and computationally intensive work of an FPGA compile.

The cloud represents both something special and something familiar for engineers. They have been using server banks and mainframes since long before Web 2.0 turned the idea into a buzzword; still, they should acknowledge, and indeed embrace, the never-before-seen levels of dynamic, on-demand, reconfigurable, ubiquitous computing resources that the cloud provides.

This article was written by Rick Kuhlman, NI LabVIEW FPGA Product Manager at National Instruments, Austin, TX. For more information, visit http://info.hotims.com/34453-121.

Reference

1. NIST.gov, Computer Security Division, Computer Security Resource Center.