To provide complete monitoring of building motion during excitation, unit cameras imaged key points within the building’s structure, including the base of support columns and the junctures of the floors and ceilings with the support columns (Figure 1). A pattern of reflective markers within each field of view provided the analysis system with measurement points that it could use to calculate both motion and distortion within the structure. Each pattern allowed the determination of local effects, and correlation of data among the synchronized units allowed the determination of global effects.
One of the challenges the vision system needed to overcome was the effect of environmental illumination. The greater the contrast between the marker and the background, the faster and more reliably the vision system could identify the markers and measure their position. In some configurations, however, the environmental illumination could wash out marker visibility (Figure 2a). To resolve this problem, the researchers mounted infrared filters in the cameras to eliminate ambient light from the image, and used infrared lamps to ensure controlled illumination of the markers. The result was a significant increase in marker visibility (Figure 2b) and, thus, enhanced measurement speed and accuracy.
High Precision and Accuracy
The image analysis system identified markers within an image by looking for “blobs” — groups of contiguous pixels having the same color. Having identified a blob, the analysis software then determined the blob’s center by examining transitions in the color’s intensity within the blob (Figure 3). This centroid determination had a theoretical sub-pixel precision of 0.02 pixel, which corresponded to a measurement precision on the order of 0.01 mm. To create the mapping from pixels to real-world position, the researchers used a pinhole-camera model and geometric analysis of the camera and marker placements.
The host CPU’s processing power limited the number of markers that a unit could work with. Within that limit, however, the cameras and analysis units could identify markers on the structure anywhere within their field of view and calculate their absolute coordinates at a frequency of 60 Hz. For finer time resolution, the system could restrict processing to any 2-Mpixel rectangular region within the field of view, achieving a measurement rate of 120 Hz. These speeds were more than adequate to measure the typical vibration frequencies of building seismic responses.
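Why these rates suffice follows from the Nyquist criterion: a sampling rate f_s resolves motion up to f_s / 2. The building-frequency figures below are typical textbook values, not numbers from the article:

```python
# Sanity check (illustrative, assumed numbers): building fundamental
# frequencies typically fall between roughly 0.1 Hz (tall towers) and
# 10 Hz (stiff low-rise structures).
full_frame_hz = 60.0    # full field of view
roi_hz = 120.0          # 2-Mpixel region of interest

nyquist_full = full_frame_hz / 2    # highest resolvable frequency, full frame
nyquist_roi = roi_hz / 2            # highest resolvable frequency, ROI mode

max_building_freq_hz = 10.0
margin_full = nyquist_full / max_building_freq_hz   # 3x headroom
margin_roi = nyquist_roi / max_building_freq_hz     # 6x headroom
```

Even the full-frame mode leaves a factor-of-three margin above the stiffest structures, and the ROI mode doubles that.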
In order to test the accuracy of this innovative optical measurement system, EUCENTRE compared the movement results for the bases of the building’s support columns against the known movement of the shake table to which they were attached. Under low acceleration conditions, the column bases should undergo simple rigid translation, so that their movements should correspond to the table’s movement. As shown in Figure 4, the results were a nearly perfect match.
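The agreement in such a validation is typically quantified as a root-mean-square error between the two displacement traces. The EUCENTRE data is not reproduced here; the sketch below uses synthetic traces purely to show the form of the comparison:

```python
# Illustrative only: a synthetic 2 Hz shake-table displacement and a
# simulated camera measurement with small additive noise, compared via
# root-mean-square error. All values are assumptions, not EUCENTRE data.
import numpy as np

fs = 60.0                                  # camera sampling rate, Hz
t = np.arange(0, 10, 1 / fs)               # 10 s of samples
table = 5.0 * np.sin(2 * np.pi * 2.0 * t)  # reference displacement, mm

rng = np.random.default_rng(0)
camera = table + rng.normal(0.0, 0.01, t.size)  # 0.01 mm measurement noise

rmse = np.sqrt(np.mean((camera - table) ** 2))  # on the order of 0.01 mm
```

An RMSE on the order of the system's stated 0.01-mm precision, against a signal with millimeter-scale amplitude, is what a "nearly perfect match" like Figure 4 amounts to numerically.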
Machine vision has thus added an important new tool to earthquake retrofit evaluation. Because it relies simply on the placement of markers, optical instrumentation is relatively quick and inexpensive to install. Further, measurement sites are readily modified and augmented, allowing researchers to explore unanticipated regions of interest that may arise during testing. The high performance of modern, high-definition digital cameras and image processing systems provides speed, precision, and accuracy comparable to traditional methods. The result is that machine vision provides a relatively low-cost, easy-to-install method for the measurement and analysis of structural motion.
This article was written by Jean-Pierre Luevano, International Sales Manager at DALSA (Montreal, Canada), and Marco Diani, Owner of Image S (Milan, Italy). For more information, visit http://info.hotims.com/28060-148.