Imaging

Researchers Measure Stress in 3D-Printed Metal Parts

Lawrence Livermore National Laboratory researchers have developed an efficient method to measure residual stress in metal parts produced by powder-bed fusion additive manufacturing (AM). The 3D-printing process builds metal parts layer by layer, using a high-energy laser beam to fuse metal powder particles. When each layer is complete, the build platform moves downward by the thickness of one layer, and a new powder layer is spread on the previous one.

While the method produces quality parts and components, residual stress is a major problem during fabrication. Large temperature changes near the last melt spot, repeated layer after layer, cause localized expansion and contraction.

An LLNL research team, led by engineer Amanda Wu, has developed an accurate residual stress measurement method that combines a traditional stress-relieving method (destructive analysis) with modern technology: digital image correlation (DIC). The process provides fast, accurate measurements of surface-level residual stresses in AM parts.

The team used DIC to produce a set of quantified residual stress data for AM while exploring laser parameters. DIC is a cost-effective image-analysis method in which a dual-camera setup photographs an AM part once before it’s removed from the build plate for analysis and once after. The part is imaged, removed, and then re-imaged to measure the external residual stress.

Source

Also: Learn about Design and Analysis of Metal-to-Composite Nozzle Extension Joints.
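The article does not spell out the image-analysis step, but the core DIC idea can be sketched briefly: compare the before and after photographs and estimate how small patches of the part's surface moved between them. The function names, subset size, and search window below are assumptions for illustration, not the LLNL implementation.

```python
# Minimal sketch of digital image correlation (DIC): track the displacement of
# small image subsets between a "before" and "after" photograph of the part
# surface via normalized cross-correlation. Illustrative only.
import numpy as np

def track_subset(before, after, center, subset=21, search=10):
    """Return the integer-pixel (row, col) displacement of one subset.
    `before` and `after` are 2D grayscale arrays; `center` is away from edges."""
    r, c = center
    h = subset // 2
    ref = before[r - h:r + h + 1, c - h:c + h + 1].astype(float)
    ref = (ref - ref.mean()) / (ref.std() + 1e-9)

    best_score, best_shift = -np.inf, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            win = after[r + dr - h:r + dr + h + 1,
                        c + dc - h:c + dc + h + 1].astype(float)
            win = (win - win.mean()) / (win.std() + 1e-9)
            score = np.mean(ref * win)          # normalized cross-correlation
            if score > best_score:
                best_score, best_shift = score, (dr, dc)
    return best_shift

# Tracking a grid of such subsets gives a surface displacement field, from
# which strains (and hence residual stresses) can be inferred.
```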

Posted in: Cameras, Imaging, Photonics, Lasers & Laser Systems, Manufacturing & Prototyping, Rapid Prototyping & Tooling, Materials, Metals, Test & Measurement, Measuring Instruments, News

Read More >>

Moving Cameras Track Objects Automatically

University of Washington electrical engineers have developed a way to automatically track people across moving and still cameras by using an algorithm that trains the networked cameras to learn one another’s differences. The cameras first identify a person in a video frame, then follow that same person across multiple camera views.

“Tracking humans automatically across cameras in a three-dimensional space is new,” said lead researcher Jenq-Neng Hwang, a UW professor of electrical engineering. “As the cameras talk to each other, we are able to describe the real world in a more dynamic sense.”

Imagine a typical GPS display that maps the streets, buildings, and signs in a neighborhood as your car moves forward, then add humans to the picture. With the new technology, a car with a mounted camera could take video of the scene, then identify and track humans and overlay them onto the virtual 3-D map on your GPS screen. The UW researchers are developing this to work in real time, which could help pick out people crossing busy intersections or track a specific person who is dodging the police.

“Our idea is to enable the dynamic visualization of the realistic situation of humans walking on the road and sidewalks, so eventually people can see the animated version of the real-time dynamics of city streets on a platform like Google Earth,” Hwang said.

Source

Also: Learn about Machine Vision for High-Precision Volume Measurement.
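The article gives no algorithmic detail, but the idea of cameras “learning one another’s differences” can be illustrated with a toy re-identification sketch: describe each detected person by an appearance feature, fit a simple mapping between two cameras’ feature spaces from people seen in both views, and match new detections by nearest neighbor. All names and the feature choice below are illustrative assumptions, not the UW method.

```python
# Toy cross-camera person re-identification sketch.
import numpy as np

def color_histogram(person_crop, bins=8):
    """Appearance feature: a normalized RGB histogram of a person's image crop."""
    hist, _ = np.histogramdd(person_crop.reshape(-1, 3),
                             bins=(bins, bins, bins), range=((0, 256),) * 3)
    hist = hist.ravel().astype(float)
    return hist / (hist.sum() + 1e-9)

def learn_camera_mapping(feats_cam_a, feats_cam_b):
    """Learn a linear map from camera A's feature space to camera B's,
    using people observed by both cameras (least squares fit)."""
    A = np.asarray(feats_cam_a)          # shape (n_people, d)
    B = np.asarray(feats_cam_b)          # shape (n_people, d)
    M, *_ = np.linalg.lstsq(A, B, rcond=None)
    return M

def match(feat_a, gallery_b, M):
    """Map a camera-A detection into camera B's space; return the index of
    the closest known person in camera B."""
    mapped = feat_a @ M
    dists = [np.linalg.norm(mapped - g) for g in gallery_b]
    return int(np.argmin(dists))
```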

Posted in: Electronics & Computers, Cameras, Video, Visualization Software, Imaging, News

Read More >>

NASA Technologists Advance Next-Generation 3D Imaging

Building, fixing, and refueling space-based assets, or rendezvousing with a comet or asteroid, will require a robotic vehicle and a super-precise, high-resolution 3D imaging lidar that generates the real-time images needed to guide the vehicle to a target traveling at thousands of miles per hour.

A team of technologists at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, is developing a next-generation 3D scanning lidar — dubbed the Goddard Reconfigurable Solid-state Scanning Lidar (GRSSLi) — that could provide the imagery required to execute these orbital dances.

Equipped with a low-power, eye-safe laser, a micro-electro-mechanical scanner, and a single photodetector, GRSSLi will "paint" a scene with the scanning laser. Its detector will sense the reflected light to create a high-resolution 3D image at kilometer distances — a significant increase in capability over current imaging lidars, which are effective only at meter distances.

Just as important, the instrument is equipped with onboard "vision" algorithms that interpret the three-dimensional image returned by the lidar. The software estimates the location and attitude of a target relative to the lidar.

Source

Also: Learn about NASA's Asteroid Redirect Mission.
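The basic geometry behind such an image is straightforward to sketch: each pulse leaves along known scan angles, the photodetector times the echo, and time-of-flight plus the angles yield one 3D point. The short example below uses assumed values and names; it is illustrative only, not GRSSLi code.

```python
# Convert scan angles and echo timing into a 3D point cloud.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def pulse_to_point(azimuth_rad, elevation_rad, round_trip_time_s):
    """One laser pulse: scan direction plus round-trip time -> (x, y, z)."""
    rng = 0.5 * C * round_trip_time_s          # one-way range in meters
    x = rng * np.cos(elevation_rad) * np.cos(azimuth_rad)
    y = rng * np.cos(elevation_rad) * np.sin(azimuth_rad)
    z = rng * np.sin(elevation_rad)
    return np.array([x, y, z])

# Sweeping the MEMS mirror over a grid of angles and stacking the returns
# yields the point cloud the onboard "vision" algorithms would interpret.
scan = [pulse_to_point(az, el, 6.7e-6)          # ~1 km range echo
        for az in np.linspace(-0.05, 0.05, 5)
        for el in np.linspace(-0.05, 0.05, 5)]
point_cloud = np.vstack(scan)
print(point_cloud.shape)
```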

Posted in: Visualization Software, Imaging, Photonics, Lasers & Laser Systems, Aerospace, RF & Microwave Electronics, News

Read More >>

High-Res Line Camera Measures Magnetic Fields in Real Time

Scientists have developed a high-resolution magnetic line camera that measures magnetic fields in real time. The camera makes visible the field lines of magnetic systems such as generators and motors, which are invisible to the human eye. It is especially suited to industrial quality assurance during the manufacture of magnets.

Posted in: Cameras, Imaging, Manufacturing & Prototyping, Sensors, Test & Measurement, Measuring Instruments, News

Read More >>

University Opens New Ballistics and Impact Dynamics Lab

Wichita State University’s National Institute for Aviation Research recently opened a new ballistics and impact dynamics research lab in the former Britt Brown Arena at the Kansas Coliseum. The new ballistics lab, part of NIAR’s Environmental Test Labs, uses a custom-built ballistic firing device to propel 22- to 50-caliber rounds into components inside a concrete containment building. The tests are designed to simulate the impact of a structural failure on an aircraft.

Posted in: Cameras, Imaging, Test & Measurement, Monitoring, Aerospace, Aviation, Data Acquisition, Defense, News

Read More >>

Ultra-Thin 3D Display Promises Greater Energy Efficiency

An ultra-thin LCD screen, developed by a group of researchers from the Hong Kong University of Science and Technology, holds three-dimensional images without a power source, making the display technology a compact, energy-efficient way to present visual information.

In a traditional LCD, liquid crystal molecules are sandwiched between polarized glass plates. Electrodes pass current through the apparatus, influencing the orientation of the liquid crystals inside and manipulating the way they interact with the polarized light. The new displays ditch the electrodes, simultaneously making the screen thinner and decreasing its energy requirements. Once an image is uploaded to the screen via a flash of light, no power is required to keep it there. Because these so-called bi-stable displays draw power only when the image is changed, they are particularly advantageous in applications where a screen displays a static image most of the time, such as e-book readers or battery status monitors for electronic devices.

“Because the proposed LCD does not have any driving electronics, the fabrication is extremely simple. The bi-stable feature provides a low power consumption display that can store an image for several years,” said researcher Abhishek Srivastava.

The researchers went further than creating a simple LCD, however; they engineered their screen to display images in 3D.

Source

Also: Learn about a Rapid Prototyping Lab (RPL) Generic Display Engine.

Posted in: Electronics & Computers, Imaging, Displays/Monitors/HMIs, Energy Efficiency, Energy, News

Read More >>

Automated Imaging System Analyzes Underground Root Systems

Researchers from the Georgia Institute of Technology and Penn State University have developed an automated imaging technique for measuring and analyzing the root systems of mature plants. The technique, believed to be the first of its kind, uses advanced computer technology to analyze photographs taken of root systems in the field. The imaging and software are designed to give scientists the statistical information they need to evaluate crop improvement efforts.

“We’ve produced an imaging system to evaluate the root systems of plants in field conditions,” said Alexander Bucksch, a postdoctoral fellow in the Georgia Tech School of Biology and School of Interactive Computing. “We can measure entire root systems for thousands of plants to give geneticists the information they need to search for genes with the best characteristics.”

Imaging of root systems has, until now, largely been done in the laboratory, using seedlings grown in small pots and containers. Such studies provide information on the early stages of development, but do not directly quantify the effects of realistic growing conditions or field variations in water, soil, or nutrient levels.

The technique developed by the Georgia Tech and Penn State researchers uses digital photography to provide a detailed image of roots from mature plants in the field. Individual plants to be studied are dug up and their root systems washed clean of soil. The roots are then photographed against a black background using a standard digital camera pointed down from a tripod. A white fabric tent surrounding the camera system provides consistent lighting.

The resulting images are uploaded to a server running software that analyzes the root systems for more than 30 different parameters, including the diameter of tap roots, root density, the angles of brace roots, and detailed measures of lateral roots.

Source

Also: Learn about Strobing to Enhance Display Legibility.
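The published software is not described in detail here, but one of its simpler measurements can be sketched: segment root pixels against the dark background with a brightness threshold and report a density statistic. The threshold and function names below are assumptions for illustration, not the Georgia Tech code.

```python
# Sketch of one root-trait measurement: roots photographed against a black
# background are segmented by brightness, then summarized by a density ratio.
import numpy as np

def segment_roots(image_gray, threshold=60):
    """Boolean mask of root pixels: anything brighter than the dark background."""
    return image_gray > threshold

def root_density(mask):
    """Fraction of the image area occupied by roots (one of many possible
    descriptors; the published system reports more than 30 parameters)."""
    return mask.mean()

# Usage with a synthetic 8-bit grayscale image:
img = np.zeros((400, 300), dtype=np.uint8)
img[50:350, 140:160] = 200            # a bright vertical "tap root"
mask = segment_roots(img)
print(f"root density: {root_density(mask):.3f}")
```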

Posted in: Electronics & Computers, Cameras, Imaging, Software, Test & Measurement, Measuring Instruments, News

Read More >>

Imaging System Obtains More Color Information than Human Eye

Researchers at the University of Granada have designed a new imaging system capable of obtaining up to twelve times more color information than the human eye and conventional cameras, for a total of 36 color channels. The development will make it possible to capture multispectral images easily and in real time.

The technology could be used in the not-too-distant future to create new assisted vehicle driving systems, to identify counterfeit bills and documents, or to obtain more accurate medical images than those provided by current options.

The scientists, from the Color Imaging Lab group in the Optics Department at the University of Granada, designed the new system using a new generation of sensors in combination with a matrix of multispectral filters to improve their performance. Transverse Field Detectors (TFDs) extract the full color information from each pixel in the image without the need for a layer of color filters on top.

To do so, TFDs take advantage of a physical phenomenon by which each photon penetrates to a different depth depending on its wavelength, i.e., its color. By collecting these photons at different depths within the silicon surface of the sensor, the different color channels can be separated.

Source

Also: Learn about Imaging Space System Architectures.
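The article does not describe the reconstruction step, but the general principle of recovering many spectral channels from a smaller set of physical readings can be sketched as a linear estimation problem: if each reading is a known weighted sum of the spectral bands, the bands can be solved for by least squares. The sensitivity matrix below is random and purely illustrative, not measured TFD data.

```python
# Sketch: estimate a 36-channel spectrum per pixel from calibrated readings.
import numpy as np

rng = np.random.default_rng(0)

n_bands = 36          # target spectral channels per pixel
n_readings = 48       # raw measurements per pixel (filters x collection depths)

# Assumed calibrated sensitivity of each reading to each spectral band.
S = rng.uniform(0.0, 1.0, size=(n_readings, n_bands))

# A "true" per-pixel spectrum and the noisy readings it would produce.
spectrum_true = rng.uniform(0.0, 1.0, size=n_bands)
readings = S @ spectrum_true + rng.normal(0, 0.01, size=n_readings)

# Recover the 36-channel spectrum from the readings by least squares.
spectrum_est, *_ = np.linalg.lstsq(S, readings, rcond=None)
print("max reconstruction error:", np.max(np.abs(spectrum_est - spectrum_true)))
```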

Posted in: Cameras, Imaging, Sensors, Detectors, Medical, News, Automotive

Read More >>

Underwater Robot Skims for Port Security

MIT researchers unveiled an oval-shaped submersible robot, a little smaller than a football, with a flattened panel on one side that it can slide along an underwater surface to perform ultrasound scans.

Originally designed to look for cracks in nuclear reactors’ water tanks, the robot could also inspect ships for the false hulls and propeller shafts that smugglers frequently use to hide contraband. Because of its small size and unique propulsion mechanism — which leaves no visible wake — robots like it could, in theory, be concealed in clumps of algae or other camouflage. Fleets of them could swarm over ships at port without alerting smugglers and giving them the chance to jettison their cargo.

Sampriti Bhattacharyya, a graduate student in mechanical engineering, built the main structural components of the robot using a 3-D printer. Half of the robot — the half with the flattened panel — is waterproof and houses the electronics. The other half is permeable and houses the propulsion system, which consists of six pumps that expel water through rubber tubes.

Two of those tubes vent on the side of the robot opposite the flattened panel, so they can keep it pressed against whatever surface the robot is inspecting. The other four tubes vent in pairs at opposite ends of the robot’s long axis and control its locomotion.

Source

Also: Learn about Underwater Localization for Transit and Reconnaissance Autonomy.
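The control scheme is not described beyond the pump layout, but a toy “mixer” consistent with that layout can be sketched: a constant command for the two hold-down pumps, and common plus differential commands for the four locomotion pumps. Gains, names, and sign conventions below are hypothetical, not the MIT controller.

```python
# Toy pump mixer for the six-pump layout described above. Two pumps press the
# flat panel against the hull; four pumps, venting in pairs at opposite ends
# of the long axis, provide thrust and turning. Hypothetical sketch only.

def mix_pumps(surge_cmd, yaw_cmd, hold_cmd=1.0):
    """surge_cmd and yaw_cmd in [-1, 1], hold_cmd in [0, 1].
    Returns pump power levels clipped to [0, 1]."""
    clip = lambda x: max(0.0, min(1.0, x))

    forward = max(surge_cmd, 0.0)    # jets venting aft drive the robot forward
    backward = max(-surge_cmd, 0.0)  # jets venting forward drive it backward

    return {
        # Imbalance within a pair turns the robot (sign convention arbitrary).
        "aft_left":   clip(forward + 0.5 * yaw_cmd),
        "aft_right":  clip(forward - 0.5 * yaw_cmd),
        "fwd_left":   clip(backward - 0.5 * yaw_cmd),
        "fwd_right":  clip(backward + 0.5 * yaw_cmd),
        # Hold-down jets vent away from the flat panel, pressing it on the hull.
        "hold_1": clip(hold_cmd),
        "hold_2": clip(hold_cmd),
    }

# Example: cruise forward along a hull with a slight turn, pressed against it.
print(mix_pumps(surge_cmd=0.8, yaw_cmd=0.2))
```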

Posted in: Imaging, Manufacturing & Prototyping, Rapid Prototyping & Tooling, Motion Control, Power Transmission, Machinery & Automation, Robotics, News

Read More >>

Army Researchers Enable Night Lethality

In science fiction, technology problems are solved with the stroke of a writer's pen. In reality, science and technology research takes time and a lot of effort.

Posted in: Electronics & Computers, Imaging, Displays/Monitors/HMIs, Sensors, Defense, News

Read More >>
