Thanks to real-time public reporting and today’s roadside sensors, transportation agencies are increasingly able to inform drivers about weather conditions, traffic speed, and the class of vehicles nearby.

To communicate the situation on the road, cities use digital roadway signs, advisory radio, and, more recently, roadside units that broadcast the information directly to connected vehicles.

This move toward intelligent transportation systems (ITS) supports safer, faster roadways.

Agencies can take the data a step further and automate actual decisions to improve driving efficiency.

In an intelligent traffic system, for example, the reported information could be used to change the timing of traffic signals. Similarly, lane directions could be reversed to manage an influx of traffic heading in a single direction, such as vehicles exiting a sporting event.
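For illustration only, here is a minimal Python sketch of how such rule-based decisions might be automated from reported conditions. The report fields, thresholds, and the plan_response helper are hypothetical and are not part of any real ITS deployment or of ActiveVision itself.

```python
# Hypothetical sketch (not a real ITS API): turning reported roadway
# conditions into simple signal-timing and lane-direction decisions.
from dataclasses import dataclass


@dataclass
class RoadwayReport:
    vehicles_per_minute_inbound: float
    vehicles_per_minute_outbound: float
    visibility: str  # "good" or "bad"


def plan_response(report: RoadwayReport) -> dict:
    """Return illustrative control decisions for one corridor."""
    plan = {"green_seconds": 30, "reversible_lane": "normal"}

    # Favor the dominant direction when flow is heavily unbalanced,
    # e.g. traffic leaving a sporting event.
    if report.vehicles_per_minute_outbound > 3 * report.vehicles_per_minute_inbound:
        plan["green_seconds"] = 60
        plan["reversible_lane"] = "outbound"

    # Slightly longer green phases in poor visibility to reduce stop-and-go conflicts.
    if report.visibility == "bad":
        plan["green_seconds"] += 10

    return plan


print(plan_response(RoadwayReport(5.0, 40.0, "good")))
```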

A new machine vision tool extracts and reports valuable driving data from the standard traffic cameras already in place.

Southwest Research Institute (SwRI), a nonprofit organization based in San Antonio, TX, has announced the release of ActiveVision. The machine-vision tool will help transportation agencies autonomously detect and report changes in traffic conditions, such as real-time weather and anomalies affecting congestion.

Designed for integration with intelligent transportation systems (ITS), ActiveVision can be configured with existing traffic cameras to analyze roadway conditions – no human monitoring required.

“The goal is to help transportation officials enhance their ITS capabilities with advanced algorithms that autonomously scan vast amounts of visual data, extracting and reporting actionable data,” said Dan Rossiter, an SwRI research analyst leading ActiveVision development.

Rossiter spoke with Tech Briefs about how ActiveVision may add even more intelligence to today's smart traffic systems.

Tech Briefs: How do you envision ActiveVision's place within an Intelligent Transportation System?

Dan Rossiter: ActiveVision provides cost-effective access to actionable data using existing traffic cameras. By extracting and reporting this actionable data to transportation agencies, we remove the need for constant manual monitoring of often hundreds or thousands of camera feeds and allow the agencies to instead focus on responding to the incidents that require their immediate attention.

Tech Briefs: Was this data not used previously for intelligent traffic purposes?

Rossiter: Traditionally, the only data extracted from these traffic cameras was the video feed itself. Manual observation of the feeds by operators was required in order to know what was going on.

If there is one video feed, manual monitoring is sometimes practical. In practice, however, these agencies often have hundreds or even thousands of traffic camera feeds and generally do not have the manpower for manual observation. By programmatically detecting actionable data on these feeds as it occurs, agencies can spend their available resources responding to that data rather than sifting through many video feeds trying to find it.
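As a rough illustration of that "detect programmatically, alert only on change" idea, here is a minimal Python sketch. The get_latest_frame and classify_frame functions are hypothetical placeholders, not ActiveVision interfaces; the point is that only condition transitions reach the operator.

```python
# Illustrative sketch only: scan many camera feeds and report condition changes.
from typing import Callable, Dict, List, Tuple


def scan_feeds(camera_ids: List[str],
               get_latest_frame: Callable[[str], object],
               classify_frame: Callable[[object], str],
               last_state: Dict[str, str]) -> List[Tuple[str, str]]:
    """Return (camera_id, new_condition) pairs for cameras whose condition changed."""
    alerts = []
    for cam in camera_ids:
        condition = classify_frame(get_latest_frame(cam))  # e.g. "clear", "fog", "rain"
        if condition != last_state.get(cam):
            alerts.append((cam, condition))  # something actionable changed
            last_state[cam] = condition
    return alerts


# Example with stand-in functions:
state: Dict[str, str] = {}
print(scan_feeds(["cam-1", "cam-2"],
                 get_latest_frame=lambda cam: None,
                 classify_frame=lambda frame: "fog",
                 last_state=state))
```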

Tech Briefs: What kinds of data are being extracted from these existing traffic cameras?

Rossiter: In Phase I, ActiveVision capabilities are focused on weather detection. The system is capable of extracting qualitative driver visibility measures, indicating whether visibility as represented on one or many video feeds is "good" or "bad." Additionally, the system can detect what kind of weather is occurring, reporting when it is raining, foggy, actively snowing, or if there is snow on the ground.
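To make the "good"/"bad" visibility idea concrete, here is a toy Python heuristic that uses image contrast as a crude visibility proxy. This is only a hedged illustration of the kind of signal such a classifier might use, not SwRI's algorithm, and the threshold is arbitrary.

```python
import numpy as np


def visibility_label(gray_frame: np.ndarray, contrast_threshold: float = 25.0) -> str:
    """Toy proxy: heavy fog or snow tends to flatten contrast in a grayscale frame.

    Illustrative stand-in only; not ActiveVision's method.
    """
    contrast = float(gray_frame.std())  # spread of pixel intensities
    return "good" if contrast > contrast_threshold else "bad"


# Example with synthetic frames: a flat gray "foggy" image vs. a high-contrast one.
foggy = np.full((240, 320), 128, dtype=np.uint8)
clear = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
print(visibility_label(foggy), visibility_label(clear))  # -> bad good
```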

Tech Briefs: What is special about your system compared to other machine-vision options that exist for smart-traffic applications?

Rossiter: ActiveVision is entering the market with the ability to sense both changes in driver visibility and specific weather conditions, including snow, fog, and rain. These capabilities are completely camera-vendor agnostic, allowing the system to work with any traffic camera on the market today.

Impaired driver visibility and reduced roadway traction are both leading causes of roadway collisions. By providing a way for traffic agencies to quickly identify and respond to these conditions, we reduce the time between a condition occurring and the traveling public being alerted, thereby maximizing public safety.

Tech Briefs: What are the most useful capabilities of ActiveVision?

Rossiter: "Vision Zero" has long been a goal for transportation agencies – a vision of zero fatalities on their roadways. ActiveVision provides a cost-effective alternative to traditional traffic sensing capabilities, bringing this vision closer to a reality for both state departments of transportation and smaller municipalities.

These added capabilities are provided to the transportation agencies without any change to operations. Cameras that support pan, tilt, and zoom movements can continue to be used as they were previously, and our algorithms will adapt to these camera movements and continue to extract actionable data. This is novel in an industry where many applications of computer vision require that the camera be positioned at a known "preset" of X, Y, Z before sensing can function properly.

Tech Briefs: How does the ActiveVision approach change when you have a more complex camera setup? For example, one that pans and tilts?

Rossiter: Our approach is not reliant on consistent positioning of the camera, which allows ActiveVision to function with pan/tilt/zoom cameras as well as stationary cameras. We started this research with a plan to support standard traffic cameras without imposing restrictions on how the cameras are used outside of ActiveVision, and we met that goal. When a camera is repositioned, our algorithms re-learn their environment and continue to sense based on the new position of the camera.
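The interview does not describe how this re-learning works internally. As one plausible, hedged sketch, a global frame-difference check could detect that a pan/tilt/zoom camera has moved and trigger a reset of a running background model; the class below is purely illustrative.

```python
import numpy as np


class SceneModel:
    """Illustrative only: reset a running-average background when the camera moves."""

    def __init__(self, move_threshold: float = 30.0, alpha: float = 0.05):
        self.background = None  # running-average background estimate
        self.move_threshold = move_threshold
        self.alpha = alpha

    def update(self, gray_frame: np.ndarray) -> bool:
        """Feed one grayscale frame; return True if a camera move was detected."""
        frame = gray_frame.astype(np.float32)
        if self.background is None:
            self.background = frame.copy()
            return False

        # A large global change suggests the camera was repositioned,
        # so forget the old scene and start re-learning from this frame.
        moved = float(np.abs(frame - self.background).mean()) > self.move_threshold
        if moved:
            self.background = frame.copy()
        else:
            self.background = (1 - self.alpha) * self.background + self.alpha * frame
        return moved
```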

Tech Briefs: Where has this system been tested?

Rossiter: The system has been tested using traffic camera data sourced from our state Department of Transportation partners across the United States, including multiple states along the East Coast and in the Midwest. This broad testbed has given us the chance to train our algorithms on traffic camera feeds from a wide range of environments and weather conditions.

Tech Briefs: What's next? Where will this be used and implemented?

Rossiter: These weather-related capabilities have been implemented in what we've dubbed ActiveVision "Phase I." This is available today, and we are happy to speak with interested parties about providing a demo at any time.

ActiveVision is an evolving platform that will continue to grow in capability over the coming years, both through internal funding and by re-investing revenue from ActiveVision directly into new capabilities. With that in mind, we are currently securing internal funding for ActiveVision "Phase II," which will add many traditional traffic sensing capabilities.

This next round of capabilities is slated to include wrong-way driver detection, road vehicle counts, traffic speed detection, and other conditions. These capabilities will further strengthen the value of ActiveVision to transportation agencies and empower them to increase public safety and roadway efficiency.
