intuVision® R & D

 

intuVision has always been at the forefront of video content analysis technology, backed by extensive R&D. We are dedicated to continuously advancing the intelligent video content extraction algorithms used in our products and custom solutions.

Video Analytics R&D

Our strong R&D team prototypes the most advanced algorithms and implements them in our products to bring the best technology to our customers.

Visit this page each month for the latest R&D updates and results.

Glare Removal

At night, a vehicle's headlights illuminate the scene, and traditional video analytics solutions can detect the glare as a standalone object or as a large extension of the vehicle. Even when the vehicle itself is not in the camera view, the headlight beam of a passing car may trigger false alarms. intuVision VA provides an intelligent headlight glare removal option that mitigates such issues.

The images below demonstrate the new technique. As can be seen, the vehicle is correctly detected as a foreground object, while the pixels illuminated by the headlight are removed from the final processed image.

intuVision video analytics: an example of patented algorithms removing false headlight detections.

From left to right: 1) A vehicle, with the headlight turned on, enters the camera view. Note that only the vehicle is detected (a result of the glare removal technique). 2) The pixels illuminated by the headlight beam and the vehicle pixels get a high probability of belonging to the foreground. 3) The detected headlight beam pixels are filtered out. 4) The final processed image, devoid of any headlight beam false positives. The vehicle is correctly detected (indicated by the green bounding box).
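The filtering step can be illustrated with a small sketch. The code below is a hypothetical simplification, not intuVision's patented algorithm: it assumes beam pixels are bright but low-texture, while vehicle pixels show strong local contrast. All function names and thresholds are illustrative.

```python
def filter_beam_pixels(frame, fg_mask, bright_thresh=200, texture_thresh=15):
    """Return a copy of fg_mask with beam-like pixels removed.

    frame   -- 2D list of grayscale intensities (0-255)
    fg_mask -- 2D list of 0/1 foreground flags, same shape

    Illustrative heuristic: a foreground pixel is treated as headlight
    beam if it is very bright yet almost textureless, since diffuse
    glare on pavement lacks the local contrast of a real vehicle.
    """
    h, w = len(frame), len(frame[0])
    out = [row[:] for row in fg_mask]
    for y in range(h):
        for x in range(w):
            if not fg_mask[y][x]:
                continue
            # Local texture: max absolute difference to the 4-neighbours.
            nbrs = []
            if y > 0:
                nbrs.append(frame[y - 1][x])
            if y < h - 1:
                nbrs.append(frame[y + 1][x])
            if x > 0:
                nbrs.append(frame[y][x - 1])
            if x < w - 1:
                nbrs.append(frame[y][x + 1])
            texture = max(abs(frame[y][x] - n) for n in nbrs)
            # Bright, smooth regions are filtered out as beam glare.
            if frame[y][x] >= bright_thresh and texture < texture_thresh:
                out[y][x] = 0
    return out
```

In this toy model, the uniform bright patch of a beam is dropped from the foreground mask, while high-contrast vehicle pixels survive into the final detection.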


Robust Tracking Under Illumination Changes

Traditional background subtraction methods model the scene background in the RGB color space. This approach has an inherent limitation in handling illumination changes, since all three RGB channels are affected by illumination: as the scene gets brighter, the RGB values increase in unison, and vice versa. Thus, when clouds pass over a scene and darken it, an RGB model detects a change, resulting in false positive detections.

In addition to an RGB background model, we also support a model based on the CIE L*a*b* color space. This color space approximates the human visual system: the a* and b* channels capture color information along opponent color axes (like the human eye), while L* captures a lightness value similar to human perception. Because illumination and color information are decorrelated, we can detect changes in illumination and control the response of the background model accordingly.
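As a rough illustration of why L*a*b* helps, the sketch below converts an sRGB pixel to CIE L*a*b* using the standard D65 textbook formulas; this is a generic conversion, not intuVision's background model. Note how a darker version of the same surface shifts mostly in L*, while a* and b* change far less.

```python
def srgb_to_lab(r, g, b):
    """Convert one sRGB pixel (0-255 per channel) to CIE L*a*b*, D65 white."""
    def lin(c):
        # Inverse sRGB gamma (companding) to linear light.
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = lin(r), lin(g), lin(b)
    # Linear RGB -> XYZ (standard sRGB matrix).
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    # Normalise by the D65 reference white.
    x, y, z = x / 0.95047, y / 1.0, z / 1.08883

    def f(t):
        # CIE Lab nonlinearity with the linear low-light segment.
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116

    fx, fy, fz = f(x), f(y), f(z)
    L = 116 * fy - 16          # lightness
    a = 500 * (fx - fy)        # green-red opponent axis
    b_star = 200 * (fy - fz)   # blue-yellow opponent axis
    return L, a, b_star
```

A background model keeping per-pixel L*a*b* statistics can then treat a large L* difference with small a*/b* differences as an illumination change and adapt, rather than flag the pixel as foreground.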

The images below demonstrate our handling of illumination changes. The left image shows the response of the CIE L*a*b* model in bright sunlight, while the right shows the response when the scene darkens under cloud cover. As can be seen, the background model correctly handles this significant illumination change and creates no false positive detections, even in the lower left corner of the image, where the illumination change is greatest.

intuVision patented video analytics algorithms adapting to changes in scene and scene lighting.

Object Tracking with Dynamic Backgrounds

intuVision VA video analytics technology uses advanced algorithms to differentiate between the motion of foreground objects of interest and unwanted motion due to dynamic backgrounds. These algorithms allow intuVision VA to detect a boat while ignoring the rippling of the water. Motion due to dynamic backgrounds is mostly inconsistent or repetitive when observed over multiple frames; foreground objects, on the other hand, tend to move consistently and hence produce highly salient motion. We use this observation to detect foreground objects while eliminating false alarms caused by background motion, as illustrated below:
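The consistency test described above can be sketched as a per-pixel saliency accumulator. This is an illustrative simplification under our own assumptions (persistence across consecutive frames as the consistency measure), not the production algorithm:

```python
def update_saliency(saliency, prev_motion, motion, gain=1.0, decay=0.5):
    """One temporal-consistency update (hypothetical sketch).

    saliency    -- 2D list of floats, the running saliency map
    prev_motion -- 2D list of 0/1 motion flags from the previous frame
    motion      -- 2D list of 0/1 motion flags from the current frame

    Motion that persists across consecutive frames (e.g. a boat moving
    through the scene) accumulates saliency; intermittent motion (water
    ripple flicker) is decayed away. Thresholding the map afterwards
    keeps only salient foreground objects.
    """
    h, w = len(motion), len(motion[0])
    for y in range(h):
        for x in range(w):
            if motion[y][x] and prev_motion[y][x]:
                saliency[y][x] += gain      # consistent motion: reinforce
            else:
                saliency[y][x] *= decay     # flicker or stillness: fade
    return saliency
```

Over a few frames, a pixel crossed by a steadily moving boat builds up a high saliency value, while a pixel that only flickers with ripple motion stays near zero and falls below the detection threshold.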

The graphic consists of six images from a video surveillance feed. From left to right: 1) a black-and-white shoreline video with a boat approaching; 2) the raw motion pixels extracted from that frame; 3) the motion image; 4) the salient motion image; 5) only the salient motion above a specified threshold, which is just the boat; 6) the final tracking result, with the boat outlined in red.

Accurate Object Tracking with Shadow Removal

Shadows of objects, whether moving (e.g. people and vehicles) or stationary (e.g. trees or buildings), can cause problems in the detection and tracking of objects of interest. Most existing shadow detection and removal algorithms require cumbersome calibration or training and are not easy to set up and use. intuVision has developed novel algorithms that remove shadows from un-calibrated video cameras without any training phase, making them easy to use and deploy. Some examples of automated shadow removal are shown below:
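For illustration, a common calibration-free heuristic (not necessarily intuVision's patented method) classifies a foreground pixel as cast shadow when it is a darkened copy of the background with nearly unchanged chromaticity. The thresholds below are illustrative assumptions:

```python
def is_shadow(bg_rgb, px_rgb, lo=0.4, hi=0.9, color_tol=0.05):
    """Chromaticity heuristic: a pixel is cast shadow if it is a
    darkened version of the background with similar colour ratios.

    bg_rgb, px_rgb -- (r, g, b) tuples, 0-255 per channel
    lo, hi         -- allowed darkening range (shadow dims, never brightens)
    color_tol      -- max change in normalised chromaticity
    """
    br, bgc, bb = bg_rgb
    r, g, b = px_rgb
    bsum = br + bgc + bb
    psum = r + g + b
    if bsum == 0 or psum == 0:
        return False
    ratio = psum / bsum
    if not (lo <= ratio <= hi):       # must be darker, but not too dark
        return False
    # Normalised chromaticity must barely change under a cast shadow.
    return (abs(r / psum - br / bsum) < color_tol and
            abs(g / psum - bgc / bsum) < color_tol)
```

Pixels flagged as shadow are dropped from the foreground mask before the bounding box is fitted, so the box hugs the object rather than the object plus its shadow.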

Video surveillance feed of a parking lot. In the two images on the left, both the objects and their shadows are enclosed in green bounding boxes, indicating unsuccessful analysis; in the same images on the right, only the objects are boxed, showing our shadow-removal algorithms at work.

Intelligent Video Analytics Publications by intuVision

intuVision personnel have published widely in the areas of computational video, video surveillance, biometric systems, integrated solutions, and event understanding. The articles are listed below; to read a full paper, please contact us.

    "Contextual Video Clip Classification"
    S. Guler, A. Morde, I. Pushee, X. Ma, J. Silverstein, S. McAuliffe
    IEEE, Applied Imagery Pattern Recognition, Washington, DC, October 2012
    "Learning a Background Model for Change Detection"
    A. Morde, X. Ma, S. Guler
    IEEE, Workshop on Change Detection, Providence, RI, June 2012
    "GPU Enabled Smart Video Node"
    S. Guler, J. Silverstein, I. Pushee, X. Ma, A. Morde
    IEEE, Advanced Video and Signal-Based Surveillance, Klagenfurt, Austria, August 2011
    "Who, What, When, Where, Why and How in Video Analysis: An Application Centric View"
    S. Guler, J. Silverstein, I. Pushee, X. Ma, A. Morde
    IEEE, Advanced Video and Signal-Based Surveillance, Boston, MA, August 2010
    "Border Security and Surveillance System with Smart Cameras and Motes in a Sensor Web"
    S. Guler, T. Cole, J. Silverstein, I. Pushee, S. Fairgrieve
    SPIE Defense and Security Conference, April 2010
    "Automated person categorization for video surveillance using soft biometrics"
    M. Demirkus, K. Garg, S. Guler
    SPIE Defense and Security Conference, April 2010
    "Smart Sensing and Tracking with Video and Mote Sensor Collaboration"
    S. Guler, T. Cole
    IEEE International Conference on Technologies for Homeland Security, May 11-12, 2009
    "Inhibitory Surround and Grouping Effects in Human and Computational Multiple Object Tracking"
    O. Yilmaz, S. Guler, H. Ogmen
    SPIE Electronic Imaging Conference, Visualization and Perception, San Jose, CA, January 2008
    "Stationary Objects in Multiple Object Tracking"
    S. Guler, J. A. Silverstein, and I. Pushee
    IEEE, Advanced Video and Signal-Based Surveillance, London, U.K., September 2007
    "Video Scene Assessment with an Unattended Sensor Network"
    S. Guler, J. A. Silverstein, and K. Garg
    SPIE Europe, Security and Defence, Florence, Italy, September 2007
    "Abandoned Object Detection in Crowded Places"
    S. Guler and M. K. Farrow
    PETS Workshop, CVPR Conf., New York City, NY, June 18-23, 2006
    "Robust People Tracking with Metadata"
    S. Guler
    Proc. 32nd AIPR Conf., IEEE, Washington, D.C., Oct. 17-19, 2005
    "Video Tracking with Biometrics Access Control"
    S. Guler
    Proc. Biometrics Consortium Conf., Washington, D.C., Sept. 19-22, 2005