ATLAS project ATLASCAR

Perception

Perception on the ATLASCAR1 is based on a wide set of sensors. To combine them synergistically, a multimodal, multidimensional map is generated by geometrically registering and time-synchronizing the data streams from the several sensors.

Several algorithms have been developed to perceive the road, other cars, general obstacles, etc.

 

The next figures show a typical 3D world representation of a road scene using stereo camera data.

To date, several other ADAS-related algorithms have been developed.

Vision based Road Detection using Road Flood Fill

This algorithm uses pixel connectivity to flood fill the region enclosed by the road boundaries.
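The connectivity-based fill can be sketched as a breadth-first flood fill over an edge map, seeded from a point assumed to lie on the road surface (the grid, seed, and 4-connectivity here are illustrative choices, not the project's exact implementation):

```python
from collections import deque

def flood_fill_region(edges, seed):
    """Flood-fill the connected region starting from a seed pixel.

    edges: 2D list of 0/1 values, where 1 marks a boundary pixel
           (e.g. a detected lane edge).
    seed:  (row, col) starting point assumed to lie on the road.
    Returns the set of (row, col) cells belonging to the filled region.
    """
    rows, cols = len(edges), len(edges[0])
    region, queue = set(), deque([seed])
    while queue:
        r, c = queue.popleft()
        if not (0 <= r < rows and 0 <= c < cols):
            continue  # stop at the image border
        if (r, c) in region or edges[r][c]:
            continue  # already visited, or blocked by a boundary pixel
        region.add((r, c))
        queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return region
```

Seeded near the bottom-center of the image (in front of the vehicle), the returned region is the drivable road surface bounded by the detected edges.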

Road Lane Markers Extraction using Simple Statistical Parameters

To describe lane markers, this algorithm employs simple statistical parameters such as the marker's average width and width variance. These parameters are then compared against reference values to decide whether a blob is a lane marker or not.

Vision based Road Detection using Template Matching

Using classical template matching techniques, this algorithm is able to detect the central dashed lane markers.

Detections are then clustered and filtered in order to obtain a single lane marker.
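The matching step can be sketched as a normalized cross-correlation (NCC) sweep of a dash-shaped template over the image; positions whose response exceeds a threshold become detections to be clustered. This naive double loop is for illustration only (a real implementation would use an optimized routine such as OpenCV's matchTemplate):

```python
import numpy as np

def match_template(image, template):
    """Slide a template over a grayscale image; return the NCC response map.

    Responses lie in [-1, 1]; 1 means a perfect (up to brightness/contrast
    offset) match at that top-left position.
    """
    th, tw = template.shape
    ih, iw = image.shape
    t = template - template.mean()  # zero-mean template
    out = np.zeros((ih - th + 1, iw - tw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p * p).sum() * (t * t).sum())
            if denom > 0:
                out[r, c] = (p * t).sum() / denom
    return out
```

Thresholding the response map gives candidate dash positions; nearby candidates are then merged so each physical dash yields a single detection.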

Color segmentation based obstacle detection and avoidance

This algorithm employs a convex hull operation after log polar transformation in order to generate the free space around the robot.

In this image, the free space region is shown in blue. Orange road-maintenance pins are detected using HSV color-space segmentation.
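The spirit of the free-space computation can be illustrated with a simplified stand-in: map obstacle points into polar coordinates around the robot (the angular-sector grid below plays the role of the log-polar image's columns) and keep, per sector, the nearest obstacle range; everything closer than that boundary is free. The sector count and maximum range are arbitrary, and the convex-hull smoothing of the boundary used by the actual algorithm is omitted here:

```python
import math

def free_space_boundary(obstacles, num_sectors=8, max_range=10.0):
    """Nearest obstacle range per angular sector around a robot at the origin.

    obstacles:   iterable of (x, y) obstacle points in the robot frame.
    num_sectors: angular resolution (stand-in for log-polar columns).
    max_range:   range reported for sectors with no obstacle.
    """
    boundary = [max_range] * num_sectors
    sector_width = 2 * math.pi / num_sectors
    for x, y in obstacles:
        r = math.hypot(x, y)
        theta = math.atan2(y, x) % (2 * math.pi)
        k = int(theta / sector_width) % num_sectors
        boundary[k] = min(boundary[k], r)  # keep the closest obstacle
    return boundary
```

The resulting per-sector ranges delimit the navigable region; in the actual algorithm a convex hull over the log-polar obstacle set produces a smoother boundary.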

Inverse Perspective Mapping for multi-camera data fusion

To merge information from several cameras accurately, an Inverse Perspective Mapping operation is employed. The 4-camera sensing platform is actuated by a motorized pan and tilt unit.

Here are some results of the road's bird's-eye view, merging information from 2 cameras over several pan and tilt positions.
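At its core, IPM intersects each pixel's viewing ray with the assumed flat ground plane. A minimal sketch for a single pinhole camera pitched down by a known tilt angle follows; the parameter names (focal length, principal point, mounting height) are generic calibration placeholders, not the platform's actual values:

```python
import math

def pixel_to_ground(u, v, f, cu, cv, h, tilt):
    """Project an image pixel onto a flat ground plane (basic IPM).

    (u, v):  pixel coordinates; (cu, cv): principal point; f: focal length.
    h:       camera height above the road (m); tilt: downward pitch (rad).
    Returns (forward, lateral) ground coordinates in the vehicle frame,
    or None if the ray points above the horizon.
    """
    # Viewing ray in camera coordinates (z forward, y down, x right)
    xc, yc = (u - cu) / f, (v - cv) / f
    c, s = math.cos(tilt), math.sin(tilt)
    down = c * yc + s            # downward component after de-rotating tilt
    forward = -s * yc + c        # forward component
    if down <= 0:
        return None              # ray never hits the ground plane
    scale = h / down             # intersect with the plane h metres below
    return (scale * forward, -scale * xc)
```

Applying this mapping to every pixel of each camera (with each camera's own pose) places all views in one common bird's-eye frame, where they can be merged.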

Laser Assisted Inverse Perspective Mapping for multi-camera and laser data fusion

This algorithm uses a Laser Range Finder to validate the flat world hypothesis assumed by the Inverse Perspective Mapping algorithm.
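The validation step can be sketched as a per-beam comparison between the range the laser actually measures and the range a flat road would produce for that beam's depression angle; beams that deviate too much mark regions where the IPM result should not be trusted. The tolerance and the geometric model below are illustrative assumptions:

```python
import math

def expected_ground_range(h, depression):
    """Range to a flat ground plane for a beam pitched down by `depression`
    radians, from a sensor mounted h metres above the road."""
    return h / math.sin(depression)

def validate_flat_world(scan, expected, tol=0.2):
    """Flag, per beam, whether the measured range is consistent with a flat
    road (True) or violates the IPM flat-world hypothesis (False).

    scan, expected: per-beam ranges in metres; tol: relative tolerance.
    """
    return [abs(m - e) <= tol * e for m, e in zip(scan, expected)]
```

Beams returning much shorter ranges than expected indicate an obstacle or rising terrain; the corresponding image regions are excluded from the bird's-eye mosaic.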

Vehicle detection based on Haar Features

These results were obtained by training a cascade of Haar-like features to detect the rear of vehicles.
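The building block of such a cascade is the Haar-like feature, evaluated in constant time via an integral image (summed-area table). The sketch below shows the mechanism for one two-rectangle feature; the trained cascade combines thousands of such features with learned thresholds:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero top row and left column, so that any
    box sum can be read with four lookups."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii, r, c, h, w):
    """Sum of the h x w box with top-left corner (r, c), in O(1)."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_vertical_edge(ii, r, c, h, w):
    """Two-rectangle Haar-like feature: left half minus right half.

    A large response indicates a vertical intensity edge, such as the side
    of a vehicle's rear against the road."""
    half = w // 2
    return box_sum(ii, r, c, h, half) - box_sum(ii, r, c + half, h, half)
```

During detection, a window slides over the image at multiple scales and each cascade stage rejects windows whose feature responses fall below its learned threshold.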

LIDAR based obstacle detection and tracking

This algorithm implements obstacle detection and tracking on laser rangefinder scans using a Kalman filter. The next picture shows a scene and a laser scan in which people are being tracked.
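A common formulation, sketched here, tracks each obstacle's 2D position with a constant-velocity Kalman filter: the state holds position and velocity, the laser supplies position measurements, and the noise magnitudes below are illustrative, not the tuned values:

```python
import numpy as np

def make_cv_model(dt=0.1, q=0.01, r=0.05):
    """Constant-velocity model: state [x, y, vx, vy], measured [x, y]."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)   # state transition
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)    # laser measures position only
    return F, H, q * np.eye(4), r * np.eye(2)

def kalman_step(x, P, z, F, H, Q, R):
    """One predict-update cycle with a laser-measured obstacle position z."""
    x = F @ x                    # predict state forward one scan period
    P = F @ P @ F.T + Q
    y = z - H @ x                # innovation: measurement minus prediction
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

Each laser scan is first segmented into obstacle blobs, each blob's centroid is associated with an existing track, and `kalman_step` refines that track; the velocity estimate falls out of the filter without being measured directly.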

Path planning using trajectory nodes in 2D

Using the sensor information registered in a 2D bird's-eye view of the world, we employ a non-holonomic model of the vehicle to generate a set of possible trajectories.

These trajectories are then compared against both the visual and the laser information using several criteria, eventually enabling the algorithm to choose the most adequate trajectory.
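The candidate generation can be sketched with a kinematic bicycle model rolled out at constant steering angles; the wheelbase, speed, and steering set below are illustrative parameters, and the scoring against sensor data is only indicated in the usage note:

```python
import math

def arc_trajectory(v, steering, wheelbase, dt=0.1, steps=20):
    """Roll out one candidate path under a kinematic bicycle model.

    v:         speed (m/s); steering: front-wheel angle (rad);
    wheelbase: distance between axles (m), an illustrative parameter.
    Returns the list of (x, y, heading) states along the path.
    """
    x = y = yaw = 0.0
    path = [(x, y, yaw)]
    for _ in range(steps):
        x += v * math.cos(yaw) * dt
        y += v * math.sin(yaw) * dt
        yaw += v / wheelbase * math.tan(steering) * dt  # non-holonomic turn
        path.append((x, y, yaw))
    return path

def candidate_trajectories(v, wheelbase, steering_angles):
    """One rollout per steering angle: the set of feasible trajectory nodes."""
    return {a: arc_trajectory(v, a, wheelbase) for a in steering_angles}
```

Each candidate is then scored, e.g. by how much of it lies inside the detected free space and how far it stays from laser obstacles, and the best-scoring arc is executed for one planning cycle.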