Image Analysis Using a Feature Extraction Technique

Modar Horani and Mark Paulik

Interpreting a scene to extract high-level content is one of the most fundamental human capabilities. We routinely take it for granted, a fact that becomes evident when we attempt to teach a machine to accomplish the same task. When this ability is incorporated into a mobile robot, the robot has the potential to be used in a wide variety of civilian and military applications, including firefighting, perimeter security enforcement, bomb disposal, and exploration.

An important step in the process is to transform digital images, acquired via a camera, from pixel-based intensity values into a feature-based representation. These features can then be used by the robot to establish its location, track its position, and recognize objects. When the object is a lane line, its recognition is directly useful for road navigation.
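
To make this transformation concrete, the following minimal Python sketch uses OpenCV's SIFT implementation (the file name frame.png and the choice of detector are assumptions for illustration, not part of this work) to convert a raw grayscale image into keypoints and descriptors:

    import cv2

    # Hypothetical input file; any grayscale camera frame will do.
    image = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
    assert image is not None, "could not read frame.png"

    # A local-feature detector replaces raw pixel intensities with a sparse,
    # feature-based representation: keypoints (image locations) plus
    # descriptors (vectors summarizing local appearance at each keypoint).
    detector = cv2.SIFT_create()
    keypoints, descriptors = detector.detectAndCompute(image, None)

    print(f"{len(keypoints)} keypoints, descriptor array shape: {descriptors.shape}")

It is these descriptors, rather than the raw pixels, that the robot would compare across frames to localize itself or recognize an object.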

The Scale Invariant Feature Transform (SIFT) is one promising technique for extracting distinctive features from an image. Its advantage is that the extracted features are invariant to a variety of common practical image artifacts created by different viewing positions. For instance, an object with a known shape might appear smaller, or it might appear rotated because the angle of approach differs or the camera itself has rotated. The SIFT algorithm is also partially robust to changes in ambient illumination. In this work, we implement the SIFT algorithm and test it on a wide variety of images to validate its performance in practical scenarios common to autonomous robot navigation.
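
As a rough way to check this invariance property, the sketch below (again relying on OpenCV's cv2.SIFT_create rather than the implementation described in this work, with a hypothetical file name) synthetically rotates and scales an image, then counts how many SIFT descriptors still match between the two views under Lowe's ratio test:

    import cv2

    img = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)
    assert img is not None, "could not read object.png"

    # Simulate a different viewing position: rotate by 30 degrees and
    # shrink to half size.
    h, w = img.shape
    M = cv2.getRotationMatrix2D((w / 2, h / 2), 30, 0.5)
    warped = cv2.warpAffine(img, M, (w, h))

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img, None)
    kp2, des2 = sift.detectAndCompute(warped, None)

    # Lowe's ratio test: keep a match only if its best distance is clearly
    # better than the second-best candidate's.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    print(f"{len(good)} descriptor matches survive the scale and rotation change")

A substantial number of surviving matches under such a transform is the practical meaning of scale and rotation invariance in this context.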