Lane Tracking via Computer Vision

David Rose
Nov 8, 2017

As part of the first term of my Udacity Self-Driving Car program, many topics relating to computer vision and how it integrates into driving a car were taught, through projects that mirror real-life problems encountered when programming an autonomous car.

Here I will walk through the steps I used and the techniques involved.

Some techniques involved:

  • Image calibration and transformation
  • Image thresholding via gradients
  • Regression to track lane curvature

Calibrating and Transforming Perspective

The lens of every camera introduces a slight amount of distortion into the images it captures, especially around the corners. Using a printed chessboard image and tracking its corners with a function from OpenCV, the image can be transformed slightly to straighten out the lines, and that transformation can then be applied to subsequent images from the camera. Below is an example of image un-distortion:
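A rough sketch of the calibration step looks like the following; the 9×6 corner count and the camera_cal/ folder are illustrative placeholders rather than exact project values:

    import glob
    import cv2
    import numpy as np

    # Object points for a 9x6 inner-corner chessboard (illustrative size)
    objp = np.zeros((6 * 9, 3), np.float32)
    objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)

    objpoints = []  # 3D points in real-world space
    imgpoints = []  # 2D corner locations in the image plane

    for fname in glob.glob('camera_cal/*.jpg'):  # placeholder path
        img = cv2.imread(fname)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, (9, 6), None)
        if found:
            objpoints.append(objp)
            imgpoints.append(corners)

    # Compute the camera matrix and distortion coefficients once...
    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
        objpoints, imgpoints, gray.shape[::-1], None, None)

    # ...then apply the correction to any image from the same camera
    undistorted = cv2.undistort(img, mtx, dist, None, mtx)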

Next I need to transform the image from a driver's POV to a bird's-eye POV. This can be accomplished with the OpenCV function warpPerspective.
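In code, that warp is roughly as follows, continuing from the undistorted image above; the source and destination points here are illustrative placeholders, not the exact values used in the project:

    h, w = undistorted.shape[:2]
    src = np.float32([[580, 460], [700, 460], [1040, 680], [260, 680]])
    dst = np.float32([[300, 0], [980, 0], [980, 720], [300, 720]])

    M = cv2.getPerspectiveTransform(src, dst)     # driver POV -> bird's-eye
    Minv = cv2.getPerspectiveTransform(dst, src)  # bird's-eye -> driver POV
    birds_eye = cv2.warpPerspective(undistorted, M, (w, h))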

Binary Transformation for Explicit Lane Identification

I performed a transformation of the color channels to HLS and then pulled out the S channel, as it seemed to perform best at making the lane lines stand out. I then used gradient thresholds to generate a binary image (values are either 0 or 255); the threshold values I used were 10 and 100. Below is an example of my output for this step.
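A sketch of that thresholding, continuing from the bird's-eye image above (it could equally be applied before warping):

    hls = cv2.cvtColor(birds_eye, cv2.COLOR_BGR2HLS)
    s_channel = hls[:, :, 2]                        # S channel of HLS

    sobelx = cv2.Sobel(s_channel, cv2.CV_64F, 1, 0)  # gradient along x
    abs_sobelx = np.absolute(sobelx)
    scaled = np.uint8(255 * abs_sobelx / np.max(abs_sobelx))

    # Keep pixels whose scaled gradient falls between the thresholds
    binary = np.zeros_like(scaled)
    binary[(scaled >= 10) & (scaled <= 100)] = 255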

Plot a Histogram of Pixel Values

Now if we plot a histogram of the pixel counts along the X-axis of the binary image, the columns where the lanes reside really start to stand out.
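Computing that histogram and the two lane base positions takes only a few lines:

    # Sum the binary values down each column of the lower half of the image;
    # the two tallest peaks give starting x-positions for the lane lines.
    histogram = np.sum(binary[binary.shape[0] // 2:, :], axis=0)

    midpoint = histogram.shape[0] // 2
    left_base = np.argmax(histogram[:midpoint])
    right_base = np.argmax(histogram[midpoint:]) + midpoint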

Sliding Window Detection

Breaking the image up height-wise into a series of sliding windows, I can continually find the pixel locations and begin to mark the lane positions. In the image below you can see the result of this effort. Though there were some issues drawing the window rectangles, it performed well enough in practice. It uses a total of 9 windows on each lane, re-centering each window on the concentration of pixels it contains, and then fits a curve through those pixels to best represent the lane line.
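A condensed sketch of the window search for one lane line, continuing from the histogram peak above; the window count (9) matches the text, while the margin and minimum-pixel values are illustrative:

    nwindows, margin, minpix = 9, 100, 50
    window_height = binary.shape[0] // nwindows
    nonzeroy, nonzerox = binary.nonzero()   # coordinates of lit pixels

    x_current = left_base
    lane_inds = []

    for window in range(nwindows):
        # Window boundaries, working upward from the bottom of the image
        y_low = binary.shape[0] - (window + 1) * window_height
        y_high = binary.shape[0] - window * window_height
        x_low, x_high = x_current - margin, x_current + margin

        # Indices of the nonzero pixels that fall inside this window
        good = ((nonzeroy >= y_low) & (nonzeroy < y_high) &
                (nonzerox >= x_low) & (nonzerox < x_high)).nonzero()[0]
        lane_inds.append(good)

        # Re-center the next window on the mean x of the pixels found
        if len(good) > minpix:
            x_current = int(np.mean(nonzerox[good]))

    lane_inds = np.concatenate(lane_inds)

    # Second-order fit x = Ay^2 + By + C through the collected lane pixels
    left_fit = np.polyfit(nonzeroy[lane_inds], nonzerox[lane_inds], 2)

The same search, started from right_base, produces right_fit for the other lane line.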

Overlay a Polygon to Fill Lanes

Once I have these fitted lines drawn over the image of the lane lines, I can use another OpenCV function to fill in a polygon and 'draw' a virtual lane onto the bird's-eye image. This function, fillPoly, creates a basic polygon on a blank image; then I can use the addWeighted function to overlay it (with transparency) onto the original lane image, by first placing it on the bird's-eye view and then warping back to the original perspective.
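Roughly, the overlay step looks like this, assuming left_fit and right_fit from the sliding-window search and the inverse warp matrix Minv from earlier:

    # Evaluate both fits at every y to get the polygon edges
    ploty = np.linspace(0, h - 1, h)
    left_x = left_fit[0] * ploty**2 + left_fit[1] * ploty + left_fit[2]
    right_x = right_fit[0] * ploty**2 + right_fit[1] * ploty + right_fit[2]

    pts_left = np.array([np.transpose(np.vstack([left_x, ploty]))])
    pts_right = np.array([np.flipud(np.transpose(np.vstack([right_x, ploty])))])
    pts = np.hstack((pts_left, pts_right))

    lane_img = np.zeros_like(undistorted)
    cv2.fillPoly(lane_img, np.int32(pts), (0, 255, 0))           # green lane polygon
    unwarped = cv2.warpPerspective(lane_img, Minv, (w, h))       # back to driver POV
    result = cv2.addWeighted(undistorted, 1.0, unwarped, 0.3, 0) # transparent overlay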

Final Steps

Now I just plot the lane polygon back onto the video feed, with the detected lane-line curves also overlaid on the image, and print the radius of curvature (in meters) in the top-left corner.
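The radius comes from the standard curvature formula for a second-order fit, R = (1 + (2Ay + B)²)^(3/2) / |2A|, evaluated near the bottom of the image; the meters-per-pixel scale factors below are common estimates for this setup, not measured values:

    ym_per_pix = 30 / 720    # assumed meters per pixel in y
    xm_per_pix = 3.7 / 700   # assumed meters per pixel in x

    # Refit in real-world units, then evaluate the curvature at the car
    fit_m = np.polyfit(nonzeroy[lane_inds] * ym_per_pix,
                       nonzerox[lane_inds] * xm_per_pix, 2)
    A, B, _ = fit_m
    y_eval = (h - 1) * ym_per_pix
    radius_m = (1 + (2 * A * y_eval + B) ** 2) ** 1.5 / abs(2 * A)

    # Print the radius in the top-left corner of the frame
    cv2.putText(result, 'Radius: %.0f m' % radius_m, (40, 60),
                cv2.FONT_HERSHEY_SIMPLEX, 1.5, (255, 255, 255), 3)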
