This is the Advanced Lane Finding project from the Udacity Self-Driving Car Engineer Nanodegree.
The goal of this project is to develop a pipeline that processes a video stream from a forward-facing camera mounted on a car and outputs an annotated video identifying:
- The positions of the lane lines
- The location of the vehicle relative to the center of the lane
- The radius of curvature of the road
The pipeline created for this project processes each frame in the following steps (hedged code sketches of the main steps follow the list):
- Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.
- Apply a distortion correction to raw images.
- Use color transforms, gradients, etc., to create a thresholded binary image.
- Apply a perspective transform to rectify the binary image ("birds-eye view").
- Detect lane pixels and fit to find the lane boundary.
- Determine the curvature of the lane and vehicle position with respect to the center.
- Warp the detected lane boundaries back onto the original image.
- Output a visual display of the lane boundaries and a numerical estimate of lane curvature and vehicle position.
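
The calibration and distortion-correction steps can be sketched roughly as below with OpenCV. The 9x6 corner grid and the `camera_cal/*.jpg` path are assumptions for illustration, not necessarily what this repository uses.

```python
# Minimal sketch of camera calibration and distortion correction with OpenCV.
# The 9x6 corner grid and the camera_cal/*.jpg path are assumptions.
import glob
import cv2
import numpy as np

def calibrate_camera(image_glob='camera_cal/*.jpg', nx=9, ny=6):
    # 3D corner positions of the chessboard in its own plane (z = 0).
    objp = np.zeros((nx * ny, 3), np.float32)
    objp[:, :2] = np.mgrid[0:nx, 0:ny].T.reshape(-1, 2)

    objpoints, imgpoints = [], []
    img_size = None
    for fname in glob.glob(image_glob):
        img = cv2.imread(fname)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        img_size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, (nx, ny), None)
        if found:
            objpoints.append(objp)
            imgpoints.append(corners)

    # Camera matrix and distortion coefficients from all detected boards.
    _, mtx, dist, _, _ = cv2.calibrateCamera(
        objpoints, imgpoints, img_size, None, None)
    return mtx, dist

def undistort(img, mtx, dist):
    # Apply the distortion correction to a raw frame.
    return cv2.undistort(img, mtx, dist, None, mtx)
```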
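A rough sketch of one common way to build the thresholded binary image is to combine an HLS S-channel threshold with a Sobel-x gradient threshold; the threshold ranges below are illustrative guesses, not tuned values from this project.

```python
# Minimal sketch of color + gradient thresholding; the threshold ranges
# (s_thresh, sx_thresh) are illustrative assumptions.
import cv2
import numpy as np

def threshold_binary(img, s_thresh=(170, 255), sx_thresh=(20, 100)):
    hls = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)
    s_channel = hls[:, :, 2]
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Gradient in x highlights near-vertical lane edges.
    sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
    abs_sobelx = np.absolute(sobelx)
    scaled = np.uint8(255 * abs_sobelx / np.max(abs_sobelx))

    # Pixel is "on" if either the gradient or the saturation test passes.
    binary = np.zeros_like(s_channel)
    binary[((scaled >= sx_thresh[0]) & (scaled <= sx_thresh[1])) |
           ((s_channel >= s_thresh[0]) & (s_channel <= s_thresh[1]))] = 1
    return binary
```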
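The perspective-transform step might look roughly like the following; the source and destination corner coordinates are placeholder assumptions for a 1280x720 frame and would need tuning against real test images.

```python
# Minimal sketch of the "birds-eye" warp; src/dst corner coordinates below
# are placeholder assumptions for a 1280x720 frame.
import cv2
import numpy as np

def birds_eye(binary, src=None, dst=None):
    h, w = binary.shape[:2]
    if src is None:
        src = np.float32([[580, 460], [700, 460], [1040, 680], [260, 680]])
    if dst is None:
        dst = np.float32([[260, 0], [1040, 0], [1040, 720], [260, 720]])
    M = cv2.getPerspectiveTransform(src, dst)
    Minv = cv2.getPerspectiveTransform(dst, src)  # used later to warp back
    warped = cv2.warpPerspective(binary, M, (w, h), flags=cv2.INTER_LINEAR)
    return warped, M, Minv
```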
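Once lane pixels have been located (the sliding-window search itself is omitted here), fitting second-order polynomials and estimating curvature and vehicle offset could look roughly like this; the metres-per-pixel constants and the assumption that the camera sits at the vehicle centreline are illustrative, not project values.

```python
# Minimal sketch of fitting second-order polynomials to detected lane pixels
# and estimating curvature / vehicle offset. The metres-per-pixel constants
# and the centred-camera assumption are illustrative.
import numpy as np

YM_PER_PIX = 30 / 720   # assumed metres per pixel along y (image height)
XM_PER_PIX = 3.7 / 700  # assumed metres per pixel along x (lane width)

def fit_lane(leftx, lefty, rightx, righty, img_height):
    leftx, lefty = np.asarray(leftx, float), np.asarray(lefty, float)
    rightx, righty = np.asarray(rightx, float), np.asarray(righty, float)

    # x = A*y**2 + B*y + C for each lane line, fitted in pixel space.
    left_fit = np.polyfit(lefty, leftx, 2)
    right_fit = np.polyfit(righty, rightx, 2)

    # Refit in real-world units and evaluate curvature at the bottom of the
    # image: R = (1 + (2*A*y + B)**2)**1.5 / |2*A|
    y_eval = (img_height - 1) * YM_PER_PIX
    left_fit_m = np.polyfit(lefty * YM_PER_PIX, leftx * XM_PER_PIX, 2)
    curvature = ((1 + (2 * left_fit_m[0] * y_eval + left_fit_m[1]) ** 2) ** 1.5
                 / abs(2 * left_fit_m[0]))
    return left_fit, right_fit, curvature

def vehicle_offset(left_fit, right_fit, img_height, img_width):
    # Assumes the camera is mounted at the vehicle centreline, so the image
    # centre is the vehicle centre; positive means offset to the right.
    y = img_height - 1
    lane_center = (np.polyval(left_fit, y) + np.polyval(right_fit, y)) / 2
    return (img_width / 2 - lane_center) * XM_PER_PIX
```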
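Finally, warping the detected lane area back onto the undistorted frame and blending it in might be sketched as below; it assumes the `Minv` matrix from the perspective sketch and the pixel-space polynomial fits from the fitting sketch, both hypothetical names introduced above.

```python
# Minimal sketch of drawing the lane polygon in warped space, unwarping it
# with Minv, and overlaying it on the undistorted frame.
import cv2
import numpy as np

def draw_lane(undistorted, warped_shape, left_fit, right_fit, Minv):
    h, w = warped_shape[:2]
    ploty = np.linspace(0, h - 1, h)
    left_x = np.polyval(left_fit, ploty)
    right_x = np.polyval(right_fit, ploty)

    # Filled polygon between the two fitted lane lines, in warped space.
    pts = np.vstack([np.column_stack([left_x, ploty]),
                     np.column_stack([right_x, ploty])[::-1]]).astype(np.int32)
    overlay = np.zeros((h, w, 3), dtype=np.uint8)
    cv2.fillPoly(overlay, [pts], (0, 255, 0))

    # Warp the overlay back to the original perspective and blend it in.
    unwarped = cv2.warpPerspective(
        overlay, Minv, (undistorted.shape[1], undistorted.shape[0]))
    return cv2.addWeighted(undistorted, 1.0, unwarped, 0.3, 0)
```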
This project requires Python 3.5 and the following dependencies:
Check out the writeup template for this project and use it as a starting point for creating your own writeup.