For a Research Project course in the Automated Driving department, we are working on "Visual Lane Following for Scaled Automated Vehicles". The vehicle is based on the NVIDIA-backed open-source JetRacer platform and is built around an NVIDIA Jetson Nano. Specifically, I am working on lane detection using classical and deep learning techniques.
In my second semester at RWTH Aachen University, I took a course on Automated and Connected Driving Challenges (ACDC). It presented problems from the automated driving sector and gave invaluable insight into developing solutions for them. I studied sensor data processing techniques such as semantic segmentation, instance segmentation, and point cloud segmentation, learned about object fusion and V2X communication, and gained practical experience with computer vision and ROS. The ACDC Research Project, an extension of the course, lets us contribute to various research topics throughout the semester. I am working in a team of four students on Visual Lane Following for Scaled Automated Vehicles.
The project has two major components: computer vision (lane detection / drivable-area segmentation) and a control algorithm that drives the vehicle automatically. I am working on lane detection using classical and deep learning techniques. On the classical side, I use Sobel and Canny edge filters to detect the lanes in the image; on the deep learning side, YOLOPv2 segments the drivable area accurately. Some results from the deep learning model and the classical algorithm are shown below, together with brief code sketches of each approach. I will update this page with a comprehensive summary after the semester.
Original Image
Segmented Image (Drivable Area)
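For the deep learning branch, the sketch below shows roughly how a drivable-area mask can be obtained from YOLOPv2. The weight file name, input resolution, and output unpacking are assumptions based on the public YOLOPv2 repository's released TorchScript model, not the exact code used in this project.

```python
# Hypothetical sketch: drivable-area segmentation with YOLOPv2 TorchScript weights.
# File names, input size, and output layout are assumptions, not our final pipeline.
import cv2
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.jit.load("yolopv2.pt", map_location=device).eval()  # released weights (assumed path)

img_bgr = cv2.imread("frame.jpg")                       # camera frame from the JetRacer
img = cv2.resize(img_bgr, (640, 384))                   # network input size (assumed)
tensor = torch.from_numpy(img[:, :, ::-1].copy())       # BGR -> RGB
tensor = tensor.permute(2, 0, 1).float().unsqueeze(0) / 255.0

with torch.no_grad():
    # Assumed output layout: detections, drivable-area logits, lane-line logits
    [pred, anchor_grid], seg, ll = model(tensor.to(device))

# Drivable-area mask: argmax over the two-class segmentation head (assumed shape [1, 2, H, W])
da_mask = seg.squeeze(0).argmax(0).byte().cpu().numpy() * 255

overlay = img.copy()
overlay[da_mask > 0] = (0, 255, 0)                      # highlight drivable area in green
cv2.imwrite("drivable_area.jpg", cv2.addWeighted(img, 0.6, overlay, 0.4, 0))
```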
Original Image
Masked lane image (after Canny-based edge detection)
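For the classical branch, the sketch below illustrates the edge-based masking step: grayscale conversion, blurring, Canny edge detection, and a region-of-interest mask over the road ahead. The thresholds and the ROI polygon are placeholder values for illustration, not the parameters tuned on the vehicle.

```python
# Minimal sketch of the classical lane-masking step (thresholds and ROI are example values).
import cv2
import numpy as np

img = cv2.imread("frame.jpg")                            # camera frame from the JetRacer
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)              # suppress sensor noise before edge detection

edges = cv2.Canny(blurred, 50, 150)                      # hysteresis thresholds (example values)

# Keep only edges inside a trapezoidal region of interest covering the road ahead
h, w = edges.shape
roi = np.array([[(0, h), (int(0.45 * w), int(0.6 * h)),
                 (int(0.55 * w), int(0.6 * h)), (w, h)]], dtype=np.int32)
mask = np.zeros_like(edges)
cv2.fillPoly(mask, roi, 255)
lane_edges = cv2.bitwise_and(edges, mask)

# Optionally fit line segments to the remaining edges with a probabilistic Hough transform
lines = cv2.HoughLinesP(lane_edges, 1, np.pi / 180, threshold=30,
                        minLineLength=20, maxLineGap=10)

cv2.imwrite("lane_mask.jpg", lane_edges)
```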