Abstract:
In the last few years, autonomous driving has become a major topic in both the car industry and academia. However, many challenging problems remain to be solved before autonomous cars can reach the market. In this thesis, I describe the structure of a simple robotic system, called drAlver, which is able to navigate autonomously using computer vision algorithms. To reach this goal, the problem has been decomposed into tasks. The first task is road line detection, for which two kinds of detector have been proposed: a basic line detector and an advanced line detector. The latter performs more effectively and overcomes some limitations of the former. The second task is the detection of cars, pedestrians, cyclists and traffic signs, which is solved with a CNN, in particular the YOLOv3 architecture. These tasks are not executed on the robot's own board because of its limited computational power; instead, the robot is paired with a computer over a wireless connection. The thesis work also covers the engineering and development of the hardware and software, as well as the communication structure between the computer and the robot. The hardware is based on a Raspberry Pi Model B, which controls the motors. To sense the surrounding environment, the robot captures images with a webcam mounted on its top. The communication between the different modules is based on queues, resulting in an asynchronous and parallelized system.
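As a rough illustration of the queue-based, asynchronous structure mentioned above, the following minimal Python sketch shows how independent modules (camera capture, vision processing, motor control) can exchange data through queues; module names, message contents and queue sizes here are illustrative assumptions, not taken from the drAlver code base.

```python
import queue
import threading
import time

# Illustrative sketch only: names and messages are hypothetical,
# not the actual drAlver implementation.

frame_queue = queue.Queue(maxsize=2)      # camera -> vision module
command_queue = queue.Queue(maxsize=10)   # vision module -> motor control


def camera_module():
    """Producer: pushes captured frames onto the frame queue."""
    for frame_id in range(5):             # a few frames for the example
        frame = f"frame-{frame_id}"       # stands in for a webcam image
        frame_queue.put(frame)
        time.sleep(0.05)
    frame_queue.put(None)                 # sentinel: no more frames


def vision_module():
    """Consumer/producer: reads frames, emits driving commands."""
    while True:
        frame = frame_queue.get()
        if frame is None:
            command_queue.put(None)
            break
        # In the real system, line detection and YOLOv3 inference
        # would run here; we emit a placeholder command instead.
        command_queue.put(f"steer-for-{frame}")


def motor_module():
    """Consumer: applies commands to the motors (printed here)."""
    while True:
        command = command_queue.get()
        if command is None:
            break
        print("executing", command)


threads = [threading.Thread(target=f)
           for f in (camera_module, vision_module, motor_module)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because each module blocks only on its own queue, the modules run concurrently and no component has to wait for the full pipeline to finish before processing the next frame, which is the essence of the asynchronous design described in the abstract.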