Thesis topics

So you’re thinking of making the culmination of your studies about self-driving?

Good, because we need all the help we can get. See all the available thesis topics and don’t hesitate to contact us if your idea for a self-driving topic is not listed!

Snow plow driver assistance

Bachelor or Masters
Tambet Matiisen
Edgar Sepp

Curbs and road edges can be hard for snow plow operators to detect. Mistakes can be costly, both in terms of repairing the road and the machinery. We propose an assistance system that notifies the operator when the plow is too close to the road edge. The system is based on a centimeter-level accurate RTK GPS receiver and snow plow area maps from Tartu Geohub. The goal of this project is to validate the idea on a real snow plow machine.
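The core geometric check can be sketched as follows: a minimal proximity test that measures the distance from the plow's RTK position to a road-edge polyline from the area map and warns below a threshold. All coordinates, the edge geometry, and the 0.5 m threshold are illustrative assumptions, not values from the real system.

```python
import numpy as np

def distance_to_edge(position, edge):
    """Minimum distance from a point to a polyline (road edge).

    position: (x, y) in a local metric frame (e.g. from RTK GPS)
    edge: (N, 2) array of polyline vertices from the area map
    """
    p = np.asarray(position, dtype=float)
    a, b = edge[:-1], edge[1:]  # segment start and end points
    ab = b - a
    # Project the point onto each segment, clamping to the segment extent
    t = np.clip(np.einsum("ij,ij->i", p - a, ab) /
                np.einsum("ij,ij->i", ab, ab), 0.0, 1.0)
    closest = a + t[:, None] * ab
    return np.min(np.linalg.norm(closest - p, axis=1))

# Hypothetical road edge running along y = 0 and a plow 0.4 m from it
edge = np.array([[0.0, 0.0], [50.0, 0.0], [100.0, 0.0]])
d = distance_to_edge((30.0, 0.4), edge)
if d < 0.5:  # assumed warning threshold in meters
    print(f"warning: plow {d:.2f} m from road edge")
```

In the real system the same check would run continuously against the Tartu Geohub map in the plow's local coordinate frame.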

Disengagement probability estimation

Testing of autonomous vehicles on public streets is performed with safety drivers. The task of a safety driver is to monitor the car's behavior and take over control if the car behaves erratically. Such a takeover is called a disengagement, and all disengagements are meticulously logged and analyzed.

The task in this project is to estimate the probability of disengagement on a specific section of a road. From the individual disengagement probabilities of road sections a total disengagement probability for a certain route can be calculated. If the total probability of disengagement along a route is below a certain threshold, a driverless vehicle is dispatched. If not, then an autonomous vehicle with a safety driver is dispatched.

In theory, calculating the disengagement probability for a road section is simple: just no_of_disengagements_in_this_section / no_of_times_the_vehicle_passed_this_section. In practice, however, the vehicle may have passed a given section only a handful of times, making the estimate unreliable. The key idea is to group or cluster similar road sections to increase the denominator.
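Combining per-section estimates into a route-level probability can be sketched as below, under the simplifying assumption that disengagements in different sections are independent. The section probabilities are made-up example values.

```python
from math import prod

def route_disengagement_probability(section_probs):
    """Probability of at least one disengagement along the route,
    assuming disengagements in different sections are independent."""
    return 1.0 - prod(1.0 - p for p in section_probs)

# Three sections with per-pass disengagement probabilities of 1%, 5% and 2%
p_route = route_disengagement_probability([0.01, 0.05, 0.02])
print(f"{p_route:.4f}")  # 1 - 0.99 * 0.95 * 0.98
```

If this combined probability falls below the dispatch threshold, the route would qualify for a driverless vehicle; otherwise a safety driver is needed.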

End-to-end driving with TransFuser

There are two approaches to autonomous driving: the modular approach and the end-to-end approach. The modular approach divides the driving task into smaller subtasks such as perception, planning, and control. The end-to-end approach trains one big network to drive the car, using imitation learning from human driving data. While the end-to-end approach is conceptually simpler, it can be hard to test and debug.

The task in this project is to train an end-to-end driving network based on the TransFuser architecture. The data for training the model will be provided by ADL.
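The imitation-learning objective behind such training can be illustrated with a toy sketch: regress the expert's recorded controls from sensor features by minimizing a mean-squared error. A linear model stands in for the actual TransFuser network, and the data is random placeholder data, not ADL driving logs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder dataset: sensor features and the human driver's recorded
# controls (steering, throttle); in practice these come from driving logs
features = rng.normal(size=(64, 8))
expert_controls = rng.normal(size=(64, 2))

# A linear "policy" stands in for the real driving network
weights = np.zeros((8, 2))
lr = 0.05
for _ in range(200):
    pred = features @ weights
    # Gradient of the mean-squared behavioral-cloning loss
    grad = features.T @ (pred - expert_controls) / len(features)
    weights -= lr * grad

mse = np.mean((features @ weights - expert_controls) ** 2)
```

The real task replaces the linear model with TransFuser's camera-lidar fusion transformer, but the training signal, matching predicted controls to the human's, is the same.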

Digital twin of an Estonian town

Bachelor or Masters
Tambet Matiisen
Allan Mitt

A digital twin is a virtual copy of a town that can be used for various purposes. The Autonomous Driving Lab has created a digital twin of Tartu that is used for testing the autonomous car before letting it drive on public streets. The task in this project is to create a similar digital twin of an Estonian town of your choice. The methodology will be based on Allan Mitt’s master’s thesis.

Temporal object detection

The object detection task is commonly formulated over a single image: find all cars or all traffic lights in that image. A single image makes it impossible to classify temporal features of these objects, for example whether a car's turn signal is blinking or whether the current traffic light is flashing. To do that, the neural network performing the detection needs to be temporally aware: it must either take multiple video frames as input, use a recurrent neural network to retain history, or use a transformer network to attend over previous frames.
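The first of these options, feeding multiple frames to the network, can be sketched as simple channel-wise frame stacking. The clip shape and the choice of four frames are illustrative assumptions.

```python
import numpy as np

def stack_frames(frames, n=4):
    """Stack the last n video frames along the channel axis so a detector
    can see temporal features (e.g. a blinking turn signal).

    frames: (T, H, W, C) clip; returns (H, W, n*C)
    """
    clip = np.asarray(frames)[-n:]
    return np.concatenate(list(clip), axis=-1)

clip = np.zeros((8, 480, 640, 3), dtype=np.uint8)  # dummy 8-frame clip
x = stack_frames(clip)
print(x.shape)  # (480, 640, 12)
```

A detector consuming this 12-channel input can in principle learn that a light region alternating between frames is a blinking signal, something no single 3-channel image can reveal.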

Lidar 3D object detection

Self-driving cars need to detect objects in 3D space, and lidar is a useful sensor for that. A lidar sends out a laser beam that reflects back from obstacles. If you take the time it took for the signal to reflect back, multiply it by the speed of light and take half of that, you get the distance to the obstacle. Modern automotive lidars have 16-128 vertically positioned beams rotating continuously at 10-20 Hz, creating a point cloud representation of the world around the car. This point cloud can be used to detect other cars and pedestrians.
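The time-of-flight calculation is worth making concrete:

```python
C = 299_792_458.0  # speed of light, m/s

def lidar_range(round_trip_time_s):
    """Distance to obstacle: half of (time of flight x speed of light),
    since the beam travels to the obstacle and back."""
    return round_trip_time_s * C / 2.0

# A return arriving after ~66.7 ns corresponds to roughly 10 m
d = lidar_range(66.7e-9)
print(f"{d:.2f} m")
```

The nanosecond timescales involved are why lidar ranging hardware needs very precise timing electronics.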

Absolute positioning of drones using aerial images

In war zones GPS is not usable and drones need to rely on other means of navigation. A typical solution is to use a combination of an inertial measurement unit (IMU) and visual odometry to position the drone relative to its starting point. Such relative positioning systems tend to drift over time; the positioning error can reach hundreds of meters after flying a few kilometers. To fly longer distances, such drones need additional absolute positioning: the drone's camera image needs to be matched against an aerial map of the location.
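The image-to-map matching step can be sketched, in its simplest form, as normalized cross-correlation: slide the downward-facing camera view over the aerial map and pick the best-scoring offset. This brute-force toy on random grayscale data ignores the rotation, scale, and appearance differences a real system must handle.

```python
import numpy as np

def match_position(aerial_map, camera_patch):
    """Brute-force normalized cross-correlation of the camera patch
    against every window of the aerial map; returns the best offset."""
    ph, pw = camera_patch.shape
    p = camera_patch - camera_patch.mean()
    best, best_pos = -np.inf, (0, 0)
    for i in range(aerial_map.shape[0] - ph + 1):
        for j in range(aerial_map.shape[1] - pw + 1):
            w = aerial_map[i:i + ph, j:j + pw]
            w = w - w.mean()
            score = np.sum(w * p) / (np.linalg.norm(w) * np.linalg.norm(p) + 1e-9)
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos

rng = np.random.default_rng(1)
aerial = rng.random((40, 40))        # toy grayscale aerial map
patch = aerial[12:20, 25:33].copy()  # "camera view" cut from the map
print(match_position(aerial, patch))  # → (12, 25)
```

A practical system would use learned features or FFT-based correlation instead of this O(n^4) loop, but the absolute fix it provides is the same: an offset in map coordinates that cancels the accumulated odometry drift.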

Teleoperation situational awareness testing

Bachelor or Masters
Tambet Matiisen
Karl Kruusamäe

Remote control, a.k.a. teleoperation, is used to assist self-driving cars in situations that they cannot handle yet, e.g. roadworks or accidents. In simple direct teleoperation the remote driver sees the video feed from the car's cameras and controls the car using a remote steering wheel and pedals, similar to racing games. We have set up such a teleoperation system for our lab car together with the Estonian company Clevon. We want to evaluate the situational awareness of remote drivers and their ability to perform common maneuvers.

Evaluation of RTK base station accuracy

Autonomous vehicles need to know their position in the world precisely. While a normal phone GPS gives 2-5 meter accuracy, the RTK positioning system in our lab car can achieve 2-10 centimeter accuracy. RTK relies on correction information from nearby base stations. Currently we are using the ESTPOS base station network provided by the Estonian Land Board (Maa-amet). We would like to evaluate how complicated it is to set up your own base station network and whether its accuracy is comparable to the ESTPOS network.

Prototyping vision-based localization using milestone-board information

Context: Autonomous vehicles, like other robots, need to localize themselves in order to navigate. While satellite navigation systems (GNSS) such as GPS can provide such vehicles with localization information, the GNSS information might not always be available. One robust technique for vehicles to localize is using particle filters (e.g. [1], [2], [3]), given a map of the environment.

The project: During the project, you will prototype the use of milestone-board information for map-based localization. The investigation includes the following steps:
(i) You will begin by creating a scenario (e.g. a Python or Matlab program) that contains some highways, roads, cities, and virtual milestone boards.
(ii) You’ll then implement a particle filter (one of the most intuitive techniques for autonomous localization) for localization within the map (i.e. inside the created scenario). This also includes exploring multiple possible routes that lead to the same city.
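The particle-filter idea can be sketched on a deliberately simplified scenario: a single straight road ending at a city, where each step's "milestone board" reports a noisy distance to that city. The road length, noise levels, and particle count are arbitrary; the multi-route ambiguity mentioned above is exactly what this 1D toy does not yet capture.

```python
import numpy as np

rng = np.random.default_rng(2)

ROAD_LEN = 100.0          # km; one straight road ending at the city
N = 1000                  # number of particles
true_pos, speed = 20.0, 1.0

particles = rng.uniform(0.0, ROAD_LEN, N)  # unknown start: uniform prior
weights = np.ones(N) / N

for step in range(30):
    # Motion update: move each particle with noisy odometry
    true_pos += speed
    particles += speed + rng.normal(0.0, 0.2, N)

    # Measurement: a milestone board reports distance to the city (1 km noise)
    z = (ROAD_LEN - true_pos) + rng.normal(0.0, 1.0)
    # Weight particles by the Gaussian likelihood of that measurement
    likelihood = np.exp(-0.5 * ((ROAD_LEN - particles - z) / 1.0) ** 2)
    weights = likelihood / likelihood.sum()

    # Resample particles in proportion to their weights
    particles = particles[rng.choice(N, N, p=weights)]
    weights = np.ones(N) / N

estimate = particles.mean()
print(f"true {true_pos:.1f}, estimated {estimate:.1f}")
```

In the full project the single road becomes a road network, so the particle cloud can stay multimodal, spread over several routes that are all consistent with the same distance-to-city readings, until further boards disambiguate.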

Vehicle localization in OpenStreetMap using distances to cities as the measurements

Context: Autonomous vehicles, like other robots, need to localize themselves in order to navigate. While satellite navigation systems (GNSS) such as GPS can provide such vehicles with localization information, the GNSS information might not always be available. One robust technique for vehicles to localize is using particle filters (e.g. [1], [2], [3]), given a map of the environment.

The project: During the project, you will investigate the use of distances to cities (essentially milestone-board information) to implement map-based localization in OpenStreetMap for Estonia. The investigation includes the following steps:
(i) You will begin by familiarizing yourself with OpenStreetMap and creating virtual milestone boards on a set of testing routes.
(ii) You’ll then implement a particle filter (one of the most intuitive techniques for autonomous localization) for localization within the map, using the information from the milestone boards as your measurements.

Air-flow sensing for perception in autonomous driving

Bachelor or Masters
Naveed Muhammad
Context: The usual sensing modalities available to autonomous vehicles, i.e. vision, radar, and lidar, have their limitations. For example, none of them is well suited to estimating the length of a leading vehicle. Additional sensing modalities, such as air-flow sensing, might have the potential to meaningfully complement the usual sensing modalities in autonomous driving.

Description: You will build upon the innovative work done on flow-sensing applications in autonomous driving by Roman Matvejev [1] and Matis Ottan [2]. It is possible to investigate one of the following avenues: (i) working in simulations and proposing new applications of flow sensing in autonomous driving, (ii) investigating new feature extraction and classification/regression methods etc., (iii) expanding out of simulations with physical validation of flow sensing in autonomous driving.

Vision-based localization at city scale using OpenStreetMap

Bachelor or Masters
Naveed Muhammad
Autonomous vehicles, like other robots, need to localize themselves in order to navigate. While satellite navigation systems (GNSS) such as GPS can provide such vehicles with localization information, the GNSS information might not always be available. One robust technique for vehicles to localize is using particle filters (e.g. [1], [2], [3]), given a map of the environment.

During the project, you will investigate the use of vision sensing for map-based localization. The investigation includes the following steps:
1. Recognition of street names (and possibly also road direction signs) using vision
2. Matching of the perceived street-name information with streets in a given map (such as OpenStreetMap)
3. Implementing a particle filter for localization within the map using the above-mentioned matches
4. Incorporation of the developed solution into the Autoware Mini software stack — https://adl.cs.ut.ee/lab/software
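Step 2, matching noisy recognized street names against map streets, can be sketched with fuzzy string matching. The detected names and the candidate street list below are made-up examples, with OCR-style errors (dropped diacritics, missing letters) baked into the detections.

```python
import difflib

# Hypothetical street names read from camera images (OCR is noisy)
detected = ["Riia tanav", "Naituse", "Vanemuse"]

# Street names from the map (e.g. queried from OpenStreetMap)
map_streets = ["Riia tänav", "Näituse tänav", "Vanemuise tänav", "Narva maantee"]

for name in detected:
    # Pick the closest map street above a similarity cutoff
    match = difflib.get_close_matches(name, map_streets, n=1, cutoff=0.5)
    print(name, "->", match[0] if match else "no match")
```

Each successful match constrains the vehicle to the matched street, and those constraints are exactly the measurements the particle filter in step 3 would fuse over time.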

Map-based localization for autonomous vehicles using lidar

Bachelor or Masters
Naveed Muhammad
Context: Autonomous vehicles, like other robots, need to localize themselves in order to navigate. While satellite navigation systems (GNSS) such as GPS can provide such vehicles with localization information, the GNSS information might not always be available. One robust technique for vehicles to localize is using particle filters [1, 2, 3], given a map of the environment. Recently, deep-learning approaches to map-based localization have also been proposed in the literature [4, 5].

Project: During the project, you will investigate lidar sensors for map-based localization. You can start by experimenting with lidar data acquired using the ADL Lexus vehicle and then move to real-time implementation on-board the vehicle (within our Autoware Mini software stack - https://adl.cs.ut.ee/lab/software). The investigation includes aspects such as:

(i) Creating a map using data from the Estonian Land Board, and implementing map-based localization using particle filtering. The validation can be done in two ways:
- In the CARLA simulation of Tartu city.
- Using real lidar data from the streets of Tartu (we already have this data at the lab).

(ii) Using the above implementation, investigating questions such as: what is the lower bound on how rich the lidar data needs to be in order to perform localization? The vehicle is equipped with multi-beam lidars from Velodyne and Ouster, so the natural question is whether we need all the laser beams. Do we need 16, 8, or is even a single beam enough?

(iii) Incorporation of your proposed solution into Autoware Mini.
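The beam-reduction experiment in (ii) can be sketched as filtering a point cloud by its ring (beam) index. The toy cloud below is random; a real cloud would come from the Velodyne or Ouster driver, which typically tags each point with its ring number.

```python
import numpy as np

def keep_rings(points, rings, keep):
    """Keep only points from the selected lidar beams (rings).

    points: (N, 3) xyz point cloud
    rings:  (N,) beam index of each point (0..n_beams-1)
    keep:   iterable of ring indices to retain
    """
    mask = np.isin(rings, list(keep))
    return points[mask]

# Toy 32-beam cloud: simulate dropping to every 4th beam (8 beams total)
rng = np.random.default_rng(3)
points = rng.normal(size=(10_000, 3))
rings = rng.integers(0, 32, size=10_000)
reduced = keep_rings(points, rings, range(0, 32, 4))
print(len(reduced) / len(points))  # roughly 0.25
```

Running the same particle-filter localization on progressively reduced clouds (16, 8, down to a single ring) would directly answer how much beam density localization actually needs.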

Implementation of Test Scenarios for ADS in the CARLA Simulator

The testing of Automated Driving Systems (ADS), like our Institute's Bolt car, is difficult and costly. Using simulators is an alternative, but it is not easy to transfer test results from the simulated ADS (the so-called ego car) to the real-world ADS. The exact details of the thesis topic depend on the prior knowledge and interests of the student.