Title

Generative Poisoning Attacks on Neural Network Models in Autonomous Driving

Document Type

Thesis

Degree Name

Master of Science (MS)

Department

Computer Science and Information Systems

Date of Award

Spring 2021

Abstract

Poisoning attacks pose a grave danger to neural networks: by corrupting the training data, they can alter a model's overall behavior and change how it interprets the inputs it is given. When a model is retrained on certain inputs, poisoned data can be introduced into it. As the popularity of self-driving cars grows exponentially, every major flaw in these systems must be examined and security measures put in place for the safety of passengers. Studies of progressive poisoning attacks remain limited, especially in the context of autonomous vehicles. In this work, we first generate poisoned data and then propose a generative method that accelerates its production. This method of poisoning lets us measure the extent of the inaccuracy and the unexpected side effects that arise when the poisoned data is fed to the model.
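To make the attack setting concrete, the sketch below illustrates the general idea of training-data poisoning in a driving context: a fraction of steering-angle labels is corrupted before the model is (re)trained. This is a minimal, hypothetical illustration of label poisoning in general, not the thesis's actual generative method; the function name, the toy data shapes, and the poison parameters (`fraction`, `shift`) are all assumptions introduced for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def poison_steering_labels(images, angles, fraction=0.1, shift=0.5):
    """Corrupt a fraction of steering-angle labels by a fixed offset.

    Hypothetical illustration of label poisoning: the attacker leaves
    the images untouched and perturbs only the regression targets, so
    a model retrained on this data learns a biased steering response.
    """
    n = len(angles)
    poisoned = angles.copy()
    # Pick a random subset of training examples to poison.
    idx = rng.choice(n, size=int(fraction * n), replace=False)
    poisoned[idx] += shift  # shift the selected labels off their true values
    return images, poisoned, idx

# Toy stand-ins for camera frames and steering angles in [-1, 1].
images = rng.random((100, 64, 64, 3))
angles = rng.uniform(-1.0, 1.0, size=100)

_, poisoned_angles, idx = poison_steering_labels(images, angles)
print(f"poisoned {len(idx)} of {len(angles)} labels")
```

Comparing a model trained on `angles` against one trained on `poisoned_angles` would expose the kind of inaccuracy and side effects the abstract refers to; the thesis's generative method additionally automates producing such poisoned samples at scale.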

Advisor

Omar Elariss

Subject Categories

Computer Sciences | Physical Sciences and Mathematics
