On the Robustness of Clustering Algorithms to Adversarial Attacks

dc.contributor.advisor Pelillo, Marcello it_IT
dc.contributor.author Cina', Antonio Emanuele <1995> it_IT
dc.date.accessioned 2019-06-20 it_IT
dc.date.accessioned 2019-11-20T07:09:30Z
dc.date.available 2019-11-20T07:09:30Z
dc.date.issued 2019-07-10 it_IT
dc.identifier.uri http://hdl.handle.net/10579/15430
dc.description.abstract Machine learning is increasingly used by businesses and private users as a tool to aid decision making and automation. Over the past few years, however, there has been growing research interest in the security and robustness of learning models in the presence of adversarial examples. It has been shown that carefully crafted adversarial perturbations, imperceptible to human judgment, can significantly degrade the performance of learning models. Adversarial machine learning studies how learning algorithms can be fooled by such crafted adversarial examples. It is a relatively recent research area, focused mainly on the analysis of supervised models; only a few works have addressed unsupervised settings. An adversarial analysis of this learning paradigm has become imperative, as unsupervised learning has been increasingly adopted in security and data analysis applications. In this thesis, we show how an attacker can craft poisoning perturbations of the input data to reach target goals. In particular, we analyze the robustness of two fundamental applications of unsupervised learning: feature-based data clustering and image segmentation. We consider three well-known clustering algorithms (K-Means, Spectral, and Dominant Sets clustering) and multiple datasets, and we evaluate the robustness they provide against adversarial examples crafted with our proposed algorithms. (A toy sketch of such a poisoning perturbation is given after this record.) it_IT
dc.language.iso en it_IT
dc.publisher Università Ca' Foscari Venezia it_IT
dc.rights © Antonio Emanuele Cina', 2019 it_IT
dc.title On the Robustness of Clustering Algorithms to Adversarial Attacks it_IT
dc.type Master's Degree Thesis it_IT
dc.degree.name Informatica - computer science it_IT
dc.degree.level Laurea magistrale it_IT
dc.degree.grantor Dipartimento di Scienze Ambientali, Informatica e Statistica it_IT
dc.description.academicyear 2018/2019_sessione_estiva it_IT
dc.rights.accessrights openAccess it_IT
dc.thesis.matricno 854866 it_IT
dc.provenance.upload Antonio Emanuele Cina' (854866@stud.unive.it), 2019-06-20 it_IT
dc.provenance.plagiarycheck Marcello Pelillo (pelillo@unive.it), 2019-07-08 it_IT
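
As an illustration of the kind of poisoning perturbation the abstract describes, the following is a minimal sketch, not the thesis's actual attack algorithm. It perturbs a handful of points toward the opposite K-Means centroid on synthetic data and measures how many untouched points change cluster. The dataset, the perturbation rule, and the parameter values (eps, number of perturbed points) are illustrative assumptions; only scikit-learn's standard KMeans and make_blobs APIs are used.

# Toy poisoning sketch against K-Means (illustrative only, not the thesis method).
# Assumption: synthetic two-blob data and a simple "move toward the other centroid"
# rule stand in for the crafted adversarial perturbations studied in the thesis.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Two well-separated Gaussian blobs as the clean dataset.
X, _ = make_blobs(n_samples=200, centers=2, cluster_std=1.0, random_state=0)

# Baseline clustering on clean data.
km_clean = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels_clean = km_clean.labels_

# Poisoning perturbation: move a few points of cluster 0 part of the way
# toward the centroid of cluster 1 (eps controls the perturbation strength).
X_poisoned = X.copy()
victims = np.where(labels_clean == 0)[0][:10]
target_centroid = km_clean.cluster_centers_[1]
eps = 0.6
X_poisoned[victims] += eps * (target_centroid - X_poisoned[victims])

# Re-cluster the poisoned data and count how many untouched points end up
# with a different cluster assignment (accounting for possible label swaps).
km_poisoned = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_poisoned)
labels_poisoned = km_poisoned.labels_

clean_mask = np.ones(len(X), dtype=bool)
clean_mask[victims] = False
disagreement = (labels_poisoned[clean_mask] != labels_clean[clean_mask]).mean()
flipped = min(disagreement, 1.0 - disagreement)  # labels may be permuted between runs
print(f"fraction of untouched points whose cluster changed: {flipped:.2%}")

With two clusters, taking the minimum over the two possible label permutations is enough to align the assignments; for more clusters, or for comparing K-Means against Spectral or Dominant Sets clustering as the thesis does, a permutation-invariant score such as the adjusted Rand index would be the natural measure.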

