dc.contributor.advisor |
Lucchese, Claudio |
it_IT |
dc.contributor.author |
Woldeyohannes, Habtamu Desalegn <1981> |
it_IT |
dc.date.accessioned |
2021-04-12 |
it_IT |
dc.date.accessioned |
2021-07-21T08:04:54Z |
|
dc.date.available |
2021-07-21T08:04:54Z |
|
dc.date.issued |
2021-05-10 |
it_IT |
dc.identifier.uri |
http://hdl.handle.net/10579/19100 |
|
dc.description.abstract |
Nowadays, machine learning models are used in many real-world AI-based systems. However, those models are vulnerable to cyber attacks, commonly known as adversarial attacks. This threat calls into question the usefulness and validity of such models' predictions once attackers can interfere with them. The Adversarial Robustness Toolbox (ART) is an open-source machine learning security library developed at IBM Research in the Python programming language. ART implements many state-of-the-art adversarial attacks and defense mechanisms for conventional machine learning and deep learning models. As a development tool, this library can be used to train and debug machine learning models against different adversarial attacks (i.e. evasion, poisoning, extraction, and inference attacks); it also provides techniques to defend models and to measure model robustness.
In this thesis we focus only on the data poisoning and evasion attacks supported in the current version of the "Adversarial Robustness Toolbox v1.5.x", evaluating the performance of those attacks against classical machine learning methods on classification tasks in an adversarial environment. Specifically, we evaluate the attack methods ART supports against four supervised learning methods (Support Vector Machines, Decision Trees, Random Forests, and Gradient Boosted Decision Trees), two machine learning frameworks (scikit-learn and LightGBM), and two publicly available datasets (the Census Income dataset for tabular data and the MNIST handwritten digit database for image data), giving binary and multi-class classification problems respectively.
Keywords: Adversarial Robustness Toolbox, ART Attacks, Adversarial Examples |
it_IT |
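To illustrate the workflow the abstract describes, the following is a minimal sketch (not taken from the thesis) of how ART v1.5.x wraps a trained scikit-learn model and runs a black-box evasion attack; scikit-learn's small digits dataset stands in for MNIST here, and the ZOO hyperparameters are illustrative, not the thesis's settings.

    # A minimal sketch, assuming ART v1.5.x and scikit-learn are installed.
    # The digits dataset is a lightweight stand-in for MNIST.
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.ensemble import RandomForestClassifier
    from art.estimators.classification import SklearnClassifier
    from art.attacks.evasion import ZooAttack

    X, y = load_digits(return_X_y=True)
    model = RandomForestClassifier(n_estimators=100).fit(X, y)

    # Wrap the trained scikit-learn model so ART attacks can query it.
    classifier = SklearnClassifier(model=model)

    # ZOO is a gradient-free (black-box) evasion attack, so it applies to
    # tree ensembles, which expose no gradients to the attacker.
    attack = ZooAttack(classifier=classifier, max_iter=20, nb_parallel=1,
                       use_resize=False, use_importance=False)
    x_adv = attack.generate(x=X[:10].astype(np.float32))

    # Compare accuracy on clean vs. adversarially perturbed samples.
    print("clean accuracy:      ", model.score(X[:10], y[:10]))
    print("adversarial accuracy:", model.score(x_adv, y[:10]))

The poisoning experiments described in the abstract follow the same wrap-then-attack pattern, using attack classes from ART's art.attacks.poisoning module instead.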
dc.language.iso |
en |
it_IT |
dc.publisher |
Università Ca' Foscari Venezia |
it_IT |
dc.rights |
© Habtamu Desalegn Woldeyohannes, 2021 |
it_IT |
dc.title |
Review on “Adversarial Robustness Toolbox (ART) v1.5.x.”:
ART Attacks against Supervised Learning Algorithms Case Study |
it_IT |
dc.title.alternative |
Adversarial Machine Learning: A review of the “Adversarial Robustness Toolbox (ART)” |
it_IT |
dc.type |
Master's Degree Thesis |
it_IT |
dc.degree.name |
Informatica - computer science |
it_IT |
dc.degree.level |
Laurea magistrale |
it_IT |
dc.degree.grantor |
Dipartimento di Scienze Ambientali, Informatica e Statistica |
it_IT |
dc.description.academicyear |
2019-2020, extraordinary session LM |
it_IT |
dc.rights.accessrights |
openAccess |
it_IT |
dc.thesis.matricno |
877159 |
it_IT |
dc.subject.miur |
INF/01 INFORMATICA |
it_IT |
dc.description.note |
|
it_IT |
dc.degree.discipline |
|
it_IT |
dc.contributor.co-advisor |
|
it_IT |
dc.date.embargoend |
|
it_IT |
dc.provenance.upload |
Habtamu Desalegn Woldeyohannes (877159@stud.unive.it), 2021-04-12 |
it_IT |
dc.provenance.plagiarycheck |
Claudio Lucchese (claudio.lucchese@unive.it), 2021-04-26 |
it_IT |