ExplainableAI: on explaining forest of decision trees by using generalized additive models

dc.contributor.advisor Lucchese, Claudio it_IT
dc.contributor.author De Zan, Martina <1994> it_IT
dc.date.accessioned 2021-03-29 it_IT
dc.date.accessioned 2021-07-21T07:10:35Z
dc.date.available 2021-07-21T07:10:35Z
dc.date.issued 2021-05-10 it_IT
dc.identifier.uri http://hdl.handle.net/10579/18604
dc.description.abstract In recent years, decision support systems have become more and more pervasive in our society, playing an important role in our everyday life. But these systems, often called black-box models, are extremely complex, and it may be impossible to understand or explain how they work in a human-interpretable way. This lack of explainability is an issue: ethically, because we have to be sure that our systems are fair and reasonable; practically, because people tend to trust what they understand. However, substituting a black-box model with a more interpretable one in the decision-making process may be impossible: the interpretable model may not perform as well as the original one, or the training data may no longer be available. In this thesis we focus on forests of decision trees, a particular case of black-box model. Individual trees are interpretable models, but forests are composed of thousands of trees that cooperate to make decisions, making the final model too complex for its behavior to be understood. In this work we show that Generalized Additive Models (GAMs) can be used to explain forests of decision trees with a good level of accuracy. GAMs are linear combinations of single-feature or pair-feature models, called shape functions. Since shape functions are only one- or two-dimensional, they can be easily visualized and interpreted by users. At the same time, shape functions can be arbitrarily complex, making GAMs as powerful as other, more complex models. it_IT
dc.language.iso en it_IT
dc.publisher Università Ca' Foscari Venezia it_IT
dc.rights © Martina De Zan, 2021 it_IT
dc.title ExplainableAI: on explaining forest of decision trees by using generalized additive models it_IT
dc.title.alternative ExplainableAI: on explaining forest of decision trees by using generalized additive models it_IT
dc.type Master's Degree Thesis it_IT
dc.degree.name Informatica - computer science it_IT
dc.degree.level Laurea magistrale it_IT
dc.degree.grantor Dipartimento di Scienze Ambientali, Informatica e Statistica it_IT
dc.description.academicyear 2019-2020, sessione straordinaria LM it_IT
dc.rights.accessrights openAccess it_IT
dc.thesis.matricno 846036 it_IT
dc.subject.miur INF/01 INFORMATICA it_IT
dc.description.note it_IT
dc.degree.discipline it_IT
dc.contributor.co-advisor it_IT
dc.date.embargoend it_IT
dc.provenance.upload Martina De Zan (846036@stud.unive.it), 2021-03-29 it_IT
dc.provenance.plagiarycheck Claudio Lucchese (claudio.lucchese@unive.it), 2021-04-26 it_IT
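Illustrative note. The idea described in the abstract above, approximating a forest of decision trees with a GAM whose one- or two-dimensional shape functions f_i(x_i) (and possibly f_ij(x_i, x_j)) can be inspected directly, can be sketched in a few lines. The sketch below is an assumption-laden illustration, not the method implemented in the thesis: it assumes scikit-learn for the forest and the pygam library for a purely additive surrogate with single-feature shape functions only, fitted on the forest's predictions over synthetic data.

# Minimal sketch: distilling a forest into a GAM (illustrative assumptions only;
# libraries, data, and hyperparameters are not taken from the thesis).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from pygam import LinearGAM, s

# Synthetic regression task with three features.
X, y = make_regression(n_samples=2000, n_features=3, noise=0.1, random_state=0)

# 1. The black-box model: a forest of decision trees.
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# 2. The interpretable surrogate: a GAM fitted on the forest's own predictions,
#    i.e. a sum of one-dimensional shape functions f_0(x_0) + f_1(x_1) + f_2(x_2).
gam = LinearGAM(s(0) + s(1) + s(2)).fit(X, forest.predict(X))

# 3. Each shape function is one-dimensional, so it can be plotted or inspected
#    on its own, giving a human-readable view of the forest's behavior.
for i in range(3):
    grid = gam.generate_X_grid(term=i)
    shape = gam.partial_dependence(term=i, X=grid)
    print(f"f_{i}: min {shape.min():.3f}, max {shape.max():.3f}")

In this distillation setting the GAM is the surrogate explainer: each shape function can be visualized independently, while the forest remains the model that produced the targets the surrogate was fitted to.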

