On the Robustness of Pruning Algorithms to Adversarial Attacks



dc.contributor.advisor Vascon, Sebastiano it_IT
dc.contributor.author Tajwar, Muhammad <1996> it_IT
dc.date.accessioned 2023-10-02 it_IT
dc.date.accessioned 2024-02-21T12:16:41Z
dc.date.available 2024-02-21T12:16:41Z
dc.date.issued 2023-10-16 it_IT
dc.identifier.uri http://hdl.handle.net/10579/25264
dc.description.abstract Pruning is a technique in machine learning used to simplify models, reduce overfitting, and improve efficiency. It works by reducing the complexity of a model through the removal of certain components. In the context of neural networks, traditional weight pruning sets the smallest weights to zero, effectively eliminating their contribution to the network’s output. Structural pruning takes this concept further by removing not just individual weights but entire neurons, connections, or layers. This changes the network architecture itself, potentially yielding a model that is more efficient, easier to understand, and simpler to implement on hardware. The challenge lies in striking the right balance: removing enough components to gain efficiency without sacrificing too much model performance. In this setting, a dependency graph can help determine which parts of a neural network can be removed without disrupting the remaining architecture. The graph captures how the different operations and layers of the network rely on each other; by examining these dependencies, we can identify nodes or connections whose removal would not break the overall data flow, or would only minimally impact the model’s performance. This makes dependency graphs a valuable tool for optimizing structural pruning. On one hand, a pruned network could be more robust against adversarial attacks: its reduced complexity might limit the avenues an attacker can exploit, and its increased interpretability could help in identifying and understanding potential vulnerabilities. On the other hand, pruning could make a model more vulnerable if important defensive features are pruned away, and the change in data flow and dependencies introduced by pruning could open up new vulnerabilities. (The pruning mechanisms discussed here are illustrated in the code sketches following this record.) it_IT
dc.language.iso en it_IT
dc.publisher Università Ca' Foscari Venezia it_IT
dc.rights © Muhammad Tajwar, 2023 it_IT
dc.title On the Robustness of Pruning Algorithms to Adversarial Attacks it_IT
dc.title.alternative On the Robustness of Pruning Algorithms to Adversarial Attacks it_IT
dc.type Master's Degree Thesis it_IT
dc.degree.name Informatica - Computer Science it_IT
dc.degree.level Laurea magistrale it_IT
dc.degree.grantor Dipartimento di Scienze Ambientali, Informatica e Statistica it_IT
dc.description.academicyear LM_2022/2023_sessione-autunnale it_IT
dc.rights.accessrights openAccess it_IT
dc.thesis.matricno 888394 it_IT
dc.subject.miur INF/01 INFORMATICA it_IT
dc.description.note it_IT
dc.degree.discipline it_IT
dc.contributor.co-advisor it_IT
dc.date.embargoend it_IT
dc.provenance.upload Muhammad Tajwar (888394@stud.unive.it), 2023-10-02 it_IT
dc.provenance.plagiarycheck Sebastiano Vascon (sebastiano.vascon@unive.it), 2023-10-16 it_IT
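
The magnitude-based weight pruning and neuron-level structured pruning described in the abstract can be sketched with PyTorch's built-in torch.nn.utils.prune module. This is a minimal illustration, not the thesis's actual experimental setup: the toy model, the layers chosen, and the pruning amounts are assumptions made for the example.

import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy model; an illustrative assumption, not the network studied in the thesis.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Traditional weight pruning: zero out the 30% of weights with the
# smallest L1 magnitude, eliminating their contribution to the output.
prune.l1_unstructured(model[0], name="weight", amount=0.3)

# Structured pruning: zero out the half of the output neurons (rows of
# the weight matrix) with the smallest L2 norm.
prune.ln_structured(model[2], name="weight", amount=0.5, n=2, dim=0)

# Bake the pruning masks into the weight tensors permanently.
prune.remove(model[0], "weight")
prune.remove(model[2], "weight")

sparsity = (model[0].weight == 0).float().mean().item()
print(f"First-layer sparsity after pruning: {sparsity:.2f}")

Note that this mask-based approach only zeroes entries; every tensor keeps its original shape. Realizing the efficiency gains of structural pruning requires physically removing the pruned neurons together with every parameter that depends on them, which is where the dependency graph comes in.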


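The dependency-graph mechanism the abstract describes, tracing how layers depend on one another so that removing a channel in one layer also removes the matching parameters in every coupled layer, is implemented by open-source tools such as the Torch-Pruning library. The record does not state which tooling the thesis uses, so the following is a hedged sketch with an illustrative model and arbitrarily chosen channel indices.

import torch
from torchvision.models import resnet18
import torch_pruning as tp

# Illustrative model; not necessarily the one evaluated in the thesis.
model = resnet18(weights=None)
example_inputs = torch.randn(1, 3, 224, 224)

# Trace the model once to build the dependency graph, which groups
# coupled layers (a convolution, its batch norm, and the layers that
# consume its output) together.
DG = tp.DependencyGraph().build_dependency(model, example_inputs=example_inputs)

# Collect the full group of operations triggered by removing three
# output channels of the first convolution (indices chosen arbitrarily).
group = DG.get_pruning_group(model.conv1, tp.prune_conv_out_channels, idxs=[2, 6, 9])

# Prune the whole group only if doing so leaves a consistent architecture.
if DG.check_pruning_group(group):
    group.prune()

print(model.conv1)  # out_channels shrinks from 64 to 61

Unlike the mask-based sketch above, this physically removes filters and the matching input channels of each dependent layer, changing the architecture itself; this is the structural pruning whose robustness to adversarial attacks the thesis investigates.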