Mitigating Carlini & Wagner attacks with Encoding Generative Adversarial Network.
Abstract
Deep Learning models are experiencing a significant surge in popularity and are expanding into domains that include safety-critical applications, such as object recognition in autonomous vehicles, where any failure could have fatal consequences. Given the importance of these models, it is crucial to address potential attacks that could degrade their performance and jeopardize user safety. The branch of Machine Learning devoted to this problem is known as Adversarial Machine Learning. In this study, we assess the effectiveness of Carlini & Wagner attacks and emphasize the importance of implementing proactive security measures to defend Deep Learning models. To enhance model resilience against these threats, we employ a defense network called an Encoding Generative Adversarial Network. This analysis not only provides valuable insight into the vulnerability of models to different attacks but also contributes to the development of more robust and advanced strategies for protecting Deep Learning models in critical applications. These findings are essential for increasing the security and reliability of artificial intelligence in environments that demand exceptional accuracy and dependability.
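
As background for readers unfamiliar with the attack (this formulation comes from Carlini & Wagner's original 2017 paper, not from the study summarized above), the targeted L2 variant searches for the smallest perturbation delta that pushes the model's logits toward a chosen target class. Here x is the input, t the target class, Z the logit function, c a trade-off constant, and kappa a confidence margin, all following the original paper:

\begin{aligned}
\min_{\delta}\quad & \|\delta\|_2^2 + c \cdot f(x+\delta) \\
\text{s.t.}\quad & x+\delta \in [0,1]^n, \\
\text{where}\quad & f(x') = \max\!\Big(\max_{i \neq t} Z(x')_i - Z(x')_t,\ -\kappa\Big).
\end{aligned}

In the original formulation, c is chosen by binary search and the box constraint is handled with a change of variables, x + \delta = \tfrac{1}{2}(\tanh(w)+1), so the optimization over w is unconstrained.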
Bibliographic citation
Guillermo Tell-Gónzalez, Jose David Fernández-Rodríguez, Miguel A. Molina-Cabello, Rafaela Benítez-Rochel, Ezequiel López-Rubio: Mitigating Carlini & Wagner attacks with Encoding Generative Adversarial Network. CAEPIA 2024: 141-145.