Mitigating Carlini & Wagner attacks with Encoding Generative Adversarial Network.

dc.contributor.author: Tell-Gónzalez, Guillermo
dc.contributor.author: Fernández-Rodríguez, Jose David
dc.contributor.author: Molina-Cabello, Miguel Ángel
dc.contributor.author: Benítez-Rochel, Rafaela
dc.contributor.author: López-Rubio, Ezequiel
dc.date.accessioned: 2024-07-05T10:42:57Z
dc.date.available: 2024-07-05T10:42:57Z
dc.date.created: 2024
dc.date.issued: 2024
dc.departamento: Lenguajes y Ciencias de la Computación
dc.description.abstract: Deep Learning models are experiencing a significant surge in popularity, expanding into various domains, including critical applications like object recognition in autonomous vehicles, where any failure could have fatal consequences. Given the importance of these models, it is crucial to address potential attacks that could impact their performance and jeopardize user safety. The specialized branch of Machine Learning dedicated to this study is known as Adversarial Machine Learning. In this study, we assess the effectiveness of Carlini & Wagner attacks. Additionally, we emphasize the importance of implementing proactive security measures to defend Deep Learning models. To enhance the model's resilience against potential threats, we employ a defense network called an Encoding Generative Adversarial Network (EGAN). This comprehensive analysis will not only provide valuable insights into the vulnerability of models to different attacks but also contribute to the development of more robust and advanced strategies to protect Deep Learning models in critical applications. These findings are essential for increasing the security and reliability of artificial intelligence in environments that demand exceptional accuracy and dependability.
dc.description.sponsorship: Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech.
dc.identifier.citation: Guillermo Tell-Gónzalez, Jose David Fernández-Rodríguez, Miguel A. Molina-Cabello, Rafaela Benítez-Rochel, Ezequiel López-Rubio: Mitigating Carlini & Wagner attacks with Encoding Generative Adversarial Network. CAEPIA 2024: 141-145.
dc.identifier.uri: https://hdl.handle.net/10630/31918
dc.language.iso: eng
dc.relation.eventdate: June 2024
dc.relation.eventplace: La Coruña, Spain
dc.relation.eventtitle: Conferencia de la Asociación Española para la Inteligencia Artificial (CAEPIA'24)
dc.rights.accessRights: open access
dc.subject: Artificial intelligence
dc.subject: Neural networks (Computer science)
dc.subject: Computer security
dc.subject.other: Convolutional neural networks
dc.subject.other: Generative Adversarial Network
dc.subject.other: Adversarial attack
dc.title: Mitigating Carlini & Wagner attacks with Encoding Generative Adversarial Network.
dc.title.alternative: Mitigating Carlini & Wagner attacks with EGAN
dc.type: conference output
dspace.entity.type: Publication
relation.isAuthorOfPublication: bd8d08dc-ffee-4da1-9656-28204211eb1a
relation.isAuthorOfPublication: 6280dc3f-86b0-49c7-9979-9d2e9e9f8e22
relation.isAuthorOfPublication: ae409266-06a3-4cd4-84e8-fb88d4976b3f
relation.isAuthorOfPublication.latestForDiscovery: bd8d08dc-ffee-4da1-9656-28204211eb1a

Files

Original bundle

Name: CAEPIA_IEEE___Mitigating_Carlini___Wagner_attacks_with_the_EGAN_network.pdf
Size: 516.92 KB
Format: Adobe Portable Document Format