Abstract

    Open Access | Perspective Study | Article ID: TCSIT-5-117

    A useful taxonomy for adversarial robustness of Neural Networks

    Leslie N Smith*

    Adversarial attacks and defenses are currently active areas of research for the deep learning community. A recent review paper divided defense approaches into three categories: gradient masking, robust optimization, and adversarial example detection. We divide gradient masking and robust optimization differently: (1) increasing the intra-class compactness and inter-class separation of feature vectors improves adversarial robustness, and (2) marginalizing or removing non-robust image features also improves adversarial robustness. Reframing these topics this way offers a fresh perspective, yielding insight into the underlying factors that enable training more robust networks and helping to inspire novel solutions. In addition, several papers in the adversarial defense literature claim that adversarial robustness has a cost, or that there is a trade-off between robustness and accuracy; under the proposed taxonomy, we hypothesize that this trade-off is not universal. We follow this with several challenges to the deep learning research community that build on the connections and insights in this paper.
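
    To make the two categories concrete, the sketch below shows one common instance of each: a center-loss regularizer (in the spirit of Wen et al.'s center loss) that pulls same-class feature vectors together, illustrating category (1), and an input bit-depth reduction (in the spirit of feature squeezing) that discards low-order pixel variation, illustrating category (2). This is a minimal sketch assuming PyTorch; the class and function names are illustrative, not taken from the paper.

    import torch
    import torch.nn as nn

    class CenterLoss(nn.Module):
        """Penalizes the distance between each feature vector and its class
        center, encouraging intra-class compactness; cross-entropy on the
        logits supplies the complementary inter-class separation."""

        def __init__(self, num_classes: int, feat_dim: int):
            super().__init__()
            # One learnable center per class, updated by gradient descent.
            self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

        def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
            # Select the center corresponding to each example's label.
            batch_centers = self.centers[labels]          # (batch, feat_dim)
            # Mean squared distance of features to their class centers.
            return ((features - batch_centers) ** 2).sum(dim=1).mean()

    def reduce_bit_depth(images: torch.Tensor, bits: int = 4) -> torch.Tensor:
        """Quantizes pixel values in [0, 1] to 2**bits levels, suppressing
        the fine-grained variation that non-robust features can exploit."""
        levels = 2 ** bits - 1
        return torch.round(images * levels) / levels

    # Illustrative usage: combine the two losses with a weighting factor, e.g.
    #   total_loss = ce_loss(logits, labels) + lambda_c * center_loss(features, labels)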

    Published on: Aug 5, 2020 | Pages: 37-41

    DOI: 10.17352/tcsit.000017