Part IV. Defense

Building on the previous chapters, this section examines approaches to defending DNNs against adversarial input in real-world systems.

Chapter 9 begins by examining how we model the adversarial threat, a step that is critical to evaluating any defense. The chapter then looks at how the robustness of neural networks can be evaluated, both empirically and theoretically.

Chapter 10 considers some of the most recent thinking on how to strengthen DNN algorithms against adversarial input, as well as open source projects committed to developing better defenses against adversarial attacks. We'll consider DNN assurance from a broader perspective, examining whether it's possible to establish the sets of inputs over which a DNN can safely operate. The chapter includes code examples illustrating defenses and defense evaluation, building on the examples first presented in Chapter 6. We'll also take a more holistic approach to Information Assurance (IA), considering the effect that the broader processing chain and organizational procedures might have on reducing the risk of adversarial input.

Finally, Chapter 11 looks at how DNNs are likely to evolve in the coming years and the effect that this evolution may have on how easily they can be fooled.
