Part II. Generating Adversarial Input

Part I introduced adversarial input and its motivations, along with the fundamentals of deep learning as applied to image and audio data. This part investigates the mathematical and algorithmic techniques used to generate adversarial data.

We’ll begin in Chapter 5 with a conceptual explanation of the ideas that underpin adversarial input. This chapter considers why DNNs can be fooled by small changes to input that do not affect human understanding of an image or audio clip. We’ll look at how the amount of change required to render an input adversarial can be measured mathematically, and at how aspects of human perception affect the ways in which images and audio can be altered without those changes being noticed.
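As a taste of the measurements Chapter 5 introduces, here is a minimal sketch (in Python with NumPy; the arrays are hypothetical placeholders, not code from later chapters) of how the difference between an original image and its adversarial counterpart might be quantified using the commonly used Lp norms:

import numpy as np

# Hypothetical data: an original image and a slightly perturbed copy,
# both flattened arrays of pixel values in the range [0, 1].
original = np.random.rand(28 * 28)
perturbation = np.random.uniform(-0.01, 0.01, size=original.shape)
adversarial = np.clip(original + perturbation, 0.0, 1.0)

diff = adversarial - original

l0 = np.count_nonzero(diff)      # L0: how many pixels changed at all
l2 = np.linalg.norm(diff)        # L2: Euclidean distance between images
linf = np.max(np.abs(diff))      # L-infinity: largest single-pixel change

print(f"L0: {l0}, L2: {l2:.4f}, L-inf: {linf:.4f}")

Each norm captures a different intuition about what makes a change “small”: a perturbation can touch few pixels (low L0), be small overall (low L2), or be faint everywhere (low L-infinity).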

Chapter 6 then goes into greater depth, explaining specific computational methods for generating adversarial input based on research in this area. We’ll explore the mathematics behind some of these methods and the differences between approaches. The chapter also provides code examples to illustrate the methods in action, based on the neural networks introduced in Chapters 3 and 4.
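To give a flavor of what such a method looks like in code, the following is a minimal sketch of the fast gradient sign method (FGSM), one family of techniques from this research. It assumes a trained TensorFlow 2 Keras classifier of the kind built in Chapters 3 and 4; the model, image, and label here are placeholders rather than the book’s own examples:

import tensorflow as tf

def fgsm_perturb(model, image, label, epsilon=0.01):
    # Nudge every pixel by epsilon in the direction that increases
    # the model's loss for the true label.
    image = tf.convert_to_tensor(image[tf.newaxis, ...])  # add batch dimension
    with tf.GradientTape() as tape:
        tape.watch(image)
        prediction = model(image)
        loss = tf.keras.losses.sparse_categorical_crossentropy(
            [label], prediction)
    gradient = tape.gradient(loss, image)              # d(loss)/d(pixels)
    adversarial = image + epsilon * tf.sign(gradient)
    return tf.clip_by_value(adversarial, 0.0, 1.0)[0]  # keep valid pixel range

A larger epsilon makes misclassification more likely but also makes the perturbation more visible, which is exactly the trade-off the distance measures described above are designed to capture.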

By the end of this part, you’ll understand why DNNs can be fooled and the principles and methods required for this trickery. This will form the basis for exploring real-world threats in Part III.
