Part I. An Introduction to Fooling AI

This section provides an introduction to deep neural networks (DNNs), exploring how they can be tricked by adversarial input and why someone might want to trick them.

To begin, Chapter 1 takes a look at the concept of adversarial input and a little of its history. We’ll peek at some of the fascinating research that has provided insights into DNNs and how they can be fooled. Chapter 2 then goes on to explore the potential impact of adversarial input, examining real-world motivations for fooling the AI that underpins systems such as social media sites, voice-controlled audio systems, and autonomous vehicles.

The final chapters in this section give an introduction to DNNs for image, audio, and video, for those of you who are unfamiliar with this area or would like a refresher. They will provide the necessary foundation for understanding the concepts in the remainder of the book. Chapter 3 explains the basic principles of machine and deep learning. Chapter 4 explains typical ways in which these principles are extended and applied to understand image, audio, and video. Both of these chapters finish with code examples that will be revisited later in the book when we examine how adversarial input is created and defended against.

By the end of this section, you will have an understanding of adversarial examples, the motivations for creating them, and the systems at risk of attack. Part II will then examine how adversarial input is created to trick image and audio DNNs.
