Generating images that can fool a neural network using an adversarial attack

To understand how to perform an adversarial attack on an image, let's first review how regular predictions are made with a transfer-learning model. We will then figure out how to tweak the input image so that its predicted class changes completely, even though the image itself barely changes.
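The core idea can be sketched without a full network. A minimal, NumPy-only illustration (assuming a toy logistic-regression "model" in place of a pretrained network, with made-up weights; the gradient-sign step is the fast gradient sign method, FGSM): we compute the gradient of the loss with respect to the *input pixels*, then nudge every pixel a small amount in the direction that increases the loss. The prediction flips even though no pixel moves by more than a small epsilon.

```python
import numpy as np

# Toy stand-in "model": logistic regression over a 64-pixel image.
# The weights are illustrative (random), not from any real network.
rng = np.random.default_rng(0)
w = rng.normal(size=64)
w -= w.mean()  # center the weights so the example is well behaved

def predict(x):
    """Probability that image x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

# Start from an input the model classifies as class 1.
x = 0.5 + 0.05 * np.sign(w)
p_before = predict(x)

# FGSM-style step: the gradient of the logistic loss (true label = 1)
# with respect to the input is (p - 1) * w. Move each pixel by
# epsilon in the direction of the sign of that gradient.
epsilon = 0.1
grad = (predict(x) - 1.0) * w
x_adv = np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)
p_after = predict(x_adv)

print(f"class-1 probability before: {p_before:.3f}")
print(f"class-1 probability after:  {p_after:.3f}")
print(f"largest pixel change:       {np.abs(x_adv - x).max():.3f}")
```

Every pixel moves by at most 0.1 on a [0, 1] scale, yet the predicted class flips; a real attack on a deep network works the same way, with the input gradient obtained by backpropagation instead of a closed-form expression.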
