How it works...

In step 1, we loaded a sample image and resized its height and width to height_resized and width_resized, respectively. In step 2, we localized a face in the image using the image_detect_faces() function from the image.libfacedetection package. It returns the x and y coordinates of the top-left corner of the detected face, along with its width and height. Then, in step 3, we drew a bounding box around the face; the rect() function draws a rectangle onto an image using pixel coordinates. In these first three steps, we implemented face localization in an image. In step 4, we utilized this face localization technique to prepare the dataset that we will use to train our face recognition classifier.
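The following is a minimal sketch of steps 1 to 3 using the magick and image.libfacedetection packages; the image file name, the resize dimensions, and the plotting details are illustrative assumptions rather than the recipe's exact code:

library(magick)
library(image.libfacedetection)

height_resized <- 500                      # assumed resize dimensions
width_resized  <- 500

img <- image_read("sample_face.jpg")       # hypothetical input image
img <- image_resize(img, paste0(width_resized, "x", height_resized, "!"))

faces <- image_detect_faces(img)           # step 2: locate faces
box <- faces$detections[1, ]               # top-left x, y plus width and height

# Step 3: draw a bounding box with rect(); plot() places the y origin at the
# bottom, so the detection's top-based y coordinates are flipped here
h_img <- image_info(img)$height
plot(as.raster(img))
rect(xleft   = box$x,
     xright  = box$x + box$width,
     ybottom = h_img - (box$y + box$height),
     ytop    = h_img - box$y,
     border  = "red", lwd = 2)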

In step 5, we built a data generator with data augmentation. In step 6, we loaded the FaceNet model and inspected its input and output layers. In step 7, we built our face recognition model: we added a dense layer with 128 units, followed by a final layer of three units with a softmax activation function, since we had three class labels. We then defined the loss function and evaluation metric for the model, compiled it, and trained it. In step 8, we tested our face recognition system on a sample image.
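A rough sketch of steps 5 to 7 with the keras R package follows; the FaceNet file name, the augmentation settings, the 160x160 input size, the ReLU activation on the 128-unit layer, and the loss/metric choices are assumptions for illustration, not necessarily the recipe's exact values:

library(keras)

# Step 5: data generator with augmentation over the cropped face images
train_datagen <- image_data_generator(rescale = 1/255,
                                      rotation_range = 20,
                                      horizontal_flip = TRUE)
train_generator <- flow_images_from_directory("faces/train",   # hypothetical path
                                              generator   = train_datagen,
                                              target_size = c(160, 160),
                                              class_mode  = "categorical")

# Step 6: load the pretrained FaceNet model and inspect its layers
facenet <- load_model_hdf5("facenet_keras.h5")                  # assumed file name
summary(facenet)

# Step 7: freeze the FaceNet embedding and add the classification head
freeze_weights(facenet)
output <- facenet$output %>%
  layer_dense(units = 128, activation = "relu") %>%
  layer_dense(units = 3, activation = "softmax")                # three class labels
model <- keras_model(inputs = facenet$input, outputs = output)

model %>% compile(loss = "categorical_crossentropy",
                  optimizer = optimizer_adam(),
                  metrics = "accuracy")

model %>% fit_generator(train_generator, steps_per_epoch = 10, epochs = 5)

For step 8, the trained model can be applied to a new face crop (resized to the FaceNet input size and rescaled in the same way as the training data) with predict().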
