In this recipe, we will learn how to capture images from the camera. We'll use the CvPhotoCamera
class, which is a part of OpenCV, and apply our retro effect from the previous recipe.
For this recipe, you will need a real iOS device, because we're going to take photos. The source code can be found in the Recipe08_TakingPhotosFromCamera
folder in the code bundle that accompanies this book.
The following are the steps required to apply our filter to a photo taken with the camera app:

1. The ViewController interface should implement the CvPhotoCameraDelegate protocol and should have a member of the CvPhotoCamera* type.
2. Initialize the camera in the viewDidLoad method as usual.
3. Implement the applyEffect method.

Let's implement the described steps:
OpenCV provides two classes for working with a camera: CvPhotoCamera and CvVideoCamera. The first one is designed to get static images, and we'll get familiar with it in this recipe. We should add support for a certain protocol in our Controller class for working with a camera. In our case, we use the delegate of CvPhotoCamera. The ViewController class accesses the image through delegation from CvPhotoCameraDelegate:

```
@interface ViewController : UIViewController<CvPhotoCameraDelegate>
{
    CvPhotoCamera* photoCamera;
    UIImageView* resultView;
    RetroFilter::Parameters params;
}

@property (nonatomic, strong) CvPhotoCamera* photoCamera;
@property (nonatomic, strong) IBOutlet UIImageView* imageView;
@property (nonatomic, strong) IBOutlet UIToolbar* toolbar;
@property (nonatomic, weak) IBOutlet UIBarButtonItem* takePhotoButton;
@property (nonatomic, weak) IBOutlet UIBarButtonItem* startCaptureButton;

-(IBAction)takePhotoButtonPressed:(id)sender;
-(IBAction)startCaptureButtonPressed:(id)sender;
- (UIImage*)applyEffect:(UIImage*)image;

@end
```
As you can see, we've added a property of the CvPhotoCamera* type in order to work with a camera. We also add two buttons to the UI, so we add two corresponding properties and two methods with the IBAction macro. As before, you should connect these properties and actions to the corresponding GUI elements using the Assistant editor and the storyboard file.

In the viewDidLoad method, we should initialize the camera parameters:

```
photoCamera = [[CvPhotoCamera alloc] initWithParentView:imageView];
photoCamera.delegate = self;
photoCamera.defaultAVCaptureDevicePosition = AVCaptureDevicePositionFront;
photoCamera.defaultAVCaptureSessionPreset = AVCaptureSessionPresetPhoto;
photoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationPortrait;
```
To start the capture process, we implement the corresponding button action:

```
-(IBAction)startCaptureButtonPressed:(id)sender
{
    [photoCamera start];
    [self.view addSubview:imageView];
    [takePhotoButton setEnabled:YES];
    [startCaptureButton setEnabled:NO];
}
```
To support the CvPhotoCameraDelegate protocol, we should implement two methods inside the ViewController class:

```
- (void)photoCamera:(CvPhotoCamera*)camera capturedImage:(UIImage*)image
{
    [camera stop];
    resultView = [[UIImageView alloc] initWithFrame:imageView.bounds];
    UIImage* result = [self applyEffect:image];
    [resultView setImage:result];
    [self.view addSubview:resultView];
    [takePhotoButton setEnabled:NO];
    [startCaptureButton setEnabled:YES];
}

- (void)photoCameraCancel:(CvPhotoCamera*)camera
{
}
```
The Take photo button action simply calls the takePicture method:

```
-(IBAction)takePhotoButtonPressed:(id)sender
{
    [photoCamera takePicture];
}
```
Finally, we add the applyEffect function that wraps the call to the RetroFilter class on the Objective-C side, as discussed in the previous recipe.

In order to work with a camera on an iOS device using OpenCV classes, you need to initialize the CvPhotoCamera object first and set its parameters. This is done in the viewDidLoad method, which is called once when the View is loaded onscreen. In the initialization code, we should specify which GUI component will be used to preview the camera capture. In our case, we'll use UIImageView, as we did before.
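The applyEffect wrapper itself is a thin Objective-C++ bridge between UIKit and the C++ filter. The following is a minimal sketch, not the book's exact code: the applyToPhoto method name and the frameSize parameter field are assumptions based on the previous recipe's RetroFilter class, while UIImageToMat and MatToUIImage are OpenCV's standard iOS conversion helpers:

```
// Sketch of the Objective-C++ wrapper around the C++ RetroFilter class.
// RetroFilter, its Parameters struct, and applyToPhoto are assumed from
// the previous recipe and may differ in your code.
- (UIImage*)applyEffect:(UIImage*)image
{
    cv::Mat frame;
    UIImageToMat(image, frame);        // convert UIImage to cv::Mat

    params.frameSize = frame.size();   // assumed field: filter needs the frame size
    RetroFilter retroFilter(params);

    cv::Mat finalFrame;
    retroFilter.applyToPhoto(frame, finalFrame);  // assumed method name

    return MatToUIImage(finalFrame);   // convert the result back to UIImage
}
```

Note that a source file calling C++ code this way must be compiled as Objective-C++, so it should have the .mm extension.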
Our main UIImageView component will be used to show the video preview from the camera and help users take a good photo. Because our app also needs to display the final result on the screen, we create another UIImageView to display the processed image. To do this, we can create the second component directly from code:

```
resultView = [[UIImageView alloc] initWithFrame:imageView.bounds];
UIImage* result = [self applyEffect:image];
[resultView setImage:result];
[self.view addSubview:resultView];
```
In this code, we create a UIImageView component with the same size as the manually added imageView property. After that, we use the addSubview method of the main View to add the newly created component to our GUI. If we want to see the camera preview again, we should use the same method for the imageView property:

```
[self.view addSubview:imageView];
```
There are three important camera parameters: defaultAVCaptureDevicePosition, defaultAVCaptureSessionPreset, and defaultAVCaptureVideoOrientation. The first one chooses between the front and back cameras of the device. The second one sets the image resolution. The third one specifies the device orientation during the capturing process.
There are many possible values for the resolution; some of them are as follows:

- AVCaptureSessionPresetHigh
- AVCaptureSessionPresetMedium
- AVCaptureSessionPresetLow
- AVCaptureSessionPreset352x288
- AVCaptureSessionPreset640x480
For capturing static, high-resolution images, we recommend using the value of AVCaptureSessionPresetPhoto
. The resulting resolution depends on your device, but it will be the largest possible resolution.
In order to start the capture process, we should call the start
method of the camera object. In our sample, we'll do it in the button's action. After clicking on the button, the user will see the camera image on the screen and will be able to click on the Take photo button that calls the takePicture
method.
The CvPhotoCameraDelegate protocol contains only one important method, capturedImage. It is executed when somebody calls the takePicture method and allows you to get the current frame as the function argument.
If you want to stop the camera capturing process, you should call the stop
method.