© Radoslava Leseva Adams and Hristo Lesev 2016

Radoslava Leseva Adams and Hristo Lesev, Migrating to Swift from Flash and ActionScript, 10.1007/978-1-4842-1666-8_12

12. Working with the Camera and Images

Radoslava Leseva Adams(1) and Hristo Lesev(2)

(1)London, UK

(2)Kazanlak, Bulgaria

When cameras were added to mobile devices and image-editing apps started appearing, we were all transformed into artists overnight. Taking a photo of an interesting object, applying a filter, and immediately sharing the resulting creation is so easy, it would probably inspire Andy Warhol to recreate his Marilyn Diptych. On the following pages we will see how to take pictures with the camera and edit them with the power of Swift.

In this chapter you will do the following:

  • Learn about the Photos framework.

  • See how to programmatically take photos with the camera and browse the gallery.

  • Learn how to use the CoreImage framework to apply filters and edit photos.

  • Learn how to save an image back to the gallery.

  • Build an app that does all of the above.

When you are done, you will have an app that takes photos; lets users browse the gallery; changes the brightness, contrast, and saturation of selected photos; and saves the results back to the gallery.

Setting Up the App and Designing the UI

The app we are about to develop will let the user take a photo, make changes to it, and save the modified photo to the photo gallery.

Following is a list of the main features we will focus on:

  • Taking a photo by using the ready-made camera user interface (UI) that the iOS SDK provides through the UIImagePickerController API (application programming interface).

  • Choosing a photo from the gallery with the help of the default gallery browsing UI on iOS.

  • Editing the brightness, contrast, and saturation of the photo by applying a filter.

  • Saving the edited copy back to the photo gallery.

In Xcode create a Single View iOS application project (File ▸ New ▸ Project…, then iOS ▸ Application ▸ Single View Application) and name it ImageEditor.

Open Main.storyboard and position the UI elements to look like the right half of Figure 12-1. Add a Navigation Bar and drop a Bar Button Item on its right side, then add a Button, three Labels, three Sliders, and an Image view to the main view of the app. Constrain the layout to make it adaptive (see Chapter 5 for a reminder of how to use Auto Layout and Size Classes to help with that).

Figure 12-1. The app's user interface

Title the three labels “Brightness,” “Contrast,” and “Saturation.” The sliders next to each label will change the corresponding properties of the image we will load in the image view.

To add a camera icon to the bar button, select it and in the Attributes inspector set System Item to Camera.

Select the Image View and in the Attributes inspector set its Mode to Aspect Fit. This will ensure that the photo we choose to display will fit in the bounding box of the image view.
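If you ever need to configure this in code rather than in the storyboard, the Mode attribute maps to the view's contentMode property. Here is a one-line sketch, using the imageView outlet we will add shortly:

//Aspect Fit in code: the image scales to fit, preserving its aspect ratio
imageView.contentMode = UIViewContentMode.ScaleAspectFit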

Adjust the range of each slider in the Attributes inspector:

  • Select the brightness slider and set its Minimum Value to -1, Maximum Value to 1, and Current to 0.

  • Select the contrast slider and set its Current to 1.

  • Select the saturation slider and set its Current to 1.
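The same ranges can also be set in code, for example in viewDidLoad. The following is a minimal sketch, assuming hypothetical outlets named brightnessSlider, contrastSlider, and saturationSlider (we do not create such outlets in this chapter, so treat the names as placeholders):

//Hypothetical slider outlets; the values mirror the Attributes inspector:
brightnessSlider.minimumValue = -1
brightnessSlider.maximumValue = 1
brightnessSlider.value = 0
contrastSlider.value = 1      //Minimum and Maximum keep their defaults
saturationSlider.value = 1    //Minimum and Maximum keep their defaults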

Open ViewController.swift and add imports for the Photos and CoreImage frameworks. The default view controller for taking pictures with the camera and for browsing the photo gallery is actually defined in UIKit; the Photos framework provides programmatic access to the photo library, and the CoreImage framework gives us plenty of image-processing tools. Now get ready to add some actions.

Start with the button: create an action for its Touch Up Inside event and name it saveToGallery. For the Bar Button Item create an action named showChooseImageOptions. Add an action for the Value Changed event of each of the Sliders and call them brightnessValueChanged, contrastValueChanged, and saturationValueChanged, respectively. Then add an outlet for the Image View and call it imageView. Your code should look similar to that in Listing 12-1.

Listing 12-1. Adding an Outlet and Actions to the ViewController Class
import UIKit
import Photos
import CoreImage


class ViewController: UIViewController {

    @IBOutlet weak var imageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
    }


    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
    }


    @IBAction func brightnessValueChanged(sender: UISlider) {
        // TODO: Adjust the filter's brightness property.
    }


    @IBAction func contrastValueChanged(sender: UISlider) {
        // TODO: Adjust the filter's contrast property.
    }


    @IBAction func saturationValueChanged(sender: UISlider) {
        // TODO: Adjust the filter's saturation property.
    }


    @IBAction func saveToGallery(sender: AnyObject) {
        // TODO: Save a copy of the filtered image back to the photo gallery
    }


    @IBAction func showChooseImageOptions(sender: UIBarButtonItem) {
        // TODO: Ask the user from where he wants to pick an image
    }
}

Taking Pictures and Browsing the Gallery

Apple knows how important it is for developers to be able to use the device’s camera in their apps. To allow easy access to it without the bother of setting up a UI, the iOS SDK provides a ready-made view controller. It is called UIImagePickerController and is defined in UIKit.

UIImagePickerController lets the user choose the source of an image: the device’s built-in camera or the gallery. Once an image is chosen, it can be obtained programmatically via a method of the UIImagePickerControllerDelegate protocol. We will now see how to use the image picker controller in our app.

First we need to make our ViewController class conform to the UIImagePickerControllerDelegate protocol by adding the protocol to ViewController's inheritance list. In order to be used as a delegate for the image picker controller, ViewController will need to conform to one more protocol: UINavigationControllerDelegate.

Caution

This is a potential point of confusion. Note that conforming to UINavigationControllerDelegate is required in order to make ViewController a delegate for the image picker, not for the app's navigation bar.

We set ViewController as the delegate for the image picker controller in viewDidLoad. Then we implement one of the methods of UIImagePickerControllerDelegate, which receives a copy of the image that the user chose from the gallery or took with the camera: imagePickerController(_:didFinishPickingImage:editingInfo:). You can see the implementation in Listing 12-2.

Note

For more information about delegation and how it works, see Chapter 5. We will look at protocols and conformance in Chapter 21.

Listing 12-2. Implementing UIImagePickerControllerDelegate Protocol and Setting Up the Image Picker
class ViewController: UIViewController, UINavigationControllerDelegate,
    UIImagePickerControllerDelegate {


    //Create an instance of the image picking controller:
    let imagePicker = UIImagePickerController()


    override func viewDidLoad() {
        super.viewDidLoad()


        //Set the delegate, which will receive the chosen image:
        imagePicker.delegate = self
        //Hide any editing options from the default UI:
        imagePicker.allowsEditing = false
    }


    func imagePickerController(picker: UIImagePickerController,
        didFinishPickingImage image: UIImage, editingInfo: [String : AnyObject]?) {
        // TODO: Manipulate the image
    }


    //The rest of the code comes here
}

The imagePickerController(_:didFinishPickingImage:editingInfo:) method is called when the user has finished taking a photo with the camera or has picked one from the gallery. A copy of the image gets passed in the image parameter of the function. The parameter is of type UIImage. This means that the received image is ready to be used in the UI of our application.

To show the image picker UI on the screen we have to set it up first. Since we want to be able to obtain images from both the camera and the photo gallery, we will implement the setup logic in two separate functions. The functions are called openCameraVC and openGalleryVC, respectively. You can see the implementation in Listing 12-3.

Listing 12-3. Implementing openCameraVC and openGalleryVC Functions
func openCameraVC(action: UIAlertAction) -> Void {
    //Tell the image picker that we want to take a picture with the camera:
    imagePicker.sourceType = UIImagePickerControllerSourceType.Camera
    //Show imagePicker on the screen:
    presentViewController(imagePicker, animated: true,
        completion: nil)
}


func openGalleryVC(action: UIAlertAction) -> Void {
    //Tell the image picker that we want to browse the photo gallery:
    imagePicker.sourceType = UIImagePickerControllerSourceType.SavedPhotosAlbum
    //Show imagePicker on the screen:
    presentViewController(imagePicker, animated: true,
        completion: nil)
}

These two functions are very similar. First we set the sourceType property of the imagePicker to the location from which we want to obtain an image. Then we show the image picker view controller on the screen.

In order to let the user choose between taking a new photo and using one from the gallery, we will use an action sheet, which will show up when the user taps the camera button on the navigation bar. We will implement this in the showChooseImageOptions method we stubbed out earlier.

First we create an instance of the UIAlertController class, set its preferredStyle property to UIAlertControllerStyle.ActionSheet, and set up a message that the user will see.

Note

UIActionSheet used to be a popular choice for showing action sheets, but it was deprecated in iOS 8. Since then, UIAlertController has been the class to use for the same purpose. For more details on using alerts, see Chapter 6.

Next we create alert actions to handle the three options. The first two actions, cameraAction and galleryAction, will result in calls to openCameraVC and openGalleryVC, respectively. The third alert action is called closeAction and is used to dismiss the action sheet. To do that we set its style property to UIAlertActionStyle.Cancel and provide an empty action callback.

Before we present the actions on the action sheet, it is a good idea to check if all of the options are actually available on the device. For example, the iOS simulator does not support the camera, so in that case we can only offer the user the option of picking an image from the gallery. To perform the check we make a call to isSourceTypeAvailable, a static method of the UIImagePickerController class.

You can see the implementation of the showChooseImageOptions method in Listing 12-4.

Listing 12-4. Setting Up an Action Sheet with Three Options
@IBAction func showChooseImageOptions(sender: UIBarButtonItem) {
    //Create an action sheet with options:
    let actionSheet = UIAlertController(title: "", message: "Get an image",
        preferredStyle: UIAlertControllerStyle.ActionSheet)


    //Configure a new action for taking a photo with the camera:
    let cameraAction = UIAlertAction(title: "Use the camera", style:
        UIAlertActionStyle.Default, handler: openCameraVC)


    //Configure a new action for picking an image from the gallery:
    let galleryAction = UIAlertAction(title: "From the gallery", style:
        UIAlertActionStyle.Default, handler: openGalleryVC)


    //Configure an empty action to close the action sheet:
    let closeAction = UIAlertAction(title: "Done", style:
        UIAlertActionStyle.Cancel){ (action) -> Void in
    }


    //Check if the device has a camera:
    if UIImagePickerController.isSourceTypeAvailable(
        UIImagePickerControllerSourceType.Camera) {
        //Add the cameraAction to the action sheet
        actionSheet.addAction(cameraAction)
    }


    //Check if a gallery is available:
    if UIImagePickerController.isSourceTypeAvailable(
        UIImagePickerControllerSourceType.SavedPhotosAlbum) {
        //Add the galleryAction to the action sheet
        actionSheet.addAction(galleryAction)
    }


    //Add the close action to the sheet:
    actionSheet.addAction(closeAction)


    //For devices with bigger screens
    //anchor the popover presentation to the bar button item.
    //iOS needs this to calculate the screen position
    //of the action sheet controller.
    if let popoverController = actionSheet.popoverPresentationController {
        popoverController.barButtonItem = sender
    }


    //Show the action sheet on the screen:
    presentViewController(actionSheet, animated: true, completion: nil)
}

If I were you, I would already be eager to see the result of all the code we have written so far. In order to do that, we need to show an image on the screen: we will assign the UIImage that gets passed to imagePickerController(_:didFinishPickingImage:editingInfo:) to the image property of imageView (see Listing 12-5).

Listing 12-5. Showing an Image on the Screen
func imagePickerController(picker: UIImagePickerController, didFinishPickingImage image: UIImage,
    editingInfo: [String : AnyObject]?) {


    //Dismiss the image picker view controller:
    dismissViewControllerAnimated(true, completion: nil)


    //Show the image in the image view:
    imageView.image = image
}

Let us run the app and pick an image from the gallery, as shown in Figure 12-2.

Figure 12-2. Picking an image from the gallery

Or you can take a photo with the camera as shown in Figure 12-3.1

Figure 12-3. Taking a photo with the camera

In the next section we will see how we can edit the image.

Editing an Image by Applying a Filter

When it comes to editing an image, Apple has provided us with an arsenal of frameworks and tools for the task. Their vast number can empower you, as well as make you wonder which one is best for what task. Each of the available frameworks comes with pros and cons in different areas of image manipulation. You can choose to edit an image pixel by pixel, use high-level frameworks such as CoreGraphics and CoreImage, or opt for third-party libraries.

The best choice for what we want our app to do (manipulate an image’s brightness, contrast, and saturation) is the set of filters in the CoreImage framework. It offers outstanding performance and a great abstraction layer over the low-level image-editing operations, and it comes with a huge number of ready-to-use filters.

A filter in the CoreImage framework is treated as a black box: it takes input data, processes it, and spits out an output image. All built-in filters are of type CIFilter. Their input comes in the form of one or more images and parameters in key-value format. Based on the parameters, a filter can combine the input images and make alterations. When it is ready, it produces a result of type CIImage. We can create a filter by passing a filter name to CIFilter’s initializer.
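To make the black-box idea concrete before we apply it to our app, here is a minimal sketch of the workflow. The built-in CISepiaTone filter and the inputPhoto constant are purely illustrative assumptions, not part of our app:

//Create a filter by name, feed it an image and a parameter, read the output:
let sepia = CIFilter(name: "CISepiaTone")
sepia?.setValue(CIImage(image: inputPhoto), forKey: kCIInputImageKey)
sepia?.setValue(0.8, forKey: kCIInputIntensityKey)
let sepiaOutput = sepia?.valueForKey(kCIOutputImageKey) as? CIImage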

To control the brightness, the contrast, and the saturation of our input image we will use the CIColorControls filter. It has the following input parameters (a quick way to verify them at run time is sketched after this list):

  • inputImage—a CIImage instance, which contains the original image data.

  • inputSaturation—a floating point number, which controls saturation.

  • inputBrightness—a floating point number, which controls brightness.

  • inputContrast—a floating point number, which controls contrast.
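You do not have to memorize these keys: every CIFilter can describe its own parameters at run time. Here is a quick sketch, handy in a playground or as a temporary print statement:

//Ask the filter for its input keys and full attribute descriptions
//(types, default values, and allowed ranges):
let inspect = CIFilter(name: "CIColorControls")
print(inspect?.inputKeys)
print(inspect?.attributes)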

The time needed for filtering an image depends heavily on the image size. For large-resolution files applying a filter could take several seconds to complete, which is not ideal. We would like the user to see results in real time when they move a slider. To achieve that, we will implement a couple of tricks.

  • The first trick involves the use of a special rendering context that will offload the work from the CPU and will use the graphics processing unit (GPU) of the device to do the calculations.

  • The second trick is to make our app lazy and not actually apply a filter to the original image while it is being edited. In order to show the effects of a filter in real time, we will display a downsampled copy of the image as a preview and apply the filter to it. The heavy work of modifying the original file will be done only when the user decides to save the changes to the image. We will then show an activity indicator to let the user know that a potentially lengthy operation is in progress.

Let us start by declaring an instance of the CIFilter class and calling it colorFilter. Then we will need one UIImage instance for the original image and another one for the preview. We will also need a reference to the alert controller that will display the activity indicator. You can see all the declarations in Listing 12-6.

Listing 12-6. Declaring Instances
//The filter we will use to control the brightness, contrast, and saturation of the image
let colorFilter = CIFilter(name: "CIColorControls")


//The original full sized image
var originalImage:UIImage?
//A downscaled image for preview
var previewImage:UIImage?


//An alert controller with activity indicator
//to show that the app is busy saving an image
var savingImageVC:UIAlertController?


//A context to render the filter on the GPU
var filterContext:CIContext = CIContext(EAGLContext: EAGLContext(API: .OpenGLES2),
    options:[kCIContextWorkingColorSpace : NSNull()])

Note the use of the filterContext variable: by passing the EAGLContext(API: .OpenGLES2) as an argument we instruct the framework to perform calculations on the GPU instead of on the CPU. This is our first performance trick done.
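As an aside, the opposite is also possible: if you ever need to guarantee CPU rendering (for example, for Core Image work that must continue while the app is in the background, where GPU access is restricted), you can request a software renderer. A minimal sketch, not used in our app:

//A CPU-bound context; expect it to be noticeably slower for interactive use:
let cpuContext = CIContext(options: [kCIContextUseSoftwareRenderer: true])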

As a next step we will take the image that the user has chosen with the image picker, make a downsampled copy of it, and feed it to the filter.

Find the imagePickerController(_:didFinishPickingImage:editingInfo:) method. In it we will downsample the chosen image by drawing a scaled-down copy of it and converting it to a UIImage. We need to do the drawing inside a graphics context, which is managed by these two calls: UIGraphicsBeginImageContextWithOptions and UIGraphicsEndImageContext. To create a scaled-down copy of our image we call drawInRect and specify the size of the rectangle in which the image should fit. We use CGSizeApplyAffineTransform to calculate the desired size of the image, so that it matches the size of imageView, where we show the preview. Finally, we store the two copies of the image as follows: the original is kept in the originalImage variable and the previewImage variable stores the smaller copy. We pass that smaller copy as an input to the filter. To set an input parameter we use the filter’s setValue(_:forKey:) function and specify the name of the parameter and its value. You can see the implementation in Listing 12-7.

Listing 12-7. Create a Preview for the Selected Image and Set It as an Input to the Filter
func imagePickerController(picker: UIImagePickerController, didFinishPickingImage image: UIImage,
    editingInfo: [String : AnyObject]?) {


        //Dismiss the image picker view controller
        dismissViewControllerAnimated(true, completion: nil)


        //Calculate a scale factor for the preview image
        //scale the original image so that it fits inside the imageView frame
        let scaleFactor:CGFloat = CGFloat(imageView.frame.size.height / image.size.height)


        //Calculate the size of the preview image
        let size = CGSizeApplyAffineTransform(image.size,
            CGAffineTransformMakeScale(scaleFactor, scaleFactor))


        //Create a context to draw the image inside
        UIGraphicsBeginImageContextWithOptions(size, false, CGFloat(0.0))
        //Draw the image downsampled to the new size
        image.drawInRect(CGRect(origin: CGPointZero, size: size))
        //Get the downsampled image from the context
        let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
        //Destroy the context
        UIGraphicsEndImageContext()


        //Store the original image from the gallery
        originalImage = image


        //Store the downsampled image and use it for preview
        previewImage = scaledImage


        //Set the preview image as an input image to the filter
        colorFilter!.setValue(CIImage(image:previewImage!), forKey:kCIInputImageKey)


        applyFilter()
}

At the end of Listing 12-7 you might have noticed a call to a function we have not implemented yet: applyFilter. Its role is to get the output from the filter and display it on the screen. You can see the implementation in Listing 12-8.

Listing 12-8. Implementing the applyFilter Function
func applyFilter() {
        //Get the filtered (output) image
        if let ciiOutput = colorFilter?.valueForKey(kCIOutputImageKey) as? CIImage {
            //Create a CGImage using the provided context
            let cgiOutput = filterContext.createCGImage(ciiOutput, fromRect: ciiOutput.extent)


            //Create a UIImage from the CGImage
            let result = UIImage(CGImage: cgiOutput)


            //Show the output image in the image view
            imageView?.image = result
        }
}

Let us now set values for the brightness, contrast, and saturation properties. We will do this in the corresponding actions for each of the three sliders that we added at the beginning of the chapter (see Listing 12-9).

Listing 12-9. Setting the Brightness, the Saturation, and the Contrast Values
@IBAction func brightnessValueChanged(sender: UISlider) {
    //Change the brightness:
    colorFilter!.setValue(sender.value, forKey:"inputBrightness")
    //Apply the filter and show the result:
    applyFilter()
}


@IBAction func contrastValueChanged(sender: UISlider) {
    //Change the contrast:
    colorFilter!.setValue(sender.value, forKey:"inputContrast")
    //Apply the filter and show the result:
    applyFilter()
}


@IBAction func saturationValueChanged(sender: UISlider) {
    //Change the saturation:
    colorFilter!.setValue(sender.value, forKey:"inputSaturation")
    //Apply the filter and show the result:
    applyFilter()
}

Now run the app, pick an image from the gallery or take one with the camera, and move the sliders. You will see the filter applied in real time (Figure 12-4).

Figure 12-4. Editing an image

One last thing before the end of the chapter: I promised you that we would save the edited image back to the gallery. Let us do that now. In ViewController.swift find the saveToGallery method and add the following lines of code to it (Listing 12-10):

Listing 12-10. saveToGallery Implementation
@IBAction func saveToGallery(sender: AnyObject) {

        //Check if there is an image to save
        guard let origImg = originalImage else {
            //If there is no image, exit this function
            return
        }


        //Show the activity indicator
        savingImageVC = displaySavingImageActivity()


        //Set the original image as an input image to the filter
        colorFilter!.setValue(CIImage(image:origImg), forKey:kCIInputImageKey)


        applyFilter()

        //Save the filtered result to the gallery
        UIImageWriteToSavedPhotosAlbum(imageView.image!, self,
            #selector(ViewController.image(_:didFinishSavingWithError:contextInfo:)), nil)
}

When the user decides to save the image, the changes will be applied to the original file, instead of just to the preview copy of it. As this is a potentially lengthy operation, we first call displaySavingImageActivity to show an activity indicator and inform the user that the app is busy filtering a large image. You can see its implementation in Listing 12-12.

The function that saves the filtered image to the photo gallery is UIImageWriteToSavedPhotosAlbum. It can notify us whether saving was successful through a callback function, which is passed as a parameter. Listing 12-11 shows the implementation of the callback. The first thing we do when the callback executes is hide the activity view controller. Then we set the preview image as the input of the filter again, so that the user can safely continue playing with the sliders.

Listing 12-11. Finish Image Saving Handler
func image(image: UIImage, didFinishSavingWithError error: NSError?,
    contextInfo:UnsafePointer<Void>) {
        //Hide the activity indicator
        savingImageVC!.dismissViewControllerAnimated(true, completion: nil)


        //Set back the preview image as an input image to the filter
        colorFilter!.setValue(CIImage(image:previewImage!), forKey:kCIInputImageKey)
}

The last function to implement is the one that shows the activity indicator while the image is being saved. Inside it we create an instance of the UIAlertController class, in which we put a UIActivityIndicatorView and animate it (Listing 12-12).

Listing 12-12. Creating an Alert Controller with Activity Indicator
func displaySavingImageActivity() -> UIAlertController {
        //Create an alert controller
        let alertController = UIAlertController(title: "Saving image", message: nil,
            preferredStyle: .Alert)


        //Create an activity indicator
        let indicator = UIActivityIndicatorView(frame: alertController.view.bounds)
        indicator.color = UIColor.blackColor()
        indicator.autoresizingMask = [.FlexibleWidth, .FlexibleHeight]


        //Add the activity indicator as a subview of the alert controller's view
        alertController.view.addSubview(indicator)
        indicator.userInteractionEnabled = false
        indicator.startAnimating()


        //Show the alertController on the screen
        presentViewController(alertController, animated: true, completion: nil)


        return alertController
}

With this, our image-editing application is ready and we can call it a day. Congratulations.

Summary

In this chapter you saw how to use the built-in iOS functionality for taking photos, browsing the gallery, and editing images. You learned about the Photos and CoreImage frameworks and implemented a simple photo-editing app.

In the next chapter you will see how to build an app to persist data to files on the device and how to store and retrieve data remotely from the cloud.

Footnotes

1 Thanks to Mushana the cat for agreeing to appear in the shot!
