12. Capture

The PhoneGap Capture API allows an application to capture audio, video, and image files using the appropriate built-in application on a mobile device. The device’s default camera application is used to capture pictures and videos, while the device’s default voice recorder application is used for capturing audio clips.

PhoneGap’s implementation of the Capture API is based on the W3C Media Capture API (www.w3.org/TR/media-capture-api). For whatever reason, though, the PhoneGap team has omitted support for many of the options supported by the W3C API. So, as you’ll see later, while the API is based upon a standard, with PhoneGap many of the API options just don’t work or haven’t even been implemented.


Camera vs. Capture

You may be asking yourself why PhoneGap implemented both the Camera and Capture APIs, considering that the two overlap: both can capture images. Essentially, the Camera API was implemented before PhoneGap adopted the W3C Capture API, and PhoneGap likely kept the Camera API for backward compatibility with existing applications.

While both APIs capture images, they operate in different ways. The Camera API captures only images but supports alternate sources for the image files, while the Capture API only lets you interact directly with the device’s capture applications but allows multiple captures with a single API call.
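
To make the difference concrete, here is a minimal sketch comparing the two calls; the callback names and option values are placeholders for illustration, not part of either API:

//Camera API: returns a single image and can pull it from an
//alternate source such as the photo library
navigator.camera.getPicture(onPhotoSuccess, onPhotoError,
  {quality: 50, sourceType: Camera.PictureSourceType.PHOTOLIBRARY});

//Capture API: launches the device's camera application and can
//return several images from a single call
navigator.device.capture.captureImage(onCaptureSuccess,
  onCaptureError, {limit: 3});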


Using the Capture API

As with most PhoneGap APIs, the Capture API is accessed through a call to one of the capture methods while passing in both success and failure functions plus an options object that controls aspects of the capture event. Each of the parameters passed to the capture functions will be explained later in the chapter.

To capture one or more audio files, an application would make a call similar to the following:

navigator.device.capture.captureAudio(onCaptureSuccess,
  onCaptureError, captureOptions);

To capture one or more image files, an application would use the following:

navigator.device.capture.captureImage(onCaptureSuccess,
  onCaptureError, captureOptions);

To capture one or more video files, an application would use the following:

navigator.device.capture.captureVideo(onCaptureSuccess,
  onCaptureError, captureOptions);

In these examples, the onCaptureSuccess function is called after the capture application (either the device’s camera application or audio recorder) has finished capturing the appropriate media type. When the function is called, the API passes in an array containing information about the media files that were captured by the call to the Capture API. The function should then loop through the array and process each of the media files generated during the capture, as shown in the following example:

function onCaptureSuccess(fileList) {
  var len, i;
  //See how many files are listed in the array
  len = fileList.length;
  //Make sure we had a result; it should always be
  //greater than 0, but you never know!
  if(len > 0) {
    //Media files were captured, so let's process them
    for(i = 0; i < len; i += 1) {
      //=========================================
      //Do something with the returned file list
      //=========================================
    }
  } else {
    //This will probably never execute
    alert("Error: No files returned.");
  }
}

The file list array passed to the function supports the following properties:

name: The short name for the file (a file name plus extension)

fullPath: The full file path for the file (a file path, file name, and extension)

type: The file’s Multipurpose Internet Mail Extensions (MIME) type

lastModifiedDate: The date and time the file was last modified

size: The file’s size in bytes

An application can use these properties to locate and manipulate each file returned from the capture event, typically rendering the files within the application’s UI or uploading them to a server for processing or storage.
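
For the upload case, a captured file could be sent to a server with PhoneGap’s FileTransfer object. The following is a minimal sketch; the upload URL and form field name are hypothetical:

function uploadCapturedFile(mediaFile) {
  //Describe the file being sent to the server
  var options = new FileUploadOptions();
  options.fileKey = "file";
  options.fileName = mediaFile.name;
  options.mimeType = mediaFile.type;
  //Send the file; the server URL is a placeholder
  var ft = new FileTransfer();
  ft.upload(mediaFile.fullPath, "http://your.server.com/upload",
    function(result) {
      console.log("Uploaded " + result.bytesSent + " bytes");
    },
    function(error) {
      console.log("Upload error: " + error.code);
    },
    options);
}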

In the following example, the file list is parsed, and the application’s UI is updated to include an ordered list of file short names that can be clicked to open the file.

function onCaptureSuccess(fileList) {
  var i, len, htmlStr, res;
  len = fileList.length;
  if(len > 0) {
    //Get a handle to the results area of the screen/page
    res = document.getElementById("captureResults");
    htmlStr = '<p>Results:</p><ol>';
    for(i = 0; i < len; i += 1) {
      htmlStr += '<li><a href="file:/' +
        fileList[i].fullPath + '">' + fileList[i].name +
        '</a></li>';
    }
    htmlStr += '</ol>';
    //Set the results content
    res.innerHTML = htmlStr;
  }
}

There’s a function an application can call to obtain information about a media file:

mediaFile.getFormatData(successCallback, errorCallback);

Information about the media file is obtained in the successCallback function through the MediaFileData object passed to the function. Unfortunately, as you look at the PhoneGap API documentation, there is very limited support for this capability today.
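
If you do want to try it, the call might look like the following sketch; which MediaFileData properties (duration, width, height, codecs, and bitrate) actually come back populated depends on the platform:

function showFormatData(mediaFile) {
  mediaFile.getFormatData(
    function(data) {
      //On many platforms some or all of these values are empty or zero
      console.log("Duration: " + data.duration);
      console.log("Dimensions: " + data.width + " x " + data.height);
      console.log("Codecs: " + data.codecs);
      console.log("Bitrate: " + data.bitrate);
    },
    function() {
      console.log("Unable to retrieve format data.");
    });
}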

Calls to the Capture API will create media files for each capture event. These files will be left wherever the capture application places them before passing the file list back to the PhoneGap application that called the Capture API. When your application is done processing the captured files, you may want to delete the files to save space and keep the user from seeing media files that are no longer useful.
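
One way to clean up is to resolve each returned path with the File API and remove the file. The following is a sketch under the assumption that the fullPath value can be resolved as shown; on some platforms the path may already include the file:// scheme:

function deleteCapturedFile(mediaFile) {
  //Resolve the file path to a FileEntry, then delete the file
  window.resolveLocalFileSystemURI("file://" + mediaFile.fullPath,
    function(fileEntry) {
      fileEntry.remove(
        function() {
          console.log("Deleted " + mediaFile.name);
        },
        function(error) {
          console.log("Delete failed: " + error.code);
        });
    },
    function(error) {
      console.log("Could not resolve " + mediaFile.fullPath);
    });
}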

The onCaptureError callback function is executed whenever there is an error with a particular capture event. The function is passed an error object that can be queried to determine the cause of the error. The Capture API includes several constants that can be evaluated against to determine the specifics of the error:

CaptureError.CAPTURE_INTERNAL_ERR: The camera or microphone failed to capture an image or sound.

CaptureError.CAPTURE_APPLICATION_BUSY: The camera or audio capture application is busy serving another capture request.

CaptureError.CAPTURE_INVALID_ARGUMENT: The application made an invalid use of the API (an invalid or missing parameter, for example).

CaptureError.CAPTURE_NO_MEDIA_FILES: The application user exited the camera or audio capture application before completing a capture.

CaptureError.CAPTURE_NOT_SUPPORTED: The specified capture operation is not supported.

The following is an example of an onCaptureError callback function that uses these properties:

function onCaptureError(e) {
  var msgText;
  //Build a message string based on the
  //error code returned by the API
  switch(e.code) {
    case CaptureError.CAPTURE_INTERNAL_ERR:
      msgText = "Internal error; the camera or microphone " +
        "failed to capture an image or sound.";
      break;
    case CaptureError.CAPTURE_APPLICATION_BUSY:
      msgText = "The camera or audio capture application " +
        "is currently serving another capture request.";
      break;
    case CaptureError.CAPTURE_INVALID_ARGUMENT:
      msgText = "Invalid parameter passed to the API.";
      break;
    case CaptureError.CAPTURE_NO_MEDIA_FILES:
      msgText = "User likely canceled the capture process.";
      break;
    case CaptureError.CAPTURE_NOT_SUPPORTED:
      msgText = "The requested operation is not supported " +
        "on this device.";
      break;
    default:
      //Create a generic response in case none of the
      //cases above matched the error code
      msgText = "Unknown Error (" + e.code + ")";
  }
  //Now tell the user what happened
  console.log(msgText);
  alert(msgText);
}

In my work with the Capture API, I discovered that iOS applications returned the correct error object when the user canceled a capture, but Android devices I tested on did not. The Android devices regularly returned an unknown error and triggered the default portion of the switch statement shown in the example. For that reason, an application might not really be able to tell what happened when a capture failed.

Configuring Capture Options

Each of the supported capture methods accepts an optional captureOptions object that controls aspects of how the capture is performed. The available properties supported by captureOptions are as follows:

duration

limit

mode

Not all options are supported across all capture types. Table 12-1 illustrates where each option applies to the different capture types.

Table 12-1 Capture Options

Option      Audio Capture   Image Capture   Video Capture
duration    Yes             No              Yes
limit       Yes             Yes             Yes
mode        Yes             Yes             Yes

A valid captureOptions object would be defined using the following code:

var captureOptions = {duration: 5, limit: 3};

This example creates a captureOptions object that configures a maximum capture recording duration of five seconds and a maximum of three captures during the capture event.

duration

The duration property applies only to audio and video capture and controls the maximum length (in seconds) of a particular media capture. When this option is used in an application, the user can record media clips shorter than, but no longer than, the number of seconds set for this property.

Looking at the current PhoneGap API documentation, the duration captureOption is not supported on Android and BlackBerry, and it is supported only on iOS for audio capture. Because of this limitation, it’s probably best not to use this option in your PhoneGap applications.
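
If you do want to use duration where it is known to work, one approach is to add it conditionally. The following sketch assumes the Device API’s device.platform value is available and uses an illustrative ten-second limit:

//Only include duration where it currently has an effect
//(iOS audio capture); omit the option everywhere else
var captureOptions = {limit: 1};
if (device.platform === "iPhone" || device.platform === "iOS") {
  captureOptions.duration = 10;
}
navigator.device.capture.captureAudio(onCaptureSuccess,
  onCaptureError, captureOptions);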

limit

The inappropriately named limit captureOption defines the number of captures performed with the call to the particular capture method. It would make more sense if they called this quantity, but since it’s part of the W3C specification, they had to support the options as defined.

According to the documentation, the limit value is supposed to define a maximum number of captures performed, indicating that the application user could perform fewer than the maximum. In my testing, it doesn’t work that way; if a user takes fewer than limit captures, the onCaptureError function is called, indicating that the capture process has been aborted.

If this option is used in an application, a value of 1 or greater must be defined.

mode

The mode property is supposed to define the recording mode for each of the supported capture types. When a device supports multiple file formats for a particular capture type (such as JPEG and PNG for image captures, for example), the mode property is supposed to allow you to specify which is used for a capture event. Unfortunately, this particular feature has issues on PhoneGap.

For an application to use this feature, it would need to be able to determine programmatically what modes are supported on the device before making the call to the Capture API. To make things easier for the developer, PhoneGap even includes the following properties, which are supposed to return the list of supported modes:

supportedAudioModes

supportedImageModes

supportedVideoModes

Unfortunately, none of the properties is populated by recent versions of the PhoneGap framework because the information is not exposed through an API on most mobile device platforms.
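
If you still want to probe for them, a defensive check such as the following sketch costs little; on current releases the arrays will usually be empty, so the else branch is what you should expect to see:

var modes = navigator.device.capture.supportedImageModes;
if (modes && modes.length > 0) {
  for (var i = 0; i < modes.length; i += 1) {
    //Each entry is a ConfigurationData object with type,
    //height, and width properties
    console.log("Supported image mode: " + modes[i].type);
  }
} else {
  console.log("No supported image mode information available.");
}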

Capture at Work

Now that we’ve worked through all of the options for the Capture API, it’s time to show a complete example of how to use the API and to illustrate how capture actually works on mobile devices. To highlight the capabilities of the Capture API, I created Example 12-1, the application shown in Figure 12-1. It essentially provides a single interface that can be used to demonstrate most of the options supported by the Capture API. Because of the limitations of the mode capture option described previously, the application provides an interface only for the duration and limit options.

Figure 12-1 Capture API demo running on an iPhone

The application uses jQuery Mobile (www.jquerymobile.com) to provide a simple but elegant interface. It uses the default theme to create a simple header bar, the standard iOS buttons, and a cleaner interface for the slider controls used in the application.

An application user selects a capture type using the picker control at the top of the form and then makes selections for limit and duration; then the user clicks the Capture button to begin capturing media files. At this point, the button’s onClick event calls the doCapture function to start the capture process.

The doCapture function retrieves the current settings from the capture type picker and the number of items and duration fields, and it then makes a call to the appropriate capture method, passing in a captureOptions object to tell the method what to do.

The application uses the onCaptureSuccess and onCaptureError functions highlighted earlier in the chapter to update the UI with capture results and to let the user know when problems occur. Example 12-1 shows the complete listing.


Example 12-1

<!DOCTYPE html>
<html>
   <head>
     <title>Example 12-1</title>
     <meta name="viewport" content="width=device-width,
       height=device-height initial-scale=1.0,
       maximum-scale=1.0, user-scalable=no;" />
     <meta http-equiv="Content-type" content="text/html;
       charset=utf-8">
     <link rel="stylesheet" href="jquery.mobile1.0b3.min.css" />
     <script type="text/javascript" charset="utf-8"
       src="jquery1.6.4.min.js"></script>
     <script type="text/javascript" charset="utf-8"
       src="jquery.mobile1.0b3.min.js"></script>
     <script type="text/javascript" charset="utf-8"
       src="phonegap.js"></script>
     <script type="text/javascript" charset="utf-8">
       var res;

       function onBodyLoad() {
         //Add the PhoneGap deviceready event listener
         document.addEventListener("deviceready", onDeviceReady,
           false);
       }

       function onDeviceReady() {
         //Get a handle to the results area of the page
         //we'll need it later
         res = document.getElementById("captureResults");
       }

       function doCapture() {
         //Clear out any previous results
         res.innerHTML = "Initiating capture...";
         //Get some values from the page
         var numItems =
           document.getElementById("numItems").value;
         var capDur =
           document.getElementById("duration").value;
         //Figure out which option is selected
         var captureType =
           document.getElementById("captureType").selectedIndex;
         switch(captureType) {
           case 0:
             //Capture Audio
             navigator.device.capture.captureAudio(
               onCaptureSuccess, onCaptureError,
               {duration: capDur, limit: numItems});
             break;
           case 1:
             //Capture Image
             navigator.device.capture.captureImage(
               onCaptureSuccess, onCaptureError,
               {limit: numItems});
             break;
           case 2:
             //Capture Video
             navigator.device.capture.captureVideo(
               onCaptureSuccess, onCaptureError,
               {duration: capDur, limit: numItems});
             break;
        }
       }

       function onCaptureSuccess(fileList) {
         var i, len, htmlStr;
         len = fileList.length;
         //Make sure we had a result; it should always be
         //greater than 0, but you never know.
         if(len > 0) {
           htmlStr = "<p>Results:</p><ol>";
           for(i = 0; i < len; i += 1) {
             //alert(fileList[i].fullPath);
             htmlStr += '<li><a href="file:/' +
               fileList[i].fullPath + '">' + fileList[i].name +
               '</a></li>';
           }
           htmlStr += "</ol>";
           //Set the results content
           res.innerHTML = htmlStr;
         }
       }

       function onCaptureError(e) {
         var msgText;
         //Clear the results text, nothing to show
         res.innerHTML = "";
         //Now build a message string based upon the
         //error returned by the API
         switch(e.code) {
           case CaptureError.CAPTURE_INTERNAL_ERR:
             msgText = "Internal error; the camera or microphone " +
               "failed to capture an image or sound.";
             break;
           case CaptureError.CAPTURE_APPLICATION_BUSY:
             msgText = "The camera or audio capture application " +
               "is currently serving another capture request.";
             break;
           case CaptureError.CAPTURE_INVALID_ARGUMENT:
             msgText = "Invalid parameter passed to the API.";
             break;
           case CaptureError.CAPTURE_NO_MEDIA_FILES:
             msgText = "User likely canceled the capture process.";
             break;
           case CaptureError.CAPTURE_NOT_SUPPORTED:
             msgText = "The requested operation is not supported " +
               "on this device.";
             break;
           default:
             //Create a generic response in case none of the
             //cases above matched the error code
             msgText = "Unknown Error (" + e.code + ")";
         }
         //Now tell the user what happened
         navigator.notification.alert(msgText, null,
           "Capture Error");
       }
   </script>
  </head>
  <body onload="onBodyLoad()">
    <div data-role="header">
      <h1>Capture Demo</h1>
    </div>
    <div data-role="content">
      <label for="captureType">Capture Type:</label>
      <select id="captureType" name="captureType">
        <option value="0">Audio</option>
        <option value="1">Image</option>
        <option value="2">Video</option>
      </select>
      <label for="numItems">Number of Items</label>
      <input type="range" name="numItems" id="numItems"
        value="1" min="1" max="5" />
      <label for="duration">Duration</label>
      <input type="range" name="duration" id="duration"
        value="5" min="1" max="10" />
      <input type="button" id="captureButton" value="Capture"
        onclick="doCapture();">
      <div id="captureResults"></div>
    </div>
  </body>
</html>


The first thing you’ll notice when you use the API in your applications is that on some devices there’s a fairly long delay after calling the capture method before the device’s default capture application launches to perform the capture. Because of this delay, your application may need to include a Loading Capture Application window or something to let the user know what’s going on during this delay.
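
A simple approach, sketched below using the captureResults element from Example 12-1, is to post a status message immediately before making the capture call:

function doCaptureWithStatus() {
  //Warn the user that the capture application may take a
  //few seconds to appear
  var res = document.getElementById("captureResults");
  res.innerHTML = "Loading capture application, please wait...";
  navigator.device.capture.captureImage(onCaptureSuccess,
    onCaptureError, {limit: 1});
}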

You will also notice inconsistencies in the implementation of capture functionality across different Android devices; some examples of this will be provided later in the chapter. Additionally, even though the API documentation doesn’t indicate this, on the BlackBerry platform, the limit option is ignored, so no matter what setting you use, the BlackBerry will perform only one capture per call to the Capture API.

Let’s look at some examples of Example 12-1 in action.

In Figure 12-2, the application is running on an iPhone device and is configured for image capture; it will grab three images when the user clicks the Capture button. Since image capture is being performed, the duration option has no effect on the capture process.

Figure 12-2 Example 12-1 configured for image capture

When the user clicks the Capture button, the iOS camera application will load and prompt the user to take three pictures, one at a time. As each image is captured, iOS will prompt the user to use the current image or discard it and take a different picture, as shown in Figure 12-3. The user must click the Use button to accept the current picture and either take another picture or return to the calling program.

Figure 12-3 Example 12-1 image preview on iOS

When the images are returned to the calling program, it will update the UI to show the list of image files, as shown in Figure 12-4. In this example, it’s showing links to the image files that can be clicked to open the images for viewing. For audio and video captures, the links may open but won’t display properly because of some limitations in the device OS.

Figure 12-4 Example 12-1 image capture results

With the application configured for audio clip capture, the sound recorder application will load, as shown in Figure 12-5. When finishing the recording of the audio clip, the user must click the Done button to return information about the captured audio files to the calling program.

With the application configured for video capture, the video recorder application will load, as shown in Figure 12-6. When finishing recording of the clip, the user must click the Use button to return information about the captured video files to the calling program.

As you can see from these iOS examples, the process is pretty straightforward, but even on iOS there are inconsistencies. In some cases, the user clicks a Use button to return to the calling program, but in other cases it’s a Done button. Additionally, if you’re doing multiple captures, there’s no visual indication of how many captures are being performed and how many have been completed. For this reason, I recommend that you do only a single capture at a time (use the default for limit, which is a single capture) to make it clearer to your application user what’s going on.

Figure 12-5 Example 12-1 audio capture

Figure 12-6 Example 12-1 video capture

Figure 12-7 shows the same application running on an Android device. As you can see from the figure, the application looks (almost) the same as it does on iOS; this is made possible by jQuery Mobile, which takes care of the UI so you don’t have to. In this example, the application is configured for image capture and will grab two images when the user clicks the Capture button.

Figure 12-7 Example 12-1 running on an Android device

When the user clicks the Capture button, the camera application for the specific device will load and prompt the user to take two pictures, one at a time. As each image is captured, Android will prompt the user to use the current image or discard it and take a different picture, as shown in Figure 12-8. For the device I used for testing, the user must click the highlighted paperclip button to accept the current picture and either take another picture or return to the calling program. Other Android devices may have a different UI for the camera application that could include different buttons or different button labels.

Figure 12-8 Example 12-1 Android image preview

Where this gets interesting is when you attempt to capture an audio file on an Android device. When the user clicks the Capture button, the default Android Voice Recorder application will launch, as shown in Figure 12-9. When the user clicks the Record button in the bottom middle of the voice recorder application screen, the application will record an audio clip using the device microphone (or a headset microphone if one is plugged into the device).

Figure 12-9 Example 12-1 Android audio capture

When the user clicks the Stop button (the button with the square on it in the bottom-right corner of Figure 12-9) to end the recording, the voice recorder application will display a screen similar to the one shown in Figure 12-10 (the screen will vary depending on the Android OS version and possibly the device manufacturer). The problem here is that for the particular devices I used for testing, there is no way to indicate to the voice recorder application that you’re done recording and want to return to the calling program. On other devices such as the Motorola Droid smartphone, it will show a “Use this recording” button and pass the recorded file back to the PhoneGap application.

Figure 12-10 Android voice recorder audio clip options

The PhoneGap application can use the Capture API to launch the voice recorder application, but there’s no way within the voice recorder application to pass information about the captured media files back to the PhoneGap application. On the device I used for testing, shown in Figure 12-10, you can play the audio clip, re-record the clip, share or delete the clip, and even access a listing of captured audio files, but there’s no way to get the captured audio clip back to the PhoneGap application. Other manufacturers’ devices may show more appropriate options to the user.

When capturing video on an Android device, the application will launch the video recorder application to capture the video. When the recording process is complete, the video recorder application will display the preview screen, as shown in Figure 12-11. In this case, the application is running on an LG Thrill device. When satisfied with the video clip, the user must click the paperclip icon highlighted in the figure to return information about the video clip(s) to the calling program.

Figure 12-11 Video preview on an Android LG Thrill device

On Samsung Infuse 4G Android devices, the preview window is different, showing only save and discard options, as shown in Figure 12-12.

Figure 12-12 Video preview on an Android Samsung Infuse 4G device

The application will run unmodified on newer BlackBerry devices. The only issue affecting developers is that the BlackBerry platform ignores the limit option, so no matter what your application expects, on a BlackBerry only one capture event will occur for every call to the Capture API.
