Hour 17. Using Advanced Touches and Gestures


What You’ll Learn in This Hour:

- The multitouch gesture-recognition architecture

- How to detect taps

- How to detect swipes

- How to detect pinches

- How to detect rotations

- How to use the built-in shake gesture


A multitouch screen allows applications to use a wide variety of natural finger gestures for operations that would otherwise be hidden behind layers of menus, buttons, and text. From the very first time you use a pinch to zoom in and out on a photo, map, or web page, you realize that’s exactly the right interface for zooming. Nothing is more human than manipulating the environment with your fingers.

iOS provides advanced gesture-recognition capabilities that you can easily implement within your applications. This hour shows you how.

Multitouch Gesture Recognition

While working through this book’s examples, you’ve gotten used to responding to events, such as Touch Up Inside, for onscreen buttons. Gesture recognition is a bit different. Consider a “simple” swipe. The swipe has direction, it has velocity, and it has a certain number of touch points (fingers) that are engaged. It is impractical for Apple to implement events for every combination of these variables; at the same time, it is extremely taxing on the system to just detect a “generic” swipe event and force you, the developer, to check the number of fingers, direction, and so on each time the event is triggered.

To make life simple, Apple has created gesture-recognizer classes for almost all the common gestures that you may want to implement in your applications, as follows:

- Tapping (UITapGestureRecognizer): Tapping one or more fingers on the screen

- "Long" pressing (UILongPressGestureRecognizer): Pressing one or more fingers to the screen for a specific period of time

- Pinching (UIPinchGestureRecognizer): Pinching to close or expand something

- Rotating (UIRotationGestureRecognizer): Sliding two fingers in a circular motion

- Swiping (UISwipeGestureRecognizer): Swiping with one or more fingers in a specific direction

- Panning (UIPanGestureRecognizer): Touching and dragging

- Screen-edge panning (UIScreenEdgePanGestureRecognizer): Touching and dragging, but starting from the edge of the screen

- Shaking: Physically shaking the iOS device

In early versions of iOS, developers had to read and recognize low-level touch events to determine whether, for example, a pinch was happening: Are there two points represented on the screen? Are they moving toward each other?

Today you define what type of recognizer you’re looking for, add the recognizer to a view (UIView), and you automatically receive any multitouch events that are triggered. You even receive values such as velocity and scale for gestures such as pinch. Let’s see what this looks like translated into code.


Tip

Shaking is not a multitouch gesture and requires a slightly different approach. Note that it doesn’t have its own recognizer class.


Adding Gesture Recognizers

You can add gesture recognizers to your projects in one of two ways: either through code or visually using the Interface Builder editor. Although using the editor makes life much easier for us, it is still important to understand what is going on behind the scenes. Consider the code fragment in Listing 17.1.

LISTING 17.1 Example of the Tap Gesture Recognizer


1:   var tapRecognizer: UITapGestureRecognizer
2:   tapRecognizer=UITapGestureRecognizer(target: self, action:"foundTap:")
3:   tapRecognizer.numberOfTapsRequired=1
4:   tapRecognizer.numberOfTouchesRequired=1
5:   tapView.addGestureRecognizer(tapRecognizer)


This example implements a tap gesture recognizer that will look for a single tap from a single finger within a view called tapView. If the gesture is seen, the method foundTap is called.

Line 1 kicks things off by declaring an instance of the UITapGestureRecognizer object, tapRecognizer. In line 2, tapRecognizer is initialized with initWithTarget:action. (Remember that the initWith part is left out in Swift versions of initialization methods.) Working backward, the action is the method that will be called when the tap occurs. Using the action foundTap:, we tell the recognizer that we want to use a method called foundTap to handle our taps. The target we specify, self, is the object where foundTap lives. In this case, it will be whatever object is implementing this code (probably a view controller).

Lines 3 and 4 set two variable properties of the tap gesture recognizer:

- numberOfTapsRequired: The number of times the object needs to be tapped before the gesture is recognized

- numberOfTouchesRequired: The number of fingers that need to be down on the screen before the gesture is recognized

Finally, line 5 uses the UIView method addGestureRecognizer to add the tapRecognizer to a view called tapView. As soon as this code is executed, the recognizer is active and ready for use, so a good place to implement the recognizer is in a view controller’s viewDidLoad method.

Responding to the event is simple: Just implement the foundTap method. An appropriate method stub for the implementation looks like this:

func foundTap(sender: AnyObject) {
    outputLabel.text="Tapped"
}

What happens when the gesture is found is entirely up to you. You can simply respond to the fact that the gesture took place, or use the parameter provided to the method to get additional details, such as where on the screen the tap happened.

All in all, not too bad, don’t you think? What’s even better? In most cases, you can do almost all of this setup entirely within Interface Builder, as shown in Figure 17.1. The tutorial in this hour shows how to do exactly that.

FIGURE 17.1 Gesture recognizers can be added through Interface Builder.

Using Gesture Recognizers

As people become more comfortable with touch devices, the use of gestures becomes almost natural—and expected. Applications that perform similar functions are often differentiated by their user experience, and a fully touch-enabled interface can be the deciding factor between a customer downloading your app and passing it by.

Perhaps the most surprising element of adding gestures to applications is just how easy it is. I know I say that often throughout the book, but gesture recognizers are one of those rare features that “just works.” Follow along and find out what I mean.

Implementation Overview

In this hour’s application, which we’ll name Gestures, you implement five gestures (tap, swipe, pinch, rotate, and shake), along with the feedback those gestures prompt. Each gesture updates a text label with information about the gesture that has been detected. Pinch, rotate, and shake take things a step further by scaling, rotating, or resetting an image view in response to the gestures.

To provide room for gesture input, the application displays a screen with four embedded views (UIView), each assigned a different gesture recognizer directly within the storyboard scene. When you perform an action within one of the views, it calls a corresponding action method in our view controller to update a label with feedback about the gesture, and depending on the gesture type, updates an onscreen image view (UIImageView), too.

Figure 17.2 shows the final application.

FIGURE 17.2 The application detects and acts upon a variety of gestures.


Caution: Auto Layout: Our Frenemy

We have to be a bit clever in this application because image views that we add in Interface Builder are subject to Apple’s constraint system. Ideally, we want to be able to take advantage of the Auto Layout system to position our image view in a nice default position, regardless of our screen size (exactly what you learned in the preceding hour). Once the application launches, however, we don’t want any of the constraints enforced because we want to be able to resize and rotate the image view using our gestures.

You can take care of this in any number of ways, including programmatically finding and removing constraints with the UIView removeConstraints method. The approach we take, however, is to add an image view in Interface Builder so that we can position it visually and then replace it with our own constraint-free image view right after the application launches. It’s a relatively simple way to take advantage of Auto Layout for the initial interface object layout and then gain the flexibility of working with a constraint-free object as the application executes.
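If you’re curious what the constraint-removal alternative might look like, here is a minimal sketch (an illustration, not what this tutorial does; constraints that position a view are typically held by its superview):

for constraint in view.constraints() as [NSLayoutConstraint] {
    // Remove any constraint that positions or sizes the image view
    if constraint.firstItem === imageView || constraint.secondItem === imageView {
        view.removeConstraint(constraint)
    }
}
// Fall back to old-style autoresizing so the view can be freely moved
imageView.setTranslatesAutoresizingMaskIntoConstraints(true)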


Setting Up the Project

Start Xcode and create a new single-view iOS application called Gestures. This project requires quite a few outlets and actions, so be sure to follow the setup closely. You’ll also be making connections directly between objects in Interface Builder. So, even if you’re used to the approach we’ve taken in other projects, you might want to slow down for this one.

Adding the Image Resource

Part of this application’s interface is an image that can rotate or scale up and down. We use this to provide visual feedback to users based on their gestures. Included with this hour’s project is an Images folder and a file named flower.png. Open the Images.xcassets asset library in your project and drag the Images folder into the column on the left of the library.

Planning the Variables and Connections

For each touch gesture that we want to sense, we need a view where it can take place. Often, this would be your main view. For the purpose of demonstration, however, we will add four UIViews to our main view that will each have a different associated gesture recognizer. Surprisingly, none of these require outlets, because we’ll connect the recognizers to them directly in Interface Builder.

We do, however, need two outlets, outputLabel and imageView, instances of the classes UILabel and UIImageView, respectively. The label is used to provide text feedback to the user, while the image view provides visual feedback for the pinch and rotation gestures.

When the application senses a gesture within one of the four views, it needs to invoke an action method that can interact with the label and image. We will connect the gesture recognizers to methods called foundTap, foundSwipe, foundPinch, and foundRotation.


Note

Notice that we don’t mention the shake gesture here? Even though we will eventually add shake recognition to this project, it will be added by implementing a very specific method in our view controller, not through an arbitrary action method that we define upfront.


Adding a Variable Property for the Image View Size

When our gesture recognizers resize or rotate the image view in our user interface (UI), we want to be able to reset it to its default position and size. To make this happen, we need to “know” in our code what the default position for the image was. View positioning and sizing is described using a data structure (not an object) called a CGRect that contains four values: x and y coordinates (origin.x and origin.y), and width and height (size.width and size.height). We will add a variable property to the project that, when the application first launches, stores the size and location of the image view (the CGRect of the view) we added in Interface Builder. We’ll name this originalRect.
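If CGRect is new to you, here is a quick sketch of creating one and reading its members (the values are arbitrary, for illustration only):

var rect: CGRect = CGRectMake(20.0, 40.0, 100.0, 150.0) // x, y, width, height
println(rect.origin.x)   // 20.0
println(rect.size.width) // 100.0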

Open your ViewController.swift file and add the following line after the class statement:

var originalRect: CGRect!

The originalRect variable property is declared and ready to be used in our implementation, but first we need an interface.

Designing the Interface

Open the Main.storyboard file, change to an appropriate simulated device (or use the Auto Layout/Size Class techniques you learned in the preceding hour), and make room in your workspace. It’s time to create our UI.

To build the interface, start by dragging four UIView instances to the main view. Size the first to a small rectangle in the upper-left portion of the screen; it will capture taps. Make the second a long rectangle beside the first (for detecting swipes). Size the other two views as large rectangles below the first two (for pinches and rotations). Use the Attributes Inspector (Option-Command-4) to set the background color of each view to be something unique.


Tip

The views you are adding are convenient objects that we can attach gestures to. In your own applications, you can attach gesture recognizers to your main application view or the view of any onscreen object.



Tip

Gesture recognizers work based on the starting point of the gesture, not where it ends. In other words, if a user uses a rotation gesture that starts in a view but ends outside the view, it will work fine. The gesture won’t “stop” just because it crosses a view’s boundary.

For you, the developer, this is a big help for making multitouch applications that work well on a small screen.


Next, drag labels into each of the four views. The first label should read Tap Me!. The second should read Swipe Me!. The third label should read Pinch Me!. The fourth label should read Rotate Me!.

Drag a fifth UILabel instance to the main view, and center it at the top of the screen. Use the Attributes Inspector to set it to align center. This will be the label we use to provide feedback to the user. Change the label’s default text to Do something!.

Finally, add a UIImageView to the layout, and then position it in an appropriately attractive location at the bottom center of the scene; use Auto Layout constraints if you so desire (see Figure 17.3). Remember that we will not actually be using this image view to display gesture feedback; we want it solely for positioning. So, there is no need to set a default image for the image view.

FIGURE 17.3 Size and position the UIImageView similar to what is shown here.

With the view finished, in most projects we start connecting our interface to our code through outlets and actions—but not this hour. Before we can create our connections, we need to add the gesture recognizers to the storyboard.


Tip

We’re about to do a bunch of dragging and dropping of objects onto the UIViews that you just created. If you often use the document outline to refer to the objects in your view, you may want to use the Label field of the Document group in the Identity Inspector (Option-Command-3) to give them more meaningful names than the default View label they appear with. You can also edit the names directly in the document outline by clicking to select them, then pressing return.

Labels are arbitrary and do not affect the program’s operation at all.


Adding Gesture Recognizers to Views

As you learned earlier, one way to add a gesture recognizer is through code. You initialize the recognizer you want to use, configure its parameters, and then add it to a view and provide a method it will invoke if a gesture is detected. Alternatively, you can drag and drop from the Interface Builder Object Library and barely write any code. We’re going to do this now.

Make sure that Main.storyboard is open and that the document outline is visible.

The Tap Recognizer

Our first step is to add an instance of the UITapGestureRecognizer object to our project. Search the Object Library for the tap gesture recognizer and drag and drop it onto the UIView instance in your project that is labeled Tap Me!, as shown in Figure 17.4. The recognizer will appear as an object at the bottom of the document outline, regardless of where you drop it.

FIGURE 17.4 Drag the recognizer onto the view that will use it.


Caution: Everything Is a View

Be careful not to drag the recognizer onto the label within the view. Remember that every onscreen object is a subclass of UIView, so you could potentially add a gesture recognizer to the label rather than to the intended view. You might find it easier to target the views in the document outline rather than in the visual layout.


Through the simple act of dragging the tap gesture recognizer into the view, you’ve created a gesture-recognizer object and added it to that view’s gesture recognizers. (A view can have as many as you want.)

Next, you need to configure the recognizer so that it knows what type of gesture to look for. Tap gesture recognizers have two attributes to configure:

- Taps: The number of times the object needs to be tapped before the gesture is recognized

- Touches: The number of fingers that need to be down on the screen before the gesture is recognized

In this example, we’re defining a tap as one finger tapping the screen once, so we define a single tap with a single touch. Select the tap gesture recognizer, and then open the Attributes Inspector (Option-Command-4), as shown in Figure 17.5.

FIGURE 17.5 Use the Attributes Inspector to configure your gesture recognizers.

Set both the Taps and Touches fields to 1 (or experiment with other values; this is a perfect time to play with the recognizer). Just like that, the first gesture recognizer is added to the project and configured. We still need to connect it to an action a bit later, but now we need to add the other recognizers.


Tip

If you look at the connections on the UITapGestureRecognizer object or the view that you dropped it onto, you’ll see that the view references an outlet collection called Gesture Recognizers. An outlet collection is an array of outlets that makes it easy to refer to multiple similar objects simultaneously. If you add more than one gesture recognizer to a view, each recognizer is referenced by the same outlet collection.
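You can also reach these recognizers from code through the gestureRecognizers variable property that every UIView provides. A quick sketch (tapView here is a hypothetical outlet for the view):

if let recognizers = tapView.gestureRecognizers {
    // The array holds every recognizer attached to the view
    println("tapView has \(recognizers.count) gesture recognizer(s)")
}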


The Swipe Recognizer

You implement the swipe gesture recognizer in almost the same manner as the tap recognizer. Instead of being able to choose the number of taps, however, you can determine in which direction the swipes can be made—up, down, left, or right—as well as the number of fingers (touches) that must be down for the swipe to be recognized.

Again, use the Object Library to find the swipe gesture recognizer (UISwipeGestureRecognizer) and drag a copy of it into your view, dropping it on top of the view that contains the Swipe Me! label. Next, select the recognizer and open the Attributes Inspector to configure it, as shown in Figure 17.6. For this tutorial, I configured the swipe gesture recognizer to look for swipes to the right that are made with a single finger.

FIGURE 17.6 Configure the swipe direction and the number of touches required.


Note

If you want to recognize and react to different swipe directions, you must implement multiple swipe gesture recognizers. It is possible, in code, to ask a single swipe gesture recognizer to respond to multiple swipe directions, but it cannot differentiate between the directions.
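For reference, the in-code configuration looks something like this sketch (swipeView is a hypothetical outlet; the recognizer fires for either direction but cannot report which one occurred):

let swipeRecognizer = UISwipeGestureRecognizer(target: self, action: "foundSwipe:")
swipeRecognizer.direction = .Left | .Right // triggers on either direction
swipeView.addGestureRecognizer(swipeRecognizer)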


The Pinch Recognizer

A pinch gesture is triggered when two fingers move closer together or farther apart within a view, and it is often used to make something smaller or larger, respectively. Adding a pinch gesture recognizer requires even less configuration than taps or swipes because the gesture itself is already well defined. The implementation of the action that interprets a pinch, however, will be a bit more difficult because we are also interested in “how much” a user pinched (called the scale of the pinch) and how fast (the velocity), rather than just wanting to know that it happened. More on that in a few minutes.

Using the Object Library, find the pinch gesture recognizer (UIPinchGestureRecognizer) and drag it onto the view that contains the Pinch Me! label. No other configuration is necessary.


Tip

If you look at the Attributes Inspector for a pinch, you’ll see that you can set a scale attribute that corresponds to a scale variable property on the object. The scale, by default, starts at 1. Imagine you move your fingers apart to invoke a pinch gesture recognizer. If you move your fingers twice as far apart as they were, the scale becomes 2 (1 × 2). If you repeat the gesture, moving them twice as far apart again, it becomes 4 (2 × 2). In other words, the scale changes using its previous reading as a starting point.

Usually you want to leave the default scale value at 1, but be aware that you can reset the default in the Attributes Inspector if need be.
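One common pattern that builds on this compounding behavior (not the approach this tutorial takes) is to apply the scale incrementally and then reset it, so that each callback reports only the change since the last one. A sketch, with a hypothetical handler name:

@IBAction func foundIncrementalPinch(sender: AnyObject) {
    let recognizer = sender as UIPinchGestureRecognizer
    // Scale relative to the view's current transform, then reset the
    // recognizer so the next callback starts from 1 again
    imageView.transform = CGAffineTransformScale(imageView.transform,
        recognizer.scale, recognizer.scale)
    recognizer.scale = 1.0
}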


The Rotation Recognizer

A rotation gesture is triggered when two fingers move opposite one another as if rotating around a circle. Imagine turning a doorknob with two fingers on the top and bottom and you’ll get the idea of what iOS considers a valid rotation gesture. As with a pinch, the rotation gesture recognizer requires no configuration; all the work occurs in interpreting the results—the rotation (in radians) and the speed (velocity) of the rotation.

Find the rotation gesture recognizer (UIRotationGestureRecognizer) and drag it onto the view that contains the Rotate Me! label. You’ve just added the final object to the storyboard.


Tip

Just like the pinch gesture recognizer’s scale, the rotation gesture recognizer has a rotation variable property that you can set in the Attributes Inspector. This value, representing the amount of rotation in radians, starts at 0 and changes with each successive rotation gesture. If you want, you can override the initial starting rotation of 0 radians with any value you choose. Subsequent rotation gestures start from the value you provide.


Creating and Connecting the Outlets and Actions

To respond to gestures and access our feedback objects from the main view controller, we need to establish the outlets and actions we defined earlier.

Let’s review what we need, starting with the outlets:

- The image view (UIImageView): imageView

- The label for providing feedback (UILabel): outputLabel

And the actions:

- Respond to a tap gesture: foundTap

- Respond to a swipe gesture: foundSwipe

- Respond to a pinch gesture: foundPinch

- Respond to a rotation gesture: foundRotation

Prepare your workspace for making the connections. Open the Main.storyboard file and switch to the assistant editor mode with ViewController.swift visible. Because you will be dragging from the gesture recognizers in your scene, make sure that the document outline is showing (Editor, Show Document Outline) or that you can tell the difference between them in the object dock below your view.

Adding the Outlets

Control-drag from the Do something! label to just below the variable property originalRect that you added earlier. When prompted, create a new outlet called outputLabel, as shown in Figure 17.7. Repeat the process for the image view, naming it imageView.

FIGURE 17.7 Connect the label and image view.

Adding the Actions

Connecting the gesture recognizers to the action methods that we’ve identified works as you probably imagine, but with one difference. Usually when you connect an object to an action, you’re connecting a particular event on that object—such as Touch Up Inside, for buttons. In the case of a gesture recognizer, you are actually making a connection from the recognizer’s “selector” to a method. Recall in the earlier code example that the selector is just the name of the method that should be invoked if a gesture is recognized.


Tip

Some gesture recognizers (tap, swipe, and long press) can also trigger segues to other storyboard scenes by using the Storyboard Segues section in the Connections Inspector. You learned about multiscene storyboards in Hour 11, “Implementing Multiple Scenes and Popovers.”


To connect the gesture recognizer to an action method, just Control-drag from the gesture recognizer entry in the document outline to the ViewController.swift file. Do this now with the tap gesture recognizer, targeting just below the variable properties you defined earlier. When prompted, configure the connection as an action with the name foundTap, as shown in Figure 17.8.

FIGURE 17.8 Connect the gesture recognizer to a new action.

Repeat this process for each of the other gesture recognizers—connecting the swipe recognizer to foundSwipe, the pinch recognizer to foundPinch, and the rotation recognizer to foundRotation. To verify your connections, select one of the recognizers (here, the tap recognizer) and view the Connections Inspector (Option-Command-6). You should see the action defined in Sent Actions and the view that uses the recognizer referenced in the Referencing Outlet Collections section, as shown in Figure 17.9.

FIGURE 17.9 Confirm your connections in the Connections Inspector.


Tip

Hover your mouse over a given connection in the Connections Inspector to see that item highlighted in your scene (shown in Figure 17.9). This is a quick way of verifying that your gestures are connected to the right views.


We’re done with our interface and done adding gesture recognizers to our project; now let’s make them do something.

Implementing the Application Logic

To begin the implementation, we address our image view problem: We need to replace the image view that gets added through Interface Builder with one we create programmatically. We also grab the position and size of the image view from its frame variable property (a CGRect) and store it in the originalRect variable property. Where will this happen? In the view controller method viewDidLoad, which is called as soon as the interface loads.

Replacing the Image View

Make sure that the standard editor mode is selected, and then open the ViewController.swift file and update the viewDidLoad method, as shown in Listing 17.2.

LISTING 17.2 Implementing the viewDidLoad Method


 1:  override func viewDidLoad() {
 2:      super.viewDidLoad()
 3:
 4:      originalRect=imageView.frame
 5:
 6:      var tempImageView: UIImageView
 7:      tempImageView=UIImageView(image:UIImage(named: "flower.png"))
 8:      tempImageView.frame=originalRect
 9:      view.addSubview(tempImageView)
10:
11:      self.imageView=tempImageView
12:  }


Line 4 grabs the frame from the image view that we added in Interface Builder. This is a data structure of the type CGRect and consists of four floating-point values: origin.x, origin.y, size.width, and size.height. The original values are stored in originalRect.

Lines 6–7 declare and initialize a new UIImageView (tempImageView) using the flower.png image that we added to our project earlier.

In line 8, we set the frame of the new image view to the frame of the original image view, conveniently stored in originalRect. That finishes up the configuration of the constraint-free image view; it is added to the view controller’s main view (the scene) with the addSubview method in line 9.

As a final step in swapping the image views, line 11 reassigns the imageView variable property to the new tempImageView. We can now access the new image view through the variable property that originally pointed to the image view added in Interface Builder.

Now, let’s move on to the gesture recognizers, beginning with the tap recognizer. What you’ll quickly discover is that after you’ve added one recognizer, the pattern is very, very similar for the others. The only difference is the shake gesture, which is why we’re saving that for last.

Responding to the Tap Gesture Recognizer

Responding to the tap gesture recognizer is just a matter of implementing the foundTap method. Update the method stub in the view controller (ViewController.swift) with the implementation shown in Listing 17.3.

LISTING 17.3 Implementing the foundTap Method


@IBAction func foundTap(sender: AnyObject) {
    outputLabel.text="Tapped"
}


This method doesn’t need to process input or do anything other than provide some indication that it has run. Setting the outputLabel’s text variable property to "Tapped" should suffice nicely.

Ta da! Your first gesture recognizer is done. We’ll repeat this process for the other four, and we’ll be finished before you know it.


Tip

If you want to get the coordinate where a tap gesture (or a swipe) takes place, you can add code like this to the gesture handler (here, view is the view in whose coordinate system you want the location expressed):

var location: CGPoint = (sender as UITapGestureRecognizer).locationInView(view)

This creates a simple structure named location, with members x and y, accessible as location.x and location.y.


Responding to the Swipe Recognizer

We respond to the swipe recognizer in the same way we did with the tap recognizer, by updating the output label to show that the gesture was recognized. Implement the foundSwipe method as shown in Listing 17.4.

LISTING 17.4 Implementing the foundSwipe Method


@IBAction func foundSwipe(sender: AnyObject) {
    outputLabel.text="Swiped"
}


So far, so good. Next up, the pinch gesture. This requires a bit more work because we’re going to use the pinch to interact with our image view.

Responding to the Pinch Recognizer

Taps and swipes are simple gestures; they either happen or they don’t. Pinches and rotations are slightly more complex, returning additional values to give you greater control over the user interface. A pinch, for example, includes a velocity variable property (how quickly the pinch happened) and a scale (a fraction proportional to the change in distance between your fingers). If you move your fingers 50% closer together, the scale is .5, for example. If you move them twice as far apart, it is 2.

You’ve made it to the most complex piece of code in this hour’s lesson. The foundPinch method accomplishes several things. It resets the UIImageView’s rotation (just in case it gets out of whack when we set up the rotation gesture), creates a feedback string with the scale and velocity values returned by the recognizer, and actually scales the image view so that the user receives immediate visual feedback.

Implement the foundPinch method as shown in Listing 17.5.

LISTING 17.5 Implementing the foundPinch Method


 1: @IBAction func foundPinch(sender: AnyObject) {
 2:     var recognizer: UIPinchGestureRecognizer
 3:     var feedback: String
 4:     var scale: CGFloat
 5:
 6:     recognizer=sender as UIPinchGestureRecognizer
 7:     scale=recognizer.scale
 8:     imageView.transform = CGAffineTransformMakeRotation(0.0)
 9:
10:     feedback=String(format: "Pinched, Scale: %1.2f, Velocity: %1.2f",
11:         Float(recognizer.scale),Float(recognizer.velocity))
12:     outputLabel.text=feedback
13:     imageView.frame = CGRectMake(self.originalRect.origin.x,
14:                         originalRect.origin.y,
15:                         originalRect.size.width*scale,
16:                         originalRect.size.height*scale)
17: }


Let’s walk through this method to make sure that you understand what’s going on. Lines 2–4 declare a reference to a pinch gesture recognizer (recognizer), a string object (feedback), and a CGFloat value (scale). These are used to interact with our pinch gesture recognizer, store feedback for the user, and hold the scaling value returned by the pinch gesture recognizer, respectively.

Line 6 takes the incoming sender object of the type AnyObject and casts it as a UIPinchGestureRecognizer, which can then be accessed through the recognizer variable. The reason we do this is simple. When you created the foundPinch action by dragging the gesture recognizer into your ViewController.swift file, Xcode wrote the method with a parameter named sender of the generic “handles any object” type AnyObject. Xcode does this even though the sender will always be, in this case, an object of type UIPinchGestureRecognizer. Line 6 just gives us a convenient way of accessing the object as the type it really is.

Line 7 sets scale to the recognizer’s scale variable property.

Line 8 resets the imageView object to a rotation of 0.0 (no rotation at all) by setting its transform variable property to the transformation returned by the Core Graphics CGAffineTransformMakeRotation function. This function, when passed a value in radians, returns the necessary transformation to rotate a view.

Lines 10–11 initialize the feedback string to show that a pinch has taken place and output the values of the recognizer’s scale and velocity variable properties—after converting them from CGFloat data structures to floating-point values. Line 12 sets the outputLabel in the UI to the feedback string.

For the scaling of the image view itself, lines 13–16 do the work. All that needs to happen is for the imageView object’s frame to be redefined to the new size. To do this, we can use CGRectMake to return a new frame rectangle based on a scaled version of the CGRect stored in the original image view position: originalRect. The top-left coordinates (origin.x, origin.y) stay the same, but we multiply size.width and size.height by the scale factor to increase or decrease the size of the frame according to the user’s pinch.

Building and running the application will now let you enlarge (even beyond the boundaries of the screen) or shrink the image using the pinch gesture within the Pinch Me! view, as shown in Figure 17.10.

FIGURE 17.10 Enlarge or shrink the image in a pinch (ha ha).


Note

If you don’t want to cast the sender variable to use it as a gesture recognizer, you can also edit Xcode’s method declarations to include the exact type being passed. Just change the method declaration from

@IBAction func foundPinch(sender: AnyObject) {

to

@IBAction func foundPinch(sender: UIPinchGestureRecognizer) {

If you do so, you’ll be able to access sender directly as an instance of UIPinchGestureRecognizer.


Responding to the Rotation Recognizer

The last multitouch gesture recognizer that we’ll add is the rotation gesture recognizer. Like the pinch gesture, rotation returns some useful information that we can apply visually to our onscreen objects, notably velocity and rotation. The rotation returned is the number of radians that the user has rotated his or her fingers, clockwise or counterclockwise.


Tip

Most of us are comfortable talking about rotation in “degrees,” but the Cocoa classes usually use radians. Don’t worry. It’s not a difficult translation to make. If you want, you can calculate degrees from radians using the following formula:

Degrees = Radians × 180 / Pi

There’s not really any reason we need this now, but in your own applications, you might want to provide a degree reading to your users.
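If you ever do need it, the formula translates directly into a tiny Swift helper (the function name is ours):

func degreesFromRadians(radians: CGFloat) -> CGFloat {
    return radians * 180.0 / CGFloat(M_PI)
}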


I’d love to tell you how difficult it is to rotate a view and about all the complex math involved, but I pretty much gave away the trick to rotation in the foundPinch method earlier. A single line of code will set the UIImageView’s transform variable property to a rotation transformation and visually rotate the view. Of course, we also need to provide a feedback string to the user, but that’s not nearly as exciting, is it?

Add the foundRotation method in Listing 17.6 to your ViewController.swift file.

LISTING 17.6 Adding the foundRotation Method


 1: @IBAction func foundRotation(sender: AnyObject) {
 2:     var recognizer: UIRotationGestureRecognizer
 3:     var feedback: String
 4:     var rotation: CGFloat
 5:
 6:     recognizer=sender as UIRotationGestureRecognizer
 7:     rotation=recognizer.rotation
 8:
 9:     feedback=String(format: "Rotated, Radians: %1.2f, Velocity: %1.2f",
10:         Float(recognizer.rotation),Float(recognizer.velocity))
11:     outputLabel.text=feedback
12:     imageView.transform = CGAffineTransformMakeRotation(rotation)
13: }


Again, we begin by declaring a reference to a gesture recognizer (recognizer), a string (feedback), and a CGFloat value (rotation), in lines 2–4.

Line 6 takes the incoming sender object of the type AnyObject and casts it as a UIRotationGestureRecognizer, which can then be accessed through the recognizer variable.

Line 7 sets the rotation value to the recognizer’s rotation variable property. This is the rotation in radians detected in the user’s gesture.

Lines 9–10 create the feedback string showing the radians rotated and the velocity of the rotation, and line 11 sets the output label to the string.

Line 12 handles the rotation itself, creating a rotation transformation and applying it to the imageView object’s transform variable property.


Note

The foundPinch method can also be implemented by updating the transform variable property for imageView using the CGAffineTransformMakeScale function. In essence, you could replace lines 13–16 of foundPinch with a single line:

imageView.transform = CGAffineTransformMakeScale(scale,scale)

Why did we update the frame of the imageView instead? Two reasons. First, because it gives you experience with two approaches to manipulating a view. Second, because setting a transformation for the image view doesn’t really change the view’s underlying frame; it changes the appearance instead. If you really want the view’s size and location to change (not just the appearance of its size and location), applying a transformation isn’t the way to go.


Run and test your application now. You should be able to freely spin the image view using a rotation gesture in the rotate view, as shown in Figure 17.11.

FIGURE 17.11 Spin the image view using the rotation gesture.

Although it might seem like we’ve finished, we still need to cover one more gesture: a shake.


Try it Yourself: Trying Transformations in Xcode Playground

Xcode Playground can be used for many different things—including testing out the transformations you’ve learned this hour. To transform and display an image in the Playground, first we must create a Playground and add an image resource:

1. Begin by creating a new iOS Playground.

2. Within the Playground, choose View, Utilities from the menu so that the utilities area is visible.

3. Show the File Inspector (Option-Command-1).

4. Look for the Resource Path setting within the Playground Settings section. Click the arrow at the end of the path.

5. A folder window opens in the OS X Finder. This folder holds any resources you want to access from the Playground. Copy the flower.png file from the project Images folder into the resources folder you identified.

6. Close the folder.

Now, you’re ready to work with the flower.png image, just as you would within an application. Unfortunately, there’s still one difference from a real application, but you’ll see it’s pretty easy to overcome. Add the following code to the Playground:

import UIKit

var testView: UIView = UIView(frame:CGRectMake(0,0,300,300))

var testImageView: UIImageView = UIImageView(image:UIImage(named: "flower.png"))
testImageView.frame=CGRectMake(50.0,50.0,100.0,100.0)
testImageView.transform = CGAffineTransformMakeRotation(0.8)

testView.addSubview(testImageView)

The three lines in the middle should look pretty familiar. The testImageView is created from the flower.png file you added. A frame is set to size and position the image view, and, finally, a rotation transformation is applied (the 0.8 radians rotates the view roughly 45 degrees clockwise).

Unfortunately, the Playground won’t work quite right with these lines alone because the image view needs to be inside another view to render. To that end, the first thing we do in the Playground is create a UIView called testView sized large enough to hold the image view. After we’re done setting up the image view, we add it to the test view with the addSubview method.

You should be able to click the “eye” icon to the right of the last line and see the transformed image view within the test view. You can use this same approach to test any transformations you’d like. No fuss, no muss.


Implementing the Shake Recognizer

Dealing with a shake is a bit different from the other gestures covered this hour. We must intercept a UIEvent of the type UIEventTypeMotion. To do this, our view controller or view must be the first responder in the responder chain and must implement the motionEnded:withEvent method.

Let’s tackle these requirements one at a time.

Becoming a First Responder

For our view controller to be a first responder, we have to allow it through a method called canBecomeFirstResponder that does nothing but return true, and then ask for first responder status when the view controller loads its view. Start by adding the new method canBecomeFirstResponder, shown in Listing 17.7, to your ViewController.swift implementation file.

LISTING 17.7 Enabling the Ability to Be a First Responder


override func canBecomeFirstResponder() -> Bool {
    return true
}


Next, we need our view controller to become the first responder by sending the message becomeFirstResponder as soon as it has displayed its view. Update the ViewController.swift viewDidLoad method to do this, as shown in Listing 17.8.

LISTING 17.8 Asking to Become a First Responder


override func viewDidLoad() {
    super.viewDidLoad()
    self.becomeFirstResponder()

    originalRect=imageView.frame
    var tempImageView: UIImageView
    tempImageView=UIImageView(image:UIImage(named: "flower.png"))
    tempImageView.frame=originalRect
    view.addSubview(tempImageView)

    self.imageView=tempImageView
}


Our view controller is now prepared to become the first responder and receive the shake event. All we need to do now is implement motionEnded:withEvent to trap and react to the shake gesture itself.

Responding to a Shake Gesture

To react to a shake, implement the motionEnded:withEvent method, as shown in Listing 17.9.

LISTING 17.9 Responding to a Shake Gesture


1:  override func motionEnded(motion: UIEventSubtype, withEvent event: UIEvent) {
2:      if motion==UIEventSubtype.MotionShake {
3:          outputLabel.text="Shaking things up!"
4:          imageView.transform=CGAffineTransformIdentity
5:          imageView.frame=originalRect
6:      }
7:  }


First things first: In line 2, we check to make sure that the motion value we received (a value of type UIEventSubtype) is, indeed, a shake event. To do this, we just compare it to the constant UIEventSubtype.MotionShake. If they match, the user just finished shaking the device.

Lines 3–5 react to the shake by setting the output label, rotating the image view back to its default orientation, and setting the image view’s frame back to the original size and location stored in our originalRect variable property. In other words, shaking the device will reset the image to its default state. Pretty nifty, huh?

Building the Application

You can now run the application and use all the gestures that we implemented this hour. Try scaling the image through a pinch gesture. Shake your device to reset it to the original size. Scale and rotate the image, tap, swipe—everything should work exactly as you’d expect and with a surprisingly minimal amount of coding. Although not a useful app in and of itself, this tutorial does illustrate many techniques that you can use in your own applications.

Gestures have become an integral part of applications and user expectations in iOS. Adding them to your applications will increase their viability and improve the overall user experience.

Further Exploration

In addition to the four gestures discussed this hour, you should be able to immediately add three other recognizers to your apps: UILongPressGestureRecognizer, UIPanGestureRecognizer, and UIScreenEdgePanGestureRecognizer. The UIGestureRecognizer class is the parent to all the gesture recognizers that you’ve learned about in this lesson and offers additional base functionality for customizing gesture recognition.

We humans do a lot with our fingers, such as draw, write, play music, and more. Each of these possible gestures has been exploited to great effect in third-party applications. Explore the App Store to get a sense of what’s been done with the iOS multitouch gestures.

You also might want to learn more about the lower-level handling of touches on iOS. See the “Event Handling” section of the Data Management iOS documentation for more information.

Be sure to look at the SimpleGestureRecognizers tutorial project, found within the Xcode documentation. This project provides many additional examples of implementing gestures on the iOS platform and demonstrates how gestures can be added through code. Although the Interface Builder approach to adding gesture recognizers can cover many common scenarios, it’s still a good idea to know how to code them by hand.

Summary

In this hour, we’ve given the gesture recognizer architecture a good workout. Using the gesture recognizers provided through iOS, you can easily recognize and respond to taps, swipes, pinches, rotations, and more—without any complex math or programming logic.

You also learned how to make your applications respond to shaking: Just make them first responders and implement the motionEnded:withEvent method. Your ability to present your users with interactive interfaces just increased dramatically.

Q&A

Q. Why don’t the rotation/pinch gestures include configuration options for the number of touches?

A. The gesture recognizers are meant to recognize common gestures. Although it is possible that you could manually implement a rotation or pinch gesture with multiple fingers, it wouldn’t be consistent with how users expect their applications to work and isn’t included as an option with these recognizers.

Workshop

Quiz

1. The rotation value of the UIRotationGestureRecognizer is returned in what?

a. Integers

b. Radians

c. Degrees

d. Seconds

2. Which gesture recognizer is often used for enlarging or shrinking content?

a. UITabGestureRecognizer

b. UIRotationGestureRecognizer

c. UIPinchGestureRecognizer

d. UIScaleGestureRecognizer

3. Which of the following attributes can you set for a tap gesture recognizer?

a. Number of touches

b. Finger spacing

c. Finger pressure

d. Touch length

4. How many recognizers will you need to recognize left, right, and down swipes in a view?

a. 1

b. 3

c. 6

d. 2

5. Overriding the motionEnded:withEvent method is necessary for recognizing what type of gesture?

a. Panning

b. Swiping

c. Tapping

d. Shaking

6. To get the number of degrees from radians, you can use which of the following conversion formulas?

a. Degrees = Radians × 180 / Pi

b. Degrees = Radians × 360 / Pi

c. Degrees = Radians × Pi / 180

d. Degrees = Radians × Pi / 360

7. You can hold an object’s frame in which data structure?

a. AnyObject

b. ObjectRect

c. CGFrame

d. CGRect

8. To help differentiate between objects in the document outline, you can set which of the following?

a. Labels

b. Notes

c. Classes

d. Segues

9. To determine how far a user has moved her fingers during a pinch gesture, which variable property do you look at?

a. space

b. scale

c. distance

d. location

10. You can scale or rotate the view without any complex math by using which variable property of a view?

a. transform

b. scale

c. rotate

d. scaleandrotate

Answers

1. B. Rotation is returned in radians, a common unit of measure.

2. C. Use a UIPinchGestureRecognizer to implement scaling gestures within an application.

3. A. You can easily set the number of touches that will be required to trigger a tap gesture.

4. B. You need a gesture recognizer for each of the swipe directions that you want to implement; three directions, three recognizers.

5. D. The shake gesture requires an implementation of the motionEnded:withEvent method.

6. A. To convert radians to degrees, use this equation: Degrees = Radians × 180 / Pi.

7. D. A CGRect data structure can be used to hold an object’s frame.

8. A. Labels are a convenient way to provide custom names of the items listed within the Document Outline.

9. B. The scale variable property will help you determine how far a user has moved her fingers relative to her original position.

10. A. The transform variable property can be used to apply a nondestructive transformation (such as rotation or scaling) to a view.

Activities

1. Expand the Gestures application to include panning and pressing gestures. These are configured almost identically to the gestures you used in this hour’s tutorial.

2. Alter this project to use the image view that you added in Interface Builder rather than the one created programmatically. Practice using the constraints system to see the effect that constraints have on the image view as it changes size and rotates.

3. Improve on the user experience by adding the pinch and rotation gesture recognizers to the UIImageView object itself, enabling users to interact directly with the image rather than another view.
