Chapter 12. Advanced game topics

We’ve covered a great number of topics on Swift and game development thus far. You’ve been introduced to Sprite Kit and have been exposed to how it was leveraged in Pencil Adventure, but that was just the beginning. You’ve animated sprites, created levels, managed scoring, added sound and visual effects, and hopefully you’ve learned a lot about the Swift language along the way.

Every game, though, goes beyond animated sprites and sound effects. Every game development effort comes with its own set of tricky issues that need to be solved. We’ve carefully chosen, wherever possible, to solve these problems in Pencil Adventure so that you can carry those solutions through to your own games.

We’ll show you how to manage your art assets in a way that avoids the need to create multiple variations of each sprite so you can spend your time on things that really matter to your audience. We’ll show you how to master advanced visual effects through the implementation of sketch rendering. We’ll clarify the mysteries of scaling your app for universal device support, and while we’re at it, we’ll show you how we’ve used caching to improve performance in Pencil Adventure.

You can read the full source code of our Pencil Adventure game on GitHub.

Managing art assets

Managing the art assets for a game can be a challenge. Poorly managed assets can produce low-quality results and lead to performance problems and excessive memory use. Careful planning is necessary to ensure that the assets are the correct size for each of the targeted platforms, and automation can be used to generate the majority of the variations for each asset.

An art pipeline defines the production of art from the planning stage all the way through to the final in-game assets, oftentimes including custom tools or utilities for automation.

The art pipeline used by Pencil Adventure begins with a specification for asset creation. All assets were designed directly for the retina iPad, which has the highest resolution of all supported devices (2048x1536). To aid in this process, a template was created with a designated portion of exactly that resolution, plus additional space for a work area.

Here is what that template looks like, populated with the indoor assets (note that these assets are the raw assets that were created by a member of the team and have not yet been processed for sketch rendering):

[Figure: the asset-creation template, populated with the raw indoor assets]

The full-frame red rectangle in the left half of the image represents the retina iPad resolution. Another, smaller rectangle is also present, which represents the aspect ratio of the iPhone 5. As you can see, the aspect ratio of that device chops off some of the top and bottom of the template.

We’ll refer to the smaller rectangle as the safe frame, which allows us to see how the assets will fit within the most restrictive device. This device was chosen for the safe frame because its aspect ratio (1.775) was the most limiting, chopping off more vertical screen space than any other device.

Our art pipeline includes automated tools to resize our assets for each of the pixel densities that we support. It is the job of these tools to generate the other image sizes, as well as to manage the platform-dependent filenames defined by Cocoa (for example, file@2x.png and file@2x~iphone.png). The art pipeline also creates Xcode-compliant asset catalogs and texture atlas folders.

With this type of automation in place, the job of our content creators was simple: place completed assets into an appropriate folder (we used a special staging area for this), and the art pipeline script would take care of the rest.

To see the art pipeline in action, locate the Staging folder in the source distribution. You’ll find it in .../PencilAdventure/Art/Staging. This folder includes a staging area for Steve’s sprite animation atlas, a game sprites atlas, a level selection button atlas, and an image catalog.

If you look inside any of those folders, you’ll find a list of image files, each created at full resolution for the retina iPad.

Let’s compare this to the output from our art pipeline by looking in the corresponding folder within the game’s source tree: .../PencilAdventure/PencilAdventure. If you look inside any of the *.atlas or *.xcassets sub-folders, you’ll see a corresponding set of assets that were automatically generated by the art pipeline from those found in the staging area. For asset catalogs, this also includes the imageset directory structure, and the Contents.json file.

Let’s take a look at the art pipeline tools. You’ll find these two bash scripts:

  • .../PencilAdventure/Art/artgen
  • .../PencilAdventure/Art/update_art

The first script, artgen, does all of the automation heavy lifting. It scans the images in an input directory, finds files that have changed since the last run, resamples the images as needed, and drops them into the output folder structure based on the type of input (asset catalog or sprite atlas.)

The second script, update_art, acts as more of a director for the art pipeline, defining the general processing and mapping of folders inside of the staging area of the game source tree.

The artgen script is written specifically for the concept of starting with a retina iPad resolution and down-sampling from there to the other formats. The total format outputs for a single staged file named steve.png would be:

  • steve@2x~ipad.png
  • steve~ipad.png
  • steve@2x~iphone.png

You may notice that the non-retina iPhone is not present in the list of output formats. This is because those devices do not support iOS versions new enough for Swift-based apps.

With our automation in place, there’s one final step we can take to ensure we always have the most recent art assets.

Incorporating the art pipeline into the build

By including the art pipeline directly into the build process, we avoid the need for a content developer to ever worry about running the pipeline tools. The art is simply always up to date.

We can see the custom script action inside the PencilAdventure target properties’ build phases as shown here.

[Figure: the Run Script build phase that changes to the art folder and launches update_art]

As you can see, we use the $SOURCE_ROOT variable to locate our art folder, change to that directory, and then launch the update_art script.

This was originally added to the project target by clicking the + button on the Build Phases tab, then selecting New Run Script Phase, as shown below.

[Figure: adding a New Run Script Phase from the + button on the Build Phases tab]

 

Advanced visual effects: Sketch rendering


If you’ve never tried to produce an advanced visual effect before, you’re in for a real treat. Finding the right solution to a tricky visual effect is one of the most rewarding accomplishments in game development. In addition, you get to show your friends something they probably haven’t seen before. So sit back and put your thinking cap on; we’re going to tackle the world of pixels using vectors, a bit of trigonometry, and some problem solving skills. Don’t shy away if math isn’t your strength. We promise it’ll be presented in a way you’ll understand!

The initial concept of Pencil Adventure was a platform game featuring a pencil as the hero of the story, set in an environment that looks hand-drawn on paper, as if sketched with a pencil. Sprite Kit doesn’t offer any type of effect that resembles this, so the effect had to be created from scratch. We’ll cover the approach used, from problem solving to final solution.

Approaching the problem

The sketching used in Pencil Adventure is a non-trivial problem to solve. We’re going to have to think through it carefully in order to accomplish our goal. Sometimes it’s necessary to state the obvious to help cement a concept into our brains and cognitively recognize a truth.

The universal truth of problem solving

In order to solve a problem, the problem space must be clearly defined. Put simply, we must first understand the problem completely.

This allows us to recognize the various issues that will need resolution, and also helps segment the problem into smaller pieces that may be easier to solve. Let’s do that now:

  • Our game scene is made up of sprites, which are defined by their rectangular texture. The rectangular textures usually contain a bitmap with an alpha channel that allows the sprite to appear to have a non-rectangular shape.
  • Our sketches will require that we draw the non-rectangular content of the texture, not the texture bounds itself.
  • A method will be needed that can recognize the topological features (the shapes and recognizable details) of an image so that those features can be converted into some kind of data structure that can be used for sketching.
  • We will need to gain access to the pixel data of the texture so that we can analyze it for topological features.
  • The topological features should be drawn in such a way that they look like they were sketched with a pencil by a human. This will likely mean drawing and over-drawing some portions of the sketch, just as a human would.
  • Humans make mistakes, so the sketching algorithm should simulate those mistakes, such as over-drawn lines and lines that are close to, but not exactly true to, the shape being drawn.
  • The sketched lines should be modified in some way as to resemble pencil on paper.
  • To keep the sketching simple and efficient, we’ll probably want to use straight lines, which means that any curves will be simulated with multiple straight lines.
  • We should define rules for how the images are constructed in order to simplify processing and algorithmic complexity. In other words, we won’t want to feed it our favorite LOLCAT image, since cartoon images will likely be cleaner and easier to process.

It’s pretty clear that we have two problems to solve. First, we need a method to recognize the edges of shapes from the set of input pixels (our image) so that these edges can be sketched. Second, we’re going to need to create a drawing technique that resembles a pencil sketch.

Armed with this information, we’re ready to tackle these problems, one at a time.

Step 1: From pixels to lines

Take a look at the following image of the stool from Pencil Adventure. The thick black line highlights the edges of the image. More specifically, the black line defines the edge that separates pixels of a different color within the texture’s bitmap.

[Figure: the stool sprite with its color edges highlighted in black]

Our plan for the sketch rendering is to sketch this stool with a series of straight lines. We would like to keep the number of lines to a minimum in order to save memory and to avoid having to unnecessarily sketch too many lines. This means finding a set of straight lines that closely approximates the edges. Probably something like this:

[Figure: the stool outlined by straight line segments connected at vertices]

This image shows an outline that approximates the stool using line segments defined by a set of vertices (the points that connect our lines.) Our challenge is to generate a small set of vertices from the pixels within the bitmap. This is a process called vectorization. We could search the web for solutions, but at the risk of reinventing the wheel, we’re going to opt to derive a solution of our own and solve this problem ourselves.

If we think about this problem at a high level, ignoring the implementation details for the moment, then we might surmise that one approach would be to find an edge in the bitmap and walk along that edge, occasionally dropping a vertex here and there until we build up a complete set of line segments that defines the object. Since that sounds like as good a plan as any, let’s go that route.

Breaking down a complex problem

Vectorization is not a trivial process, but we have something working in our favor--this is our game, and we can define the inputs. We’re not going to attempt to recreate a vectorized set of primitives that allows us to re-create a bitmap in perfect detail. We just need to outline the pixels in the bitmap along its color edges, where the color differences between neighboring pixels define an edge. Let’s start with the obvious, finding those pixels that are along the color edges.

We’ll define a pixel on the color edge as any pixel with a large enough color difference between itself and any one of its neighbors. To do this, we’ll need to calculate how different two colors are. This is a lot simpler than it sounds. A common technique for doing this is to calculate the distance between the two colors in a color space.

Using color spaces to determine color differences

Standard RGB colors like those we’ll be working with have three coordinates (R, G, B). We can treat these values as 3-dimensional coordinates, similar to X, Y, Z to define our own 3-dimensional color space. This color space will be limited to the range of our color values (0...255).

In this color space, each point in space would have a location (its X/Y/Z coordinate) as well as a color that is defined by treating the X, Y, Z coordinates as R, G, B color components. This particular color space is referred to as a color cube.

Here’s what a color cube looks like. We can only see the outer bounds of the cube, but be aware that the entire volume inside of the cube is also color-coded.

[Figure: the RGB color cube]

An advantage of the color cube is that it’s very simple to work with, especially for determining the difference between colors. As you can see, pixels that are near each other in the color cube are very similar, while pixels that are far apart represent visually different colors.

We don’t need to build a color space or write any special code for this. We simply need to treat our RGB values like XYZ coordinates. Doing this, we can calculate the 3-dimensional distance between them.

If you’re familiar with the formula for calculating the distance between two points, then the 3-dimensional distance calculation should look like a natural extension:

// Distance between color (r1, g1, b1) and (r2, g2, b2)
let dr = r2 - r1
let dg = g2 - g1
let db = b2 - b1
let distance = sqrt(dr*dr + dg*dg + db*db)

We can use a threshold value to determine if the color difference between any two neighboring pixels is enough to be considered an edge pixel. Small threshold values will pick up smaller details in the image, while larger threshold values will pick up only the most obvious edges in the image.
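
Here’s a minimal sketch of that test (the threshold value, the Pixel type, and the helper name are our own illustrations, not the game’s code):

import Foundation

// Illustrative only: a pixel's color components, stored as floating point
struct Pixel {
    var r: Double
    var g: Double
    var b: Double
}

// A pixel is an edge pixel when its color distance to any neighbor exceeds
// the threshold. Lower thresholds pick up finer detail.
let edgeThreshold = 48.0

func isEdgePixel(pixel: Pixel, neighbors: [Pixel]) -> Bool {
    for neighbor in neighbors {
        let dr = neighbor.r - pixel.r
        let dg = neighbor.g - pixel.g
        let db = neighbor.b - pixel.b
        if sqrt(dr*dr + dg*dg + db*db) > edgeThreshold {
            return true
        }
    }
    return false
}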

Now that we have defined what an edge pixel is and how it is detected, we can begin the process of scanning our image.

Simplify!

Checking a pixel against its neighbor can be a laborious task for the CPU. As you’ll see later, we’re going to scan the image multiple times in order to find all of the edges in the image, which means we’ll be reading pixels (and their neighbors), and performing color difference calculations many times over for each pixel.

The action of performing the same calculations on the same input data should raise the big red flag, hinting that we can simplify and optimize. To save our CPU from repeated work, we can precompute which pixels are edge pixels. We can store the precomputed data in what we’ll call an image map.

Our image map will essentially be an array of bytes (one byte per pixel) that will allow us to store up to 8 bit-flags for each pixel. As clever as you are, dear reader, I’m sure you can imagine that one of those flags will be used to represent whether a pixel is an edge pixel.

To build the image map we simply scan the entire image to compare each pixel’s color value with that of its neighbors. When a pixel has been determined to be an edge pixel, we’ll set a flag in the corresponding entry within our image map.
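
A minimal sketch of such an image map follows (the flag value, the function name, and the closure-based design are ours, not the game’s):

import Foundation

// One byte of bit-flags per pixel; the first flag marks edge pixels
let edgeFlag: UInt8 = 1 << 0

// Build the map by scanning the bitmap once. The isEdgePixel closure stands
// in for the neighbor comparison described above.
func buildImageMap(width: Int, height: Int, isEdgePixel: (Int, Int) -> Bool) -> [UInt8] {
    var map = [UInt8](count: width * height, repeatedValue: 0)
    for y in 0 ..< height {
        for x in 0 ..< width {
            if isEdgePixel(x, y) {
                map[y * width + x] |= edgeFlag
            }
        }
    }
    return map
}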

With our image map in hand, we can toss aside our image as we have all the information we need about it in our image map.

Walking edges

The edges we’re going to draw can be defined as a series of lines that follow along contiguous sets of edge pixels in the image (or in our case, the image map). We’ll call this walking along an edge.

To walk along an edge, we first must start on an edge. A simple way to do this is to scan the image map for the first edge point we find. This may not be the most optimal starting point, but since we don’t need optimal results, simpler is better!

To walk the edge, we check the neighbors of the current pixel to see if any of them are also edge pixels. If so, we move on to that neighbor and repeat the process. We continue this process until we run out of neighboring edge pixels, at which point we’ve reached the end of the edge. Since edges can be disjointed (i.e., not connected), we will need to repeat this entire process (search for a new starting point, then walk that edge to its end). When we can no longer find a starting point, we’ve finished processing the entire image.

In order to avoid visiting the same edge pixel more than once, we can designate one of the bits in our image map as a visited flag. As we walk the edges in our image map, we mark each pixel as visited so that we can avoid visiting it again.
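
Here’s a simplified sketch of one such walk, continuing the image-map layout and edgeFlag from the earlier sketch (the names and the neighbor order are ours):

import CoreGraphics

// A second bit in the image map marks pixels we have already visited
let visitedFlag: UInt8 = 1 << 1

// Offsets to a pixel's eight neighbors
let neighborOffsets = [(-1, -1), (0, -1), (1, -1),
                       (-1,  0),          (1,  0),
                       (-1,  1), (0,  1), (1,  1)]

// Walk a single edge starting from a known edge pixel, collecting the
// pixels visited along the way
func walkEdge(var x: Int, var y: Int, width: Int, height: Int, inout map: [UInt8]) -> [CGPoint] {
    var edgePoints = [CGPoint]()
    while true {
        map[y * width + x] |= visitedFlag
        edgePoints.append(CGPoint(x: x, y: y))

        // Step to the first unvisited edge pixel among the neighbors
        var foundNext = false
        for (dx, dy) in neighborOffsets {
            let nx = x + dx
            let ny = y + dy
            if nx < 0 || ny < 0 || nx >= width || ny >= height {
                continue
            }
            let flags = map[ny * width + nx]
            if flags & edgeFlag != 0 && flags & visitedFlag == 0 {
                x = nx
                y = ny
                foundNext = true
                break
            }
        }
        if !foundNext {
            return edgePoints   // no more neighbors: the end of this edge
        }
    }
}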

Resolving double-edges

Let’s refer back to our definition of an edge pixel as any pixel with a large enough color difference between itself and any one of its neighbors. Can you spot the weakness in our approach?

If a pixel is determined to be an edge pixel because of a large enough color difference with a neighbor, then that neighbor, too, will be found to be an edge pixel.

To resolve this dilemma, we’ll include a third flag in our image map, one that denotes that a pixel should never be considered as an edge pixel. We’ll call this the no-edge flag.

As we scan the image for edge pixels, we ignore any pixel with the no-edge flag set. Also, when a pixel is found to be an edge pixel, the no-edge flag is set on the neighbor.

Of course, walking the edge is only part of the problem. We need to periodically drop vertices along the edge as we walk it. We could define an arbitrary number of steps along the edge (say, every 10 pixels) but that is fraught with problems. We could end up with a lot of waste--multiple vertices along a straight line. We could also end up with not enough detail--sharp corners could get chopped off at an angle because those corner points happened to land in-between the start and end points. Instead, we’re going to want to pay attention to the shape of the edge that we’re walking.

Recognizing shape means paying attention to the direction we’re moving as we walk along the edge and keeping track of how far we veer off course from a straight line. For example, as we make our way around a corner of the shape, we will need to recognize that we’ve changed direction too much for a straight line and drop a vertex to start a new line segment. That sounds tricky, but it’s really not.

Vectors to the rescue

If you’ve never worked with vectors, or if you’ve avoided them because you’ve heard that they were tricky or complicated, then this section is just for you.

This will be simple and painless. Promise!

Having a basic understanding of what a vector is, and what it can do for you, is critical for game development. You’ll be hard pressed to find any game that doesn’t make some use of vector math. We won’t cover every aspect of vector math, just enough to whet your whistle and introduce you to a whole new way of thinking about, and solving, problems of this sort.

What is a vector?

To understand the vector, let’s first think about what a point is. A point is an absolute position in space. An object positioned at [100, 35] is at that exact location. When we use points, we assign them to things that are absolutely positioned in space. Because of this, each object in our scene has its own positional point.

Let’s consider walking between two different points, A and B. When we move from point A to point B, then we travel the distance between the two points in the specific direction from point A toward point B. If we wanted to encapsulate that movement into a set of variables, we might calculate them as:

// Calculating how far to move between A and B
// in the X and Y directions
let distanceMovedInX = B.x - A.x
let distanceMovedInY = B.y - A.y

Let’s tuck these two variables (distanceMovedInX and distanceMovedInY) into our pocket and return to our original point A. If we were to add these two values to the x and y coordinates of point A, we would find point B:

// Calculating point B
B.x = A.x + distanceMovedInX
B.y = A.y + distanceMovedInY

Those two simple variables define a vector.

A vector is very similar to a point in that it, too, has an X and a Y coordinate. In fact, a vector can be stored in the same structure that stores a point. However, since we treat vectors differently, it usually makes sense to store them as their own type.

Unlike a point, a vector is not absolute. Gravity is a good example of a vector because gravity is a direction (usually down). A gravity vector doesn’t have a reference. That is, it is not tied to any absolute point; gravity is always straight down no matter where you are.

Because of the innate difference with how vectors are used, vector classes (including CGVector) tend to refer to the coordinates as dx and dy. These stand for delta X and delta Y, which are fancy words for the amount of change in X and Y.

Referring back to our sample points A and B, let’s look at how we define a vector in its more natural form:

// My first vector
var vector = CGVector()
vector.dx = B.x - A.x
vector.dy = B.y - A.y

A common use for a vector is to add it to (or subtract it from) a point. This is called vector addition, and we’ve already done this when we calculated point B from the variables distanceMovedInX and distanceMovedInY. Here is that code again, but this time we’ll use the more natural form:

// Vector addition
B.x = A.x + vector.dx
B.y = A.y + vector.dy

Vectors have one other attribute, their length (or magnitude). This isn’t usually stored with the vector because it is calculated from the dx and dy coordinates. The length is simply the distance that we travel between points A and B. It shouldn’t surprise you to see that the code to calculate the length of a vector looks remarkably like that used to calculate the distance between two points:

// Calculate the length of a vector
let dx2 = vector.dx * vector.dx
let dy2 = vector.dy * vector.dy
let length = sqrt(dx2 + dy2)

Like the distance between two points, the length of a vector will always be a positive value because it represents the total distance an object will move, regardless of the direction of travel, if the vector is added to it.

A very important feature of the vector is its ability to be compared with another vector in order to determine how similar their directions are. Consider an archery game where the player is allowed to shoot 10 arrows, which are scored based on the single best shot toward the center of the target. It’s your job to determine which of those arrows was the best shot. To do this, we’re going to need to compare each of the player’s 10 attempts with our true target vector.

This is going to require the use of the dot product. There sure are a lot of fancy words for vector operations, aren’t there? This one just means the difference in direction between two vectors. In practical, where-the-code-meets-the-keyboard terms, a dot product is a single floating point value that tells us how similar two vectors are. It’s a pretty magical thing!

Because vectors are directions and the difference between two directions is an angle, the dot product is closely related to the angle between two vectors. (If you’re curious, it represents the cosine of that angle.)

It feels like it’s time for another diagram. Here’s a vector wheel with eight spokes and a control vector (in red) that we’ll compare with each of the spokes. Their dx and dy coordinates are intentionally not shown because what is important is their direction.

[Figure: a vector wheel of eight spokes, each labeled with its dot product against the control vector]

Each vector on our wheel is labeled with the result of a dot product between that vector and the control vector. Note that the topmost vector in the diagram points straight up, which is the same direction as our control. This topmost vector is labeled 1.0 because when you compare two identical vectors with a dot product, you get 1.0. Remember that 1.0 means “equal to” in terms of a dot product.

The bottom-most arrow (labeled -1.0) points in the exact opposite direction from our control. If you think about it, -1.0 is a sensible way to say “exactly the opposite direction of.” You’ll notice the other values range between -1.0 and +1.0.

This is really useful! Modern 3D games, for example, will perform a dot product on the GPU millions of times per frame because the 3D calculation for how much light is reflected from a surface is the dot product between the surface direction and the incoming light direction to that surface. Physics calculations, and much more, rely on the dot product. Even rockets are sent into space using this handy calculation.

With something so useful, you might be surprised to find out just how simple the calculation is. Without further ado, I give you the dot product:

// Calculate the dot product between vectors v1 and v2
let dotProduct = v1.dx * v2.dx + v1.dy * v2.dy

We won’t go into the mathematical proof of how or why this works (that’s beyond the scope of this book), so instead let’s just take a leap of faith.

Before we can make use of our dot product, we need to cover one more detail. A dot product only provides values in the range of [-1.0 to +1.0] if the input vectors are normals. A normal is just another word for a vector whose length is 1.0. If we don’t normalize our input vectors, then our dot product will still be meaningful, but the resulting value will be scaled by the lengths of our input vectors.

The process of normalizing a vector is to simply divide each of its components (dx and dy) by its length. We’ve already seen how to calculate the length, so let’s put that to work normalizing a vector:

// Calculate the length of a vector 'v'
let length = sqrt(v.dx * v.dx + v.dy * v.dy)

// Normalize 'v' by dividing 'dx' and 'dy' by length
let normal = CGVector(dx: v.dx / length, dy: v.dy / length)
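
To see how normalization and the dot product come together, here’s a toy version of the earlier archery example (the vectors, values, and helper names are purely illustrative):

import CoreGraphics

// Normalize a vector by dividing its components by its length
func normalize(v: CGVector) -> CGVector {
    let length = sqrt(v.dx * v.dx + v.dy * v.dy)
    return CGVector(dx: v.dx / length, dy: v.dy / length)
}

// The dot product of two vectors
func dot(a: CGVector, b: CGVector) -> CGFloat {
    return a.dx * b.dx + a.dy * b.dy
}

// The true direction toward the target center, plus a couple of shots
let toTarget = normalize(CGVector(dx: 0, dy: 1))
let shots = [CGVector(dx: 0.2, dy: 1), CGVector(dx: -1, dy: 0.5)]

// The best shot is the one whose direction is most similar to toTarget
// (the dot product closest to 1.0)
var bestIndex = 0
var bestDot: CGFloat = -1.0
for i in 0 ..< shots.count {
    let d = dot(normalize(shots[i]), toTarget)
    if d > bestDot {
        bestDot = d
        bestIndex = i
    }
}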

Working with vectors the easy way

Core Graphics provides the CGVector structure for storing and working with vectors. In Pencil Adventure, we’ve extended CGVector to include some common operations, including calculating the length of a vector, vector arithmetic, and conversion to and from CGPoint structures. In addition, we’ve extended CGPoint in much the same way so that it is easy to perform common operations on them. You can see these extensions in the Extensions group inside of the PencilAdventure project.

Let’s take a quick look at how we use these common operations in Pencil Adventure.

// Define a couple of points A and B
let A = CGPoint(x: 10, y: 100)
let B = CGPoint(x: 250, y: 475)

// Create a vector that points in the direction from A to B
let vectorAB = (B - A).toCGVector()

// Calculate the length of vectorAB
// (this also tells us the distance between A and B)
let length = vectorAB.length

// Find point B by adding vectorAB to A
let newPointB = (A.toCGVector() + vectorAB).toCGPoint()

// Normalize (or, get the normal for) vectorAB
let normal = vectorAB.normal

Dropping vertices along the edge

Let’s get back to our pursuit of the vectorized texture. We’ve covered the process of finding edge pixels, marking them as visited and walking the edge pixels in an image map by stepping to neighboring edge pixels. The final piece to this puzzle is to recognize where the vertices should be placed in order to build the individual line segments that will eventually be drawn.

If we refer back to the image of the vectorized stool, we can see that vertices appear anywhere there is a bend in the edge. The sharper the bend, the more frequently the vertices will appear along that edge.

We can accomplish this by keeping track of the direction we move while stepping from pixel to pixel as we walk the edge. If that direction deviates too much, it is time to drop another vertex.

In the following diagram, we can see this process in action:

[Figure: walking an edge and measuring the error (e) against the starting vector]

Starting at the first edge pixel, we move along the edge to the right, giving us an initial direction (the starting vector, which has been extended in the diagram for explanation purposes). As we continue to walk along the edge, we check the angle (using the dot product) between each point along the edge and the starting vector. This angle represents the amount of error (e) as we deviate from the starting vector’s direction. This is called an error metric, as it is a way of measuring the amount of error. When e surpasses our tolerance for error, we drop a vertex and begin the process again with a new starting vector.
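
Here’s a rough sketch of that error metric in code, reusing the normalize and dot helpers from the earlier vector example (the tolerance value and the function name are our own):

import CoreGraphics

// Cosine of the allowed deviation from the starting vector's direction
let errorTolerance: CGFloat = 0.99

// Reduce a walked edge (every edge pixel, in order) to a small set of vertices
func simplifyEdge(edgePoints: [CGPoint]) -> [CGPoint] {
    if edgePoints.count < 3 {
        return edgePoints
    }

    var vertices = [edgePoints[0]]
    var anchor = edgePoints[0]              // the most recently dropped vertex
    var startDirection: CGVector? = nil     // the starting vector

    for i in 1 ..< edgePoints.count {
        let point = edgePoints[i]
        let step = normalize(CGVector(dx: point.x - anchor.x, dy: point.y - anchor.y))
        if startDirection == nil {
            startDirection = step           // the first step defines our direction
        } else if dot(step, startDirection!) < errorTolerance {
            vertices.append(point)          // veered too far: drop a vertex
            anchor = point
            startDirection = nil            // the next step starts a new segment
        }
    }

    vertices.append(edgePoints[edgePoints.count - 1])
    return vertices
}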

The beauty of this approach is that we have an error tolerance that we can adjust. Tight tolerance means that we follow the perimeter very closely (at the expense of more vertices), and a loose tolerance provides us with fewer vertices at the expense of a less accurate approximation.

Here are the results from a few different tolerances. Shown in the diagram is the number of segments needed to draw a tree.

[Figure: the same tree vectorized at different tolerances, with the resulting segment counts]

From the figure above, it’s clear to see how increasing tolerances can have a pretty drastic effect on the accuracy of our vector representation.

Final vector output

The final step in the vectorization process is to store the vertices that define the path of an edge so that they can be drawn by the sketching process. Paths are stored as an array of CGPoint structures ([CGPoint]). As an image may contain many disjoint edges, we will end up storing multiple path arrays as an array of arrays of CGPoint structures ([[CGPoint]]).

Sketching lines

Given a set of vector paths, we need to render them to an image that can be used as a texture on a sprite. In this section, we’ll discuss that rendering process and how we leverage the hierarchy of the scene for animating them.

If we were to draw the paths with standard lines, we would visit each line segment in the path (i.e., each pair of consecutive points in the array) and simply draw a line between them, repeating the process for every line segment in each path.

Our sketch rendering requires a few deviations from this standard way of drawing a path. First, we break up our segments into randomly sized sub-segments. The sub-segments are then extended so that they overlap one another and possibly extend beyond their intended edge, much like a human may accidentally overshoot an edge when drawing by hand. We then offset (or jitter) the endpoints in order to simulate human error, which reduces the accuracy of the represented image edge. Finally, we draw these sub-segments with a noisy line that resembles a pencil drawing.

These offset amounts are driven by a set of random numbers whose ranges are stored in a SketchMaterial structure. These properties include how much overlap to allow, how far a line may overshoot an edge, and how much offset (or jitter) is introduced into the sub-segment endpoints. By playing with these parameters we can change the look of our final rendered result quite a lot, from clean straight lines to messy fifth-grader sketches.
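
As an illustration of the idea (the real SketchMaterial in the game source has its own fields; the type, names, and values below are ours), the structure amounts to a bag of tunable ranges:

import CoreGraphics

// Hypothetical sketch parameters, not the game's actual SketchMaterial
struct SketchParameters {
    var minSegmentLength: CGFloat   // sub-segments are diced into this range (points)
    var maxSegmentLength: CGFloat
    var maxOvershoot: CGFloat       // how far a stroke may overshoot an edge
    var maxJitter: CGFloat          // endpoint offset that simulates human error
}

// Tight values give clean, careful lines...
let neatPencil = SketchParameters(minSegmentLength: 20, maxSegmentLength: 40,
                                  maxOvershoot: 2, maxJitter: 1)

// ...while loose values give the messy fifth-grader look
let messyPencil = SketchParameters(minSegmentLength: 5, maxSegmentLength: 15,
                                   maxOvershoot: 10, maxJitter: 4)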

Animating the sketches

The rendering process will contain random elements, so each time we render a set of vector paths, we’ll get a slightly different result.

We leverage this and render four sketched images for a sprite. We hide all but one of them for display and periodically select a different sketch image to simulate the jittery nature of hand-drawn animations.

Furthermore, performing this animation at a lower frame rate than the game gives the impression of a hand-animated game scene.
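
A minimal sketch of that idea follows (this is not the game’s actual code; the node setup, the random selection, and the timing are assumptions):

import SpriteKit
import Foundation

// Four pre-rendered sketch variants live as hidden children of a parent node;
// a slow repeating action reveals one at random and hides the rest.
func startSketchAnimation(sketchVariants: [SKSpriteNode], parent: SKNode) {
    if sketchVariants.isEmpty {
        return
    }
    for variant in sketchVariants {
        variant.hidden = true
        parent.addChild(variant)
    }
    sketchVariants[0].hidden = false

    // Swap roughly eight times per second, well below the 60fps render loop
    let swap = SKAction.runBlock {
        for variant in sketchVariants {
            variant.hidden = true
        }
        let pick = Int(arc4random_uniform(UInt32(sketchVariants.count)))
        sketchVariants[pick].hidden = false
    }
    let wait = SKAction.waitForDuration(0.125)
    parent.runAction(SKAction.repeatActionForever(SKAction.sequence([swap, wait])))
}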

Our SketchMaterial also contains the values that control the properties of the pencil drawing. From these properties, we’re able to drastically alter the visual appearance of our sketched sprites.

[Figure: sample drawing styles produced by different SketchMaterial settings]

The process used to create these drawing styles is surprisingly simple, consisting solely of many small lines.

The sub-segments generated by the sketch process are passed into our pencil drawing routine, addPencilLineToPath. This function dices the sub-segment into many micro lines (that’s fancy-speak for very short lines) as defined by the material, usually between 1 and 10 pixels. It then randomizes the position of the micro line’s end points according to a material parameter. Finally, the line is drawn. This process of drawing randomized micro lines continues until the entire sub-segment is drawn.

Adjusting the material parameters which control the length of each micro line allows us to control the density of the drawing appearance, while the material parameter for the micro line end-point randomization controls the thickness of the line.
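
As an illustration of the technique (this is not the actual addPencilLineToPath implementation; the function name and parameters here are ours), dicing and jittering a single sub-segment might look like this:

import CoreGraphics
import Foundation

// Dice one sub-segment into short "micro lines" and jitter each endpoint
// slightly so the stroke reads like pencil on paper.
func addPencilLine(path: CGMutablePath, from: CGPoint, to: CGPoint,
                   microLineLength: CGFloat, jitter: CGFloat) {
    // A small random offset in the range -jitter ... +jitter
    func jitterValue() -> CGFloat {
        return (CGFloat(arc4random_uniform(1000)) / 1000.0 * 2.0 - 1.0) * jitter
    }

    let dx = to.x - from.x
    let dy = to.y - from.y
    let length = sqrt(dx * dx + dy * dy)
    let steps = max(1, Int(length / microLineLength))

    var previous = from
    for i in 1 ... steps {
        let t = CGFloat(i) / CGFloat(steps)
        let next = CGPoint(x: from.x + dx * t, y: from.y + dy * t)
        CGPathMoveToPoint(path, nil, previous.x + jitterValue(), previous.y + jitterValue())
        CGPathAddLineToPoint(path, nil, next.x + jitterValue(), next.y + jitterValue())
        previous = next
    }
}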

Scale and aspect in a universal app

When creating a universal app, there are three factors related to the physical display that affect how sprites (and other nodes in your scene) are positioned and drawn on screen. These three factors are:

  • Screen size   The physical size of the screen.
  • Aspect ratio   The shape of the screen (4:3 or 16:9).
  • Pixel density   Beyond resolution, we’re referring to how densely packed the pixels are. Two screens may be the same size, with one screen having four times the number of pixels.

Sprite Kit abstracts this for us to some degree by using a coordinate system of points rather than pixels within your scene (SKScene) objects. The default resolution for a scene (in points, not pixels) is 1024 x 768. It’s important to be aware that whenever you are specifying a position, size, distance to move (etc.) for your nodes in a scene, these coordinates are always specified in points.

This abstraction simplifies many aspects of our scene management. Consider the iPad 2 vs the iPad Retina. The former has a resolution of 1024 x 768 while the latter has a resolution of 2048 x 1536 (exactly twice the width and twice the height of the iPad 2.) By using a coordinate system of points, Sprite Kit scenes provide a constant coordinate system to fill the screen of both devices without changing the underlying dimensions of the scene’s coordinate system.

This isn’t a silver bullet solution. Consider what happens when we attempt to use the scene’s 1024 x 768 coordinate system on an iPhone 5 (which has a screen resolution of 1136 x 640) in landscape orientation:

[Figure: a 1024 x 768 scene overflowing the top and bottom of the iPhone 5 display]

As you can see, the scene extends beyond the top and bottom edges of the iPhone 5’s landscape display. This is because the aspect ratio of the scene doesn’t match the aspect ratio of the iPhone’s display. If we were to naively place our player’s health meter and score either at the top or the bottom of the scene, they wouldn’t be visible when the game was played on the iPhone 5.

Sprite Kit doesn’t try to solve these problems for us because there are a number of ways to resolve them, each depending upon the individual game that we’re making.

In the following section, we’re going to discuss how to understand what Sprite Kit is doing for us and what we must do in addition to accomplish our goals for Pencil Adventure.

Scene aspect mode

The scene can be mapped to the display in a few different ways using the scene’s scaleMode property. The example above shows the default using SKSceneScaleMode.AspectFill, which maintains the aspect ratio of the scene while filling the screen. This results in a scene that either fills the screen perfectly (if the aspect of the display matches the aspect of the scene) or a scene that is cropped to the display (as shown in the iPhone 5 example.)

This default works best for Pencil Adventure, but it may not necessarily be the best choice for all game endeavors. There are other choices, such as .Fill, which simply maps the scene to the display, ignoring aspect ratios. Choose your scaleMode carefully as it has long-term implications on how you manage your scenes during development.
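
For reference, the scale mode is typically set when the scene is first presented. Here’s a minimal sketch (assuming a simple view controller whose view is an SKView; this is not the game’s actual presentation code):

import UIKit
import SpriteKit

class GameViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        if let skView = self.view as? SKView {
            // A 1024 x 768 (point) scene, presented with an aspect-preserving fill
            let scene = SKScene(size: CGSize(width: 1024, height: 768))
            scene.scaleMode = SKSceneScaleMode.AspectFill   // or .Fill, .AspectFit, .ResizeFill
            skView.presentScene(scene)
        }
    }
}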

Calculating scene scale

Nodes added to the scene may require additional scaling in order to appear correctly. In Pencil Adventure it was important that our sprites were scaled to the proper size for each device, from the iPhone to the iPad, such that a sprite that occupied 20 percent of the screen height on one device would occupy the same 20 percent of screen height on the next. We found that the default behavior did not always produce consistent results from device to device, so we used a simple technique to provide consistent scaling for all sprites.

The technique was simply to compute the ratio between the scene’s frame and the view within which the scene is being presented. This code was added to the base class for all scenes and is used to scale any nodes being added to the scene, allowing our sprites and other nodes to maintain consistent proportions from device to device.

func getSceneScale() -> CGSize {
    return CGSize(width: getSceneScaleX(), height: getSceneScaleY())
}

func getSceneScaleX() -> CGFloat {
    return frame.width / view!.frame.width
}

func getSceneScaleY() -> CGFloat {
    return frame.height / view!.frame.height
}

Applying scene scale to nodes in the scene

The scene’s scale is then applied to each sprite in the scene in the following manner:

sprite.xScale = scene.getSceneScaleX()
sprite.yScale = scene.getSceneScaleY()

Some nodes may have specific scale factors applied to them. In these cases, the scene’s scale factor is simply multiplied against the desired scale:

sprite.xScale = 0.5 * scene.getSceneScaleX()
sprite.yScale = 0.5 * scene.getSceneScaleY()

Calculating the viewable area

As described above, we use .AspectFill for mapping our scene to the display. Because this can produce clipping of our scene on some devices, it is important that we know the exact viewable portion of our scene (in points). This is especially useful for HUD information that should appear along the top or bottom edges of the screen.

The viewable area calculation can be found in the base class for all scenes, PaperScene.swift:

// Calculate our viewable area (in points)
let viewToFrameScale = frame.width / view.frame.size.width
viewableArea = CGRect()
viewableArea.size.width = view.frame.size.width * viewToFrameScale
viewableArea.size.height = view.frame.size.height * viewToFrameScale
viewableArea.origin.x = (frame.size.width - viewableArea.size.width) / 2
viewableArea.origin.y = (frame.size.height - viewableArea.size.height) / 2

When arranging items on the screen that are mapped to the extents of the display, we simply use our viewableArea to locate the proper position. Here’s an example of positioning the Lifeline node in the upper-right corner of the display: 

// Position ourselves in the upper-right corner
position.x = scene.viewableArea.origin.x + scene.viewableArea.size.width
position.y = scene.viewableArea.origin.y + scene.viewableArea.size.height

Scale is everywhere!

We’ve seen how the scene can be mapped in different ways to the underlying view/display. We’ve also seen how scale can be applied to sprites to provide consistent sizing across devices. Finally, we covered the calculations for determining the viewable area of a scene when using .AspectFill.

It’s also important to note that scale applies to every aspect of the visual display of your game. You will likely encounter other areas where your game needs to take special care when scaling its visual content. In Pencil Adventure, this extended to the rendering parameters for sketch rendering: higher resolution displays use a proportionally lower scale factor.

When developing your game, always consider scale and how your game will appear across the full range of devices, and devise ways to build scale correction into the design of your classes as much as possible. Doing so will reduce future debugging and headaches.

Improving game performance

You are probably familiar with the use of caching to speed up disk operations and to access recently used blocks of memory (as with the L1 and L2 caches on your processor). Reads from physical media or from RAM are ideal candidates for caching, but they certainly hold no monopoly on it.

Generalizing the use of caching

Caching, in general terms, is a tactic to avoid duplicating expensive operations.

In Pencil Adventure, we find ourselves vectorizing sprites (as discussed earlier in this chapter), which makes them another ideal candidate for caching, because vectorization is an expensive operation whose results can be calculated once and stored for later use.

The process for creating and managing our vectorized results is rather simple: each time we calculate the vector paths for a sprite, the results are stored in a file. Additionally, before we perform any vector work on a sprite, we first check the cache to see if the work has already been done.

In Pencil Adventure, we have extended this to a two-level caching mechanism. This means that our cache exists in two stages, one on disk and one in RAM. The advantage to this is that the RAM copy provides immediate access, while the disk is used to retain results across multiple runs of the game. Although the disk is slower, it still provides faster results than re-vectorizing the image.

The logic for this can be found in the vectorizeImage(...) method within ImageTools.swift, but here’s a bit of pseudo-code that may help clarify the strategy:

doWork(input)
{
    // Check local cache
    if (output found in local RAM cache)
    {
        return the local RAM cache copy
    }
    
    // Check remote cache
    if (output found in filesystem cache)
    {
        load from filesystem cache into local RAM cache
        return the local RAM cache copy
    }
    
    // Not cached, perform expensive calculations
    output = calculateOutput(input)
    
    // Store output in local cache
    addToLocalCache(output)
    
    // Store output in remote cache
    addToRemoteCache(output)

    // Finally, return the results to our caller    
    return output
}

The storage mechanism for a cache is a dictionary of some type, which allows us to map an input (the key) to an output (the value). This doesn’t mean that our cache has to be stored as a Dictionary type; a file can also be thought of as a dictionary in which the filename is the key and the content of the file is the value.

Cache invalidation

As discussed, we use a dictionary to store our cached data. Since we’re storing the results of vectorization for an image and our images are all uniquely named, we’ve chosen to use the image name as the dictionary key in our caches. This may sound like a reasonable solution, but there are actually a number of problems with this naive approach.

The cache output is dependent upon all of its inputs, not just the image name. If the contents of the image change without changing the name, then the cache will be none the wiser and provide out-of-date output. In addition, the vectorizing calculations are controlled by a number of parameters. If any of these parameters change, then the data stored in the cache will not match the desired output. For that matter, the vectorizing code itself may change.

In order to resolve all of these issues, we would need to generate a key value for our cache dictionary that takes into account the contents of the image, the parameters for the vector calculations, the code itself, and probably even more. That’s a lot of work, and it’s more work than we really need for Pencil Adventure. It boils down to a matter of convenience versus practicality within our schedule. In the end, we opted for a two-pronged approach to simplify these caching issues.

The first prong to our approach was to check the age of the cache entry against the age of the input. In other words, if the image file is newer than the cache file, we immediately invalidate that cache entry and force it to regenerate. This at least covers the most common problem of the art assets changing during development. This also allowed artists to freely change the art assets without fear of getting out-of-date caches.
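
A sketch of that age check follows (the file paths, the function name, and the NSFileManager-based approach are our illustration, not necessarily how ImageTools.swift does it):

import Foundation

// Returns true when the source image is newer than its cached vector data,
// or when either file's modification date can't be read.
func cacheIsStale(imagePath: String, cachePath: String) -> Bool {
    let fileManager = NSFileManager.defaultManager()
    let imageDate = fileManager.attributesOfItemAtPath(imagePath, error: nil)?[NSFileModificationDate] as? NSDate
    let cacheDate = fileManager.attributesOfItemAtPath(cachePath, error: nil)?[NSFileModificationDate] as? NSDate

    if imageDate == nil || cacheDate == nil {
        return true
    }

    // The image was modified after the cache entry was written: it is stale
    return imageDate!.timeIntervalSinceDate(cacheDate!) > 0
}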

The second prong was to provide a simple flag in the codebase that allows us to bypass the cache altogether. This was useful to developers who were working on the code or manipulating the vectorization parameters. You’ll find this flag, called disableCache, at the top of ImageTools.swift.

Summary

In this chapter we discussed the simplest of caches, used only to store pre-calculated data. More complex caches exist, such as caches that can theoretically manage more data than the computer can hold (like those used for disk storage and by databases), and many strategies exist to work around those limitations while still providing a dramatic benefit. You also learned how to tackle the world of pixels using vectors, a bit of trigonometry, and some problem-solving skills.

If this is your first exposure to the use of caching for performance, then you are encouraged to keep an eye out for possible conditions in which a cache can be used to improve performance in your future games.
