Chapter 1

Understanding Programming

Programming is nothing more than writing step-by-step instructions for a computer to follow. If you’ve ever written down the steps for a recipe or scribbled directions for taking care of your pets while you’re on vacation, you’ve already gone through the basic steps of writing a program. The key is simply knowing what you want to accomplish and then making sure you write the correct instructions that will tell someone how to achieve that goal.

Although programming is theoretically simple, it’s the details that can trip you up. First, you need to know exactly what you want. If you wanted a recipe for cooking chicken chow mein, following a recipe for cooking baked salmon won’t do you any good.

Second, you need to write down every instruction necessary to get you from your starting point to your desired result. If you skip a step or write steps out of order, you won’t get the same result. Try driving to a restaurant where your list of driving instructions omits telling you when to turn on a specific road. It doesn’t matter if 99 percent of the instructions are right; if just one instruction is wrong, you won’t get to your desired goal.

The simpler your goal, the easier it will be to achieve it. Writing a program that displays a calculator on the screen is far simpler than writing a program to monitor the safety systems of a nuclear power plant. The more complex your program, the more instructions you’ll need to write, and the more instructions you need to write, the greater the chance you’ll forget an instruction, write an instruction incorrectly, or write instructions in the wrong order.

Programming is nothing more than a way to control a computer to solve a problem, whether that computer is a laptop, smart phone, tablet, or wearable watch. Before you can start writing your own programs, you need to understand the basic principles of programming in the first place.

 Note  Don’t get confused between learning programming and learning a particular programming language. You can actually learn the principles of programming without touching a computer at all. Once you understand the principles of programming, you can easily learn any particular programming language such as Swift.

Programming Principles

To write a program, you have to write instructions that the computer can follow. No matter what a program does or how big it may be, every program in the world consists of nothing more than step-by-step instructions for the computer to follow, one at a time. The simplest program can consist of a single line such as:

print("Hello, world!")

Obviously, a program that consists of a single line won’t be able to do much, so most programs consist of multiple lines of instructions (or code) such as:

print("Hello, world!")
print("Now the program is done.")

This two-line program starts with the first line, follows the instructions on the second line, and then stops. Of course, you can keep adding more instructions to a program until you have a million instructions that the computer can follow sequentially, one at a time.

Listing instructions sequentially is the basis for programming. Unfortunately, it’s also limiting. For example, if you wanted to print the same message five times, you could use the following:

print("Hello, world!")
print("Hello, world!")
print("Hello, world!")
print("Hello, world!")
print("Hello, world!")

Writing the same five instructions is tedious and redundant, but it works. What happens if you want to print this same message a thousand times? Then you’d have to write the same instruction a thousand times.

Writing the same instruction multiple times is clumsy. To make programming easier, the goal is to write the least number of instructions to get the most work done. One way to avoid writing the same instruction multiple times is to organize your instructions using a second basic principle of programming, which is called a loop.

The idea behind a loop is to repeat one or more instructions multiple times, but only by writing those instructions down once. A typical loop might look like this:

for i in 1...5 {
  print("Hello, world!")
}

The first line tells the computer to repeat the loop five times. The second line tells the computer to print the message “Hello, world!” on the screen. The third line just defines the end of the loop.

Now if you wanted to make the computer print a message one thousand times, you don’t need to write the same instruction a thousand times. Instead, you just need to modify how many times the loop repeats such as:

for i in 1...1000 {
  print("Hello, world!")
}

Although loops are slightly more confusing to read and understand than a sequential series of instructions, loops make it easier to repeat instructions without writing the same instructions multiple times.

Most programs don’t exclusively list instructions sequentially or in loops, but use a combination of both such as:

print("Hello, world!")
print("Now the program is starting.")
for i in 1...1000 {
  print("Hello, world!")
}

In this example, the computer follows the first two lines sequentially and then follows the last three lines repetitively in a loop. Generally, listing instructions sequentially is fine when you only need the computer to follow those instructions once. When you need the computer to run instructions multiple times, that’s when you need to use a loop.

What makes computers powerful isn’t just the ability to follow instructions sequentially or in a loop, but in making decisions. Decisions mean that the computer needs to evaluate some condition and then, based on that condition, decide what to do next.

For example, you might write a program that locks someone out of a computer until that person types in the correct password. If the person types the correct password, then the program needs to give that person access. However, if the person types an incorrect password, then the program needs to block access to the computer. An example of this type of decision making might look like this:

if password == "secret" {
    print("Access granted!")
} else {
    print("Login denied!")
}

In this example, the computer asks for a password and when the user types in a password, the computer checks to see if it matches the word “secret.” If so, then the computer grants that person access to the computer. If the user did not type “secret,” then the computer denies access.

Making decisions is what makes programming flexible. If you write a sequential series of instructions, the computer will follow those lists of instructions exactly the same, every time. However, if you include decision-making instructions, also known as branching instructions, then the computer can respond according to what the user does.

Consider a video game. No video game could be written entirely with instructions organized sequentially because then the game would play exactly the same way every time. Instead, a video game needs to adapt to the player’s actions at all times. If the player moves an object to the left, the video game needs to respond differently than if the player moves an object to the right or gets killed. Using branching instructions gives computers the ability to react differently so the program never runs exactly the same.

To write a computer program, you need to organize instructions in one of the following three ways, as graphically shown in Figure 1-1:

  • Sequentially – the computer follows instructions one after another
  • Loop – the computer repetitively follows one or more instructions
  • Branching – the computer chooses one or more groups of instructions to follow based on outside data
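All three building blocks can appear together in a single short program. The following sketch combines sequential instructions, a loop, and a branch, reusing the password example from later in this chapter (the stored password value here is hypothetical):

```swift
// Sequential: these lines run once, in order.
let password = "secret"          // hypothetical stored password
print("Starting the program.")

// Loop: repeat the greeting three times.
for i in 1...3 {
    print("Hello, world! (\(i))")
}

// Branching: choose one of two paths based on a condition.
if password == "secret" {
    print("Access granted!")
} else {
    print("Login denied!")
}
```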


Figure 1-1. The three basic building blocks of programming

While simple programs may only organize instructions sequentially, every large program organizes instructions sequentially, in loops, and in branches. What makes programming more of an art and less of a science is that there is no single best way to write a program. In fact, it’s perfectly possible to write two different programs that behave exactly the same.

Because there is no single “right” way to write a program, there are only guidelines to help you write programs easily. Ultimately, all that matters is that you write a program that works.

When writing any program, there are two, often mutually exclusive goals. First, programmers strive to write programs that are easy to read, understand, and modify. This often means writing multiple instructions that clearly define the steps needed to solve a particular problem.

Second, programmers try to write programs that perform tasks efficiently, making the program run as fast as possible. This often means condensing multiple instructions as much as possible, using tricks or exploiting little-known features that are difficult to understand and confusing even to most other programmers.

In the beginning, strive toward making your programs as clear, logical, and understandable as possible, even if you have to write more instructions or type longer instructions to do it. Later as you gain more experience in programming, you can work on creating the smallest, fastest, most efficient programs possible, but remember that your ultimate goal is to write programs that just work.

Structured Programming

Small programs have fewer instructions so they are much easier to read, understand, and modify. Unfortunately, small programs can only solve small problems. To solve complicated problems, you need to write bigger programs with more instructions. The more instructions you type, the greater the chance you’ll make a mistake (called a “bug”). Even worse is that the larger a program gets, the harder it can be to understand how it works so that you can modify it later.

To avoid writing a single, massive program, programmers simply divide a large program into smaller parts called subprograms or functions. The idea is that each subprogram solves a single task. This makes it easy to write and ensure that it works correctly.

Once all of your separate functions work, then you can connect them all together to create a single large program as shown in Figure 1-2. This is like building a house out of bricks rather than trying to carve an entire house out of one massive rock.
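In Swift, a subprogram is written as a function using the func keyword. The sketch below illustrates the divide-and-conquer idea by splitting a tiny program into two functions; the function names are purely illustrative:

```swift
// Each function solves one small task.
func makeGreeting(name: String) -> String {
    return "Hello, \(name)!"
}

func makeFarewell() -> String {
    return "Now the program is done."
}

// The main program connects the small building blocks together.
print(makeGreeting(name: "world"))
print(makeFarewell())
```

Because each function does one job, you can test each piece on its own before connecting them.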


Figure 1-2. Dividing a large program into multiple subprograms or functions helps make programming more reliable

Dividing a large program into smaller programs provides several benefits. First, writing smaller subprograms is fast and easy, and small subprograms make it easy to read, understand, and modify the instructions.

Second, subprograms act like building blocks that work together, so multiple programmers can work on different subprograms, then combine their separate subprograms together to create a large program.

Third, if you want to modify a large program, you just need to yank out, rewrite, and replace one or more subprograms. Without subprograms, modifying a large program means wading through all the instructions stored in a large program and trying to find which instructions you need to change.

A fourth benefit of subprograms is that if you write a useful subprogram, you can plug that subprogram into other programs. By creating a library of tested, useful subprograms, you can create other programs quickly and easily by reusing existing code, thereby reducing the need to write everything from scratch.

When you divide a large program into multiple subprograms, you have a choice. You can store all your programs in a single file, or you can store each subprogram in a separate file as shown in Figure 1-3. By storing subprograms in separate files, multiple programmers can work on different files without affecting anyone else.


Figure 1-3. You can store subprograms in a single file or in multiple files

Storing all your subprograms in a single file makes it easy to find and modify any part of your program. However, the larger your program, the more instructions you’ll need to write, which can make searching through a single large file as clumsy as flipping through the pages of a dictionary.

Storing all of your subprograms in separate files means that you need to keep track of which files contain which subprogram. However, the benefit is that modifying a subprogram is much easier because once you open the correct file, you only see the instructions for a single subprogram, not for a dozen or more other subprograms.

Because today’s programs can get so large, it’s common to divide a program’s various subprograms into separate files.

Event-Driven Programming

In the early days of computers, most programs worked by starting with the first instruction and then following each instruction line by line until it reached the end. Such programs tightly controlled how the computer behaved at any given time.

All of this changed when computers started displaying graphical user interfaces with windows and pull-down menus so users could choose what to do at any given time. Suddenly every program had to wait for the user to do something such as selecting a menu command or clicking a button. Now programs had to wait for the user to do something before reacting.

Every time the user did something, that was considered an event. If the user clicked the left mouse button, that was a completely different event than if the user clicked the right mouse button. Instead of dictating what the user could do at any given time, programs now had to respond to whatever events the user triggered. Making programs responsive to different events is called event-driven programming.

Event-driven programs divide a large program into multiple subprograms where each subprogram responds to a different event. If the user clicked a menu command, a subprogram would run its instructions. If the user clicked a button, a different subprogram would run another set of instructions.

Event-driven programming always waits to respond to the user’s action.
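Real event-driven programs rely on a framework such as Cocoa to deliver events, but the core idea can be sketched in plain Swift: each event maps to a subprogram, and the program simply runs whichever subprogram matches the event that occurred. The event names here are hypothetical, not real Cocoa events:

```swift
// A dictionary maps event names to the subprograms that handle them.
let handlers: [String: () -> String] = [
    "leftClick":  { "Left mouse button clicked" },
    "rightClick": { "Right mouse button clicked" },
    "menuChoice": { "Menu command selected" }
]

// The program waits for an event, then runs whichever handler matches it.
func handle(event: String) -> String {
    if let handler = handlers[event] {
        return handler()
    }
    return "Unknown event: \(event)"
}

print(handle(event: "leftClick"))
```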

Object-Oriented Programming

Dividing a large program into multiple subprograms made it easy to create and modify a program. However, trying to understand how such a large program worked often proved confusing since there was no simple way to determine which subprograms worked together or what data they might need from other subprograms.

Even worse, subprograms often modified data that other subprograms used. This meant sometimes a subprogram would modify data before another subprogram could use it. Using the wrong data would cause the other subprogram to fail, causing the whole program to fail. Not only does this situation create less reliable software, but it also makes it much harder to determine how and where to fix the problem.

To solve this problem, computer scientists created object-oriented programming. The goal is still to divide a large program into smaller subprograms, but to organize related subprograms into groups known as objects. To make object-oriented programs easier to understand, objects also model physical items in the real world.

Suppose you wrote a program to control a robot. Dividing this problem by tasks, you might create one subprogram to move the robot, a second subprogram to tell the robot how to see nearby obstacles, and a third subprogram to calculate the best path to follow. If there was a problem with the robot’s movement, you would have no idea whether the problem lay in the subprogram controlling the movement, the subprogram controlling how the robot sees obstacles, or the subprogram calculating the best path to follow.

Dividing this same robot program into objects might create a Legs object (for moving the robot), an Eye object (for seeing nearby obstacles), and a Brain object (for calculating the best path to avoid obstacles). Now if there was a problem with the robot’s movement, you could isolate the problem to the code stored in the Legs object.
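A minimal sketch of the robot example in Swift might look like the following, where each object’s code lives in its own class (all names are illustrative, not a real robotics API):

```swift
// Each class isolates the code for one part of the robot.
class Legs {
    func move() -> String { return "Walking forward" }
}

class Eye {
    func scan() -> String { return "Scanning for obstacles" }
}

class Brain {
    func planPath() -> String { return "Calculating the best path" }
}

// A movement problem can now be traced straight to the Legs object.
let legs = Legs()
print(legs.move())
```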

Besides isolating data within an object, a second idea behind object-oriented programming is to make it easy to reuse and modify a large program. Suppose we replace a robot’s legs with treads. Now we’d have to modify the subprogram for moving the robot since treads behave differently than legs. Next, we’d have to modify the subprogram for calculating the best path around obstacles since treads force a robot to go around obstacles while legs would allow a robot to walk over small obstacles and go around larger ones.

If you wanted to replace a robot’s legs with treads, object-oriented programming would simply allow you to yank out the Legs object and replace it with a new Treads object, without affecting or needing to modify any additional objects that make up the rest of the program.

Since most programs are constantly modified to fix errors (known as bugs) or add new features, object-oriented programming allows you to create a large program out of separate building blocks (objects), and modify a program by only modifying a single object.

The key to object-oriented programming is to isolate parts of a program and promote reusability through three features known as encapsulation, inheritance, and polymorphism.

Encapsulation

The main purpose of encapsulation is to protect and isolate one part of a program from another. To do this, encapsulation hides an object’s data so it can never be changed by another part of the program. In addition, an object holds all the subprograms that manipulate its data. If any problems occur, encapsulation makes it easy to isolate the problem within a particular object.

Every object is meant to be completely independent of any other part of a program. Objects store data in properties. The only way to manipulate those properties is to use subprograms called methods, which are also encapsulated in the same object. The combination of properties and methods, isolated within an object, makes it easy to create large programs quickly and reliably by using objects as building blocks as shown in Figure 1-4.


Figure 1-4. Objects encapsulate related subprograms and data together, hidden from the rest of a program
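In Swift, encapsulation appears as a class that keeps its data in a private property and exposes methods as the only way to change it. A minimal sketch, using a hypothetical bank account example:

```swift
class BankAccount {
    // The property is hidden; no outside code can change it directly.
    private var balance = 0

    // Methods are the only way to manipulate the hidden data.
    func deposit(amount: Int) {
        balance += amount
    }

    func currentBalance() -> Int {
        return balance
    }
}

let account = BankAccount()
account.deposit(amount: 100)
print(account.currentBalance())   // 100
```

Because balance is private, any bug involving a wrong balance must live inside the BankAccount object, which is exactly the isolation encapsulation promises.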

Inheritance

Creating a large, complicated program is hard, but what makes that task even harder is writing the whole program from scratch. That’s why most programmers reuse parts of an existing program for two reasons. First, they don’t have to rewrite the feature they need from scratch. That means they can create a large program faster. Second, they can use tested code that’s already proven to work right. That means they can create more reliable software faster by reusing reliable code.

One huge problem with reusing code is that you never want to make duplicate copies. Suppose you copied a subprogram and pasted it into a second subprogram. Now you have two copies of the exact same code stored in two separate places in the same program. This wastes space, but more importantly, it risks causing problems in the future.

Suppose you found a problem in a subprogram. To fix this problem, you’d need to fix this code everywhere you copied and pasted it in other parts of your program. If you copied and pasted this code in two other places in your program, you’d have to find and fix that code in both places. If you copied and pasted this code in a thousand places in your program, you’d have to find and fix that code in a thousand different places.

Not only is this inconvenient and time consuming, but it also increases the risk of overlooking code and leaving a problem in that code. This creates less reliable programs.

To avoid the problem of fixing multiple copies of the same code, object-oriented programming uses something called inheritance. The idea is that one object can use all the code stored in another object but without physically making a copy of that code. Instead, one object inherits code from another object, but only one copy of that code ever exists.

Now you can reuse one copy of code as many times as you want. If you need to fix a problem, you only need to fix that code once, and those changes automatically appear in any object that reuses that code through inheritance.

Basically, inheritance lets you reuse code without physically making duplicate copies of that code. This makes it easy to reuse code and easily modify it in the future.
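Swift expresses inheritance with a colon after the class name. In this sketch, a Dog class reuses the code stored in an Animal class without copying it (the class names are illustrative):

```swift
class Animal {
    func eat() -> String { return "Eating food" }
}

// Dog inherits eat() from Animal; only one copy of that code exists.
class Dog: Animal {
    func bark() -> String { return "Woof!" }
}

let dog = Dog()
print(dog.eat())    // reused from Animal through inheritance
print(dog.bark())
```

If you later fix a bug in eat(), every class that inherits from Animal gets the fix automatically.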

Polymorphism

Every object consists of data (stored in properties) and subprograms (called methods) for manipulating that data. With inheritance, one object can use the properties and methods defined in another object. However, inheritance can create a problem when you need a subprogram (method) to use different code.

Suppose you created a video game. You might define a car as one object and a monster as a second one. If the monster throws rocks at the car, the rocks would be a third object. To make the car, monster, and rocks move on the screen, you might create a method named Move.

Unfortunately, a car needs to move differently on the screen than a monster or a thrown rock. You could create three subprograms and name them MoveCar, MoveMonster, and MoveRock. However, a simpler solution is to just give all three subprograms the same name such as Move.

In traditional programming, you can never give the same name to two or more subprograms since the computer would never know which subprogram you want to run. However in object-oriented programming, you can use duplicate subprogram names because of polymorphism.

Polymorphism lets you reuse the same method name but replace it with different code. Polymorphism works because each Move subprogram (method) gets stored in a separate object: one object represents the car, a second object represents the monster, and a third object represents a thrown rock. To run a particular Move subprogram, you identify the object that contains the version you want to use such as:

  • Car.Move
  • Monster.Move
  • Rock.Move

By identifying both the object that you want to manipulate and the subprogram that you want to use, object-oriented programming can correctly identify which set of instructions to run even though one subprogram has the identical name as another subprogram.

Essentially, polymorphism lets you create descriptive subprogram names and reuse that descriptive name as often as you like when you also use inheritance.
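In Swift, polymorphism is written with the override keyword: each subclass keeps the same method name, move, but replaces its code. This sketch follows the chapter’s car/monster/rock example (the shared GameObject parent class is an assumption added for illustration):

```swift
class GameObject {
    func move() -> String { return "Moving generically" }
}

class Car: GameObject {
    override func move() -> String { return "Driving along the road" }
}

class Monster: GameObject {
    override func move() -> String { return "Lumbering toward the car" }
}

class Rock: GameObject {
    override func move() -> String { return "Flying through the air" }
}

// The same method name runs different code depending on the object.
print(Car().move())
print(Monster().move())
print(Rock().move())
```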

The combination of encapsulation, inheritance, and polymorphism forms the basis for object-oriented programming. Encapsulation isolates one part of a program from another. Inheritance lets you reuse code. Polymorphism lets you reuse method names but with different code.

Understanding Programming Languages

A programming language is nothing more than a particular way to express ideas, much like human languages such as Spanish, Arabic, Chinese, or English. Computer scientists create programming languages to solve specific types of problems. That means one programming language may be perfect to solve one type of problem but horrible at solving a different type of problem.

One of the most popular programming languages is C, which was designed for low-level access to computer hardware. As a result, the C language is great for creating operating systems, anti-virus programs, and hard disk encryption programs. Anything that needs to completely control hardware is a perfect task for the C programming language.

Unfortunately, C can be cryptic and sparse because it was designed for maximum efficiency for computers without regard for human efficiency for reading, writing, or modifying C programs. To improve C, computer scientists created an object-oriented version of C called C++ and a slightly different object-oriented version called Objective-C, which was the language Apple adopted for OS X and iOS programming.

Because C was originally designed for computer efficiency at the expense of human efficiency, all variants of C including C++ and Objective-C can also be difficult to learn, use, and understand. That’s why Apple created Swift. The purpose of Swift is to give you the power of Objective-C while being far easier to learn, use, and understand. Swift is basically an improved version of Objective-C, which was an improved version of C.

Each evolution of computer programming builds on the previous programming standards. When you write programs in Swift, you can use Swift’s unique features along with object-oriented features. In addition, you can also use event-driven programming, structured programming, and the three basic building blocks of programming (sequential, loops, and branching) when writing Swift programs as well.

Swift, like all computer programming languages, contains a fixed list of commands known as keywords. To tell a computer what to do, you use keywords to create statements that cause the computer to perform a single task.

Keywords act as building blocks for any programming language. By combining keywords, you can create subprograms or functions that give a programming language more power. For example, one keyword in Swift is var, which defines a variable. One function defined in Swift is print, which prints data on the screen.
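A two-line sketch shows the var keyword and the print function working together:

```swift
var message = "Hello, world!"   // the var keyword defines a variable
print(message)                  // the print function displays its data
```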

You’ve already seen the print function that prints text such as:

print("Hello, world!")

The Swift print function works with any data enclosed in its parentheses. Just as learning a human language requires first learning the basic symbols used to write letters, such as an alphabet, so learning a programming language requires first learning the keywords and functions of that particular programming language.

Although Swift contains dozens of keywords and functions, you don’t have to learn all of them at once to write programs in Swift. You just have to learn a handful of keywords and functions initially. As you get more experienced, you’ll gradually need to learn additional Swift keywords and functions.

To make programming as easy as possible, Swift (like many programming languages) uses keywords and functions that look like ordinary English words such as print or var (short for variable). However, many programming languages, such as Swift, also use symbols that represent different features.

Common symbols are mathematical symbols for addition (+), subtraction (-), multiplication (*), and division (/).
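These four symbols work in Swift exactly as they do in arithmetic, with one wrinkle worth noting: dividing two whole numbers throws away any remainder.

```swift
print(7 + 3)   // addition: 10
print(7 - 3)   // subtraction: 4
print(7 * 3)   // multiplication: 21
print(7 / 3)   // whole-number division: 2 (the remainder is discarded)
```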

Swift also uses curly brackets to group related code that needs to run together such as:

{
    print("This is a message")
    print("The message is done now")
}

Unlike human languages where you can misspell a word or forget to end a sentence with a period and people can still understand what you’re saying, programming languages are not so forgiving. With a programming language, every keyword and function must be spelled correctly, and every symbol must be used where needed. Misspell a single keyword or function, use the wrong symbol, or put the right symbol in the wrong place, and your entire program will fail to work.

Programming languages are precise. The key to programming is to write:

  • As little code as possible
  • Code that does as much as possible
  • Code that’s as easy to understand as possible

You want to write as little code as possible because the less code you write, the easier it will be to ensure that it works right.

You want code that does as much as possible because that makes your program more capable of solving bigger problems.

You want code that’s easy to understand because that makes it easy to fix problems and add features.

Unfortunately, these three criteria are often at odds with one another as shown in Figure 1-5.


Figure 1-5. The three, often contradictory, goals of programming

If you write as little code as possible, that usually means your code can’t do much. That’s why programmers often resort to shortcuts and programming tricks to condense the size of their code, but that also increases the complexity of their code, making it harder to understand.

If you write code that does as much as possible, that usually means writing large numbers of commands, which makes the code harder to understand.

If you write code that’s easy to understand, it usually won’t do much. If you write more code to make it do more, that makes it harder to understand.

Ultimately, computer programming is more of an art than a science. In general, it’s better to focus on making code as easy to understand as possible because that will make it easier to fix problems and add new features. In addition, code that’s easy to understand means other programmers can fix and modify your program if you can’t do it.

That’s why Apple created Swift to make code easier to understand than Objective-C without sacrificing the power of Objective-C. Despite being more powerful than Objective-C, Swift code can often be shorter than equivalent Objective-C code. For that reason, the future programming language of Apple will be Swift.

The Cocoa Framework

Keywords, functions, and symbols let you give instructions to a computer, but no programming language can provide every possible command you might need to create all types of programs. To provide additional commands, programmers use keywords to create subprograms that perform a specific task.

When they create a useful subprogram, they often save it in a library of other useful subprograms. Now when you write a program, you can use the programming language’s keywords plus any subprograms stored in libraries. By reusing subprograms stored in libraries, you can create more powerful and reliable code.

For example, one library might contain subprograms for displaying graphics. Another library might contain subprograms for saving data to a disk and retrieving it again. Still another library might contain subprograms for calculating mathematical formulas. To make writing OS X and iOS programs easier, Apple has created a library framework of useful subprograms called the Cocoa framework.

There are two reasons for reusing an existing framework. First, reusing a framework keeps you from having to write your own instructions to accomplish a task that somebody else has already solved. Not only does a framework provide a ready-made solution, but a framework has also been tested by others, so you can just use the framework and be assured that it will work correctly.

A second reason to use an existing framework is for consistency. Apple provides frameworks for defining the appearance of a program on the screen, known as the user interface. This defines how a program should behave from displaying windows on the screen to letting you resize or close a window by clicking the mouse.

It’s perfectly possible to write your own instructions for displaying windows on the screen, but chances are good that writing your own instructions would take time to create and test, and the end result would be a user interface that may not look or behave identically to other OS X or iOS programs.

However, by reusing an existing framework, you can create your own program quickly and ensure that your program behaves the same way that other programs behave. Although programming might sound complicated, Apple provides hundreds of pre-written and tested subprograms that help you create programs quickly and easily. All you have to do is write the custom instructions that make your program solve a specific, unique problem.

To understand how Apple’s Cocoa framework works, you need to understand object-oriented programming for two reasons. First, Swift is an object-oriented programming language, so to take full advantage of Swift, you need to understand the advantages of object-oriented programming.

Second, Apple’s Cocoa framework is based on object-oriented programming. To understand how to use the Cocoa framework, you need to use objects.

 Note  The Cocoa framework is designed for creating OS X programs. A similar framework called Cocoa Touch is designed for creating iOS apps. Because the Cocoa Touch framework (iOS) is based on the Cocoa framework (OS X), they both work similarly but offer different features.

By relying on the Cocoa framework, your programs can gain new features each time Apple updates and improves the Cocoa framework. For example, spell checking is a built-in feature of the Cocoa framework. If you write a program using the Cocoa framework, your programs automatically get spell-checking capabilities without you having to write any additional code whatsoever. When Apple improves the spell-checking feature in the Cocoa framework, your program automatically gets those improvements with no extra effort on your part.

Cocoa is a general term that describes all of Apple’s pre-written and tested libraries of code. Different parts of the Cocoa framework can give your programs audio playback capabilities, graphics capabilities, storage for contact information such as names and addresses, and Internet capabilities.

The Cocoa framework creates the foundation of a typical OS X program. All you have to do is write Swift code that makes your program unique.

The Model-View-Controller Design

It’s possible to write a large program and store all your code in a single file. However, that makes finding anything in your program much harder. A better solution is to divide a large program into parts and store related parts in separate files. That way you can quickly find the part of your program you need to modify, and you make it easy for multiple programmers to work together since one programmer can work on one file and a second programmer can work on a second file.

When you divide a program into separate files, it’s best to stay organized. Just as you might store socks in one drawer and pants in a second drawer so you can easily find the clothes you need, so should you organize your program into multiple files so each file only contains related data. That way you can find the file you need to modify quickly and easily.

The type of data you can store in a file generally falls into one of three categories as shown in Figure 1-6:

  • Views (user interface)
  • Models
  • Controllers


Figure 1-6. Dividing a program into a model-view-controller design

The view or user interface is what the user sees. The purpose of every user interface is to display information, accept data, and accept commands from the user. In the old days, programmers often created user interfaces by writing code. While you can still do this in Swift, it’s time consuming, error prone, and inconsistent because one programmer’s user interface might not look and behave exactly like a second programmer’s user interface.

A better option is to design your user interface visually, which is what Xcode lets you do. Just draw objects on your user interface such as buttons, text fields, and menus, and Xcode automatically creates an error-free, consistent-looking user interface that only takes seconds to create. When you create a user interface using Xcode, you’re actually using Apple’s Cocoa framework to do it.

By itself, the user interface looks good but does nothing. To make a program do something useful, you need to write code that solves a specific problem. For example, a lottery-picking program might analyze the latest numbers picked and determine the numbers most likely to get picked the coming week. The portion of your code that calculates a useful result is called the model.

The model is actually completely separate from the view (user interface). That makes it easy to modify the user interface without affecting the model (and vice versa). By keeping different parts of your program as isolated as possible from other parts of your program, you reduce the chance of errors occurring when you modify a program.

Since the model is always completely separate from the view, you need a controller. When a user chooses a command or types data into the user interface, the controller takes that information from the view and passes it to the model.

The model responds to this data or commands and calculates a new result. Then it sends the calculated result to the controller, which passes it back to the view to display for the user to see. At all times, the view and model are completely separate.

With Xcode, this means you’ll write and store the bulk of your Swift code in files that define the model of your program. If you were calculating winning lottery numbers, your Swift code for performing these calculations would be stored in the model.

You’ll also need to write and store Swift code in a controller file. In a simple program, you might have a single view. In a more complicated program, you might have several views. For each view, you’ll typically need a controller file to control that view. So the second place you’ll write and store Swift code is in the controller files.

In simple programs, it’s common to combine the model and a controller in one file. However, for larger, more complicated programs, it’s better to create one (or more) files for your model and one file for each controller. The number of controller files is usually equal to the number of views that make up your user interface.
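To make this division concrete, here is a minimal sketch in Swift, using the lottery example from earlier in this chapter. The type names (`LotteryModel`, `LotteryController`) and methods are hypothetical illustrations, not part of the Cocoa framework. Notice that the model does all the calculating, while the controller only ferries data between the view and the model:

```swift
// Hypothetical model: pure calculation, with no user-interface code at all.
struct LotteryModel {
    private(set) var pastDraws: [[Int]] = []

    mutating func record(draw: [Int]) {
        pastDraws.append(draw)
    }

    // Returns the `count` numbers that have been drawn most often so far.
    func mostFrequentNumbers(count: Int) -> [Int] {
        var frequency: [Int: Int] = [:]
        for draw in pastDraws {
            for number in draw {
                frequency[number, default: 0] += 1
            }
        }
        return frequency.sorted { $0.value > $1.value }
                        .prefix(count)
                        .map { $0.key }
    }
}

// Hypothetical controller: it passes user input to the model and hands
// the model's result back to the view. It never calculates anything itself.
final class LotteryController {
    private var model = LotteryModel()

    // Called by the view when the user enters a new draw.
    func userEntered(draw: [Int]) {
        model.record(draw: draw)
    }

    // Called by the view when it needs text to display.
    func predictionText() -> String {
        let picks = model.mostFrequentNumbers(count: 3)
        return "Most frequent numbers: \(picks)"
    }
}
```

Because the model knows nothing about the view, you could attach this same model and controller to an OS X window or an iOS screen without changing a line of the calculation code.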

Once you’ve clearly separated your program into models, views, and controllers, you can modify your program quickly and easily by replacing one file with a new one. For example, if you wanted to modify the user interface, you could simply design a different view, connect it to your controller, and be done without touching your model files.

If you wanted to add new features to the model, you could just update your model files and connect them to your controller without touching your view.

In fact, this is exactly how programmers create programs for both OS X and iOS. Their model remains unchanged. All they do is write one controller and view for OS X and one controller and view for iOS. Then they can create both OS X and iOS apps with the exact same model.

How Programmers Work

In the early days, a single programmer could get an idea and write code to make that program work. Nowadays, programs are far more complicated and user expectations are much higher, so you need to design your program before writing any code. In fact, most programmers don’t spend much of their time writing or editing code at all. Instead, they spend the bulk of their time thinking, planning, organizing, and designing programs.

When a programmer has an idea for a program, the first step is to decide if that idea is even any good. Programs must solve a specific type of problem. For example, word processors make it easy to write, edit, and format text. Spreadsheets make it easy to type in numbers and calculate new results from those numbers. Presentation programs make it easy to type text and insert graphics to create a slideshow. Even video games solve the problem of relieving boredom by offering a challenging puzzle or goal for players to achieve.

 Note  The biggest failure in software development is not defining a specific problem to solve. The second biggest is identifying a specific problem but underestimating the complexity of the steps needed to solve it. You have to know both what problem to solve and how to solve that problem. If you don’t know either one, you can’t write a useful program.

Once you have an idea for a problem to solve, the next step is defining how to solve that problem. Some problems are simply too difficult to solve. For example, how would you write a program to write a best-selling novel? You may be able to write a program that can help you write a novel, but unless you know exactly how to create a best-selling novel predictably, you simply can’t write such a program.

Knowing a problem to solve is your target. Once you clearly understand your problem, you now need to identify all the steps needed to solve that problem. If you don’t know how to solve a particular problem, you won’t be able to tell a computer how to solve that problem either.

After defining the steps needed to solve a problem, now you can finally write your program. Generally you’ll write your program in parts and test each part before continuing. In this way, you’ll gradually expand the capabilities of your program while making sure that each part works correctly.
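As a small illustration of writing and testing one part at a time, here is a toy Swift function (a hypothetical example, not from any particular project) checked with assertions before moving on to the next part:

```swift
// A small, self-contained part of a larger program: compute an average.
func average(of numbers: [Int]) -> Double {
    // Guard against dividing by zero when the list is empty.
    guard !numbers.isEmpty else { return 0 }
    return Double(numbers.reduce(0, +)) / Double(numbers.count)
}

// Test this part works correctly before building the next part on top of it.
assert(average(of: [2, 4, 6]) == 4.0)
assert(average(of: []) == 0)
```

Each part you verify this way becomes a solid foundation for the next one, so when something breaks later, you know the error is in the newest code.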

The main steps you’ll go through while using Xcode are:

  • Write code and design your user interface
  • Edit your code and user interface
  • Run and test your program

When your program is finally done, guess what? It’s never done. There will always be errors you’ll need to fix and new features you’ll want to add. Programmers actually spend more time editing and modifying existing programs than they ever do creating new ones.

When you write and edit code, you’ll use an editor, a program that resembles a word processor and lets you type and edit text. When you design and modify your user interface, you’ll use a feature in Xcode called Interface Builder, which resembles a drawing program that lets you drag, drop, and resize objects on the screen. When you run and test your program, you’ll use a compiler, which converts (or compiles) your Swift code into an actual working OS X program.

To create an OS X program, you just need a copy of Xcode, which is Apple’s free programming tool that you can download from the Mac App Store. With Xcode installed on your Macintosh, you can create your own programs absolutely free just by learning Xcode and Swift.

Summary

To learn how to write programs for the Macintosh, you need to learn several separate skills. First, you need to understand the basic principles of programming. That includes organizing instructions into sequences, loops, and branches, and understanding structured programming, event-driven programming, and object-oriented programming.

Second, you need to learn a specific programming language. For the Macintosh, you’ll be learning Swift. That means you’ll learn the keywords and functions used in Swift along with learning how to write and organize your Swift code in separate files.

Third, you need to know how to use Xcode, which is Apple’s programming tool for creating OS X and iOS apps. Xcode lets you write and edit Swift code as well as letting you visually design and modify your user interface.

Fourth, you’ll need to learn how to use Apple’s Cocoa framework so that you can focus solely on writing instructions that make your program do something unique.

Whether you want to write your own software to sell, or sell your programming skills to create custom software for others, you’ll find that programming is a skill that anyone can learn.

Remember, programming is nothing more than problem solving. By knowing how to solve problems using Swift and Xcode, you can create your own programs for the Macintosh far more easily and quickly than you might think.
