2. Why Is Computer Security So Weak?

Come On, Guys! Can’t You Do Better?

The history of computers for the last 80 years has been a fall from innocence. The story begins in a protected place where computer crime as we know it today was non-existent. As the computing community delved deeper into the computer’s potential, the computing garden was gradually infiltrated by cybercrime. The Internet let in all comers, including those remarkably lacking in innocence, to an unprepared community, and the scene was set for cybermayhem. This chapter describes the road to mayhem.

The security problems of today are traceable to the origins of computing. The form of security used for early computers is almost irrelevant to contemporary cybersecurity. In the best of all possible worlds, protection for users would be built into the foundation of computer design. But that could not happen, because early computer designers had no concept of protecting the user from intrusion and damage, and certainly no idea that computers might be connected into an enormous worldwide web. The fundamental decisions made in the early stages of computer design do not correspond to the needs of the present. Hardware and software engineers are still dealing with the traces of those early decisions.

Babbage and Lovelace

Charles Babbage is usually cited as the inventor of the programmable computer and Ada Lovelace as the first computer programmer. They lived and worked in the early and mid-19th century. Although Babbage and Lovelace were clearly brilliant and far ahead of their time, their accomplishments were only tenuous influences at best on the scientists and engineers in the first half of the 20th century, who began the line of development that leads to the computers of today.

Babbage first designed what he called the Difference Engine. The device required a skilled and innovative machine shop for its construction, pushing the limits of Victorian technology. The Difference Engine was a calculator that could calculate logarithmic and trigonometric tables, which were important for navigation, but it was not programmable. A later design called the Analytic Engine was programmable. Ada Lovelace wrote programs for the Analytic Engine. None of Babbage’s designs were completely constructed during his lifetime, although a modern replica of a version of the Difference Engine was built and successfully tested in 1991 at the Science Museum in London.1

The Babbage devices were analog, not digital. Electronic digital computers represent numbers and symbols as discrete values, usually voltages in electronic circuits. A value is determined by whether a voltage falls within a defined range. In modern computer memory, a value must be either 1 or 0; an intermediate value is impossible. The zeroes and ones combine to express everything else.

An analog computer depends on physical measurements rather than discrete values and must be manufactured with great precision to produce correct results. Babbage designed more complex and powerful computers than those that preceded his, but he stayed within the analog tradition. Moving from analog to digital was one of the accomplishments that began modern computing.

Babbage’s triumphant innovation was programmability. He designed his Difference Engine to calculate by finite differences, a mathematical method that applies to many problems but is much narrower than the range of problems that can be addressed by a programmable computer. The Analytic Engine was programmable; it could be given instructions to solve many different problems. In fact, computer scientists have proven that the Analytic Engine could solve any problem that a modern digital computer can solve, although impractically slowly.2

Ada Lovelace not only devised programs for the Analytical Engine, she conceived an important concept of modern programming: numeric codes can represent symbols. For example, the code “97” represents the letter “a” in most computers. A series of numeric codes like this can represent a page of text. This simple insight moves computing from numerical calculation to manipulating language and general reasoning.
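
A short Python snippet makes Lovelace’s insight concrete. It is only an illustration; the specific codes shown come from the ASCII/Unicode convention used by most modern computers, not anything Lovelace herself specified.

```python
# Numbers standing for symbols: the code 97 represents the letter "a"
# in the ASCII/Unicode convention used by most modern computers.

text = "a page of text"

codes = [ord(ch) for ch in text]             # letters become numbers
print(codes)                                 # [97, 32, 112, 97, 103, 101, ...]

restored = "".join(chr(c) for c in codes)    # numbers become letters again
print(restored)                              # "a page of text"
```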

Since Babbage’s designs were not built in his lifetime, security was never an issue, but it is easy to imagine that a simple lock on the mechanism, a guard, or a lock on the door would have adequately protected the machinery. In addition, it is hard to imagine anyone wishing to interfere with a machine cranking away at calculating a table of logarithms. If anyone had, the ordinary means for dealing with theft or trespass would probably have been adequate.

Unfortunately, the great discoveries of Babbage and Lovelace were forgotten and had to be reinvented in the 20th century. World War II marked the beginning of modern computing.

The Programmable Computer

The development of contemporary computing began in earnest during the buildup to World War II. Early computers were primarily military. They were developed for two tasks: aiming big naval guns and breaking codes. Both were intense numerical tasks that had to be performed under the fierce pressure of war.

Before computers, these tasks were performed by squads of clerks with simple calculators such as mechanical adding machines. Methods were developed for combining the results of each human calculator into a single calculation and checking the results, but the process was still slow and errors were common. An incorrect ballistic calculation could hurl expensive ordnance off-target, resulting in lost battles, sunken friendly ships, and injury or death. Cracked codes could determine the outcome of battles, turn the course of war, and save lives. Prompt and accurate calculations were vital to national survival.

The first calculation machines were analog, like Babbage’s Difference Engine. These machines were faster and more accurate than human calculators, but each could perform only one type of computation. If a cipher changed, the machine was useless until it was rebuilt. A device that could perform different types of calculation without torturous rebuilding would be more efficient and useful. Babbage’s Analytical Engine might have filled the bill, but it was long forgotten and probably too slow.

Digital electronic computers were faster and easier to build than analog machines, and development in the 1940s was almost entirely digital. Computing took a huge step forward when calculating instructions joined data in computer input. The first computer designed to combine program instructions with data input was the Electronic Discrete Variable Automatic Computer (EDVAC).3 John von Neumann (1903-1957) wrote and released the first report on the design of the EDVAC. He is considered the inventor of the program-stored-as-data architecture; Babbage and Lovelace had postulated its basic ideas but never implemented them, and their work was forgotten until after it had been reinvented. Modern computers are based on the von Neumann architecture.

The von Neumann architecture gives computing one of its most important characteristics: its malleability. Computers are limited by their resources, such as storage and memory, but they can be made to execute an unlimited array of algorithms with different goals. The range of capabilities is wonderfully large and these capabilities can be changed at will while the computer is in operation. Without this malleability, computers would not have the many uses they have today.
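
To see what malleability means in practice, consider the toy sketch below. It is not any real machine’s instruction set, just a few invented instructions, but it shows the essential point: the program is ordinary data held in memory, so changing the machine’s behavior is nothing more than changing that data.

```python
# A toy stored-program machine. The instruction names are invented for
# illustration; no real computer uses this exact scheme.

memory = [
    ("LOAD", 7),     # put 7 in the accumulator
    ("ADD", 5),      # add 5 to it
    ("PRINT", None), # display the result
    ("HALT", None),  # stop
]

def run(memory):
    accumulator = 0
    pc = 0                      # program counter: which cell to execute next
    while True:
        op, arg = memory[pc]
        if op == "LOAD":
            accumulator = arg
        elif op == "ADD":
            accumulator += arg
        elif op == "PRINT":
            print(accumulator)
        elif op == "HALT":
            break
        pc += 1

run(memory)              # prints 12

# Because the program is data, "reprogramming" is just rewriting memory.
memory[1] = ("ADD", 500)
run(memory)              # the same machine now prints 507
```

The second run also hints at the dark side discussed next: anyone who can rewrite that memory can quietly change what the machine does.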

But malleability makes computers subject to subversion in ways that other kinds of systems do not face. A computer program can be changed while it is in operation. The changes can have a huge impact but can be hard to detect. For example, an enemy programmer who gained access to a computer like the EDVAC used for aiming a ballistic missile could, with sufficient skill, subtly reprogram the system to miss or hit the wrong target without changing the physical device. If the computer had to be physically rebuilt to make the change, it would be much harder to mount such an attack surreptitiously.

This new kind of threat meant new kinds of security measures, but the new threats were not apparent. Threats that involved physical contact with the computer could be stopped by existing practices. Posting guards and screening the scientists and engineers involved in critical projects was routine military research practice long before computers. Some engineers at the time may have thought about the new vulnerabilities in von Neumann’s architecture, but they were quickly shoved to the back of the desk by more pressing concerns.

The notions of intrusion on the physical machine and intrusion into the running software naturally were conflated. Today, we often hear of computers invaded by hackers from the other side of the planet, but before networks became prevalent, tampering with the software without getting near the hardware was impossible. The only way to subvert a computer was to slip through a security gate. What invaders might do after they got past the gate was a secondary question. The mindset and skills needed to program were so rare in the early days that the director of a project might be more inclined to hire an invader than to send them to jail.

On the other hand, separation of hardware and software transformed computers into the flexible tool that now dominates so much of our society and economy.

The Mainframe

Unlike most technology companies, the origins of International Business Machines (IBM) go back to the late 19th century, when the company manufactured tabulating machines, time clocks, and other business equipment. IBM held patents on the Hollerith key punch, which became prevalent as the input and output medium for mainframe computers. Until World War II, IBM continued to develop and manufacture tabulators and mechanical calculators for accounting systems. Shortly before the war, it developed a non-programmable electronic computer and became involved in defense development, converting some of its manufacturing capacity to ordnance, but it retained its focus on computing and participated in some large government efforts to develop computers. IBM leveraged its access to government research into dominance in computer manufacturing, and with its history in both business and research, it could dominate both research and business applications of computers. The phrase of the time was “Big Blue and the seven dwarves.” Big Blue was a nickname for IBM, and the seven dwarves were Digital Equipment, Control Data, General Electric, RCA, Univac, Burroughs, and Honeywell.4

As computer hardware became more compact, smaller computers became viable. These were built as cheaper alternatives to mainframes and required less space and fewer staff to run. They were known as minicomputers and were frequently set up as multi-user systems in which computer processing time was distributed among the users, each having the illusion of complete control of their own computer. Multi-user systems have security challenges: user data must be separated and protected from other users, and unauthorized users must be prevented from interfering with legitimate ones. These concerns were addressed by UNIX, an operating system developed by AT&T. The UNIX source code was offered to educational institutions. A flurry of innovation followed and many variations of UNIX appeared. Eventually UNIX evolved into today’s Linux. IBM also introduced timesharing, but used a different method that evolved away from timesharing to become a key technology of cloud computing.5

The Personal Computer (PC)

The earliest personal computers preceded the IBM PC, but the IBM PC was the first to enter the workplace as a tool rather than a toy. The IBM PC was introduced in August of 1981. Measured against the personal computers of today, the 1981 IBM PC was puny. The processing capacity of a desktop now is several orders of magnitude greater than that of the first PC. Yet the first PC was equipped to perform serious work. IBM offered word processing, spreadsheets, and accounting software for the PC, whose monitor and keyboard were destined to appear on every desktop in every business.

When PCs were introduced to the workplace, they were appliances: super-powered typewriters that made corrections without retyping the document; automated spreadsheets for data analysis and tabular records; and streamlined accounting systems for businesses. These appliances were isolated islands, accessible only to their owners. The personal in “personal computer” was literal; the PC was not a node in a communication web.

Early PCs were usually not networked with other PCs or any other type of computer system, although they might be set up to emulate a terminal attached to a mainframe or minicomputer. A terminal is nothing more than a remote keyboard and display. When a PC emulates a terminal, the emulation software is designed only to receive screen data from the mainframe or mini and translate it to a PC screen image. The emulation software also passes PC keyboard input to the mainframe or mini. When a PC is emulating, the PC is only a conduit. No data on the PC changes, and the host, mainframe or mini, can’t distinguish between a PC and a dedicated terminal. A person who needed both a PC and a terminal could claw back some real estate on their desk by replacing the terminal with an emulator on their PC, and the company needed to buy only one expensive device instead of two.

Hard drives did not appear in PCs until the second model of the IBM PC. The earliest PCs used audio cassettes for storage. Later, users relied on small floppy disks for saving information and backing up. The floppies usually ended up in a box, possibly locked, on the user’s desk. Both processing and data were isolated and inaccessible to everyone but the PC owner. An isolated PC on every desk was the vision of many in the PC world. This vision was nearly realized in the late 1980s and early 1990s.

Isolated and relatively cheap, PCs could be and were purchased by business departments rather than IT, and they were considered closer to office machines than to real computers that deserved the attention of the trained technologists from the IT department. Although mainframe security differs from PC security, IT staffs were at least aware of data protection; most office workers were not. Left on their own, workers generally remained unaware, and basic security practices, such as regular backups, were neglected or never established.

PC practices could be rather chaotic, varying greatly from department to department. Since each department often purchased its software and hardware independently, compatibility was hit and miss. Although these practices were inefficient, the PC increased departmental efficiency enough to justify continued departmental purchases of PCs and software. PC security, other than preventing thieves from carrying off expensive pieces of equipment, was not a recognized need. Theft of memory cards was common enough that locks were often placed on computer cases. Only the occasional loss of unbacked-up data from a disk crash reminded managers that PCs require a different kind of attention than an electric typewriter or copier.

Occasionally, you hear of offices that brought in computer services regularly to dust the interior of their PCs, but neglected to back up their data. Dusting the interior of PCs is occasionally needed, but in a typical clean office environment, dusting may not be required until after a PC is obsolete. Backing up is a continual necessity. Such misplaced priorities are an indication of system management without a clear understanding of threats.

The predominant PC operating system, Microsoft Disk Operating System (MS-DOS), was a single-user system. Its code would not support more than a single user. Microsoft licensed a second operating system, Xenix. Like Linux today, Xenix was derived from UNIX, the venerable minicomputer operating system. UNIX was a multi-user operating system that built substantial security, like limiting user authorization, into the core of the system. Xenix inherited multi-user support and some of the security built into UNIX.

The isolated PC vision could get along well without investments in the cumbersome and performance-sapping mechanisms of multi-user systems and networking. For a while, Microsoft thought Xenix would become the Microsoft high-end operating system when PCs became powerful enough to support Xenix well. That view changed. Microsoft distanced itself from the Xenix operating system in the mid to late 1980s and began to concentrate on its own high-end operating system, NT.

NT did not begin as an operating system that concentrated upon security and did not inherit the security concerns of Xenix. It was still anchored in the milieu in which each PC was an island with minimal contact with other computers. Security as we think of it today was still not a driving concern. NT did provide greater stability and freed Microsoft from the limitations of the processor chips used in early PCs. Current Microsoft operating systems are still based on the NT design, although they have evolved substantially.

Microsoft and the other hardware and software vendors could have chosen to develop more secure multi-user systems, but there was little apparent need. PCs were not powerful enough to drive multi-user systems well. At that time, most people’s vision for the PC was an isolated personal device on every desk and in every home. Although the technology for networking computers was available, it was not implemented at first. The cost of installing wiring and the lack of apparent benefit probably contributed to this.

The Local Area Network

The introduction of the Local Area Network (LAN) was a milestone on the path to the end of innocence for personal computing. A LAN is a network that connects computers over a small area, often a building or a floor in a building. A network connection transforms a desktop PC into a communication device. The new power that a LAN connection conferred was not understood when LANs were introduced. Few realized that the versatile machine on everyone’s desk could no longer be managed and maintained like a high-end typewriter.

At first glance, a LAN connection appears to be a minor enhancement to the PC’s capabilities. LANs were often positioned as a cost reduction measure. Connected PCs could share resources like disk storage. Instead of investing in large storage disks for each PC, a single file server could be equipped with a disk large enough to store everyone’s files. Centrally stored data could be served back to the PC of anyone who needed the files. PC hardware was much more expensive then and this was a tempting possibility. Some PCs were deployed that had little or no local storage and operated entirely off the central file server.

Documents written at one desk were instantly available for revision at a dozen other desks on the LAN. Eventually, this mode of working would become very common. It connects with the concept of the paperless office, an office in which all documents are stored and maintained in electronic form. These documents are passed from person to person electronically rather than as paper. The path to a paperless office was longer than expected.6 Shared disks were an early step forward that inspired enthusiasm, but it took cloud implementations and tablets to substantially reduce the amount of paper used in offices.

A LAN without an external connection can only connect the computers on the LAN. Documents could not be delivered to remote offices without a connection to another type of network, called a Wide Area Network (WAN). Before the Internet, wide area communication was usually reserved for large enterprises. Computer communication with customers or suppliers was usually not available. Teletype communication, based on the telephone system, was generally used for business-to-business document transfer. Faxing, also based on the telephone system, was gaining ground during this period.

Email was available, but on an isolated LAN or group of LANs connected on a WAN, the reach of email was limited. A cobbled-together UNIX system based on dialup modems could transfer email from one isolated LAN to another, but it was difficult to learn and set up, relying on the UNIX command line. Delivery was quirky and depended on dialup connections that were easily snarled. It was not fast and reliable like the Internet-based system everyone uses today.7

Even with its limitations, email and early network forums began to thrive and foreshadow the current popularity of social media . Today’s arguments over the ethics of spending time on Facebook at work resemble the passionate debates that once raged over the legitimacy and ethics of using company email to set up a lunch date with a colleague.

LAN Security

The appearance of LANs inspired new thinking on computer security. A LAN fits a secure-the-perimeter model, which is an extension of the concept of locking doors and surrounding buildings with fences. Ethernet is almost the only LAN protocol used today. The Ethernet standard specifies the way bits and bytes are transmitted in patterns that are identified, received, and sent on the conductors that connect individual PCs on a LAN. The Ethernet design was conceived in the mid-1970s at the Xerox labs in Palo Alto. The Institute of Electrical and Electronics Engineers (IEEE) published an Ethernet standard in 1983.8

Usually, computers and other devices connected to an Ethernet LAN determine a security perimeter that prohibits or limits access to the LAN. The LAN perimeter may be nested inside a wide area network (WAN) perimeter that might be within an even wider corporate perimeter. See Figure 2-1.

Figure 2-1. Defense perimeters nest

The attack surface of an isolated LAN is limited to physical access to the computers connected to the LAN plus the cables and network gear that tie the network together. In most cases, an isolated LAN can be protected by restricting access to rooms and buildings. A physically secured LAN is still vulnerable to invasion by rogue employees or attackers who penetrate physical security. Cabling and wiring closets were added to the list of items to be physically secured, but, for the most part, a LAN can be secured in the same way isolated PCs can be secured.

Many enterprises are not limited to a single building or compact campus. If a geographically spread organization wanted to network all their PCs together, they had to step up to a WAN to connect widely separated sites. WAN services, at that time, were usually supplied by third parties using the telecommunications voice transmission infrastructure. Often the telecoms themselves offered WAN services. This opened inter-LAN communication to tampering from within the telephone system. With each step, from isolated PC, to PCs connected to a LAN, to LANs connected by WANs, PCs became more useful and began to play a more important role moving data from employee to employee and department to department, and consolidating data for management. However, with each step, the PC on the desk became more exposed to outside influences.

Although the vulnerabilities of a LAN are greater than those of a disconnected PC, developers were not spurred to rethink the security of the hardware and software designs of the PC. The developers remained largely oblivious to the threats that were coming.

The Methodology Disconnect

Mainframe software was almost always mission critical; few businesses could afford to use expensive mainframe resources on anything but mission critical projects. The downside of these critical projects was the frequency of missed deadlines and dysfunctional results. These failures cost millions of dollars and ended careers. Cost-overruns and failed systems often seemed more common than successes. The managers in suits responded with a rigid methodology intended to ensure success. A project had to begin with a meticulous analysis of the problem to be solved and progress to an exacting design that developers were expected to translate into code without deviation.

The methodology had some serious flaws. Often in software development, new, better, and more efficient approaches are only visible after coding is in progress. The methodology was a barrier to taking advantage of this type of discovery. In addition, the methodology provided every player with opportunities to blame someone in a previous stage for failure, or toss issues over the wall for the next stage to resolve.

This was the state of software engineering for a distributed environment when I first entered PC development. The atmosphere was heady. Development had broken away from the mainframe .

Programming offers unlimited opportunities for creativity and can be more art than science. Many developers who were attracted to the creative side of coding did not thrive in the regimented mainframe development environment. For these developers to be part of a small team, each with their own computer under their control, on which they could experiment and push to the limits and beyond, was like a trip to Las Vegas with an unlimited bankroll.

New products popped up everywhere and the startup culture began. In the exuberance of the time, security was more a hindrance to development than a basic requirement. Although most developers knew that networking was about to become a mainstay of computing, they still preferred to tack security onto the end of the development process and leave the hassle of signing in and proper authorization to the testers. After all, it was not like a timesharing system; no one could get to a PC sitting on an office worker’s desk or in someone’s living room.

The Internet

The Internet was the next stage in the transformation of the personal computer from a stand-alone appliance like a typewriter or a stapler into a communications portal. Home computers were transformed in the same way, connecting home and office to a growing world of information, institutions, and activities. See Figure 2-2.

Figure 2-2. The Internet transformed desktop PCs from appliances to communications portals

Network

The Internet is a network that connects other networks. When a business’ network connects to the Internet, all the computers that connect to the business’ network join the Internet.

The computers connected in a network are called nodes. The word node shows up often in computer science terminology. The Latin meaning of node is knot, a meaning that the word retains. In networking jargon, a node is a junction, a knot, where communication lines meet. Most of the time, a network node is a computer, although other network gear, such as switches and routers, are also nodes. When connected to the Internet, nodes are knitted together into a single interconnected fabric. Some nodes are connected to each other directly, others are connected in a series of hops from node to node, but, when everything is working right, all nodes connected to the Internet are connected and can communicate. At present, there are over three billion nodes connected to the Internet. The exact number changes continuously as nodes connect and disconnect.9

The Internet led software designers and developers to think about computer applications differently. Instead of standalone programs like word processors and spreadsheets, they could design systems that provided complex central services to remote users. Timesharing mainframes and minicomputers had hosted applications that offered services to users on terminals, but the Internet offered a network, which is more flexible. In a network, nodes connect to any number of other nodes. A terminal in a typical time-sharing architecture communicates with a single host. In this architecture, terminals usually were not sophisticated and relied on the host for all computing. Communication between terminals must be routed through the central host. This kind of architecture is hard to expand. At some point, the number of terminals exceeds the load the host can support and the system has reached its limit. Sometimes the capacity of the central host can be increased, but that reaches a limit also. See Figure 2-3.

Figure 2-3. A mainframe spoke-and-hub pattern differs from the Internet pattern

Within a network like the Internet, a system can expand by adding new servers, nodes that act like a mainframe host to provide information and processing to other client nodes, in place of a single host mainframe. With the proper software, servers can be duplicated as needed to support rising numbers of clients. On the Internet, users do not connect directly to a single large computer. Instead, users connect to the Internet, and through the Internet, they can connect to other nodes on the Internet. This connectivity implies that servers can be located anywhere within the reach of the Internet.

As the Internet blossomed, developers still tended to assume that no one was malicious. In the early 1990s, I was part of a startup that was developing a distributed application. Our office was in Seattle and our first customer was a bank located in one of the World Trade Towers in Manhattan. One morning, one of the developers noticed that some odd data was appearing mysteriously in his instance of the application. After a minute, he realized where it was coming from. Our first customer was unknowingly broadcasting data to our office. Fortunately, only test data was sent, but it could have been critical private data. If our customer had gone into production with that version of the product, our little company would have ended abruptly. The defect was quickly corrected, but it was only caught by chance. The development team missed a vital part of configuring the system for communication and did not think to monitor activity on the Internet. Until that moment, we did no testing that would have detected the mess.

Of course, our team began to give Internet connections more attention, but it would be a decade before Internet security issues routinely got the attention among developers that they do now. Mistakes like this were easy to make and they were made often, although perhaps not as egregiously as ours.

Communication Portal

The decentralized connectivity of the Internet is the basis for today’s computers becoming communication portals that connect everyone and everything. Utilities like email are possible in centralized terminal-host environments, and attaching every household to a central time-sharing system is possible and was certainly contemplated by some pre-Internet visionaries, but it never caught on. Instead, the decentralized Internet has become the ubiquitous communications solution.

Much of the expansion of home computing starting in the mid-1990s can be traced to the growth of the home computer as a communications portal. A computer communications portal has more use in the home than a standalone computer. Before the Internet, most people used their home computer for office work or computer games. Not everyone has enough office work at home to justify an expensive computer, nor is everyone taken with computer games enough to invest in a computer to play them. The communications, data access, and opportunities for interaction offered by the Internet alter this equation considerably.10

The expansion of communication opportunities has had many good effects. Publishing on the Internet has become easy and cheap. Global publication is no longer limited to organizations like book publishers, news agencies, newspapers, magazines, and television networks. Email is cheaper and faster than paper mail and has less environmental impact. Electronic online commerce has thrived on the ease of communication on the Internet.

The Internet has reduced the obstacles to information flow over national boundaries. The browser has become a window on the world. New opportunities have appeared for both national and international commerce. Social media has changed the way families and friends are tied together. The transformation did not happen overnight, but it occurred faster than expected and is still going on.

The computer as a communications portal appeals to many more people than office functions and games. Everyone can use email and almost everyone is receptive to the benefits of social media. When the uses of computers were limited to functionality that required technical training, computer owners were a limited subgroup of the population with some level of technical insight. The group of those interested in social media and email has expanded to include both the most technically sophisticated and the most naïve. The technically naïve members of this group are unlikely to understand much of what goes on behind the scenes on their computer and its network. This leaves them more vulnerable to cybercrime.

Origins

The Internet did not spring from a void. It began with research in the 1960s into methods of computer communication and evolved from there to the Internet we know today. Often, the Internet is said to have begun with the development of the Advanced Research Projects Agency Network (ARPANET),11 but without the network technology developed earlier, ARPANET would have been impossible.

ARPANET and the Internet

In the 1950s, mainframes in their glasshouse data centers were nearly impregnable, but a different wind had begun to blow. In the early 1960s, Joseph Licklider (1915-1990), who headed ARPA’s Information Processing Techniques Office, began thinking about connecting research mainframes as part of the research ARPA performed for the defense department. Licklider wanted to connect computers to provide access to researchers who were not geographically close to a research data center. He called his projected network the “Intergalactic Computer Network” in a memo to the agency.12

Licklider brought together researchers from across the country. This group eventually developed the ARPANET. The network was designed to connect research computers, not, as is sometimes reported, as a command and control system that could withstand nuclear attack. Licklider went on to other projects before the network was realized, but he is often called the originator.

The first incarnation of the ARPANET was formed in the western United States, linking the Stanford Research Institute, the University of California Los Angeles, the University of California Santa Barbara, and the University of Utah. These were research centers, not top secret military centers, nor were they commercial data centers processing inventories and accounts.

The concept of tying together isolated computers and knowledge was compelling. The objective was to include as many research centers as possible, not raise barriers to joining. At the height of the Cold War, fear of espionage was rampant, but apparently no one thought about ARPANET as a means for spying on the research center network, and little was done to secure it beyond the usual precautions of keeping the equipment behind lock and key. The early ARPANET did not include commercial data centers, so although there may have been some concern about the theft of research data, money and finances were not on anyone’s mind.

During the 1980s and the early 1990s, two networking approaches contended. The ARPANET represented one approach. The other was IBM’s Systems Network Architecture (SNA) . The two differed widely, reflecting their different origins.

Businesses needed to connect their computers for numerous reasons. Branches needed to connect to headquarters, vendors to customers, and so on. To serve these needs, IBM developed a proprietary network architecture unrelated to the ARPANET. The architecture was designed for the IBM mainframes and minicomputers used by its business customers. It was implemented with communications equipment that IBM designed and built specifically for SNA. Non-IBM hardware could connect with SNA to IBM’s equipment, but SNA was not used to connect non-IBM equipment to other non-IBM equipment. In other words, SNA was only useful in environments where IBM equipment was dominant.

Almost all networks today are based on what is called a layered architecture. Instead of looking at network transmission as a single process that puts data on and takes data off a transmission medium, a layered architecture looks at the process as a combination of standardized parts that are called layers.

When put together for communication, the layers are called a network stack. For example, the bottom layer, called the physical layer, is concerned with signals on the transmission medium. This layer knows how to translate a message into the physical manifestations the transmission medium must carry. If the medium is copper wire, the layer is a software and hardware module designed and built to put modulated electrical signals on copper wire at the proper voltage and frequency. If copper wire is replaced by optical fiber, the copper wire module is replaced by an optical fiber module that reads and writes light pulses. When a layer in the stack is replaced, the other layers do not have to change because each layer is designed to interact with the other layers in a standard way.

Layers work like electric plugs and sockets. It doesn’t matter whether you plug a lamp or an electric can opener into a wall socket. Both will work fine. Also, the electricity could come from a hydro-electric dam, a windmill, or a gas generator out back. The lamp will still light and you can still open the can of beans because the socket and the electricity are the same. You replace a layer in a network stack in the same way. If what goes in or comes out meets its specification, what happens in between does not matter. You can think of each network layer as having a plug and socket. An upper layer plugs into the socket on the next lower layer. If each pair of plugs and sockets meets the same specification, the stack works.

A layered network architecture is a tremendous advantage, as is the standard design of electricity sources and electrical appliances. When the need arises, you can switch sources. If you lose power during a storm, you can fire up your private gas generator and the lamp still lights. This also applies to networking. If copper is too slow, you can replace it with optical fiber and replace the copper bottom layer module with a fiber module. This saves much time and expense because the entire network stack does not need to be rewritten. When connecting to another network, only the top layers where the connection occurs need be compatible. Typically, networks are divided into seven layers. The bottom layer is the only layer that deals with hardware. The rest are all software.
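
The sketch below, in Python, illustrates the plug-and-socket idea. The class names and the framing byte are invented for illustration; real network stacks are far more elaborate, but the principle of swapping one layer without touching the others is the same.

```python
# Swappable layers: each layer relies only on the interface of the layer
# below it, so the bottom layer can change without rewriting the rest.
# The classes and the frame marker byte here are purely illustrative.

class CopperPhysicalLayer:
    def transmit(self, data: bytes) -> None:
        print("electrical pulses over copper:", data)

class FiberPhysicalLayer:
    def transmit(self, data: bytes) -> None:
        print("light pulses over optical fiber:", data)

class LinkLayer:
    """Wraps the payload in frame markers and hands it to the physical layer."""
    def __init__(self, physical):
        self.physical = physical

    def send(self, payload: bytes) -> None:
        self.physical.transmit(b"\x7e" + payload + b"\x7e")

class ApplicationLayer:
    """Knows nothing about wires or light; it only uses the layer below."""
    def __init__(self, link):
        self.link = link

    def send_message(self, text: str) -> None:
        self.link.send(text.encode("utf-8"))

# Build a stack over copper, then swap in fiber; the upper layers never change.
ApplicationLayer(LinkLayer(CopperPhysicalLayer())).send_message("hello")
ApplicationLayer(LinkLayer(FiberPhysicalLayer())).send_message("hello")
```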

In addition to being an early adopter of a layered network architecture, ARPANET used packet switching for message transmission. Packet switching divides a message into chunks called packets. Each of these packets is addressed to the destination network node. Each packet finds its own way through the network from one switching node to the next. A switching node is a special computer that can direct a packet to the next step toward the target address. Since there is usually more than one way to travel from source to destination, the network is resilient; if one path is blocked, the packet is switched to another path and the packet eventually gets through. When all the packets arrive at the destination, they are reassembled and passed to the receiving program.
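
The Python sketch below shows the essence of packet switching: split a message into addressed, numbered packets, let them arrive in any order, and reassemble them by sequence number. Real packets carry far more header information, and real switching nodes do real routing; the shuffle here merely stands in for packets taking different paths.

```python
# Packet switching in miniature: split, deliver out of order, reassemble.
import random

def to_packets(message: str, destination: str, size: int = 8):
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [{"dest": destination, "seq": n, "data": chunk}
            for n, chunk in enumerate(chunks)]

def deliver(packets):
    """Stand-in for the network: packets may arrive in any order."""
    arrived = packets[:]
    random.shuffle(arrived)
    return arrived

def reassemble(packets) -> str:
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = to_packets("Prompt and accurate calculations were vital.", "node-42")
print(reassemble(deliver(packets)))   # the original message, every time
```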

A group researching a military communications network that could survive a catastrophe, such as a nuclear attack, discussed the concept of a packet switching network. The ARPANET was not developed by this group, but the ARPANET team put packet switching to work in their network and gained the resiliency that military communications required.

There are other advantages to packet switching beyond resilience. It is very easy to connect to an ARPANET-style network. Part of this is due to the layered architecture. For example, if a network based on a different transmission technology wants to join the Internet, it develops a replacement layer that will communicate with the corresponding layer in the ARPANET network stack, and it is ready to go without re-engineering its entire implementation. When a new node joins the Internet, other nodes do not need to know anything about the newcomer except that its top layer is compatible. The newcomer only needs to be assigned an address. The switching nodes use the structure of the address to begin directing packets to the newcomer.

The layered architecture also eases entry by keeping lower-level concerns separate from application code. Application changes do not require changes deep in the stack, so a node can still connect with nodes that lack those changes. This explains the rich array of applications that communicate over the Internet with nodes that are unaware of the application’s internals.

ARPANET also developed basic protocols and addressing schemes that are still in use today. These protocols are flexible and relatively easy to implement. A layered architecture and packet switching contribute to the flexibility and ease. The basic Internet protocol, Transmission Control Protocol over Internet Protocol (TCP/IP), conducts messages from an application running on one node to an application running on another node, but it is the job of the node, not the network, to determine which application will receive the message.
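
A few lines of Python show that division of labor. TCP/IP carries bytes to an address; the port number, chosen here arbitrarily, is what lets the receiving node hand the bytes to the right application. The address and port below are illustrative, and a real server would of course run in its own process.

```python
# TCP/IP delivers bytes to a node; the node's port number decides which
# application receives them. The address and port are illustrative.
import socket

HOST, PORT = "127.0.0.1", 8081     # hypothetical local address and port

# One application on the node claims port 8081.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((HOST, PORT))
server.listen(1)

# Another program sends a message to that node and port.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((HOST, PORT))
client.sendall(b"hello over TCP/IP")

connection, _ = server.accept()
print(connection.recv(1024))       # b'hello over TCP/IP'

connection.close()
client.close()
server.close()
```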

The flexibility and ease of connection profoundly affected IBM’s SNA. By the mid-1990s, SNA was in decline, rapidly being replaced by the Internet architecture. SNA was hard to configure, required expensive specialized equipment, and was hard to connect to non-SNA systems. By the mid-1990s, SNA systems were often connected over Internet-type networks using tunneling: hiding SNA data in Internet messages, then stripping away the Internet wrapper for the SNA hardware at the receiving end. This added an extra layer of processing to SNA communication.

The SNA connection from computer to computer is relatively inflexible. There was no notion of sending a message to another computer and letting any application able to handle the message respond to it. This is one of the features that makes Internet-style communication flexible and powerful, but it also eases intrusion into application interaction, which is the basis of many cybercrimes. It is not surprising that financial institutions were late to replace SNA with the ARPANET-style networking of the Internet.13

An ARPANET-style layered network architecture separates the authentication of the source and target of a message from the transmission of the message. Generally, this is advantageous because combining transmission and authentication could degrade the performance of transmission as identities and credentials are exchanged. Proving who you are requires complex semantics and negotiations. Quickly and efficiently moving raw bits and bytes from computer to computer does not depend on the semantics of the data transferred. Including semantics based authentication slows down and complicates the movement of data.

The ARPANET architecture choice was practical and sound engineering, but it came at the expense of the superior security of SNA.14 In SNA, unlike ARPANET, transmission and authentication were combined in the same layer. This meant that adding a new node to a network involved not only connection but authentication, proving who and what the new node was. This made adding new nodes a significant configuration effort, but it also meant that the kind of whack-a-mole contests that the authorities have today with hacker sites would be heavily tilted in the authorities’ favor.

Taken all together, a layered architecture, packet switching, and a remarkable set of protocols add up to a flexible and powerful system. These characteristics were chosen to meet the requirements that the ARPANET was based upon. The ARPANET was transformed into today’s Internet as more and more networks and individuals connected in.

World Wide Web

The World Wide Web (WWW), usually just “the Web,” is the second part of today’s computing environment. Technically, the Web and the Internet are not the same. The Web is a large system of connected applications that use the Internet to communicate. The Web was developed to share documents on the Internet. Instead of a special application written to share each document in a specific way, the Web generalizes document format and transmission. From the end user’s standpoint, the visible part of the Web is the browser, like Internet Explorer, Edge, Firefox, Chrome, and Safari.

A browser displays text received in a format called Hypertext Markup Language (HTML). Computers that send these documents to other nodes have software that communicates using a simple scheme that creates, requests, updates, and deletes documents. The browser requests a document and the server returns with a document marked up with HTML. The browser uses the HTML to decide how to display the document. The richness of browser displays is all described and displayed following HTML instructions. Although this scheme is simple, it has proved to be exceptionally effective.

And it goes far beyond simple display of documents. The first web servers did little more than display a directory of documents and return a marked up copy to display in an HTML interpreter. Browsers put the request, update, and display into a single appliance. Browsers also implement the hyperlinks that make reading on the Internet both fascinating and distracting. A hyperlink is the address of another document embedded in the displayed document. When the user clicks on a link, the browser replaces the document being read with the document in the hyperlink. Everyone who surfs the Net knows the seductive, perhaps addictive, power of the hyperlink.

As time passed, servers were enhanced to execute code in response to activity on browsers. All web store fronts use this capability. When the user pushes an order button, a document is sent to the server that causes the server to update an order in the site’s database and starts a process that eventually places a package on the user’s front porch.
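
A minimal Python sketch of that round trip appears below, using the standard library’s http.server module. The path, the toy “database,” and the HTML are all invented; a real storefront involves authentication, payment processing, and much more.

```python
# A toy web storefront: the server runs code in response to a browser request.
from http.server import BaseHTTPRequestHandler, HTTPServer

orders = {"count": 0}   # stand-in for the site's database

class StoreHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/order":
            orders["count"] += 1    # "update an order in the site's database"
            body = f"<html><body><p>Order #{orders['count']} placed.</p></body></html>"
        else:
            body = "<html><body><p>Welcome to the store.</p></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body.encode("utf-8"))

# To try it locally, uncomment the next line and browse to
# http://127.0.0.1:8000/order
# HTTPServer(("127.0.0.1", 8000), StoreHandler).serve_forever()
```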

The documents and markup languages passing between the client and server have become more elaborate and powerful as the web evolved, but the document-passing protocol, Hypertext Transfer Protocol (HTTP), has remained fundamentally unchanged. In fact, the use of the protocol has increased, and often has nothing to do with hypertext.

Most applications coded in the 21st century rely on HTTP for communication between nodes on the network. Servers have expanded to perform more tasks. Many uses of the protocol do not involve browsers. Instead, documents are composed by programs and sent to their target without ever being seen by human eyes. The messages are intended for machine reading, not human reading. They are written in formal languages with alphabet soup names such as XML and JSON. Most developers are able to read these messages, but no one regularly looks at messages unless something is wrong. Typically, the messages are delivered so fast and in such volumes that reading all the traffic is not feasible. The average user would find them incomprehensible.
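
To give a feel for what such machine-to-machine traffic looks like, the snippet below composes and reads back a small JSON document. The field names are made up; every service defines its own.

```python
# A machine-readable message: one program composes it, another parses it.
import json

message = json.dumps({
    "order_id": 1001,                      # illustrative field names
    "item": "letter paper, 500 sheets",
    "quantity": 3,
    "ship_to": {"city": "Bellingham", "state": "WA"},
})

print(message)                  # the text that actually travels over HTTP

received = json.loads(message)  # the receiving program turns it back into data
print(received["quantity"])     # 3
```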

The Web has evolved to become the most common way of sharing information between computers, both for human consumption and direct machine consumption. Even email, which is often pure text, is usually transferred over the Web.

The ubiquity of data transfer over the Web is responsible for much of the flourishing Internet culture we have today. Compared to other means of Internet communication, using the Web is simple, both for developers and users. Without the Web, the applications we use all the time would require expensive and tricky custom code; they would very likely have differing user interfaces that would frustrate users. For example, a developer can get the current temperature for many locations from the National Weather Service with a few lines of boilerplate code because the weather service has provided a simple web service. Without the Web, the attractive and powerful applications that communicate and serve us would not be here.
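
The sketch below shows the sort of boilerplate involved, using Python’s standard library against the National Weather Service’s public web service. The endpoint paths and response fields are assumptions based on the service’s published interface at the time of writing and may change; treat it as an illustration rather than a reference.

```python
# Fetch a forecast from the National Weather Service web service.
# The URL structure and field names below are assumptions and may change.
import json
import urllib.request

def fetch_json(url: str) -> dict:
    request = urllib.request.Request(url, headers={"User-Agent": "example"})
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# Look up the forecast URL for a latitude and longitude, then read the
# first forecast period (coordinates here are roughly Bellingham, WA).
point = fetch_json("https://api.weather.gov/points/48.75,-122.48")
forecast = fetch_json(point["properties"]["forecast"])
period = forecast["properties"]["periods"][0]
print(period["name"], period["temperature"], period["temperatureUnit"])
```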

Opportunities for development are also opportunities for crime. Developers are challenged with a learning curve when they work on non-standard applications. They are much more effective when they work with standard components and techniques. The Web provides a level of uniformity of components and techniques that make development more rapid and reliable.

However, these qualities also make criminal hacking more rapid and reliable. An application that is easier to build is also easier to hack. A hacker does not have to study the communications code of a web-based application because browsers and communications servers are all similar. The hacker can spot an application to hack into and immediately have a fundamental understanding of how the application works and its vulnerable points.

The Perfect Storm

As marvelous as personal computing devices, the Internet, and the World Wide Web are, they are part of a perfect storm that has propelled the flood of cybercrime that threatens us. The storm began to hit in the late 1990s.

The very success of the PC has engendered threats. PCs are everywhere. They sit on every desk in business. They are on kitchen tables, in bedrooms, and next to televisions in private homes. In the mid-1980s, less than 10% of US households had a PC.15 By 2013, that had risen to over 80 percent.16

Contrast this with the 1970s when the Internet was designed and began to be deployed among research centers. The Internet was for communication among colleagues, highly trained scientists, and engineers. In 2013, the training of most Internet users did not go beyond glancing at a manual that they did not take time to understand before throwing it into the recycling bin. In 1980, computer users were naïve in a different way; they expected other users to be researchers and engineers like themselves, but they were also sophisticated in their knowledge of their software and computer equipment, unlike the typical user of today. The 1980 user was also more likely to know personally the other users on the network with whom they dealt.

Users today are often dealing with applications they barely comprehend and interacting with people they have never seen and know nothing about. This is a recipe for victimization. They are using a highly complex and sophisticated device that they do not understand. Many users are more familiar with the engine of their car than the workings of their computer. Unlike the Internet designers, users today deal with strangers on the Internet. Strangers on the street reveal themselves by their demeanor and dress. Through computers, Internet strangers can be more dangerous than street thugs because they reveal only what they want to be seen and readily prey on the unsuspecting.

Computer hardware and software designers have been slow to recognize the plight of their users, both businesses and individuals. Until networked PCs became common, PCs only needed to be protected from physical theft or intrusion. The most a PC needed was a password to prevent illicit borrowing by someone from the next cubicle. Consequently, security was neglected. Even after PCs began to be networked, LANs were still safe. Often, security became the last thing on the development schedule, and, as development goes, even a generous allotment of time and resources for security shrank when the pressure was turned on to add a feature that might sell more product or impress the good folk in the C-suite. In an atmosphere where security was not acknowledged as a critical part of software and hardware, it was not developed. Even when it was thought through and implemented, experience with malicious hackers was still rare, causing designs to miss what would soon become real dangers.

The Soviet Union crashed in the 1990s, just as naïvely secured computers were beginning to be connected to the Internet, which was engineered to have a low bar of entry and designed with an assumption that everyone could be trusted. One of the consequences of the fall of the Soviet Union was that many trained engineers and scientists in Eastern Europe lost their means of livelihood. In some locations, government structures decayed and lawlessness prevailed. The combination of lawlessness and idle engineers who were prepared to learn about computing and connecting to the Internet was fertile ground for the growth of malicious and criminal hackers looking for prey.

The Internet is borderless. Unless a government takes extraordinary measures to isolate their territory, a computer can be anywhere as soon as it connects. If a user takes precautions, their physical location can be extremely hard to detect. A malicious hacker located in an area where the legal system has little interest in preventing hacking is close to untouchable.

The Web and its protocols are brilliant engineering. So much of what we enjoy today, from Facebook to efficiently managed wind-generated power, can be attributed to the power of the Web. The ease with which the Web’s infrastructure can be expanded and applications designed, coded, and deployed has the markings of a miracle. However, there is a vicious downside in the ease with which hackers can understand and penetrate this ubiquitous technology.

The perfect storm has generated a loosely organized criminal class that exchanges illegally obtained or just plain illegal goods in surprising volumes, using the very facilities of the Internet that they maraud. These criminal bazaars increase the profit from crime, and the profits encourage more to join in the carnage.

A solution is under construction. The computing industry has dropped its outmoded notion that security is a secondary priority. More is being invested in preventing cybercrime every year. Crime does not go away quickly; perhaps it never disappears, but with effort, there are ways to decrease it. One of the ways is to be aware and take reasonable precautions. The next chapter will discuss some of the technological developments that help.

Footnotes

1 See the Computer History Museum, “The Babbage Engine,” www.computerhistory.org/babbage/modernsequel/ . Accessed January, 2016. The output mechanisms were not completed until 2002.

2 The Analytic Engine is “Turing complete.” This is not the place to discuss Turing computers, but “Turing complete” is a mathematical definition of a general computer.

3 Encyclopedia.com, “Early Computers,” www.encyclopedia.com/doc/1G2-3401200044.html . Accessed January 2016.

4 Shayne Nelson, “The 60s - IBM & the Seven Dwarves,” June 1, 2004, http://it.toolbox.com/blogs/tricks-of-the-trade/the-60s-ibm-the-seven-dwarves-955 . Accessed January 2016.

5 The technology is virtualization, which I will discuss later.

6 BusinessWeek, “The Office of the Future”, June 1975, www.bloomberg.com/bw/stories/1975-06-30/the-office-of-the-futurebusinessweek-business-news-stock-market-and-financial-advice . Accessed February, 2016. This article is a pre-PC discussion of the paperless office. It documents the mixture of views prevalent at the time. This article seems to assume that a paperless office would consist of terminals connected to a central mainframe. In the future, a LAN of connected PCs would be thought of as a better architecture.

7 Danny Weiss, “Eudora’s Name and Historical Background”, www.eudorafaqs.com/eudora-historical-background.shtml . Accessed February 2016, offers some interesting insight into the early days of email.

8 IEEE, “The 40th Anniversary of Ethernet”, 2013, http://standards.ieee.org/events/ethernet/index.html . Accessed February 2016, offers a brief history of the Ethernet standard.

9 Internet Live Stats, “Internet Users,” www.internetlivestats.com/internet-users/ . Accessed February 2016. This site delivers a continuous readout of Internet users based on statistical modeling and selected data sources.

10 Of course, it did not hurt that the price of computers began to plummet at the same time.

11 ARPANET is sometimes called the Defense Advanced Research Projects Agency Network (DARPANET). ARPA was renamed DARPA in 1972.

13 For a detailed technical description of SNA in a heterogeneous environment, see R. J. Cypser, Communications for Cooperating Systems: Osi, Sna, and Tcp/Ip, Addison-Wesley, 1991.

14 Not everyone agrees on the security of SNA. See Anura Gurugé, Software Diversified Services, “SNA Mainframe Security,” June 2009, www.sdsusa.com/netqdocs/SNA.Security.090721.pdf . Accessed February 2016.

15 See “Top Ten Countries with Highest number of PCs,” www.mapsofworld.com/world-top-ten/world-top-ten-personal-computers-users-map.html .

16 Thom File and Camille Ryan, “Computer and Internet Use in the United States: 2013”, U.S. Census Bureau, November 2014. www.census.gov/content/dam/Census/library/publications/2014/acs/acs-28.pdf . Accessed February, 2016.
