Chapter 3. Technical Infrastructure and Operational Practices

Key concepts you will need to understand:

  • ✓ Risks and controls related to hardware platforms, system software and utilities, network infrastructure, and IS operational practices

  • ✓ Systems performance and monitoring processes, tools, and techniques (for example, network analyzers, system error messages, system utilization reports, load balancing)

  • ✓ The process of IT infrastructure acquisition, development, implementation, and maintenance

  • ✓ Change control and configuration-management principles for hardware and system software

  • ✓ Practices related to the management of technical and operational infrastructure (for example, problem-management/resource-management procedures, help desk, scheduling, service-level agreements)

  • ✓ Functionality of systems software and utilities (for example, database-management systems, security packages)

  • ✓ Functionality of network components (for example, firewalls, routers, proxy servers, modems, terminal concentrators, hubs, switches)

  • ✓ Network architecture (for example, network protocols, remote computing, network topologies, Internet, intranet, extranet, client/server)

Techniques you will need to master:

  • ✓ Evaluate the acquisition, installation, and maintenance of hardware to ensure that it efficiently and effectively supports the organization’s IS processing and business requirements, and is compatible with the organization’s strategies

  • ✓ Evaluate the development/acquisition, implementation, and maintenance of systems software and utilities to ensure ongoing support of the organization’s IS processing and business requirements, and compatibility with the organization’s strategies

  • ✓ Evaluate the acquisition, installation, and maintenance of the network infrastructure to ensure efficient and effective support of the organization’s IS processing and business requirements

  • ✓ Evaluate IS operational practices to ensure efficient and effective utilization of the technical resources used to support the organization’s IS processing and business requirements

  • ✓ Evaluate the use of system performance and monitoring processes, tools, and techniques to ensure that computer systems continue to meet the organization’s business objectives

IT Organizational Structure

As an IS auditor, you will need to understand the technical infrastructure of the organization, the IS organizational structure, and the operational practices. IT management is responsible for the acquisition and maintenance of the information architecture. This includes networking devices, servers and operating systems, data storage systems, applications, and the standards and protocols associated with network communication.

IT managers must define the role and articulate the value of the IT function. This includes the IT organizational structure as well as operational practices. The IT management functions are generally divided into two functional areas:

  • Line management—Line managers are concerned with the routine operational decisions on a day-to-day basis.

  • Project management—Project managers work on specific projects related to the information architecture. Projects are normally a one-time effort with a fixed start, duration, and end that reach a specific deliverable or objective.

The following are line management functions:

  • Control group—Responsible for the logging and collection of input from users.

  • Data management—Responsible for the data architecture (integrity, validity, and so on).

  • Database administrator—Responsible for the maintenance of the organization’s database systems.

  • Systems administrator—Responsible for the maintenance of computer systems and local area networks (LANs). Sets up system accounts, installs system-wide software, and so on.

  • Network manager/administrator—Responsible for the planning, implementation, and maintenance of the telecommunications infrastructure.

  • Quality assurance manager—Responsible for ensuring the quality of activities performed in all areas of information technology.

The IT functional areas are responsible for the computing infrastructure. This includes computer hardware, network hardware, communications systems, operating systems, and application software and data files. IT management must understand how these elements work together and must establish a control infrastructure (defined functions, policies, procedures, governance) that will reduce risk to the organization during the acquisition, implementation, maintenance, and disposal processes.

A clear understanding of networking and data components enables you to evaluate existing input/output (I/O) controls and monitoring procedures. Organizations should constantly seek to use technology more efficiently and effectively to meet business objectives. This quest will provide a myriad of choices regarding technology and the acquisition, development, implementation, and maintenance of the network as a whole and its individual components.

A clearly defined IT strategic plan combined with acquisition planning, project management, and strong operational practices (policies and procedures) will ensure two things: First, the IT organization will be aligned with the business strategy and objectives. Second, IT resources will be used effectively and efficiently. IT managers consistently balance operational issues and the implementation of new technology. This balancing act creates competing priorities: Operational issues usually fall into the “urgent” category, whereas the implementation of new technology falls into the “important” category. “Urgent” issues include application and network problems that negatively impact operational systems; these need to be addressed and corrected as quickly as possible to ensure continued operations. “Important” issues include the review, testing, and implementation of new methodologies or technologies to improve the operational environment. If an IT organization allows itself to be driven primarily by urgent issues while putting important issues to the side, it will quickly become a completely reactive environment and will fail to plan ahead to properly align technology and business objectives. Adherence to strong operational and planning practices ensures that the IT organization strikes a balance between operational issues and future planning, and continues to align technical resources with business objectives.

Throughout this chapter, we describe in detail the separate components of the information architecture. Some of these concepts will be new to you, and some you will have already encountered as an IS auditor. Together, they will give you a complete overview of the information architecture. With this knowledge, you should be able to review complex business processes and associate the related components, controls, and operational practices. It is important to remember that no matter how complex the technology is, the foundation of standards, practices, and controls remains the same.

Take a basic example that might be familiar to a majority of you: Amazon.com. Amazon uses the same type of transactions and transaction processing that many e-commerce companies use; it also uses the same base set of controls. The basic components of the systems include a web server to respond to requests and a relational database to manage the transactions. When the customer finds the desired items to purchase, some type of shopping cart and payment system is used. The final portions of the system are the inventory management/shipping system and the financial system, which handles all the financial transactions. Overall, the business functions of Amazon.com are not that different from those of other e-commerce businesses. Knowing the individual components, communication protocols, and operational procedures, however, helps the IS auditor evaluate the system to ensure that the organization is using accepted best practices (controls) and that the information architecture is aligned with the business objectives.

Evaluating Hardware Acquisition, Installation, and Maintenance

A significant part of the information architecture is the computing hardware. These systems include the following:

  • Processing components—The central processing unit (CPU). The CPU contains the electrical/electronic components that control or direct all operations in the computer system. A majority of devices within the information architecture contain CPUs (supercomputers, mainframes, minicomputers, microcomputers, laptops, and PDAs).

  • Input/output components—The I/O components are used to pass instructions or information to the computer and to generate output from the computer. These types of devices include the keyboard and mouse (input) and monitors/terminal displays (output).

Computers logically fall into categories that differ in processing power and size. The following are the basic categories for computers:

  • Supercomputers—These computers have enormous processing speed and power. They are generally used for complex mathematical calculations. Supercomputers generally perform a small number of very specific functions that require extensive processing power (decryption, modeling, and so on). Supercomputers differ from mainframes in that mainframes can run diverse concurrent programs.

  • Mainframes—Mainframes are large general-purpose computers that support large user populations simultaneously. They have a large range of capabilities that are controlled by the operating system. A mainframe environment, as opposed to a client/server environment, is generally more controlled with regard to access and authorization to programs; the entire processing function takes place centrally on the mainframe. Mainframes are multiuser, multithreading, and multiprocessing environments that can support batch and online programs.

  • Minicomputers—Minicomputers are essentially smaller mainframes. They provide similar capabilities but support a smaller user population (less processing power).

  • Microcomputer (personal computers)—Microcomputers are primarily used in the client/server environment. Examples include file/print servers, email servers, web servers, and servers that house database-management systems. Individual workstations also fall into the microcomputer category and are used for word processing, spreadsheet applications, and individual communications (email). Microcomputers are generally inexpensive because they do not have the processing power of larger minicomputers or mainframes.

  • Notebook/laptop computers—Notebook and laptop computers are portable and allow users to take the computing power, applications, and, in some cases, data with them wherever they travel. Notebooks and laptops today have as much computing power as desktop workstations and provide battery power when traditional power is not available. Because of the mobile nature of notebook and laptop computers, they are susceptible to theft. Theft of a laptop computer is certainly the loss of a physical asset, but it also can include the loss of data or unauthorized access to the organization’s information resources.

  • Personal digital assistants (PDAs)—PDAs are handheld devices and generally have significantly less processing power, memory, and applications than notebook computers. These devices are battery powered and very portable (most can fit into a jacket pocket). Although the traditional use of a PDA is for individual organization, including the maintenance of tasks, contact lists, calendars, and expense managers, PDAs are continually adding functionality. As of this writing, a significant number of PDAs provide wireless network access and have either commercial off-the-shelf software or custom software that enables users to access corporate information (sales and inventory, email, and so on). Most PDAs use pen (stylus)–based input instead of the traditional keyboard, effected by using either an onscreen keyboard or handwriting recognition. PDAs are synchronized with laptop/desktop computers through a serial interface (using a cradle) or through wireless networking (802.11 or Bluetooth). The synchronization can be user initiated or automated, based on the needs of the user.

Earlier in this section, we discussed some of the attributes of computing systems, including multiprocessing, multitasking, and multithreading. These attributes are defined as follows:

  • Multitasking—Multitasking allows computing systems to run two or more applications concurrently. This process enables the systems to allocate a certain amount of processing power to each application. In this instance, the tasks of each application are completed so quickly that it appears to multiple users that there are no disruptions in the process.

  • Multiprocessing—Multiprocessing links more than one processor (CPU) sharing the same memory to execute programs simultaneously. In today’s environment, many servers (mail, web, and so on) contain multiple processors, allowing the operating system to speed the time for instruction execution. The operating system can break up a series of instructions and distribute them among the available processors, effecting quicker instruction execution and response.

  • Multithreading—Multithreading enables operating systems to run several processes in rapid sequence within a single program or to execute (run) different parts, or threads, of a program simultaneously. When a process is run on a computer, that process creates a number of additional tasks and subtasks. All the threads (tasks and subtasks) can run at one time and combine like strands of a rope (the entire process). Multithreading can be defined as multitasking within a single program.
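
To make the multithreading idea concrete, here is a minimal Python sketch; the function names and timings are illustrative assumptions, not from the original text. A single program spawns several threads, each handling one subtask, and then joins them back together like strands of a rope:

    import threading
    import time

    def subtask(task_id):
        # Each thread handles one subtask of the larger process
        time.sleep(0.1)  # simulate work (for example, an I/O wait)
        print(f"subtask {task_id} finished")

    # Multithreading: several threads run within a single program
    threads = [threading.Thread(target=subtask, args=(i,)) for i in range(4)]
    for t in threads:
        t.start()  # all threads run at one time
    for t in threads:
        t.join()   # combine the strands back into the "rope"
    print("entire process complete")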

Risks and Controls Relating to Hardware Platforms

In aligning the IT strategy with the organizational strategy, IT provides solutions that meet the objectives of the organization. These solutions must be identified, developed, or acquired. As an IS auditor, you will assess this process by reviewing control issues regarding the acquisition, implementation, and maintenance of hardware. Governance of the IT organization and corresponding policies will reduce the risk associated with acquisition, implementation, and maintenance. Configuration management accounts for all IT components, including software. A comprehensive configuration-management program reviews, approves, tracks, and documents all changes to the information architecture. Configuration of the communications network is often the most critical and time-intensive part of network management as a whole. Software development project management involves scheduling, resource management, and progress tracking. Problem management records and monitors incidents and documents them through resolution. The documentation created during the problem-management process can identify inefficient hardware and software, and can be used as a basis for identifying acquisition opportunities that serve the business objectives. Risk management is the process of assessing risk, taking steps to reduce risk to an acceptable level (mitigation) and maintaining that acceptable level of risk. Risk identification and management works across all areas of the organizational and IT processes.

The COBIT framework provides hardware policy areas for IT functions. These policy areas can be used as a basis for control objectives to ensure that the acquisition process is clearly defined and meets the needs of the organization. The COBIT areas address the following questions:

  • Acquisition—How is hardware acquired from outside vendors?

  • Standards—What are the hardware compatibility standards?

  • Performance—How should computing capabilities be tested?

  • Configuration—Where should client/server systems, personal computers, and others be used?

  • Service providers—Should third-party service providers be used?

One of the key challenges facing IT organizations today is keeping pace with the speed of new technology releases in the marketplace while maintaining detailed baseline documentation for their organizations. IT organizations need a process for documenting existing hardware and then maintaining that documentation. This documentation supports the acquisition process and ensures that new technologies that meet the business objectives can be thoroughly tested to ensure that they are compatible with the existing information architecture.

With regard to hardware and software acquisition within the COBIT framework, the auditor will consider the control objectives defined in Table 3.1.

Table 3.1. Acquisition Control Objectives

Identify Automated Solutions

Control Objective

1.1 Definition of Information Requirements

The organization’s system development life cycle methodology should require that the business requirements for the existing system and the proposed new or modified system (software, data, and infrastructure) are clearly defined before a development, implementation, or modification project is approved. The system development life cycle methodology should specify the solution’s functional and operational requirements, including performance, safety, reliability, compatibility, security, and legislation.

1.2 Formulation of Alternative Courses of Action

The organization’s system development life cycle should stipulate that alternative courses of action should be analyzed to satisfy the business requirements established for a proposed new or modified system.

1.3 Formulation of Acquisition Strategy

Information systems acquisition, development, and maintenance should be considered in the context of the organization’s IT long- and short-range plans. The organization’s system development life cycle methodology should provide for a software acquisition strategy plan defining whether the software will be acquired off-the-shelf; developed internally, through contract, or by enhancing the existing software; or developed through a combination of these.

1.4 Third-Party Service Requirements

The organization’s system development life cycle methodology should require an evaluation of the requirements and specifications for an RFP (request for proposal) when dealing with a third-party service vendor.

1.5 Technological Feasibility Study

The organization’s system development life cycle methodology should require an examination of the technological feasibility of each alternative for satisfying the business requirements established for the development of a proposed new or modified information system project.

1.6 Economic Feasibility Study

In each proposed information systems development, implementation, and modification project, the organization’s system development life cycle methodology should require an analysis of the costs and benefits associated with each alternative being considered for satisfying the established business requirements.

1.7 Information Architecture

Management should ensure that attention is paid to the enterprise data model while solutions are being identified and analyzed for feasibility.

1.8 Risk Analysis Report

In each proposed information system development, implementation, or modification project, the organization’s system development life cycle methodology should require an analysis and documentation of the security threats, potential vulnerabilities and impacts, and the feasible security and internal control safeguards for reducing or eliminating the identified risk. This should be realized in line with the overall risk-assessment framework.

1.9 Cost-Effective Security Controls

Management should ensure that the costs and benefits of security are carefully examined in monetary and nonmonetary terms, to guarantee that the costs of controls do not exceed benefits. The decision requires formal management sign-off. All security requirements should be identified at the requirements phase of a project and should be justified, agreed to, and documented as part of the overall business case for an information system. Security requirements for business continuity management should be defined to ensure that the proposed solution supports the planned activation, fallback, and resumption processes.

1.10 Audit Trails Design

The organization’s system development life cycle methodology should state that adequate mechanisms for audit trails must be available or developed for the solution identified and selected. The mechanisms should provide the capability to protect sensitive data (for example, user IDs) against discovery and misuse.

1.12 Selection of System Software

Management should ensure that the IT function adheres to a standard procedure for identifying all potential system software programs, to satisfy its operational requirements.

1.13 Procurement Control

Management should develop and implement a central procurement approach describing a common set of procedures and standards to be followed in the procurement of information technology–related hardware, software, and services. Products should be reviewed and tested before their use and before financial settlement.

1.14 Software Product Acquisition

Software product acquisition should follow the organization’s procurement policies.

1.15 Third-Party Software Maintenance

Management should require that before licensed software is acquired from third-party providers, the providers have appropriate procedures to validate, protect, and maintain the software product’s integrity rights. Consideration should be given to the support of the product in any maintenance agreement related to the delivered product.

1.16 Contract Application Programming

The organization’s system development life cycle methodology should require that the procurement of contract programming services be justified with a written request for services from a designated member of the IT function. The contract should stipulate that the software, documentation, and other deliverables are subject to testing and review before acceptance. In addition, it should require that the end products of completed contract programming services be tested and reviewed according to the related standards by the IT function’s quality assurance group and other concerned parties (such as users and project managers) before payment for the work and approval of the end product. Testing to be included in contract specifications should consist of system testing, integration testing, hardware and component testing, procedure testing, load and stress testing, tuning and performance testing, regression testing, user acceptance testing, and, finally, pilot testing of the total system, to avoid any unexpected system failure.

1.17 Acceptance of Facilities

Management should ensure that an acceptance plan for facilities to be provided is agreed upon with the supplier in the contract. This plan should define the acceptance procedures and criteria. In addition, acceptance tests should be performed to guarantee that the accommodation and environment meet the requirements specified in the contract.

1.18 Acceptance of Technology

Management should ensure that an acceptance plan for specific technology to be provided is agreed upon with the supplier in the contract. This plan should define the acceptance procedures and criteria. In addition, acceptance tests provided for in the plan should include inspection, functionality tests, and workload trials.

The selection of computer hardware requires the organization to define specifications for outside vendors. These specifications should be used in evaluating vendor-proposed solutions. Such a specification document is sometimes called an invitation to tender (ITT) or a request for proposal (RFP).

Per ISACA, the portion of the ITT pertaining to hardware should include the following:

  • Information-processing requirements

    • Major existing application systems and future application systems

    • Workload and performance requirements

    • Processing approaches (online/batch, client/server, real-time databases, continuous operation)

  • Hardware requirements

    • CPU speed

    • Peripheral devices (sequential devices, such as tape drives; direct-access devices, such as magnetic disk drives, printers, CD-ROM drives, and WORM drives)

    • Data-preparation/input devices that accept and convert data for machine processing

    • Direct-entry devices (terminals, point-of-sale terminals, or automated teller machines)

    • Networking capability (Ethernet connections, modems, and ISDN connections)

  • System software applications

    • Operating systems software (current version and any required upgrades)

    • Compilers

    • Program library software

    • Database-management software and programs

    • Communication software

    • Access-control software

  • Support requirements

    • System maintenance (for preventative, detective [fault reporting], or corrective purposes)

    • Training (user and technical staff)

    • Backups (daily and disaster)

  • Adaptability requirements

    • Hardware/software upgrade capabilities

    • Compatibility with existing hardware/software platforms

    • Changeover to other equipment capabilities

  • Constraints

    • Staffing levels

    • Existing hardware capacity

    • Delivery dates

  • Conversion requirements

    • Test time for the hardware/software

    • System-conversion facilities

    • Cost/pricing schedule

The acquisition of hardware might be driven by requirements for a new software acquisition, the expansion of existing capabilities, or the scheduled replacement of obsolete hardware. With all these events, senior management must ensure that the acquisition is mapped directly to the strategic goals of the organization. The IT steering committee should guide the information systems strategy and, therefore, ensure that acquisitions align with the organization’s goals.

In addition, the senior managers of the IT steering committee should receive regular status updates on acquisition projects in progress, the cost of projects, and issues that impact the critical path of those projects. The IT steering committee is responsible for reviewing issues such as new and ongoing projects, major equipment acquisitions, and the review and approval of budget; however, the committee does not usually get involved in the day-to-day operations of the IS department.

The IT organization should have established policies for all phases of the system development life cycle (SDLC) that control the acquisition, implementation, maintenance, and disposition of information systems. The SDLC should include computer hardware, network devices, communications systems, operating systems, application software, and data. These systems support mission-critical business functions and should maximize the organization’s return on investment. The combination of a solid governance framework and a defined acquisition process creates a control infrastructure that reduces risk and ensures that the IT infrastructure supports the business functions.

As an IS auditor, you will look for evidence of a structured approach to hardware acquisition, implementation, and maintenance. This evidence includes written acquisition policies that outline the process for feasibility studies, requirements gathering, and approval by the IT steering committee. After hardware is procured, the IT organization must have a defined project-management and change-control process to implement the hardware. All hardware acquired must fall under existing maintenance contracts and procedures, or contracts must be acquired and procedures updated to reflect the new hardware. The hardware should be tested according to written test plans before going into production, and the hardware should be assigned to the appropriate functional areas (such as systems administration) to ensure that production responsibility is clearly defined. The acquired hardware, whether a replacement or new to the IT infrastructure, should be secured (physically and logically) and added to the business continuity plan.

Change Control and Configuration Management Principles for Hardware

The change-control and configuration-management processes detail the formal documented procedures for introducing technology changes into the environment. More specifically, change control ensures that changes are documented, approved, and implemented with minimum disruption to the production environment and maximum benefits to the organization.

During the normal operation of the IT infrastructure, there will be changes to hardware and software because of normal maintenance, upgrades, security patches, and changes in network configurations. All changes within the infrastructure need to be documented and must follow change-control procedures. In the planning stages, the party responsible for the change (such as end users, line managers, or the network administrator) should develop a change-control request. The request should include all systems affected by the change, the resources required to implement the change (time and money), and a detailed plan. The plan should specify what steps will be taken for the change and should include test plans and back-out procedures, in case the change adversely affects the infrastructure. This request should go before the change-control board, which votes on the change and normally provides a maintenance window in which the change is to be implemented. When the change is complete and tested, all documentation and procedures that are affected by the change should be updated. The change-control board should maintain a copy of the change request and its review of the implementation of the change.
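
As a rough illustration, the following Python sketch models the elements of such a change-control request as a simple record; the field names and example values are assumptions for illustration only, not a prescribed format:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class ChangeRequest:
        requester: str                # party responsible for the change
        description: str              # what will change, with a detailed plan
        systems_affected: List[str]   # all systems affected by the change
        time_required_hours: float    # resources required: time
        cost: float                   # resources required: money
        test_plan: str                # how the change will be verified
        backout_plan: str             # recovery steps if the change adversely affects the infrastructure
        board_approved: bool = False  # set after the change-control board votes
        maintenance_window: str = ""  # assigned by the board on approval

    request = ChangeRequest(
        requester="network administrator",
        description="Apply vendor security patch to the mail server",
        systems_affected=["mail server", "spam filter"],
        time_required_hours=4.0,
        cost=0.0,
        test_plan="Send and receive test messages after patching",
        backout_plan="Restore the prior system image from backup",
    )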

Note

The change-control board provides critical oversight for any production IT infrastructure. This board ensures that all affected parties and senior management are aware of both major and minor changes within the IT infrastructure. The change-management process establishes an open line of communication among all affected parties and allows those parties and subject matter experts (SMEs) to provide input that is instrumental in the change process.

In addition to change and configuration control, the IT organization is responsible for capacity planning. A capacity plan and procedures should be developed to ensure the continued monitoring of the network and associated hardware. Capacity planning ensures that the expansion or reduction of resources takes place in parallel with the overall organizational growth or reduction. The audit procedures should include review of the following:

  • Hardware performance-monitoring plan—Includes a review of problem logs, processing schedules, system reports, and job accounting reports.

  • Problem log review—The problem log assists in identifying hardware malfunctions, operator actions, or system resets that negatively affect the performance of the IT infrastructure. The IT organization should regularly monitor the problem log to detect potential IT resource capacity issues.

  • Hardware availability and performance reports—Review and ensure that the services required by the organization (CPU utilization, storage utilization, bandwidth utilization, and system uptime) are available when needed and that maintenance procedures do not negatively impact the organization’s operations.

The IT organization should work closely with senior management to ensure that the capacity plan will meet current and future business needs, and to implement hardware-monitoring procedures to ensure that IT services, applications, and data are available. The IT organization should regularly monitor its internal maintenance, job scheduling, and network management to ensure that they are implemented in the most efficient and effective manner.
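
As one hedged example of automating such availability and utilization reporting, the following Python sketch samples CPU, memory, and storage utilization. It assumes the third-party psutil package is installed, and the 90 percent threshold is an arbitrary illustration:

    import psutil  # third-party package: pip install psutil

    def utilization_report():
        cpu = psutil.cpu_percent(interval=1)      # percent CPU used over a 1-second sample
        memory = psutil.virtual_memory().percent  # percent of physical memory in use
        disk = psutil.disk_usage("/").percent     # percent of the root volume in use
        print(f"CPU: {cpu}%  Memory: {memory}%  Disk: {disk}%")
        # Flag readings that might indicate a capacity issue
        for name, value in (("CPU", cpu), ("Memory", memory), ("Disk", disk)):
            if value > 90:
                print(f"WARNING: {name} utilization above 90%")

    utilization_report()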

Evaluating Systems Software Development, Acquisition, Implementation, and Maintenance

The development, acquisition, implementation, and maintenance of software and hardware support key business practices. The IS auditor must ensure that the organization has controls in place that manage these assets in an effective and efficient manner. A clear understanding of the technology and its operational characteristics is critical to the IS auditor’s review.

Understanding Systems Software and Utilities Functionality

We have discussed both hardware and network devices and their important role in today’s IT infrastructure. The hardware can be considered a means to an end, but the software is responsible for delivering services and information. A majority of organizations today use client/server software, which enables applications and data to be spread across a number of systems and to serve a variety of operating systems. The advantage of client/server computing is that clients can request data, processing, and services from servers both internal and external to the organization. The servers do the bulk of the work. This technology enables a variety of clients, from workstations to personal digital assistants (PDAs). Client/server computing also enables centralized control of the organization’s resources, from access control to continuity.

One level above the hardware we have already discussed is firmware. This type of “software” is generally contained on a chip within the component of the hardware (motherboard, video card, modem, and so on). The operating system runs at the next level on top of the hardware and firmware, and is the nucleus of the IT infrastructure. The operating system contains programs that interface among the user, processor, and applications software. Operating systems can be considered the “heart” of the software system. They allow the sharing of CPU processing, memory, application access, data access, data storage, and data processing; they ensure the integrity of the system. The software that is developed for the computer must be compatible with the operating system.

Within the operating system are functions, utilities, and services that control the use and sharing of computer resources. These are the basic functions for the operating system:

  • Defining user interfaces

  • Enabling user access to hardware, data, file systems, and applications

  • Managing the scheduling of resources among users

Along with the never-ending demand for scalability, interoperability, and performance, most operating systems today have parameters that can be customized to fit the organization. These parameters enable administrators to align many different types of functional systems with the performance and security needs of the organization. The selection of parameter settings should be aligned with the organization’s workload and control structure. After the parameters are configured, they should be continually monitored to ensure that they do not result in errors, data corruption, unauthorized access, or a degradation of service.

In a client/server model, the server handles the processing of data, and security functions are shared by both workstation and server. The server responds to requests from other computers that are running independently on the network. An example of client/server computing is accessing the Internet via a web browser. An independent machine (workstation) requests data (web pages) from a web server; the web server processes the request, correlates the requested data, and returns it to the requester. The web server might contain static pages (HTML documents), or the HTML pages might contain dynamic data drawn from a database-management system (DBMS). In this scenario, the server processes multiple requests, manages the processing, allocates memory, and processes authentication and authorization activities associated with the request. The client/server model enables the integration of applications and data resources.
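
The exchange can be sketched with Python’s standard http.client module; the host name is illustrative, and a real run requires network access. The client issues the request, and the server does the bulk of the work:

    import http.client

    # The client (workstation) requests a web page from the server
    conn = http.client.HTTPConnection("www.example.com", 80, timeout=10)
    conn.request("GET", "/")                 # the request crosses the network
    response = conn.getresponse()            # the server processes it and responds
    print(response.status, response.reason)  # for example: 200 OK
    page = response.read()                   # the returned HTML document
    print(len(page), "bytes received")
    conn.close()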

Client/server architectures differ depending on the needs of the organization. An additional component of client/server computing is middleware. Middleware provides integration between otherwise distinct applications. As an example of the application of middleware, IT organizations that have legacy applications (mainframe, non–client/server, and so on) can implement web-based front ends that incorporate the application and business logic in a central access point. The web server and its applications (Java servlets, VBScript, and so on) incorporate the business logic and create requests to the legacy systems to provide the requested data. In this scenario, the web “front end” acts as middleware between the users and the legacy systems. This type of implementation is useful when multiple legacy systems contain data that is not integrated. The middleware can then respond to requests, correlate the data from multiple legacy applications (accounting, sales, and so on), and present it to the client.

Middleware is commonly used to provide the following functionality:

  • Transaction-processing (TP) monitors—These applications or programs monitor and process database transactions.

  • Remote procedure calls (RPC)—An RPC is a function call in client/server computing that enables clients to request that a particular function or set of functions be performed on a remote computer (a minimal sketch follows this list).

  • Messaging services—User requests (messages) can be prioritized, queued, and processed on remote servers.
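
The sketch below illustrates the RPC idea using Python’s standard xmlrpc modules; the host, port, and function name are assumptions for illustration. The client asks the remote computer to perform the function and receives only the result:

    # Server side: expose a function that clients can invoke remotely
    from xmlrpc.server import SimpleXMLRPCServer

    def add(a, b):
        return a + b  # executed on the server, not the client

    server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
    server.register_function(add, "add")
    # server.serve_forever()  # uncomment to run the server process

    # Client side: request that the function be performed on the remote computer
    import xmlrpc.client

    proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
    # result = proxy.add(2, 3)  # returns 5, computed by the server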

As an IS auditor, you should assess the use of controls that ensure the confidentiality, integrity, and availability of client/server networks. The development and implementation of client/server applications and middleware should include proper change control and testing of modifications and should ensure that version control is maintained. Lack of proper controls over authentication, authorization, and data integrity across multiple platforms could result in the loss of data or program integrity.

Organizations use more data today in decision making, customer support, sales, and account management. Data is the lifeblood of any organization. With the high volume of change, transaction processing, and access, it is important to maintain the confidentiality, availability, and integrity of data according to the organization’s business requirements. A DBMS is used to store and maintain data and enforce data integrity, as well as to provide the capability to convert data into information through the use of relationships and high-availability access. The primary functions of the DBMS are to reduce data redundancy, decrease access time, and provide security over sensitive data (records, fields, and transactions). A typical DBMS can be pictured as a container that stores data; within that container of related information are multiple smaller containers that hold logically related data. Figure 3.1 shows a DBMS relational structure for an asset-management system.

Figure 3.1. Relational database structure.

In Figure 3.1, the location (LocationID) of the asset is related to both the point of contact (POC) and the asset itself. Relational databases use rows (tuples, which correspond to records) and columns (domains or attributes, which correspond to fields).

A DBMS should include a data dictionary that identifies the data elements (fields), their characteristics, and their use. Data dictionaries are used to identify all fields and field types in the DBMS to assist with application development and processing. The data dictionary should contain an index and description of all the items stored in the database.

Three basic database models exist: hierarchical, network, and relational. A hierarchical database model establishes a parent-child relationship between tables (entities). It is difficult to manage relationships in this model when children need to relate to more than one parent; this can lead to data redundancy. In the network database model, children can relate to more than one parent. This can lead to complexity in relationships, making the model difficult to understand, modify, and recover in the event of a failure. The relational database model separates the data from the database structure, allowing for flexibility in implementing, understanding, and modifying the database. The relational structure enables new relationships to be built based on business needs.

The key feature of relational databases is normalization, which structures data to minimize duplication and inconsistencies. Normalization rules include these:

  • Each field in a table should represent unique information.

  • Each table should have a primary key.

  • You must be able to make changes to the data (other than the primary key) without affecting other fields.

Users access databases through a directory system that describes the location of data and the access method. This system uses a data dictionary, which contains an index and description of all the items stored in the database.
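
A small Python sketch using the standard sqlite3 module can make these relational ideas concrete. Because Figure 3.1 is not reproduced here, the table and column names (Location, POC, Asset, LocationID) follow the description above and are otherwise assumptions:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()

    # Normalized tables: each has a primary key, and each field holds unique information
    cur.execute("CREATE TABLE Location (LocationID INTEGER PRIMARY KEY, Building TEXT)")
    cur.execute("CREATE TABLE POC (POCID INTEGER PRIMARY KEY, Name TEXT, "
                "LocationID INTEGER REFERENCES Location(LocationID))")
    cur.execute("CREATE TABLE Asset (AssetID INTEGER PRIMARY KEY, Description TEXT, "
                "LocationID INTEGER REFERENCES Location(LocationID))")

    cur.execute("INSERT INTO Location VALUES (1, 'Headquarters')")
    cur.execute("INSERT INTO POC VALUES (1, 'J. Smith', 1)")
    cur.execute("INSERT INTO Asset VALUES (100, 'File server', 1)")

    # A new relationship built from the shared LocationID key:
    # which point of contact is associated with which asset?
    rows = cur.execute(
        "SELECT Asset.Description, POC.Name "
        "FROM Asset JOIN POC ON Asset.LocationID = POC.LocationID"
    ).fetchall()
    print(rows)  # [('File server', 'J. Smith')]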

In a transaction-processing database, all data transactions, including updates, creations, and deletions, are logged to a transaction log. When users update the database, the data contained in the update is written first to the transaction log and then to the database. The purpose of the transaction log is to hold transactions for a short period of time until the database software is ready to commit the transaction to the database. This process ensures that the records associated with the change are ready to accept the entire transaction. In environments with high volumes of transactions, records are locked while transactions are committed (concurrency control), to enable the completion of the transactions. Concurrency controls prevent integrity problems when two processes attempt to update the same data at the same time. The database software checks the log periodically and then commits all transactions contained in the log since the last commit. Atomicity ensures data integrity by requiring that a transaction be completed in its entirety or not at all.
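
A brief Python/sqlite3 sketch illustrates atomicity and the commit step described above; the account table and amounts are invented for illustration:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
    conn.execute("INSERT INTO account VALUES (1, 100), (2, 0)")
    conn.commit()

    # Atomicity: both updates are committed together, or neither is
    try:
        conn.execute("UPDATE account SET balance = balance - 50 WHERE id = 1")
        conn.execute("UPDATE account SET balance = balance + 50 WHERE id = 2")
        conn.commit()    # the logged transaction is committed to the database
    except sqlite3.Error:
        conn.rollback()  # back out the partial transaction entirely

    print(conn.execute("SELECT * FROM account").fetchall())  # [(1, 50), (2, 50)]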

Risks and Controls Related to System Software and Utilities

The software (operating systems, applications, database-management systems, and utilities) must meet the needs of the organization. The challenge facing a majority of IT organizations today is the wide variety of software products and their acquisition, implementation, maintenance, and integration. Organizational software is used to maintain and process the corporate data and enable its availability and integrity. The IT organization is responsible for keeping abreast of new software capabilities to improve business processes and expand services. In addition, the IT organization needs to monitor and maintain existing applications to ensure that they are properly updated, licensed, and supported. A capacity-management plan ensures that software expansion or reduction of resources takes place in parallel with overall business growth or reduction. The IT organization must solicit input from both users and senior management during the development and implementation of the capacity plan, to achieve the business goals in the most efficient and effective manner.

Change Control and Configuration Management Principles for System Software

Whether purchased or developed, all software must follow a formal change-control process. This process ensures that software meets the business needs and internal compatibility standards. The existence of a change control process minimizes the possibility that the production network will be disrupted and ensures that appropriate recovery and back-out procedures are in place.

When internal applications are developed and implemented, the IT organization is responsible for maintaining separate development, test, and production libraries. These libraries facilitate effective and efficient management and control of the software inventory, and incorporate security and control procedures for version control and release of software. The library function should be consistent with proper segregation of duties. For example, the system developer may create and alter software logic, but should not be allowed access to information processing or production applications. Source code comparison is an effective method for tracing changes to programs.
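
For instance, a source code comparison can be automated with Python’s standard difflib module; the two program versions below are invented for illustration:

    import difflib

    release  = ["def apply_discount(price):", "    return price * 0.90"]
    modified = ["def apply_discount(price):", "    return price * 0.75"]

    # Show exactly which program lines changed between the two versions
    for line in difflib.unified_diff(release, modified,
                                     fromfile="production", tofile="submitted",
                                     lineterm=""):
        print(line)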

Evaluating Network Infrastructure Acquisition, Installation, and Maintenance

The IT organization should have both long- and short-term plans that address maintenance, monitoring, and migration to new software. During the acquisition process, the IT organization needs to map software acquisitions to the organization’s strategic plan and ensure that the software meets documented compatibility standards. The IT organization needs to understand copyright laws that apply to the software and must create policies and procedures that guard against unauthorized use or copying of the software without proper approval or licensing agreements. It must maintain a list of all software used in the organization, along with licensing agreements/certificates and support agreements. The IT organization might implement centralized control and automated distribution of software, as well as scan user workstations to ensure adherence to licensing agreements. The following are some key risk indicators related to software acquisition:

  • Software acquisitions are not mapped to the strategic plan.

  • No documented policies are aimed at guiding software acquisitions.

  • No process exists for comparing the “develop vs. purchase” option.

  • No one is assigned responsibility for the acquisition process.

  • Affected parties are not involved with assessing requirements and needs.

  • There is insufficient knowledge of software alternatives.

  • Security features and internal controls are not assessed.

  • Benchmarking and performance tests are not carried out.

  • Integration and scalability issues are not taken into account.

  • Total cost of ownership is not fully considered.

Understanding Network Components Functionality

Communication within the IT infrastructure is facilitated by integrated network components. These devices, software, and protocols work together to pass transmissions between systems using analog, digital, or wireless transmission types. It is important to understand the devices and standards involved with networking and telecommunications because they are the most complex part of the information architecture. As you read through this section, keep in mind that we are combining network devices, access media, and networking protocols/standards and how they are used in the network infrastructure. A good starting point for gaining this knowledge is to first understand how computers and other network devices communicate. Understanding communication is key to realizing how devices interoperate to provide network services.

Network standards and protocols facilitate the creation of an integrated environment for application and service communication. To enable such environments and provide centralized troubleshooting, standards bodies have created reference models for network architectures. Three external organizations develop standards and specifications for protocols used in communications:

  • The International Organization for Standardization (ISO)

  • The Institute of Electrical and Electronics Engineers (IEEE)

  • The International Telecommunication Union–Telecommunication Standardization Sector (ITU-T), formerly the International Telegraph and Telephone Consultative Committee (CCITT)

The ISO developed the Open Systems Interconnection (OSI) model in the early 1980s as a proof-of-concept model that all vendors could use to ensure that their products could communicate and interact. The OSI model was never directly adopted as an implemented standard, but it gained acceptance as a framework that a majority of applications and protocols adhere to. The OSI model contains seven layers, each with specific functions. Each layer has its own responsibilities with regard to tasks, processes, and services. The separation of functionality of the layers ensures that the solutions offered by one layer can be updated without affecting the other layers. The goal of the OSI model is to provide the framework for an open network architecture in which no one vendor owns the model, but all vendors can use the model to integrate their technologies. The Transmission Control Protocol/Internet Protocol (TCP/IP) suite is discussed later in this chapter, but it is important to note that TCP/IP was developed and used for many years before the introduction of the OSI model.

The OSI reference model breaks down the complexities of network communication into seven basic layers of functionality and services. To best illustrate how these seven layers work, we can relate how computers communicate to how humans communicate. The following paragraphs describe how a software application’s thought, or data payload, is transferred and prepared for communication by the operating system’s communications services. When this is accomplished, we will then look at how the data payload is transported, addressed, and converted from a logical thought into physical signals that can travel across cables or wireless transmissions.

Step 1: Having and Managing a Thought (Data Encapsulation)

As people, we have learned that it is always wise to think before speaking. In other words, we actually need to formulate a thought to communicate (data payload), format the presentation of the thought for the destination, and manage the thought appropriately. For example, sometimes you might be trying to communicate a complex idea to another person, and you find the need to enhance your normal verbal communication with written diagrams or even hand gestures. These types of efforts might not contain new information in and of themselves, but they are used to facilitate the communication of an idea when words alone fail. Computers and other networking devices sometimes need to use such extracurricular thought management as well. So the first step in communicating a data payload between one computer and another is to pass an application’s thought to the networking services for further manipulation and management. Within the OSI reference model for networking architecture, these processes are handled within Layers 5, 6, and 7. Table 3.2 looks at each of these layers more closely.

Table 3.2. Open Systems Interconnection (OSI) Model (Layers 7–5)

  • 7 (Application)—Purpose: Interfaces with applications and operating systems. This is where the user (through an application or operating system activity) first passes off information to networking services for telecommunication. Protocol examples: HTTP, Telnet, SMTP, DNS, SNMP. Protocol data unit: Data.

  • 6 (Presentation)—Purpose: Formats the data as required, including (but not limited to) preparation according to special presentation protocols for picture and sound, compression, or even encryption. Protocol examples: ASCII, JPG, GIF, MIDI. Protocol data unit: Data.

  • 5 (Session)—Purpose: Establishes a special device-to-device bond, or session, when one is required for cooperative multidevice communication efforts. Protocol examples: NetBIOS, RPC, SQL. Protocol data unit: Data.

Using OSI Layers 7–5 Within the Data PDU

As a user, you will first interface with networking communications by using a network-enabled application. A network-enabled application is capable of utilizing network protocols to send and receive data. Let’s take a look at using your web browser to request and view data from an Internet web server. To view a web page, you must enter a URL web address such as http://www.certifiedtechtrainers.com. This simple process tells your web browser to use the Hypertext Transfer Protocol (HTTP) to contact a web server at www.certifiedtechtrainers.com. Neither you nor the browser needs to encapsulate the request for transport, address it with a formal IP address, or prepare it to traverse the connected networking medium, such as an Ethernet cable. HTTP is a management protocol that invokes and coordinates the services of other networking protocols as necessary. That’s why HTTP is said to be an OSI Layer 7 protocol, or application-layer protocol. HTTP itself is not an application, but it is the protocol used to interface a desktop application such as a web browser with other necessary communication protocols.

When you enter the URL http://www.certifiedtechtrainers.com into your browser, you unknowingly invoke another application-layer protocol called the Domain Name System (DNS). If we agreed to meet at the Certified Tech Trainers office, you would need to know the actual street address of the location, right? Likewise, the www.certifiedtechtrainers.com name maps to an actual IP address designated to the web server you intend the HTTP request to be sent to. DNS queries a DNS server to find out the IP address of www.certifiedtechtrainers.com so that it can somewhat transparently reformat your request from http://www.certifiedtechtrainers.com to http://24.227.60.114. As you can see, these application-layer protocols are more focused on communication management than on actual request transmission. However, there is still more data management necessary. The default web page you are requesting likely has pictures or sounds. HTTP does not know how to present formatted graphics. Rather, the .gif and .jpg picture formats operate at OSI Layer 6, the presentation layer, to handle the presentation of some common picture formats found on web pages. At some point, you might submit a request for securely encrypted communication by using https:// instead of http:// in your requested URL (this is often done without the user’s awareness because requests for such a secure connection are often invoked via a link). By doing so, the browser makes a special request to have the HTTP request encrypted with the Secure Sockets Layer (SSL) encryption protocol. In conjunction with HTTP, SSL operates at OSI Layer 5, or the session layer.
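
The name-to-address lookup that DNS performs can be reproduced with a one-line call to Python’s standard socket module; a real run requires network access and a resolvable name:

    import socket

    # Ask the configured DNS resolver for the IP address behind the name
    ip = socket.gethostbyname("www.certifiedtechtrainers.com")
    print(ip)  # the logical destination address the HTTP request will be sent to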

At this point, the top three layers—application, presentation, and session—have been used in managing the request for data. This all occurs before consideration of how to transport or logically address the actual packets that need to be transmitted. Looking at all the activity just described, it would make sense that the data itself (data payload) and all the ancillary HTTP, HTTPS, JPG, GIF, and SSL communication could be considered the “thought” we desire to transmit. The technical term for this networking “thought” is the protocol data unit (PDU), known as data. You now understand how a computer needs to think before it speaks, just as you do.

Using OSI Layer 4 Within the Segment PDU

Now that we have a nicely managed data PDU, we need to package the communication appropriately for transport. As an application-layer protocol, HTTP does not transport the data itself. Rather, it manages transferring the data using other protocols to do so. The data PDU can now be encapsulated into a segment for transport using an OSI Layer 4 protocol such as the Transmission Control Protocol (TCP).

Just as you must break your thought into sentences and words for transport via your mouth or hands, the computer must encapsulate segments of the data PDU for transmission as well. TCP is a nifty OSI Layer 4 (transport-layer) networking protocol that is especially adept at this task. Not only does it segment the communication, but it does so in a methodical way that allows the receiving host to rebuild the data easily by attaching sequence numbers to its TCP segments. This sequencing information, along with other special TCP transport management communications, is provided in a TCP header within each data segment TCP encapsulates. Other transport-management communications provided by TCP include implementation of confirmation services for ensuring that all segments reach the intended recipient. When you want to be sure a letter is successfully delivered by the postal service, you might pay the additional cost for requiring a return receipt to be delivered to you upon successful delivery of the letter. If you never receive your return receipt, you could mail another copy of the letter and wait for a second confirmation. Without going into technical specifics, TCP can implement a similar system to ensure reliable transport. However, if you do not have the money or time to arrange for a return receipt for your letter, you might opt to forgo the assurance that the return receipt provides and send it via regular post, which guarantees only best effort, or unreliable delivery.

The technical parallel is to encapsulate the data PDU using the User Datagram Protocol (UDP) at the OSI transport layer instead of TCP. UDP does not implement a system of successful transmission confirmation, and is known as unreliable transport, providing best-effort delivery. The data PDU itself is unchanged either way because it is merely encapsulated by a transport protocol for transmission.
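
The contrast is easy to see at the socket API level. The Python sketch below is illustrative only: 192.0.2.10 is a reserved documentation address, so running it verbatim will not reach a real server, and error handling is omitted.

   import socket

   # Reliable, connection-oriented transport: TCP performs a handshake,
   # sequences segments, and retransmits until delivery is acknowledged.
   tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
   tcp.connect(("192.0.2.10", 80))          # example address only
   tcp.sendall(b"GET / HTTP/1.0\r\n\r\n")
   tcp.close()

   # Best-effort, connectionless transport: UDP simply emits a datagram
   # with no handshake, sequencing, or delivery confirmation.
   udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
   udp.sendto(b"hello", ("192.0.2.10", 5000))
   udp.close()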

OSI Layer 4 (Transport)
   Purpose: Provides for reliable or unreliable delivery. Transport protocols can provide for transmission error detection and correction services.
   Protocol examples: TCP, UDP
   PDU: Segment

Using OSI Layer 3 Within the Packet PDU

We now have a data PDU that has been encapsulated within a segment PDU for transport. However, the segment has no addressing information necessary for determining the best path to the intended recipient. By using an OSI Layer 3, or network-layer, protocol such as Internet Protocol (IP), we can encapsulate the segment into a packet with a logical destination address. In the previous example, the segment needs an IP packet header designating 24.227.60.114 as the logical destination address. IP is a network address protocol especially designed for this purpose because it implements a hierarchical addressing scheme.

Imagine if your computer supported listing filenames but did not support a logical directory structure. You could locate files with no problem, as long as not too many files existed on your hard drive. When you have a great number of files, however, you want to be able to logically organize your files into a directory structure to allow you to quickly and easily navigate to your intended file. In a similar fashion, IP supports network and subnetwork groupings of host addresses, which facilitates logical path determination. This is especially important in a large network with many hosts. For example, a router operates as a network segment junction, or gateway, at OSI Layer 3 using IP and IP routing tables to determine optimal paths for packet forwarding. By using IP’s hierarchical addressing, routers can determine which remote routing gateways connect to which remote network or subnetwork host groups. If the network is so small that network groupings of hosts are not necessary, the computers and network devices might not utilize IP at the network layer because doing so would add unnecessary performance overhead, or cost.
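
To make the hierarchy concrete, the short Python sketch below uses the standard library's ipaddress module to test whether hosts fall inside a network grouping, which is essentially the membership question a router answers when it consults its routing table. The subnet and the second host address are illustrative only.

   import ipaddress

   # Hierarchical addressing: one network statement covers a whole
   # group of hosts, so a router needs one route, not one per host.
   subnet = ipaddress.ip_network("24.227.60.0/24")

   for host in ("24.227.60.114", "198.51.100.7"):
       inside = ipaddress.ip_address(host) in subnet
       print(host, "is" if inside else "is not", "in", subnet)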

OSI Layer 3 (Network)
   Purpose: Protocols at this layer provide logical addressing and facilitate path determination.
   Protocol examples: IP, IPSec, ICMP, RIP, OSPF
   PDU: Packet

Using OSI Layer 2 Within the Frame PDU

The HTTP data PDU has now been encapsulated within the segment PDU by TCP for transport, and then re-encapsulated within the packet PDU by IP providing an IP address header. Remember that the IP packet can be stripped and rebuilt without affecting the TCP segment, which is a benefit of data encapsulation. At this point, we need to re-encapsulate the IP packet into a vehicle that is appropriate for the connected physical medium.

We are now encountering the great transformation of logical processes into physical signals. Just think of it—when you are able to encapsulate your own thoughts into segments with words and sentences, and then address your thoughts by adding your friend’s name to the front, a miracle occurs when you transform those logical thoughts and processes into physical vibrations and sound waves that can traverse the air (your connected communication medium). The computer’s data needs to make this leap, too. It re-encapsulates the IP packet into a frame that is appropriate for the transmission medium. If Ethernet is being used for the local area network, the IP packet is encapsulated within an Ethernet frame. If the IP packet needs to traverse a point-to-point link to the Internet service provider (via your dial-up connection), the packet is encapsulated using the Point-to-Point Protocol (PPP). These protocols link the logical data processes with the physical transmission medium. Appropriately, this occurs at OSI Layer 2, or the data link layer.

Why does OSI need a data link layer? It needs a separate encapsulation step because physical media can change along a network path. As an Ethernet frame arrives on one side of a routing gateway, it might need to leave the other side using PPP. Because you can strip and rebuild a frame without affecting the packet, segment, or data PDUs inside, frame conversion is entirely possible. This is analogous to taking a trip by airplane. You and your luggage are the data payload. The car you drive to the airport is an appropriate frame type; your locally connected medium is the streets. When you reach the airport, which serves as a junction gateway between land and air travel, you and your luggage are re-encapsulated from your car into a plane because the plane is the appropriate frame type for traveling in air as opposed to traveling on the roads.

Frame headers provide actual physical addressing information as well. Knowing the logical street address of Certified Tech Trainers is helpful to get to the proper city or neighborhood to meet with CTT. Street addresses can change, however, even if the physical geographic latitude and longitude have not. The data link layer manages a flat network address scheme of Media Access Control (MAC) addresses for physical network interface connections. Whereas IP addressing and routes help the packet get to the near proximity of the intended host, MAC addresses are used by the connected network interfaces to know which frames to receive and process, and which frames are intended for other network interfaces.

OSI Layer 2 (Data Link)
   Purpose: Protocols at this layer provide access to media (network interface cards, for example) using MAC addresses. They can sometimes also provide transmission error detection, but they cannot provide error correction.
   Protocol examples: 802.3/802.2, HDLC, PPP, Frame Relay
   PDU: Frame

Using OSI Layer 1 Within the Bits PDU

We have already processed four of the five steps of data encapsulation. Data has been encapsulated into TCP segments at the OSI transport layer. It was re-encapsulated with IP at the OSI network layer and was then further encapsulated with an Ethernet frame at the OSI data link layer. The last step before transmission is to break the frame into electromagnetic digital signals at the OSI physical layer, which communicates bits over the connected physical medium. These bits are received at the destination host, which can reconstruct the bits into Ethernet frames and decapsulate the frames back to IP packets. The destination host then can decapsulate the IP packets back to TCP segments, which are then decapsulated and put back together to form data. The data can then be processed by OSI application-, presentation-, and session-layer protocols to make the data available to the destination user or user applications.
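
If you want to see this nesting directly, the third-party scapy packet-crafting library can build the same stack layer by layer. This is a sketch assuming scapy is installed (pip install scapy); the destination address and payload reuse the running example and are illustrative only.

   from scapy.all import Ether, IP, TCP

   payload = b"GET / HTTP/1.1\r\nHost: www.certifiedtechtrainers.com\r\n\r\n"

   # Innermost to outermost: data -> TCP segment -> IP packet -> frame
   frame = Ether() / IP(dst="24.227.60.114") / TCP(dport=80) / payload

   frame.show()  # prints each layer's headers, outermost (frame) first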

Table 3.3 describes the seven OSI layers.

Table 3.3. OSI Reference Model and Data Encapsulation

OSI Layer 7 (Application)
   Purpose: This is the networking layer that interfaces with applications and operating systems. This is where the user (through an application or operating system activity) first passes off information to networking services for telecommunication.
   Protocol examples: HTTP, Telnet, SMTP, DNS, SNMP
   PDU: Data

OSI Layer 6 (Presentation)
   Purpose: The data might require special formatting techniques, including (but not limited to) preparation according to special presentation protocols for picture and sound, compression, or even encryption.
   Protocol examples: ASCII, JPG, GIF, MIDI
   PDU: Data

OSI Layer 5 (Session)
   Purpose: Communicating the data might require a special device-to-device bond, or session (for cooperative multi-device communication efforts, for example).
   Protocol examples: NetBIOS, RPC, SQL
   PDU: Data

OSI Layer 4 (Transport)
   Purpose: This layer provides for reliable or unreliable delivery. Transport protocols can provide for transmission error detection and correction services.
   Protocol examples: TCP, UDP
   PDU: Segment

OSI Layer 3 (Network)
   Purpose: Protocols at this layer provide logical addressing and facilitate path determination.
   Protocol examples: IP, IPSec, ICMP, RIP, OSPF
   PDU: Packet

OSI Layer 2 (Data Link)
   Purpose: Protocols at this layer provide access to media (network interface cards, for example) using MAC addresses. They can sometimes also provide transmission error detection, but they cannot provide error correction.
   Protocol examples: 802.3/802.2, HDLC, PPP, Frame Relay
   PDU: Frame

OSI Layer 1 (Physical)
   Purpose: The purpose of hardware at this level is to move bits between devices. Specifications for voltage, wire speed, and cable pin-outs are provided at this layer.
   Protocol examples: Network cabling, wireless transmissions, microwave transmissions
   PDU: Bits

Networking Concepts and Devices

Now that we have a better comprehension of network architecture according to the OSI reference model, we have the foundation necessary for discussing various networking concepts, issues, and devices.

A variety of network types are common to most organizations, and are discussed in the following sections.

Local Area Networks (LANs)

Local Area Networks are private (nonpublic) packet-switched networks contained within a limited area, providing services to a particular organization or group. Services can include file/print sharing, email, and communications. This structure is similar to a gated community or industrial complex, in that the network of roads is designed to be used primarily by internal residents or employees.

In developing the network architecture, the organization must assess cost, speed, flexibility, and reliability. The IT organization should review what physical media will be used for physically transmitting the data, as well as what methods will be available to access the physical network medium. Additionally, the organization must decide on the topology (physical arrangement) and the network components to be used in that topology.

LANs were originally designed to connect users so they could exchange or share data. The devices and software associated with the transmission of data were originally designed to connect devices that were no more than 3,200 feet (1,000m) apart, but these distances can be extended by special devices and software. If the distance between network devices exceeds the recommended length, the signal will attenuate and cause communication problems. Attenuation is the weakening or degradation of signals during transmission. In addition to attenuation, signals can incur electromagnetic interference (EMI), which is caused by electromagnetic waves created by other devices in the same area as the network cabling.

LANs transmit packets to one or more nodes (computing devices) on the network and include the following types of transmissions:

  • Unicast—A sending station transmits single packets to a receiving station.

  • Multicast—A sending station sends a single packet to a specific group of receiving stations.

  • Broadcast—A sending station sends a single packet to all stations on the network (see the sketch following this list).
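
As a minimal sketch of the broadcast case, the Python fragment below sends one UDP datagram to the IPv4 limited-broadcast address, so every station on the local segment receives it. The port and message are arbitrary examples; unicast would target one host's address instead, and multicast a group address.

   import socket

   # One datagram, addressed to every station on the local network.
   s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
   s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
   s.sendto(b"status ping", ("255.255.255.255", 9999))
   s.close()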

Generally, the first step in the development of the network architecture is to define the physical media over which network communications (transmissions) will occur. The physical media specifications are contained at the physical layer within the OSI model (see Table 3.3).

Table 3.4. Physical Layer, OSI Model

Copper: twisted pair (Category 3 and 5)
   Use: Short distances (less than 200 feet); supports voice/data
   Physical standards: Ethernet 10Base-T (10Mbps), Ethernet 100Base-T (100Mbps), 100Base-TX (100Mbps), 100Base-T4 (100Mbps), 1000Base-T (1000Mbps)
   Access standards: IEEE 802.3/802.3u/802.3z (Ethernet/Fast Ethernet/Gigabit Ethernet CSMA/CD); IEEE 802.5 (Token Ring)

Coaxial cable
   Use: Supports voice/data
   Physical standards: 10Base5 (thick coax, 10Mbps), 10Base2 (thin coax, 10Mbps)
   Access standards: IEEE 802.3

Fiber optic
   Use: Long distances; supports voice/data
   Physical standards: 10Base-F (10Mbps), 100Base-FX (100Mbps), 1000Base-LX (1000Mbps), 1000Base-CX (1000Mbps)
   Access standards: IEEE 802.3/802.3ae/802.3z (Ethernet/Fast Ethernet/Gigabit Ethernet CSMA/CD); IEEE 802.5 (Token Ring); FDDI

Wireless
   Use: Short distances; supports voice/data
   Standards: IEEE 802.11 (wireless): 802.11b (2.4GHz, 11Mbps), 802.11a (5GHz, 54Mbps), 802.11g (2.4GHz, 54Mbps)

Physical standards dictate both the speed and the reliability of the network. In networking, this is called the media access technology. The IT organization might determine that because all the users and network devices are contained in one physical location (for example, one building), and a majority of the traffic that will be transmitted on the network is voice and data, it will use Ethernet 100Base-T. Ethernet allows multiple devices to communicate on the same network and usually uses a bus or star topology. In the case of 100Base-T, packets are transmitted at 100Mbps.

Ethernet is known as a contention-based network topology. This means that, in an Ethernet network, all devices contend with each other to use the same media (cable). As a result, frames transmitted by one device can potentially collide with frames transmitted by another. Fortunately, the Ethernet standard dictates how computers deal with communications, transmission controls, collisions, and transmission integrity.

Here are some important definitions to help you understand issues common to Ethernet:

  • Collisions—Result when two or more stations try to transmit at the same time.

  • Collision domain—A group of devices connected to the same physical media so that if two devices access the media at the same time, a collision of the transmissions can occur.

  • Broadcast domain—A group of devices that receive one another’s broadcast messages

How Ethernet Deals with Collisions

As a contention-based topology, Ethernet accepts that collisions will occur and has provided two mechanisms to ensure transmission and data integrity. Carrier Sense Multiple Access with Collision Detection (CSMA/CD) is a method by which devices on the network can detect collisions and retransmit. When a collision is detected, the source station stops sending the original transmission and sends a signal to all stations that a collision has occurred on the network. All stations then execute what is known as a random collision back-off timer, which delays all transmission on the network, allowing the original sending station to retransmit.
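
The back-off behavior can be sketched in a few lines. This toy Python model is a simplification expressed in slot-time units; real 802.3 stations use binary exponential back-off, choosing a random wait from a range that doubles after each successive collision.

   import random

   def backoff_slots(collisions: int) -> int:
       # After the nth collision, wait a random number of slot times
       # in [0, 2**n - 1]; 802.3 caps the exponent at 10.
       k = min(collisions, 10)
       return random.randint(0, 2**k - 1)

   for attempt in range(1, 4):
       print("collision", attempt, "-> wait", backoff_slots(attempt), "slots")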

Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) is a method by which a sending station lets all the stations on the network know that it intends to transmit data. This intent signal lets all other devices know that they should not transmit because there could be a collision, thereby effecting collision avoidance.

As you have learned, collisions are common to an Ethernet network. However, network architecture can be optimized to keep collisions to a minimum. All computers on the same physical network segment are considered to be in the same collision domain because they are competing for the same shared media. High levels of collisions can result from high traffic congestion due to many devices competing on the same network segment. One way to address collision domains and alleviate excessive collisions is to decrease the size of the collision domain (number of competing network devices) by using bridges, switches, or routers (discussed later in the chapter) to segment the network with additional collision domains. As stated earlier, there are unicast, multicast, and broadcast packets that are transmitted on the network. If two networks are separated by a bridge, broadcast traffic, but not collision traffic, is allowed to pass. This reduces the size of the collision domain. Routers are used to segment both collision and broadcast domains by directing traffic and working at Layer 3 of the OSI model.

A separate media-access technology known as token passing can be implemented in place of Ethernet as part of a network architecture. In token passing, a control frame called the token is passed along the network cabling from device to device; all transmissions are made via the token. When a device needs to send network traffic, it waits for the token to arrive, which grants it the right to communicate. The token then carries the data, including the routing information (receiving station[s]), and continues from computer to computer. Each computer on the network checks the token’s routing information to see whether it is the destination station. When the destination station receives its data, it sets a bit in the token to let the sending station know that it received the data. Token-passing methods are used by both Token Ring and FDDI networks. Token-passing networks do not have collisions because only one station at a time can transmit data.

Network Topologies

We have discussed media-access technologies, but you might be asking how these are actually implemented in a network architecture. The connectivity of the network cabling and devices is known as the topology. Network topologies fall into the following categories: bus, star, or ring.

Bus Topology

The bus topology is primarily used in smaller networks where all devices are connected to a single communication line and all transmissions are received by all devices. This topology requires the minimum amount of cable to connect devices. A repeater can be used to “boost” the signal and extend the bus configuration. A standard bus configuration can encounter performance problems if there is heavy network traffic or a large number of collisions. In addition, each connection to the bus network weakens the electrical signal on the cable. Cable breaks can cause the entire network to stop functioning. Figure 3.2 depicts a bus topology.

Figure 3.2. Bus topology.

Star Topology

In a star topology, each device (node) is linked to a hub or switch, which provides the link between communicating stations. This topology is commonly used in networks today because it provides the capability to add new devices and easily remove old ones. In designing the network, the IT organization needs to ensure that the hubs/switches used will provide enough throughput (speed) for the devices communicating on the network. In contrast to a bus topology, a star topology enables devices to communicate even if another device is not working or is no longer connected to the network. Generally, star networks are more costly because they use significantly more cable and hubs/switches. If the IT organization has not planned correctly, a single failure of a hub/switch can render all connected stations incapable of communicating with the network. To overcome this risk, IT organizations should create a complete or partial mesh configuration, which creates redundant interconnections between network nodes. Figure 3.3 depicts a star topology.

Figure 3.3. Star topology.

Ring Topology

A ring configuration is generally used in a Token Ring or FDDI network, where all stations are connected to form a closed loop. The stations do not connect to a central device and depend on one another for communications. A ring topology generally provides high performance, but a single device or station can stop the network devices from communicating. In a Token Ring network, a failure is signaled when one or more stations start beaconing. In beaconing, a message is passed to the devices on the network that a device is failing. This allows the remaining devices to automatically reconfigure the network and continue communicating. Figure 3.4 shows a ring topology.

Figure 3.4. Ring topology.

As an IS auditor, you will look at the network topology and protocols to determine whether they meet the needs of the organization. The IT organization should monitor performance on the LAN to ensure that it is segmented properly (reducing collision/broadcast domains) and that the bandwidth (10/100/1000Mbps) is sufficient for access to network services. The network should be designed in such a manner that device failures do not bring down the network or cause long delays in network communication (redundancy and disaster recovery). IT management should have a configuration-management plan and procedures in place to establish how the network will function both internally and externally. This includes performance monitoring.

Wide Area Networks (WANs)

WANs provide connectivity for LANs that are geographically dispersed by providing network connectivity and services across large distances. WANs are similar to the series of interstates (highways) that can be accessed within individual states and are used to cross state boundaries.

The devices and protocols used in WAN communications most commonly work at the physical (Layer 1), data link (Layer 2), and network (Layer 3) layers of the OSI reference model. Communication on WAN links can be simplex (one-way), half-duplex (one way at a time), or full-duplex (separate circuits for communicating both ways at the same time). WAN circuits are usually network communication lines that an organization leases from a telecommunications provider; they can be switched or dedicated circuits. As an example, WAN connectivity could involve an organization with a headquarters in one state (say, Virginia) and smaller satellite offices in other states. The headquarters office could have all servers and their associated services in Virginia (email, file sharing, and so on). One way to enable communication with the satellite offices would be to install a WAN circuit. As with LANs, WAN circuits utilize standard protocols to transmit messages. WANs can use message switching, packet switching, or circuit switching, or can utilize WAN dial-up services through Integrated Services Digital Network (ISDN) or the Public Switched Telephone Network (PSTN).

Virtual private networking (VPN) enables remote users, business partners, and home users to access the organization’s network securely using encrypted packets sent via virtual connections. Encryption involves transforming data into a form that is unreadable by anyone without a secret decryption key. It ensures confidentiality by keeping the information hidden from anyone for whom it was not intended. Organizations use VPNs to allow external business partners to access an extranet or intranet. The advantage of VPNs is that they can use low-cost public networks (Internet) to transmit and receive encrypted data. VPNs rely on tunneling or encapsulation techniques that allow the Internet Protocol (IP) to carry a variety of different protocols (IPX, SNA, and so on).

Metropolitan Area Networks (MANs)

This type of network is larger than a LAN but smaller than a WAN. A MAN can be used to connect users to services within the same city or locality. MANs are similar to the surface roads used to travel from your residence or community to services such as department stores, grocery stores, and your place of business.

Networks exist to facilitate access to application services. The following are some of the common services available within an organization’s networking environment:

  • File sharing—This allows users to share information and resources among one another. File sharing can be facilitated by shared directories or groupware/collaboration applications.

  • Email services—Email provides the capability for a user population to send unstructured messages to individuals or groups of individuals via a terminal or PC.

  • Print services—Print services enable users to access printers either directly or through print servers (which manage the formatting and scheduling) to execute print requests from terminals or PCs connected to the network.

  • Remote-access services—These services provide access from a remote user location to a computing device, emulating a direct connection to the device. Examples include Telnet and remote access through a VPN.

  • Terminal-emulation software (TES)—TES provides remote access capabilities with a user interface as if the user were sitting at the console of the device being accessed. As an example, Microsoft Terminal Services connects to the remote device and displays the desktop of the remote device as if the user were sitting at the console.

  • Directory services—A directory stores information about users, devices, and services available on the network. A directory service enables users to locate information about individuals (such as contact information) or devices/services that are available within the organization.

  • Network management—Network management provides a set of services that control and maintain the network. It generally provides complete information about devices and services with regard to their status. Network-management tools enable you to determine device performance, errors associated with processing, active connections to devices, and so on. These tools are used to ensure network reliability and provide detailed information that enables operators and administrators to take corrective action.

Because of the complex nature of networking and the variety of standards both in use and constantly evolving, implementation and maintenance pose a significant challenge. Managers, engineers, and administrators are tasked with developing and maintaining integrated, efficient, reliable, scalable, and secure networks to meet the needs of the organization. Some basic critical success factors apply to these activities:

  • Interoperability—A large number of devices, system types, and standards usually support network communication. All the components must work together efficiently and effectively.

  • Availability—Organizations need continuous, reliable, and secure access to network devices and services.

  • Flexibility—To facilitate scalability, the network architecture must accommodate network expansion for new applications and services.

The TCP/IP Protocol Suite

The Transmission Control Protocol/Internet Protocol Suite (TCP/IP) has become the de facto standard for the Internet, and most organizations use it for network communications. TCP/IP includes both network-communication and application-support protocols. As stated earlier, the TCP/IP protocol suite was developed and in use before the ISO/OSI model was developed and, as such, does not match directly with the layers of the OSI model.

The TCP/IP protocol suite includes the following protocols:

  • Remote Terminal Control Protocol (Telnet)—This terminal-emulation protocol enables users to log into remote systems and use resources as if they were connected locally.

  • File Transfer Protocol (FTP)—FTP enables users and systems to transfer files from one computer to another on the Internet. FTP allows for user and anonymous login, based on configuration. FTP can be used to transfer a variety of file types and does not provide secure communication (encryption) during login or file transfer.

  • Simple Mail Transfer Protocol (SMTP)—This protocol provides standard electronic mail (email) transfer services.

  • Domain Name Service (DNS)—This protocol resolves hostnames to IP addresses and IP addresses to hostnames. For example, www.lmisol.com would resolve to IP address 66.33.202.245. DNS servers form a hierarchical distributed database system that is queried for resolution. The service enables users to remember names instead of having to remember IP addresses.

  • Network File System (NFS)—This protocol allows a computer to access files over a network as if they were on its local disk.

  • Transmission Control Protocol (TCP)—This transport-layer protocol establishes a reliable, full-duplex data-delivery service that many TCP/IP applications use. TCP is a connection-oriented protocol, which means that it guarantees the delivery of data and that the packets will be delivered in the same order as they were sent.

  • User Datagram Protocol (UDP)—This transport-layer protocol provides connectionless delivery of data on the network. UDP does not provide error-recovery services and is primarily used for broadcasting data on the network.

  • Internet Protocol (IP)—This protocol specifies the format of packets (datagrams) that will be transported on the network. IP defines only the format of packets, so it is generally combined with a transport protocol such as TCP to effect delivery.

  • Internet Control Message Protocol (ICMP)—This protocol is an extension of the Internet Protocol (IP). It supports packets that contain error, control, and informational messages. The ping command, used to test network connectivity, uses the ICMP protocol.

  • Address Resolution Protocol (ARP)—This network-layer protocol is used to convert an IP address (logical address) into a physical address (DLC or MAC address). When a host on the network wants to obtain a physical address, it broadcasts an ARP request. The host on the network that has the IP address replies with its physical address.

  • X.25—This is a data communications interface specification developed to describe how data passes into and out of packet-switched networks. The X.25 protocol suite defines protocol Layers 1–3.

Firewalls

A firewall is a device (hardware/software) that restricts access between networks. These networks might be a combination of an internal and external network (organization’s LAN and the Internet) or might be within internal networks (accounting network and the sales network). A firewall is implemented to support the organizational security policy, in that specific restrictions or rules are configured within the firewall to restrict access to services and ports. If configured correctly, the firewall is the gateway through which all traffic will flow. The network traffic (or packets) then is monitored as it comes into the firewall and compared against a set of rules (filters). If the traffic does not meet the requirements of the access control policy, it is not allowed access and might be discarded or redirected.

Firewalls started out as perimeter security devices that protected the organization’s internal networks from external networks (such as the Internet), similar to the way a moat was used to protect a castle. Often you will hear it said of this type of network security that “the network is hard and crunchy on the outside (perimeter firewall), and soft and chewy on the inside (organization’s internal network).” Perimeter security is an important component of a comprehensive security infrastructure, but it is not the complete answer. Perimeter security assumes that a vast majority of the threats are external to the organization, which is not always the case.

It is important to keep in mind that the firewall can be considered a “choke point” on the network because all traffic must be checked against the rules before gaining access. As a result, the rules that are created for the network must take into account performance as well as security. Firewalls can filter traffic based on a variety of the parameters within the packet:

  • Source and destination addresses—The firewall can look at the source or destination address in the packet (or both).

  • Source and destination ports—The firewall can look at the source or destination port identifier of the service or application being accessed.

  • Protocol types—The firewall might not let certain protocol types access the network.

The level of granularity and types of rules that can be implemented vary among vendors. As an auditor, you will find that a wide variety of parameters can be configured, based on vendor implementation. A number of risk indicators are associated with firewalls:

  • The organization does not employ firewalls.

  • The firewall is poorly configured or misconfigured (affecting performance/security).

  • No audit or testing processes/procedures exist for monitoring firewall security.

  • The organization relies too much on perimeter firewall security.

  • Not all network traffic passes through the firewall (rogue modems, network connectivity, and so on).

Packet-Filtering Firewalls

The first generation of firewalls is known as packet-filtering firewalls, or circuit-level gateways. This type of firewall uses an access control list (ACL) applied at OSI Layer 3. An ACL is a set of text-based rules that the firewall applies against incoming packets. A simple access control list could stipulate that all packets coming from a particular network (source address) 192.168.0.0 must be denied and discarded. In this instance, the firewall might have a text-based rule DENY ALL 192.168.0.0. Another type of rule might state that all packets trying to access a particular port, such as a web page request (port 80), be routed to a particular server, in this case, 172.168.1.1. In this instance, the firewall might have a rule that looks like PERMIT FORWARD ALL TCP Port 80 172.168.1.1.

Packet-filtering firewalls can compare the header information in packets only against their rules. As a result, they provide relatively low security compared to other options. The creation of rules in packet filtering involves both permit (or allow) and deny (or block) statements. Permit statements allow packets to be forwarded; deny statements discard the packet. Access lists are sequential: Statements are processed from the top of the list down until a statement condition that matches a packet is found. When the statement is found, no further statements are processed. As an IS auditor, you should review the access lists for completeness and correctness. This example shows both a correct and an incorrect access list:

Access list A (correct):

   access-list 1 permit host 192.168.32.1
   access-list 1 permit host 192.168.32.2

Access list B (incorrect):

   access-list 1 deny 192.168.32.0 0.0.0.255
   access-list 1 permit 192.168.32.1
   access-list 1 permit 192.168.32.2
   access-list 1 deny 192.168.40.0 0.0.255.255

In this scenario, we want to permit two IP addresses access to the internal network while denying the remainder of the subnet. In access list A, we allow both 192.168.32.1 and 192.168.32.2 to access the network. Routers and firewalls that can be configured to filter based on IP source or destination addresses deny traffic by default and will not allow traffic unless it has been explicitly permitted. This default characteristic is referred to as the “implicit deny” statement at the end of every access control list. The list is read in sequence from top to bottom, and because of the implicit deny statement at the end of the access list, any IP addresses that do not meet the criteria of the rules will be denied. In access list B, we are denying the entire subnet of 192.168.32.0, which includes 192.168.32.1 and 192.168.32.2. Because the first statement in access list B matches hosts 192.168.32.1 and 192.168.32.2, the later permit statements meant for these hosts would never be processed, and the packets from these source hosts would be discarded. Granular statements must precede global statements. The last rule in access list B is redundant with the first rule in the access list. Because the permit statements in access list B can never be reached, no traffic from any source will be permitted, due to the implicit deny statement at the end of every access list.
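
To make the first-match logic concrete, here is a small Python sketch of sequential ACL evaluation with an implicit deny, assuming the rule set from access list A. The evaluate helper is hypothetical, written only to mirror the top-down processing described above.

   import ipaddress

   # Rules are evaluated top-down; the first matching rule wins.
   rules = [
       ("permit", ipaddress.ip_network("192.168.32.1/32")),
       ("permit", ipaddress.ip_network("192.168.32.2/32")),
   ]

   def evaluate(source_ip: str) -> str:
       addr = ipaddress.ip_address(source_ip)
       for action, network in rules:
           if addr in network:
               return action
       return "deny"                     # implicit deny: nothing matched

   print(evaluate("192.168.32.1"))       # permit
   print(evaluate("192.168.32.99"))      # deny (implicit)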

Stateful Packet-Inspection Firewalls

Stateful packet-inspection firewalls are considered the third generation of firewall gateways. They provide additional features, in that they keep track of all packets through all 7 OSI layers until that communication session is closed. The first-generation packet-filtering firewalls receive a packet and match against their rules; the packet is forwarded/discarded and forgotten.

Remember from the discussion of the OSI model that a single communication (such as sending an email) can be broken down into several packets and forwarded to the receiving station. A stateful firewall is a bit more sophisticated because it tracks communications (or sessions) from both internal and external sources. A first-generation packet-filtering firewall can be set up to deny all packets from a particular network (as in the previous example), but a stateful firewall with the same rules might allow packets from that denied network if the request came from the internal network.
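
A toy model of that session tracking might look like the following Python sketch. The state table keyed on address/port pairs is a deliberate simplification (real stateful firewalls also track protocol state, sequence numbers, and timeouts), and all names and addresses here are hypothetical.

   # Record outbound sessions; admit inbound packets only if they
   # match a session the internal network initiated.
   state_table = set()

   def outbound(src, sport, dst, dport, proto="tcp"):
       state_table.add((dst, dport, src, sport, proto))  # expect a reply

   def inbound_allowed(src, sport, dst, dport, proto="tcp"):
       return (src, sport, dst, dport, proto) in state_table

   outbound("10.0.0.5", 51000, "24.227.60.114", 80)
   print(inbound_allowed("24.227.60.114", 80, "10.0.0.5", 51000))  # True
   print(inbound_allowed("198.51.100.9", 80, "10.0.0.5", 51000))   # False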

Proxy Firewalls

Proxy firewalls, or application-layer gateways, are used as the “middlemen” in network communications. The difference between a proxy-based firewall and packet filtering is that all packets passing to the network are delivered through the proxy, which is acting on behalf of the receiving computer. The communication is checked for access authorization according to a rulebase, and then passed to the receiving system or discarded. In essence, a proxy impersonates the internal (receiving) system to review packets before forwarding. Any communication that comes from the receiving computer is passed back to the proxy before it is forwarded externally. The actual process that takes place is that the proxy receives each packet, reviews it, and then changes the source address to protect the identity of the receiving computer before forwarding.

Proxies are application-level gateways. They differ from packet filtering in that they can look at all the information in the packet (not just the header), all the way up to the application layer.
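
The middleman role can be sketched as a minimal TCP relay in Python. This is illustrative only: the listening port and upstream server address are placeholder examples, a real proxy would check the request against its rulebase before forwarding, and the single send/receive exchange omits the looping and error handling production code needs.

   import socket

   # The proxy accepts the client's connection and opens its own
   # connection to the real server, so the server sees the proxy's
   # source address rather than the client's.
   listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
   listener.bind(("0.0.0.0", 8080))
   listener.listen(1)

   client, _ = listener.accept()
   upstream = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
   upstream.connect(("192.0.2.10", 80))   # the protected server (example)

   request = client.recv(65535)           # receive and review the request
   upstream.sendall(request)              # forward on the client's behalf
   client.sendall(upstream.recv(65535))   # relay the reply back
   client.close(); upstream.close(); listener.close()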

The firewall architecture for the organization depends on the type of protection the organization needs. The architecture might be designed to protect internal networks from external networks, or it might be used to segment different internal departments; it might include packet filtering, stateful packet inspection, proxy/application gateways, or a combination of these.

In general, there are three basic types of firewall configurations:

  • Bastion host—A basic firewall architecture in which all internal and external communications must pass through the bastion host. The bastion host is exposed to the external network. Therefore, it must be locked down, removing any unnecessary applications or services. A bastion host can use packet filtering, proxy, or a combination; it is not a specific type of hardware, software, or device. Figure 3.5 shows a basic bastion host configuration.

    Figure 3.5. Bastion host configuration.

  • Screened host—A screened host configuration generally consists of a screening router (border router) configured with access control lists. The router employs packet filtering to screen packets, which are then typically passed to the bastion host, and then on to the internal network. The screened host (the bastion host in this example) is the only device that receives traffic from the border router. This configuration provides an additional layer of protection for the screened host. Figure 3.6 shows a screened host configuration.

    Figure 3.6. Screened host configuration.

  • Screened subnet—A screened subnet is similar to a screened host, with two key differences: the subnet generally contains multiple devices, and the bastion host is sandwiched between two routers (the exterior router and the interior router). In this configuration, the exterior router provides packet filtering and passes the traffic to the bastion. After the traffic is processed, the bastion passes the traffic to the interior router for additional filtering. The screened subnet, sometimes called a DMZ, provides a buffer zone between the internal and external networks. This configuration is used when an external population needs access to services (web, FTP, email) that can be allowed through the exterior router, but the interior router will not allow those requests to the internal network. Figure 3.7 shows a screened subnet configuration.

    Figure 3.7. Screened subnet configuration.

Firewall architecture is quite varied. The organization might decide on hardware- or software-based firewalls to provide network protection. In the case of software-based firewalls, it is important to remember that they will be installed on top of commercial operating systems, which may have their own vulnerabilities. This type of implementation requires the IT organization to ensure that the operating system is properly locked down and that there is a process in place to ensure continued installation of security patches. Any unnecessary services or applications, as well as unneeded protocols, must be removed or disabled from the operating system.

Because the objective of a firewall is to protect a trusted network from an untrusted network, any organization that uses external communications must implement some level of firewall technology. The firewall architecture should take into consideration the functions and level of security the organization requires. Firewalls are potential bottlenecks because they are responsible for inspecting all incoming and outgoing traffic. Firewalls that are configured at the perimeter of the network provide only limited protection, if any, from internal attacks; misconfigured firewall rules could allow unwanted and potentially dangerous traffic on the network.

Routers

Routers are used to direct, or route, traffic on the network and work at the network layer (Layer 3) of the OSI model. Routers link two or more physically separate network segments. Although the segments are linked via the router, they can function as independent networks. As in the discussion on firewalls, routers look at the headers in networking packets to determine source addresses (logical addresses). Routers can be used as packet-filtering firewalls by comparing the header information in packets against their rules. As stated earlier, the creation of rules in packet filtering involves both permit (or allow) and deny (or block) statements.

In determining the network design, the IT organization must consider where to place routers and how to leverage the speed and efficiencies of switches (discussed later in this chapter), where possible. Generally, the higher the OSI layer at which a device works, the more intelligent the decision making it performs. Routers can be standalone devices or software running within or on top of an operating system. They use routing protocols to communicate the available routes on the network. The routing protocols (RIP, BGP, and OSPF) relay information about routers that have gone down on the network, congested routes, or routes that are more economical than others. The information passed between routers via the routing protocols consists of route updates, which are stored in a routing table. As packets enter the router, their destination addresses are compared to the routing table, and each packet is forwarded on the most economical route available at the time. As stated earlier in the discussion on firewalls, the fact that routers can look at header information in the packet enables the router to perform filtering via access lists, which can restrict traffic between networks. The criteria within access control lists can be IP addresses (source and destination), specific ports (such as TCP port 80 for HTTP), or protocols (UDP, TCP, and IP).
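
Route selection can be sketched with the standard longest-prefix-match rule: among all routes whose prefix covers the destination, the most specific one wins. The Python fragment below uses the standard library's ipaddress module; the routing-table entries and next-hop names are hypothetical.

   import ipaddress

   routing_table = [
       (ipaddress.ip_network("0.0.0.0/0"),    "gateway-isp"),     # default
       (ipaddress.ip_network("10.0.0.0/8"),   "router-core"),
       (ipaddress.ip_network("10.20.0.0/16"), "router-branch"),
   ]

   def next_hop(destination: str) -> str:
       addr = ipaddress.ip_address(destination)
       matches = [(net, hop) for net, hop in routing_table if addr in net]
       # The most specific (longest-prefix) matching route wins.
       return max(matches, key=lambda m: m[0].prefixlen)[1]

   print(next_hop("10.20.5.9"))        # router-branch (most specific)
   print(next_hop("24.227.60.114"))    # gateway-isp (default route)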

Routers are not as fast as hubs or switches at simply forwarding frames because they need to look at the OSI Layer 3 header information in all packets to determine the correct route to the destination address. This creates the possibility of bottlenecks on the network.

Modems

Modem is short for modulator-demodulator. A modem is a device that converts data from digital format to analog format for transmission. Computer information is stored digitally and, when transmitted via the phone line, needs to be converted to analog waves to enable communication. Generally, modems are used for remote access to networks and devices. As a part of the IT infrastructure, modems can be used to access servers or routers to enable routine maintenance or troubleshooting. Users of the organization also can use modems for remote access to data and applications through dial-in virtual private networks (VPNs) or to provide terminal services (access to console functions).

In reviewing the IT infrastructure, the IS auditor might find that modems fall outside the security procedures and, in fact, might bypass existing security controls. Modems are susceptible to “war dialing,” in which malicious hackers set software to dial a series of telephone numbers, looking for the carrier tone provided by a modem on connection. This technique might allow hackers to enter the network by bypassing existing security controls.

Bridges

A bridge works at the data link layer (Layer 2) of the OSI model and connects two separate networks to form one logical network (for example, joining an Ethernet network and a Token Ring network). Bridges can store and forward frames. A bridge examines the media access control (MAC) header of each frame to determine where to forward it; bridges are transparent to end users. A MAC address is the physical address of the device on the network (it resides on the network interface card [NIC] of the device). As frames pass through it, the bridge determines whether the destination MAC address resides on its local network; if not, the bridge forwards the frame to the appropriate network segment. Bridges can reduce collisions that result from segment congestion, but they do forward broadcast frames. Bridges are good network devices if used for the right purpose.

Hubs and Switches

A hub operates at the physical layer (Layer 1) of the OSI model and can serve as the center of a star topology. Hubs can be considered concentrators because they concentrate all network communications for the devices attached to them. A hub contains several ports to which clients are directly connected. The hub connects to the network backbone and can be active (repeating the signals sent through it) or passive (splitting signals without repeating them).

A switch combines the functionality of a multi-port bridge with the signal amplification of a repeater. Like a bridge, a switch works at the data link layer (Layer 2), forwarding frames only to the port associated with the destination MAC address; this segments the network into smaller collision domains.

Internet, Intranet, and Extranet

The Internet is accessed using the TCP/IP protocol suite. The Hypertext Transfer Protocol (HTTP) is an application-level protocol that is used to transfer the collection of online documents known as the World Wide Web (WWW). This application service combines the use of client software (a browser) that can request information from web servers. Web-based applications can deliver static and dynamic content in the form of text, graphics, sound, and data; they can be used as an interface to corporate applications. Delivery of information via the Web can include the use of either client-side (applets) or server-side (servlets or common gateway interface [CGI] scripts) applications. The ease of implementation and use of the Web enables a variety of content-rich applications.

An intranet uses the same basic principles of the Internet but is designed for internal users. Intranets are web based and can contain internal calendaring, web email, and information designed specifically for the authorized users of the intranet. Extranets are web based but serve a combination of users. Extranets are commonly used as a place for partners (organization and external partners) to exchange information. A simple example of an extranet is one in which a supplier provides access to its partners to place orders, view inventory, and place support requests. The extranet usually sits outside the corporate border router and might be a screened host or might be maintained on a screened subnet.

With the wide use of web technologies, it is important to develop policies and procedures regarding proper use of the Internet, the intranet, and extranets. The ease of access via the Web opens the door to the organization’s network and could allow the download of virus-laden or malicious software. Users in the organization need to be aware of the risks associated with downloading potentially dangerous applets, servlets, and programs. The IT organization should monitor Internet access to ensure that corporate assets (bandwidth, servers, and workstations) are being used in a productive manner.

Risks and Controls Related to Network Infrastructure

The network infrastructure incorporates all of the organization’s data, applications, and communications. The IS auditor must assess the risks associated with the infrastructure and the controls in place to mitigate the risk. It is important to keep in mind that the threats associated with the network are both internal and external; they can include risks from misuse, malicious attack, or natural disaster. The IT organizational controls should mitigate business risk, which is the potential harm or loss in achieving business objectives. The risk-management strategy and risk assessment methodology should address all threats and vulnerabilities and their effect on network assets.

The IT organization should have standards in place for the design and operation of the network architecture. The auditor should identify the following:

  • LAN topology and network design

  • Documented network components’ functions and locations (servers, routers, modems, and so on)

  • Network topology that includes interconnections and connections to other public networks

  • Network uses, including protocols, traffic types, and applications

  • Documentation of all groups with access to the network (internal and external)

  • Functions performed by administrators of the network (LAN, security, DBA)

Review of this information enables the IS auditor to make informed decisions with regard to threats to the network and the controls used to mitigate the threats.

Administrative, physical, and technical controls should protect the network and its associated components. The physical controls should protect network hardware and software, permitting physical access to only those individuals who are authorized. When entering restricted areas, individuals with access to sensitive areas should be careful that they do not allow an unauthorized user to follow them in. Known as “piggybacking,” this occurs when an unauthorized user follows an authorized user into a restricted area. All hardware devices, software, and manuals should be located in a secure location. Network closets that contain cabling, routers, hubs, and switches should have restricted access. Servers and network components should be locked in racks or should be secured in such a manner that the devices and their components cannot be removed. The network operating manuals and documentation should be secured. All electrical equipment should be protected against the effects of static electricity or power surges (static mats/straps, surge protectors). Network equipment should be equipped with uninterruptible power supplies (UPS), in case of power failure, and facilities should be free of dust, dirt, and water.

The IT organization should have logical controls in place to restrict, identify, and report authorized and unauthorized users of the network. All users should be required to have unique passwords and to change them periodically. Access to applications and data on the network should be based on written authorization and should be granted according to job function (need to know). All login attempts (authorized and unauthorized) should be logged, and the logs should be reviewed regularly. All devices and applications within the network infrastructure should be documented, and the documentation should be updated when changes to hardware, software, configurations, or policies are implemented.

The controls that are implemented on the network should ensure the confidentiality, integrity, and availability (CIA) of the network architecture. Confidentiality is the capability to ensure that the necessary level of secrecy is enforced throughout each junction of data processing, to prevent unauthorized disclosure. Integrity ensures accuracy and reliability of data and prevents unauthorized (intentional or unintentional) modification of the data. Availability ensures reliable and timely access to data and network resources for authorized individuals.

As an IS auditor, you will gain a perspective on the risks associated with the network by reviewing the network documentation and logs, but direct observation is a more reliable way to determine whether the controls protect the organization. The IS auditor should observe physical security controls and monitor IT resources during daily activities. These results can then be compared against the existing documentation collected, to determine adherence.

Evaluating IS Operational Practices

As stated in Chapter 2, “Management, Planning, and Organization of IS,” the COBIT resources provide a framework for organizations, IT management, and IS auditors to realize best practices to reach business objectives. IS auditors should review the IT organization to ensure the use of formal risk management, project management, and change management associated with the implementation of IT infrastructures.

The COBIT framework provides 11 processes in the management and deployment of IT systems:

  1. Develop a strategic plan

  2. Articulate the information architecture

  3. Find an optimal fit between the IT and the organization’s strategy

  4. Design the IT function to match the organization’s needs

  5. Maximize the return on the IT investment

  6. Communicate IT policies to the user community

  7. Manage the IT workforce

  8. Comply with external regulations, laws, and contracts

  9. Conduct IT risk assessments

  10. Maintain a high-quality systems-development process

  11. Incorporate sound project-management techniques

Risks and Controls Related to IS Operational Practices

An IT organization should develop and maintain strategic planning processes (both long and short term) that enable the organization to meet its goals and objectives. The IT organization’s policies, procedures, standards, and guidelines should be a detailed reflection of the strategic plan. The IT organization should have a clearly defined structure that outlines authority and responsibility, documented in an organizational chart. Network devices, applications, and data should be maintained, and proper segregation of incompatible duties should be implemented, keeping in mind that segregation between computer operators and security administrators, for example, might not be possible in smaller environments. The use of compensating controls, such as audit trails, might be acceptable to mitigate the risk from improper segregation of duties. The auditor should review information pertaining to the organization structure, to ensure adequate segregation of duties.

The IS auditor should review policies and procedures to verify that they support the achievement of organizational objectives. In addition, the IS auditor should review the risk-management process to ensure that the organization is taking steps to reduce risk to an acceptable level (mitigation) and is maintaining that level of risk. The organization’s business plan should establish an understanding of the organization’s mission and objectives, and should be incorporated into the IT strategic plan. Organizational charts should establish the responsibility and authority of individuals, and job descriptions should define the responsibility of and accountability for employee actions. The policies and procedures should incorporate strategic objectives into operational activities.

Evaluating the Use of System Performance and Monitoring Processes, Tools, and Techniques

To ensure continued availability of both software and hardware, the IT department should implement monitoring processes. These processes should include performance, capacity, and network monitoring. The IT organization should have a performance-monitoring plan that defines service levels of hardware and software. The metrics associated with service levels generally include service availability (uptime), support levels, throughput, and responsiveness. The organization should compare stated service levels against problem logs, processing schedules, job accounting system reports, and preventive maintenance reports, to ensure that hardware availability and utilization meet the stated service levels. As an example, throughput should measure the amount of work that the system performs over a period of time. In looking at an online transaction system, the number of transactions per second/minute can be used as a throughput index.
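
As a simple worked example with hypothetical numbers, a throughput index and its comparison against an assumed service-level target could be computed like this:

   # Hypothetical throughput index: transactions completed per second
   # over a measurement window, compared against an assumed SLA target.
   transactions_completed = 18450
   window_seconds = 5 * 60            # five-minute sample
   sla_target_tps = 50.0

   tps = transactions_completed / window_seconds   # 61.5 TPS
   print("throughput: %.1f TPS (%s SLA)"
         % (tps, "meets" if tps >= sla_target_tps else "misses"))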

Note

The IS auditor might need to review specific reports associated with availability and response. This list identifies log types and characteristics:

  • System logs identify the activities performed on a system and can be analyzed to determine whether a user or program gained unauthorized access to data (see the sketch after this list).

  • The review of abnormal job-termination reports should identify application jobs that terminated before successful completion.

  • Operator problem reports are used by operators to log computer operations problems and their solutions.

  • Operator work schedules are maintained by IS management to assist in human resource planning.

  • Capacity-monitoring software tracks usage patterns and trends, enabling management to properly allocate resources and ensure continuously efficient operations.

  • Network-monitoring devices capture and inspect network traffic data. Their logs can be used to examine the activities of known or unknown users and to find evidence of unauthorized access.

  • System downtime reports provide information regarding the effectiveness and adequacy of computer preventive maintenance programs and can be very helpful to an IS auditor in determining the efficacy of a systems-maintenance program.
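
To make the first bullet concrete, here is a minimal Python sketch of the kind of analysis that might be run against extracted system logs to surface possible unauthorized access. The log format, field names, and review threshold are all assumptions invented for the illustration.

import re
from collections import Counter

# Hypothetical system-log lines; the format is invented for illustration.
log_lines = [
    "2024-05-01 09:14:02 LOGIN FAILURE user=jdoe src=10.1.4.22",
    "2024-05-01 09:14:09 LOGIN FAILURE user=jdoe src=10.1.4.22",
    "2024-05-01 09:14:15 LOGIN FAILURE user=jdoe src=10.1.4.22",
    "2024-05-01 10:02:41 LOGIN SUCCESS user=asmith src=10.1.4.30",
]

FAILURE = re.compile(r"LOGIN FAILURE user=(?P<user>\S+) src=(?P<src>\S+)")

# Count failed logins per (user, source) pair; repeated failures may
# indicate attempted unauthorized access worth investigating.
failures = Counter(
    (m["user"], m["src"])
    for line in log_lines
    if (m := FAILURE.search(line))
)

THRESHOLD = 3  # review threshold; set per organizational policy
for (user, src), count in failures.items():
    if count >= THRESHOLD:
        print(f"Review: {count} failed logins for {user} from {src}")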

Exam Prep Questions

1.

The offline print spooling feature of print servers should be carefully monitored to ensure that unauthorized viewing of sensitive information is prevented. Which of the following issues is an IS auditor MOST concerned with?

A.

Some users have the technical authority to print documents from the print spooler even though the users are not authorized with the appropriate classification to view the data they can print.

B.

Some users have the technical authority to modify the print spooler file even though the users do not have the subject classification authority to modify data within the file.

C.

Some users have the technical authority to delete the print job from the spooler even though the users do not have the authority to modify the data output of the print job.

D.

Some users have the technical authority to pause the print jobs of certain information even though they do not have the subject classification authority to create, modify, or view the data output of the print job.

A1:

Answer: A. The question focuses on the confidentiality aspect of access control. A user with technical printer administration authority can print jobs from the print spooler, regardless of the user’s authorization to view the print output. All other answers are potential compromises of information integrity or availability.

2.

When reviewing a firewall configuration, which of the following should an IS auditor consider the greatest vulnerability?

A.

The firewall software has been configured with rules permitting or denying access to systems or networks based upon source and destination networks or systems, protocols, and user authentication.

B.

The firewall software is configured with an implicit deny rule as the last rule in the rule base.

C.

The firewall software is installed on a common operating system that is configured with default settings.

D.

The firewall software is configured as a VPN endpoint for site-to-site VPN connections.

A2:

Answer: C. When auditing any critical application, an IS auditor is always concerned about software or an operating system that is installed with default settings. Default settings are often published and provide an intruder with predictable configuration information, which allows easier system compromise. Installing firewall software on a general-purpose operating system left in its default configuration poses a greater risk of firewall compromise. To mitigate this risk, firewall software is often installed on a hardened, special-purpose operating system that provides only the services necessary to support the firewall software. An example is the IPSO operating system on Nokia routing/firewall appliances, which provides the functionality necessary to support the installation of Check Point firewall software but little else. The remaining answers describe normal firewall configurations and are not of concern to the IS auditor.
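
The implicit deny rule mentioned in option B is worth seeing in miniature. The following Python sketch models a first-match rule base whose final catch-all rule denies everything not explicitly permitted. The networks, protocols, and rule layout are invented for illustration and do not reflect any particular firewall product’s syntax.

from ipaddress import ip_address, ip_network

# A toy first-match rule base: (source network, destination network,
# protocol, action). Values are illustrative only.
RULES = [
    (ip_network("10.0.0.0/8"), ip_network("192.168.5.0/24"), "tcp", "permit"),
    (ip_network("10.0.0.0/8"), ip_network("192.168.9.0/24"), "udp", "deny"),
    # Implicit deny: the final catch-all rule rejects anything that no
    # earlier rule explicitly permitted.
    (ip_network("0.0.0.0/0"), ip_network("0.0.0.0/0"), "any", "deny"),
]

def evaluate(src: str, dst: str, proto: str) -> str:
    """Return the action of the first rule that matches the packet."""
    for src_net, dst_net, rule_proto, action in RULES:
        if (ip_address(src) in src_net
                and ip_address(dst) in dst_net
                and rule_proto in ("any", proto)):
            return action
    return "deny"  # defensive default; the catch-all already covers this

print(evaluate("10.1.2.3", "192.168.5.10", "tcp"))    # permit
print(evaluate("172.16.0.1", "192.168.5.10", "tcp"))  # deny (implicit deny)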

3.

An IS auditor strives to ensure that IT is effectively used to support organizational goals and objectives regarding information confidentiality, integrity, and availability. Which of the following processes best supports this mandate?

A.

Network monitoring

B.

Systems monitoring

C.

Staffing monitoring

D.

Capacity planning and management

A3:

Answer: D. Computer resources should be carefully monitored to match utilization needs with proper resource capacity levels. Capacity planning and management relies upon network, systems, and staffing monitoring to ensure that organizational goals and objectives regarding information confidentiality, integrity, and availability are met.
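
As a concrete illustration of capacity planning, the following Python sketch fits a simple linear trend to monthly storage-utilization samples and projects when a planning threshold would be reached. The utilization figures and the threshold are invented for the example.

# Monthly storage-utilization samples (%); figures are illustrative only.
samples = [52.0, 55.5, 58.0, 61.5, 64.0, 67.5]
capacity_pct = 90.0  # planning threshold

n = len(samples)
xs = range(n)
mean_x = sum(xs) / n
mean_y = sum(samples) / n

# Ordinary least-squares slope: utilization growth per month.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples)) / \
        sum((x - mean_x) ** 2 for x in xs)

months_left = (capacity_pct - samples[-1]) / slope
print(f"Growth: {slope:.2f}% per month; "
      f"about {months_left:.1f} months until {capacity_pct}% utilization")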

4.

Which of the following would be the first evidence to review when performing a network audit?

A.

Network topology chart

B.

Systems inventory

C.

Applications inventory

D.

Database architecture

A4:

Answer: A. Reviewing a diagram of the network topology is often the best first step when auditing IT systems. This diagram provides the auditor with a foundation-level understanding of how systems, applications, and databases interoperate. Obtaining the systems and applications inventory would be a logical next step. Reviewing the database architecture is much more granular and can be performed only after adequately understanding the basics of how an organization’s systems and networks are set up.

5.

An IS auditor needs to check for proper software licensing and license management. Which of the following management audits would consider software licensing?

A.

Facilities

B.

Operations

C.

Configuration

D.

Hardware

A5:

Answer: C. A configuration-management audit should always verify software licensing for authorized use. The remaining answers do not focus on software licensing.

6.

“Dangling tuples” within a database represent a breach in which of the following?

A.

Attribute integrity

B.

Referential integrity

C.

Relational integrity

D.

Interface integrity

A6:

Answer: B. Database referential integrity must be enforced to avoid orphaned references, or “dangling tuples”: rows whose foreign keys reference primary keys that do not exist. Relational integrity, by contrast, is enforced at the record level. The remaining answers are misleading.
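
To illustrate what enforcement looks like in practice, here is a minimal Python/SQLite sketch (the table and column names are invented) in which the database engine rejects a row whose foreign key references a nonexistent primary key, preventing exactly the dangling tuple the question describes.

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite disables FK checks by default

con.execute("CREATE TABLE department (dept_id INTEGER PRIMARY KEY, name TEXT)")
con.execute("""
    CREATE TABLE employee (
        emp_id  INTEGER PRIMARY KEY,
        name    TEXT,
        dept_id INTEGER REFERENCES department(dept_id)
    )
""")

con.execute("INSERT INTO department VALUES (1, 'Audit')")
con.execute("INSERT INTO employee VALUES (100, 'Jane', 1)")  # valid reference

try:
    # A dangling tuple: employee 101 references department 99, which does
    # not exist. With foreign-key enforcement on, the insert is rejected.
    con.execute("INSERT INTO employee VALUES (101, 'John', 99)")
except sqlite3.IntegrityError as exc:
    print(f"Rejected dangling tuple: {exc}")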

7.

Which of the following BEST supports communication availability, acting as a countermeasure to the vulnerability of component failure?

A.

Careful network monitoring with a dynamic real-time alerting system

B.

Integrated corrective network controls

C.

Simple component redundancy

D.

High network throughput rate

A7:

Answer: C. Providing network path redundancy is the best countermeasure or control for potential network device failures. Careful monitoring only supports a timely response to component failure. “Integrated corrective network controls” is a misleading option that loosely describes simple component redundancy. A high network throughput rate improves performance but does not address component failure.

8.

Which of the following firewall types provides the most thorough inspection and control of network traffic?

A.

Packet-filtering firewall or stateful inspection firewall

B.

Application-layer gateway or stateful inspection firewall

C.

Application-layer gateway or circuit-level gateway

D.

Packet-filtering firewall or circuit-level gateway

A8:

Answer: B. An application-layer gateway (proxy firewall) and a stateful inspection firewall provide the greatest degree of protection and control because both technologies can inspect traffic up through the application layer. A packet-filtering firewall reliably inspects traffic only through OSI Layer 3, and a circuit-level gateway validates sessions without examining application content.
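
The distinguishing feature of stateful inspection (a table of established connections against which inbound traffic is matched) can be sketched in a few lines of Python. The addresses and ports are invented for illustration; real firewalls also track protocol state, timeouts, and far more detail.

# Toy stateful inspection: inbound packets are permitted only when they
# belong to a connection already initiated from the inside.
state_table: set[tuple[str, str, int, int]] = set()

def record_outbound(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> None:
    """Record an inside-initiated connection in the state table."""
    state_table.add((src_ip, dst_ip, src_port, dst_port))

def inbound_allowed(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> bool:
    """Permit an inbound packet only if it is a reply to a known flow."""
    # A reply swaps source and destination relative to the recorded flow.
    return (dst_ip, src_ip, dst_port, src_port) in state_table

record_outbound("10.0.0.5", "203.0.113.7", 51000, 443)          # inside host opens HTTPS
print(inbound_allowed("203.0.113.7", "10.0.0.5", 443, 51000))   # True: reply packet
print(inbound_allowed("198.51.100.9", "10.0.0.5", 443, 51000))  # False: unsolicited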

9.

Reducing collisions caused by network congestion is important for supporting network communications availability. Which of the following devices is best suited for logically segmenting the network into collision domains based upon OSI Layer 2 MAC addressing?

A.

Router

B.

Hub

C.

Repeater

D.

Switch

A9:

Answer: D. A switch is most appropriate for segmenting the network into multiple collision domains, reducing congestion-related collisions and the resulting communication errors. As OSI Layer 1 devices, repeaters and hubs cannot understand the MAC addressing necessary to logically segment collision domains. As an OSI Layer 3 device, a router segments the network according to logical network addressing.
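
To make the Layer 2 behavior concrete, the following Python sketch models a learning switch’s MAC address (forwarding) table; the addresses and port numbers are invented. Because a frame for a known destination is forwarded out of exactly one port, each switch port effectively becomes its own collision domain, which is precisely what repeaters and hubs cannot achieve.

# A toy model of a learning switch. Values are illustrative only.
class LearningSwitch:
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.mac_table: dict[str, int] = {}

    def receive(self, src_mac: str, dst_mac: str, in_port: int) -> list[int]:
        """Learn the source MAC, then return the port(s) to forward on."""
        self.mac_table[src_mac] = in_port  # learn: src_mac lives on in_port
        if dst_mac in self.mac_table:
            # Known destination: forward out of a single port only.
            return [self.mac_table[dst_mac]]
        # Unknown destination: flood to all ports except the ingress port.
        return [p for p in range(self.num_ports) if p != in_port]

sw = LearningSwitch(num_ports=4)
print(sw.receive("aa:aa", "bb:bb", in_port=0))  # unknown dst: flood [1, 2, 3]
print(sw.receive("bb:bb", "aa:aa", in_port=2))  # learned aa:aa: [0]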

10.

Which of the following network configurations BEST supports availability?

A.

Mesh with host forwarding enabled

B.

Ring

C.

Star

D.

Bus

A10:

Answer: A. Although it is not very practical because of physical implementation constraints (a full mesh of n nodes requires n(n − 1)/2 links), a fully connected mesh with host forwarding enabled provides the most redundancy of network communication paths.
