Chapter 4

Networking Concepts and Trends

To successfully begin and navigate a career in networking, you need to understand a great deal about how various aspects of computing and networking technologies work. You don't need to understand everything in this chapter on day one, but the more you do know about the concepts behind the technologies, the easier it will be to learn more and do more as you increase your expertise and advance in networking.

Access Control

At its heart, access control is all about who (or what) is allowed to access something. In business, users are required to identify themselves — and prove their identity — before they can use workstations and applications. This section explains the concepts behind access control.

Basic concepts in access control

The basics of access control can be divided into two major categories:

  • Authentication: the technology that facilitates access to systems, data, and workspaces
  • Business processes used to manage access

Common terms in access control

access review

accumulation of privileges

authentication

biometrics

FTP, FTPS, and SFTP

hashing

key logger

multifactor authentication

password

password quality

password recovery

phishing

rainbow table

replay attack

salting

session hijacking

single-factor authentication

social engineering

telnet

token

user ID

watering hole attack

Authentication

Authentication is the process of asserting one's identity — including required proof such as a password, token, or biometric — to a system to access its resources. The identity takes the form of a user ID, which is a value assigned to a person or machine.

Single-factor authentication generally involves the presentation of a user ID and a password. This common form of authentication is more vulnerable to attack by adversaries due to its simplicity. The phrase “what you know” is associated with single-factor authentication because in this simplest form of authentication, the user has identified herself by presenting her user ID. The user then authenticates by stating something that she knows which is tied to her user ID (such as the current password).

A password is a secret word, phrase, or random characters used as a part of single-factor authentication. The quality of the password is an important factor that helps resist some forms of attack. Characteristics of password quality include length (how many characters), complexity (whether the password must contain lowercase letters, uppercase letters, numerals, and special characters), expiration (how much time may elapse before a user is required to select a new password), recovery (the process to follow when users forget their password), and reuse (whether previously used passwords may be used again).
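
To make the password-quality idea concrete, here's a minimal sketch (in Python) of how a system might check a candidate password against a simple policy. The minimum length and the specific checks are illustrative assumptions; real organizations set their own thresholds.

  import re

  MIN_LENGTH = 12   # illustrative policy value, not a universal standard

  def meets_policy(password):
      checks = [
          len(password) >= MIN_LENGTH,            # length
          re.search(r"[a-z]", password),          # lowercase letter
          re.search(r"[A-Z]", password),          # uppercase letter
          re.search(r"[0-9]", password),          # numeral
          re.search(r"[^A-Za-z0-9]", password),   # special character
      ]
      return all(checks)

  print(meets_policy("sunshine"))           # False: too short, no uppercase, numeral, or special character
  print(meets_policy("Tr0ub4dor&Cycle!"))   # True: satisfies every check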

What you know, have, and are

The concepts in single-factor and multifactor authentication are sometimes difficult to understand. Three phrases are often used to simplify these concepts:

  • What you know: User ID and password authentication. The user ID and password are something that a user would know.
  • What you have: Token or smart card authentication. The user must have the physical object (and use it properly) to log in.
  • What you are: Biometric authentication. This refers to some physical aspect of a user, such as a fingerprint, retina scan, or even voiceprint.

Passwords are typically stored in hashed form. Hashing is an irreversible cryptographic function that creates a large number representing the password without exposing the password. The hash value then facilitates the confirmation of a correct password during the login process but prevents the extraction of passwords. Hashing is explained in more detail in the “Cryptography” section, later in the chapter.

Multifactor authentication generally involves the presentation of a user ID, a password, and a token or biometric. This type of authentication is generally stronger than single-factor authentication. A token is a hardware device (or sometimes a smartphone application) used in multifactor authentication, and it represents a far stronger form of authentication than a single factor. Multifactor authentication can also use some form of biometric, such as a fingerprint, a palm scan, an iris scan, or a voiceprint. The phrase “what you are” is associated with biometric authentication because you're using a part of your body to authenticate your presented identification.
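
As an illustration of "what you have," here's a minimal Python sketch of how a time-based one-time password (TOTP) token or smartphone app might compute its six-digit codes, following the widely used RFC 6238 approach. The Base32 secret shown is a common documentation example, not a real credential.

  import base64, hashlib, hmac, struct, time

  def totp(secret_b32, interval=30, digits=6):
      key = base64.b32decode(secret_b32)
      counter = int(time.time()) // interval               # current 30-second time step
      msg = struct.pack(">Q", counter)                      # counter as 8 big-endian bytes
      digest = hmac.new(key, msg, hashlib.sha1).digest()
      offset = digest[-1] & 0x0F                            # dynamic truncation
      code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
      return str(code % (10 ** digits)).zfill(digits)

  print(totp("JBSWY3DPEHPK3PXP"))   # a different six-digit code every 30 seconds

Because the server computes the same code from the same shared secret, a stolen password alone is not enough to log in.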

Access control processes

Getting access control technology right is a challenge, but it's not the biggest concern. The business processes supporting access controls are critical. If not implemented and managed correctly, the best access control technology is of little use — similar to owning a car with the best burglar alarm and then parking the car unlocked with the keys in the ignition switch.

Key processes in access control are collectively known as identity access management (IAM) and include the following:

  • Access provisioning: The process of provisioning access for a user should follow a strict, documented process. Every request should be properly approved by one person or group and performed by a different person or group. Records for all steps must be retained.
  • Internal transfers: Access management personnel need to be notified when an employee is transferred to another position to prevent an accumulation of privileges.
  • Employee termination: Access management personnel need to be notified immediately when an employee leaves the organization, especially if the person is being terminated. All user accounts should be locked or removed and then double-checked.
  • Managing access controls for contractors, temps, and others: All personnel with access to organization systems and applications should be managed using the same set of processes. When an organization does a substandard job of managing temporary workers, user accounts may exist for people who no longer work in the organization.
  • Password recovery: Organizations need a solid process for users who forget their passwords. Otherwise, attackers may be able to use this process to take over an employee's user account.
  • Periodic access reviews: Every aspect of access management must be periodically reviewed to ensure that each instance of access provisioning, termination, and transfers is performed correctly. Ineffective access control processes can result in active user accounts with excessive privileges and user accounts associated with terminated personnel.

Access control attacks and countermeasures

Adversaries who are attempting to access resources in a target system frequently attack access controls. Methods of attack include the following:

  • Replay attack: An attacker intercepts an authentication, typically over a network, and replays the captured login credentials to try to gain unauthorized access to the target system. A replay attack can be successful even when some forms of token authentication are used, provided the attacker replays the captured login credentials soon after capturing them.
  • Stealing password hashes: The attacker obtains the database of hashed passwords from a system. If the hashing method is weak, the attacker may be able to employ rainbow tables or other techniques to obtain account passwords. A rainbow table is a simple but very large precomputed lookup table containing password hashes and their corresponding passwords. The technique known as salting (mixing a random value into each password before it is hashed) renders rainbow tables ineffective (see the sketch after this list).
  • Interception of passwords in transit: An attacker may be able to intercept login credentials if they are transmitted “in the clear” (unencrypted) over a network. Older but still-used protocols such as Telnet and FTP (File Transfer Protocol) employ the transmission of login credentials without encryption. This threat is eliminated if you discontinue Telnet and FTP in favor of ssh (Secure SHell), FTPS (File Transfer Protocol Secure), and SFTP (Secure File Transfer Protocol).
  • Session hijacking: An attacker attempts to steal session cookies from a user's web session; if successful, the attacker will be able to hijack the user's session. The attacker may then be able to perform all functions that the user could perform. Session hijacking can be prevented with proper session management, including full session encryption and encryption of session cookies.
  • Key logger: An adversary may be able to use one of several methods to get key logger malware installed on a user's system. If successful, the key logger will be able to intercept typed login credentials and transmit them to the adversary, who can use them later to access those same systems. Multifactor authentication and advanced malware protection (AMP) tools can help thwart key loggers.
  • Social engineering: Adversaries have a number of techniques available to trick users into providing their login credentials. Techniques include
    • Phishing: The attacker sends an email that attempts to trick the user into clicking a link that takes the user to a phishing site, which is an imposter site used to request login credentials. If the user provides those credentials, the attacker can use them to access the real site.
    • Watering hole attack: An attacker selects a website that he or she believes is frequented by targeted users. The attacker attacks the website and plants malware on the site that can, if successful, install a key logger or other malware on the victim's workstation.
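
Here's a minimal Python sketch of salted password hashing using the standard library's PBKDF2 function. Because every account gets its own random salt, a precomputed rainbow table is useless against the stored hashes; the iteration count is an illustrative assumption.

  import hashlib, hmac, os

  ITERATIONS = 200_000   # illustrative; tune to your policy and hardware

  def hash_password(password):
      salt = os.urandom(16)   # a fresh random salt for each account
      digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
      return salt, digest      # store both alongside the user account

  def verify_password(password, salt, stored_digest):
      candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
      return hmac.compare_digest(candidate, stored_digest)   # constant-time comparison

  salt, digest = hash_password("Tr0ub4dor&3")
  print(verify_password("Tr0ub4dor&3", salt, digest))   # True
  print(verify_password("wrong-guess", salt, digest))   # False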

Emerging issues in access control

Issues that keep networking professionals up at night include these:

  • Key logging malware
  • Stolen password hashes
  • Users who select poor (easily guessed) passwords
  • Users who reuse personal passwords on business sites

Telecommunications and Network Security

Networks are the lifeblood of every organization that utilizes computing in support of business processes. There's a lot to know about internal network technologies such as TCP/IP, Ethernet, and Wi-Fi. It's important that you also understand telecommunications technologies, including synchronous optical networks (SONET), multiprotocol label switching (MPLS), and wireless technologies such as WiMAX and LTE.

Understanding the security aspects of networking is vital as well. It's not just the security manager's job to protect networks — it's the networking professional's job as well.

Basic concepts in telecommunications and network security

You need to understand the important concept of encapsulation because it is used throughout almost all network technologies. In encapsulation, messages of one protocol are placed inside messages of another protocol. For example, SMTP (Simple Mail Transfer Protocol) messages are placed in TCP (Transmission Control Protocol) segments, which are placed in IP (Internet Protocol) packets, which are placed in DS-1 frames, which are placed in OC-48 frames.

Here's an analogy. You write a message on a sheet of paper (SMTP message), place it in an envelope (TCP segment), and drop the envelope in a mailbox, where a mail truck (switch) picks it up and delivers it to a distribution center (router). There, the envelope (TCP segment) is placed in a bin (IP packet), which is driven to another distribution center (router). There, the bin (IP packet) is placed in a larger bin (DS-1 frame) and driven to an airport (DACS), where the larger bin (DS-1 frame) is loaded onto an airplane (OC-48 frame) that flies through the air (optical fiber). At the other end of the flight, the process is reversed, and the recipient receives the note on the sheet of paper.
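
If you prefer code to mail trucks, here's a toy Python sketch of the same idea: each layer simply prepends its own header to whatever the layer above handed it. The header contents are placeholders, not real protocol fields.

  def encapsulate(payload, header):
      # each layer wraps the layer above it by prepending its own header
      return header + payload

  smtp_message = b"Subject: hello\r\n\r\nSee you at noon."
  tcp_segment = encapsulate(smtp_message, b"[TCP header]")
  ip_packet = encapsulate(tcp_segment, b"[IP header]")
  frame = encapsulate(ip_packet, b"[DS-1 frame header]")
  print(frame)   # the original message, nested inside three layers of headers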

Network technologies

A plethora of network technologies exist; we discuss the important ones in this section.

Common terms in telecommunications and network security

ATM

CAT-6 cable

denial of service

DLP

DMZ

DOCSIS

DS-1

E-1

encapsulation

firewall

frame relay

IP address

IPS

ISDN

MAC address

MPLS

packet header

payload

POTS

PSTN

QoS

router

routing table

SONET

T-1

TCP/IP

VoIP

VPN

watering hole attack

WEP

WiMAX

WPA

WPA2

Wired telecom network technologies

Wired telecom networks connect homes, businesses, schools, and governments through technologies that use copper or fiber optic cabling to carry many types of signals. These signals include the following:

  • DS-1 (Digital Signal 1), T-1, E-1: DS-1 is a family of multiplexed telecommunications technologies that have carried voice and data for decades in the United States and Europe. In the United States, T-1, which runs at 1.544Mbps, is the basic protocol. It's often multiplexed into twenty-four 64Kbps voice channels for use by ordinary phone and fax lines, often known as POTS (plain old telephone service). In Europe, E-1 is the basic protocol, at 2.048Mbps, or 32 channels. Speeds higher than DS-1 are available, such as DS-3 (44.736Mbps), DS-4 (274.176Mbps), and DS-5 (400.352Mbps).
  • SONET (Synchronous Optical Networking): This high-speed telecommunications backbone technology runs over fiber optic cables on land and in submarine cables. SONET runs at dizzying speeds, including OC-1 (48.960Mbps), OC-3 (150.336Mbps), OC-12 (601.344Mbps), OC-96 (4,810.752Mbps), and OC-192 (9,621.504Mbps).
  • DSL (Digital Subscriber Line): This family of protocols is delivered to homes and businesses over the same pairs of copper wires as telephone service but at a higher frequency.
  • DOCSIS (Data Over Cable Service Interface Specification): This family of technologies transports TCP/IP over cable television service to homes and businesses.
  • MPLS (MultiProtocol Label Switching): This packet-switched technology transports a variety of protocols such as TCP/IP, Ethernet, ATM, and VoIP over long distances. MPLS includes Quality of Service (QoS) settings to ensure that protocols such as voice and streaming video are transported without annoying interruption even when networks are congested.
  • Dark fiber: This option is not a technology but a telecommunications medium available to businesses. Businesses can connect their own telecom equipment to fiber optic cabling to connect their networks between buildings in a city or metropolitan area, using any protocol they want.

Older technologies you don't need to be too concerned with anymore (unless you're a technology history buff) include ISDN, ATM, frame relay, X.25, and PSTN.

Wireless telecom network technologies

Wireless telecom networks connect individuals, homes, and businesses through the use of several technologies, including the following:

  • GPRS (General Packet Radio Service): This technology is encapsulated in the GSM (Global System for Mobile communications, originally Groupe Spécial Mobile) cellular protocol.
  • LTE (Long Term Evolution): This wireless telecom standard provides voice and data service with speeds up to 300Mbps.
  • WiMAX (Worldwide Interoperability for Microwave Access): A wireless telecom standard that provides data rates up to 40Mbps for mobile stations and 1Gbps for fixed stations, WiMAX was developed as a wireless alternative to DSL and DOCSIS.

Other notable technologies include CDPD, CDMA, and packet radio.

Wired consumer and business network technologies

Although many standards have been used for wired network technologies, the long-term trend has been a general migration to TCP/IP on Ethernet over copper cabling or fiber optic cabling. The dominant technologies follow:

  • CAT-6 (Category 6) cabling: The darling of homes and businesses running wired networks over distances of up to 100 meters, CAT-6 cabling can run Ethernet at speeds up to 10Gbps (although 10Gbps works only over shorter runs; 1Gbps is the norm at the full 100 meters).
  • Fiber optics: Businesses often run fiber optics in larger buildings to connect networks from floor to floor, as well as from building to building. Fiber-optic cabling is made of glass and transmits signals as pulses of light, as opposed to CAT-6 and other metallic cabling, which transmit data as electrical signals.

Has-been cabling standards used in the past include Thinnet, Thicknet, CAT-3, and CAT-5.

Wireless consumer and business network technologies

Wireless network technologies are wildly popular in a number of typical settings. The technologies in use are

  • Wi-Fi: The IEEE 802.11 family of wireless protocols is widely used in small and large residences, businesses, retail stores, restaurants, government buildings, and even outdoors. A variety of security protocols are used on Wi-Fi, ranging from none (no encryption) to WEP (Wired Equivalent Privacy, considered obsolete), WPA (Wi-Fi Protected Access, which is just okay), and WPA2 (preferred by anyone thinking about security). The range of Wi-Fi is about 20 meters indoors and farther outdoors.
  • Bluetooth: This popular wireless protocol connects devices in close proximity. Wireless headsets and earpieces were the first popular use of Bluetooth.
  • NFC (Near Field Communications): This very short-range (6 cm) wireless protocol was developed for use in contactless payment systems.

Runners-up include IrDA (the infrared point-to-point technology, which has all but disappeared) and wireless USB (up and coming, and possibly a force in the future).

Software-defined networking (SDN)

Software-defined networking is an emerging technology that is simplifying network architectures and helping to reduce costs. Instead of purchasing a lot of separate specialized network devices such as load balancers, web application firewalls, intrusion prevention systems, routers, and switches, organizations can purchase a few OpenFlow-enabled switches and a centralized controller, and then build a network fabric that exists virtually in the switches, performing all those functions in harmony instead of separately.

As of this writing, 90 percent of IT workers (including a lot of senior-level people) don't understand SDN, so you should consider this a ground-floor opportunity. If you're fortunate enough to get some SDN training or work alongside someone who is doing SDN, stay with it as long as you can.

TCP/IP

Developed in the 1970s as a robust military communications network that had some self-healing properties and resilience, TCP/IP has formed the basis for virtually every home, business, and commercial network, as well as the global Internet itself.

TCP/IP is a packet-based technology in which messages are bundled into packets that are routed to their destinations. A single packet has a source address, a destination address, a protocol number, and payload (the contents of a message).

TCP/IP addressing

The source address and destination address follow a numbering convention, with a global authority that assigns addresses to organizations. In TCP/IP version 4, the form of an address is

xxx.xxx.xxx.xxx

where each xxx (octet) is commonly portrayed as a decimal value and can range from 0 through 255. In TCP/IP version 6, the form of an address is

xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx

where each xxxx (hextet) is a hexadecimal value that can range from 0000 through FFFF.
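
Python's standard ipaddress module is a handy way to see the two formats side by side; the addresses below are documentation examples.

  import ipaddress

  v4 = ipaddress.ip_address("192.168.1.10")   # IPv4: four octets, each 0 through 255
  v6 = ipaddress.ip_address("2001:db8::1")    # IPv6: eight hextets, 0000 through FFFF

  print(v4.version)    # 4
  print(v6.version)    # 6
  print(v6.exploded)   # 2001:0db8:0000:0000:0000:0000:0000:0001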

TCP/IP routing

In TCP/IP, routers process packets as they make their way from their source to their destination. Think of a router as a traffic cop in a street intersection who tells you which way to turn. A router examines a packet's destination address, consults its routing table, and then sends the packet in the correct direction to get it closer to its ultimate destination.

Routers exchange routing information so that each router has a better idea about which direction to send each packet. They exchange this information through several routing protocols, such as RIP, BGP, IGRP, OSPF, or IS-IS. These routing protocols each have best uses; some are used by Internet backbone routers, and others are better suited for routers inside a company. Some become obsolete as newer and better ones are developed.
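
The core of what a router does (find the most specific matching route and forward the packet accordingly) fits in a few lines of Python. The routing table below is a made-up example; real routers implement this lookup in highly optimized hardware and software.

  import ipaddress

  # a hypothetical routing table: (destination prefix, next hop)
  ROUTES = [
      (ipaddress.ip_network("0.0.0.0/0"), "isp-gateway"),     # default route
      (ipaddress.ip_network("10.0.0.0/8"), "core-router"),
      (ipaddress.ip_network("10.1.2.0/24"), "branch-router"),
  ]

  def next_hop(destination):
      addr = ipaddress.ip_address(destination)
      matches = [(net, hop) for net, hop in ROUTES if addr in net]
      net, hop = max(matches, key=lambda m: m[0].prefixlen)   # longest-prefix match wins
      return hop

  print(next_hop("10.1.2.33"))   # branch-router (most specific route)
  print(next_hop("8.8.8.8"))     # isp-gateway (falls through to the default route)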

TCP/IP protocols

TCP/IP has two basic protocols, on top of which nearly all the others are used through encapsulation (the nesting process explained earlier). These two basic protocols are

  • UDP: Formally known as User Datagram Protocol, UDP is a simple connectionless protocol typically used to send a message in a single packet.
  • TCP (Transmission Control Protocol): This connection-oriented protocol is usually intended for a longer conversation between systems. A TCP session is established by something called a three-way handshake, which works something like this:

  1. The client sends a SYN packet to the server to request a connection.
  2. The server replies with a SYN-ACK packet to acknowledge the request.
  3. The client responds with an ACK packet, and the connection is established.
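
You rarely see the handshake yourself; the operating system performs it the moment an application asks for a connection. Here's a minimal Python sketch that triggers a three-way handshake and sends a simple request (the host name is just an example).

  import socket

  # connect() causes the operating system to perform the SYN, SYN-ACK, ACK exchange
  with socket.create_connection(("www.example.com", 80), timeout=5) as sock:
      sock.sendall(b"HEAD / HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n")
      print(sock.recv(200).decode(errors="replace"))   # the first part of the server's reply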

Hundreds of established protocols run on top of TCP and UDP; a few are well known and frequently used, such as

  • HTTP (Hypertext Transfer Protocol): Using port 80, transports user web traffic without encryption.
  • HTTPS (Hypertext Transfer Protocol Secure): Using port 443, transports web traffic with encryption.
  • SMTP (Simple Mail Transfer Protocol): Using port 25, transports email without encryption.
  • DNS (Domain Name System) protocol: Using port 53 (on both TCP and UDP), translates domain names such as www.dummies.com into IP addresses (see the name-resolution sketch after this list).
  • FTP (File Transfer Protocol): Using ports 20 and 21, enables bulk file transfers without encryption.
  • SSH (Secure SHell) protocol: Using port 22, provides administrative access to systems and network devices.
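
As a small example, the name resolution that DNS performs is a few lines of Python away; the host name and port below are examples only.

  import socket

  # ask the resolver (which speaks DNS on port 53 behind the scenes) for a host's addresses
  records = socket.getaddrinfo("www.dummies.com", 443, 0, socket.SOCK_STREAM)
  for family, socktype, proto, canonname, sockaddr in records:
      print(sockaddr[0])   # one or more IPv4 or IPv6 addresses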

To understand concepts such as encapsulation, you should become familiar with the OSI Network Model.

Network security

Several network-centric security devices protect systems, networks, and information from intruders. We describe the more common types in this section.

Firewalls

Firewalls are inline devices placed between networks to control the traffic that is allowed to pass between those networks. Typically, an organization places a firewall at its Internet boundary to prevent intruders from easily accessing the organization's internal networks.

A firewall uses a table of rules to determine whether a packet should be permitted. The rules are typically based on the packet's source address, destination address, protocol, and port number. Traditional firewalls do not examine the contents of a message.
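
Conceptually, a firewall's rule table is just an ordered list checked from top to bottom. Here's a minimal Python sketch with two made-up rules (allow HTTPS to one web server, deny everything else) to show how first-match evaluation works.

  import ipaddress

  # each rule: (source network, destination network, protocol/port, action)
  RULES = [
      (ipaddress.ip_network("0.0.0.0/0"), ipaddress.ip_network("203.0.113.10/32"), "tcp/443", "allow"),
      (ipaddress.ip_network("0.0.0.0/0"), ipaddress.ip_network("0.0.0.0/0"), "any", "deny"),
  ]

  def evaluate(src, dst, proto):
      src_addr, dst_addr = ipaddress.ip_address(src), ipaddress.ip_address(dst)
      for src_net, dst_net, rule_proto, action in RULES:
          if src_addr in src_net and dst_addr in dst_net and rule_proto in (proto, "any"):
              return action   # the first matching rule wins
      return "deny"           # default deny if nothing matches

  print(evaluate("198.51.100.7", "203.0.113.10", "tcp/443"))   # allow
  print(evaluate("198.51.100.7", "203.0.113.99", "tcp/22"))    # deny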

Firewalls are also used to create a demilitarized zone (DMZ), a network segment that sits between the Internet and the internal network, in which organizations place Internet-facing systems such as web servers. This strategy helps protect the web server from the Internet and protects the organization in case an adversary compromises and takes control of the web server. Figure 4-1 depicts a typical DMZ network.

Intrusion prevention system (IPS)

An intrusion prevention system (IPS) is an inline device that examines incoming and outgoing network traffic, looking for signs of intrusions and blocking such traffic when it detects an intrusion.

Unlike a firewall, an IPS examines the contents of network packets, not just their source and destination addresses. This approach is based on the principle that malicious traffic may be characterized by its contents, not merely its origin or destination.

Figure 4-1: Typical DMZ network architecture.

Like a firewall, an IPS is typically placed at or near Internet entrance and exit points, so that all Internet incoming and outgoing traffic, respectively, can be examined and any malicious traffic blocked.

Data loss prevention (DLP) system

A data loss prevention (DLP) system primarily examines outgoing traffic, looking for evidence of sensitive data being sent out of the organization inappropriately. A DLP system is configured to look for specific patterns in outgoing data and to send alerts or just block traffic meeting certain criteria.

Web-filtering system

A web-filtering system examines the websites that an organization's personnel are visiting. The web-filtering system logs all web access by personnel and can also be configured to block access to various categories of websites (for example, pornography, gambling, weapons, or social media sites) as well as specific sites. The purpose of web-filtering systems is generally twofold: to prevent access to sites that are clearly not work related and to protect the organization by blocking sites that may be hosting malware.

Virtual private network

A virtual private network (VPN) is a technique used to encapsulate network traffic flowing between two systems, between a system and a network, or between two networks. Typically, encryption is employed along with encapsulation so that the contents of the traffic cannot be read by anyone who intercepts the traffic.

VPNs are most commonly used for remote access, as well as to protect information flowing over the Internet between specific organizations.

Attacks and countermeasures

Intruders are incredibly efficient at finding ways to break into an organization's networks. They do so to steal valuable data that they can easily monetize. The techniques used and the defensive countermeasures include the following:

  • Phishing: Adversaries compose realistic-looking emails to trick users into clicking links to phishing sites, which are malicious sites that will attempt to install malware on victims' workstations or steal login credentials. Countermeasures include spam filters, antimalware, intrusion prevention systems, and security awareness training.
  • Watering hole attack: Adversaries find websites that they think an organization they're targeting might visit. They take over those websites and install malicious software that visitors will unknowingly install, leading to an intrusion. Countermeasures include web-filtering systems, antimalware, and intrusion prevention systems.
  • Denial of service attack: Adversaries attack a target system to incapacitate it, either with a high-volume flood of data or with specially crafted malicious traffic. Countermeasures include firewalls, intrusion prevention systems, and cloud-based denial-of-service defense services.

Emerging issues in telecommunications and network security

New developments that keep networking professionals on edge include the following:

  • The Internet of Things (IoT): Many new devices are being connected to the Internet, and we all know that many of them do not have well-implemented security. The Internet of Things in the business environment includes accessing and controlling manufacturing equipment, tracking the performance and location of company trucks, tracking the use of utilities such as water and lighting, and monitoring medical devices. Too often, the race is to implement the solution first and worry about security later.
  • BYOD (bring your own device): Network interoperability and the proliferation of powerful consumer devices such as iPhone and Android mean that millions of workers are using their personal devices at work as work tools. Personally owned devices in a business network result in a wide range of issues, including malware control and data control.
  • IPv6: The shortage of available IP addresses and other issues are compelling organizations to migrate to IPv6. Although IPv6 is more secure by design, few network and security professionals are as familiar with IPv6 as they are with IPv4. Implementing IPv6 can lead to security holes through improper configuration.

Cryptography

Cryptography is the art and science of hiding data in plain sight, and plays a key role in protecting data from onlookers and adversaries. In this section, you examine this mysterious craft and discover how it's used to protect sensitive data.

Cryptography terms

block cipher

certificate authority (CA)

ciphertext

cryptanalysis

cryptosystem

decryption key

digital certificate

digital signature

encryption

encryption algorithm

encryption key

hashing

key management

key length

message digest

nonrepudiation

plaintext

pseudorandom number generator

public key cryptography

steganography

stream cipher

watermarking

Although cryptography is often used as part of a complex system, it's often easier to think of cryptography in isolation, in the simple-use case of a message sent in plain sight from a sender to a receiver.

Basic concepts in cryptography

Encryption is the process of transforming plaintext into ciphertext through an encryption algorithm and an encryption key. Decryption is the process of transforming ciphertext back into plaintext, again with an encryption algorithm and the encryption key. In part, the strength of encryption is based on the key length (the number of characters that make up the encryption key) and the complexity of the encryption key.

An implementation of encryption and encryption keys is known as a cryptosystem. An attack on a cryptosystem is called cryptanalysis.

Most encryption algorithms employ a pseudorandom-number generator (PRNG), which is a technique for deriving a random number for use during encryption and decryption.

Types of encryption

The two basic ways to encrypt data are by block cipher and by stream cipher. Details follow:

  • Block cipher: A block cipher encrypts and decrypts data in batches, or blocks. Block ciphers are prevalent on computers and on the Internet, where they encrypt hard drives and thumb drives, and protect data in transit with SSL and TLS. Notable block ciphers are
    • Advanced Encryption Standard (AES): Selected in 2001 by NIST (National Institute of Standards and Technology) to replace DES, AES is based on the Rijndael cipher and is in wide use today.
    • Data Encryption Standard (DES): The leading official encryption standard in use from 1977 through the early 2000s. DES is now considered obsolete, mostly because of its short key length.
    • Triple DES (3DES): Derived from DES, 3DES is essentially DES with a longer key length and, hence, more resistant to compromise than DES.
    • Blowfish: Developed in 1993 as an alternative to DES, which was then nearly twenty years old, Blowfish is unpatented and in the public domain.
    • Serpent: Another public domain algorithm, Serpent was a finalist in the AES selection process.
  • Stream cipher: A stream cipher encrypts a continuous stream of information such as a video feed or an audio conversation. The most common stream cipher is RC4.

Block ciphers are most often used to encrypt Internet-based streaming services. On the Internet, everything is transmitted in packets, which are individually encrypted using block ciphers.

Hashing, digital signatures, and digital certificates

Hashing is used to create a short fixed-length message digest from a file or block of data; this is something like a fingerprint: Message digests are unique and difficult to forge. Hashing is often used to verify the integrity of a file, or the originator of a file, or both. Common hashing algorithms include the following:

  • MD-5 is a formerly popular hashing algorithm developed in 1992. It is now considered too weak for reliable use and obsolete.
  • SHA-1 is another popular hashing algorithm that was determined in 2005 to be too weak for continued use. By 2010, U.S. government agencies were required to replace SHA-1 with SHA-2.
  • SHA-2 is a family of hashing algorithms: SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224, and SHA-512/256. These are all considered reliable for ongoing use.
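
Here's a minimal Python sketch that computes the SHA-256 message digest of a file. Comparing the result against a digest published by the file's originator is a common way to verify that a download wasn't altered; the file name is a placeholder.

  import hashlib

  def file_digest(path):
      h = hashlib.sha256()
      with open(path, "rb") as f:
          for chunk in iter(lambda: f.read(8192), b""):   # hash the file in chunks
              h.update(chunk)
      return h.hexdigest()

  print(file_digest("example.iso"))   # a 64-character hexadecimal fingerprint of the file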

A digital signature is created by computing the hash of a file and encrypting that hash with the signer's private key. Depending on the implementation, the digital signature may be embedded in the file or kept separate. A digital signature is used to verify the originator and the integrity of the file.

A digital certificate is an electronic document that consists of a personal or corporate identifier and a public encryption key, and is signed by a certificate authority (CA). The most common format for a digital certificate is X.509. The use of digital certificates and other tools such as strong authentication can make it impossible for an individual to plausibly deny involvement with a specific transaction or event. This property is known as nonrepudiation.

Encryption keys

The two main types of encryption keys in use today are

  • Symmetric key: Both the sender and the receiver have the same encryption key.
  • Asymmetric key: Also known as public key cryptography, this approach utilizes a pair of encryption keys — a public key and a private key. A user who creates a keypair makes the public key widely available and protects the private key as vigorously as one would protect a symmetric key.

Private keys and symmetric keys must be jealously guarded from adversaries. Anyone who obtains a private or symmetric encryption key can decrypt any incoming encrypted message. The management and protection of encryption keys is known as key management.

Software programs often employ passwords to protect encryption keys. Hence, the strength of the cryptosystem is only as strong as the password protecting its keys.
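
Here's a minimal sketch of symmetric encryption using the third-party Python cryptography package (its Fernet recipe uses AES, a block cipher, under the hood). Anyone holding the key can decrypt the message, which is exactly why keys must be guarded so carefully.

  from cryptography.fernet import Fernet   # requires the third-party cryptography package

  key = Fernet.generate_key()   # the shared symmetric key; protect it like a password
  cipher = Fernet(key)

  ciphertext = cipher.encrypt(b"Wire $100,000 to account 12345")
  print(ciphertext)                   # unreadable without the key
  print(cipher.decrypt(ciphertext))   # b'Wire $100,000 to account 12345'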

Encryption alternatives

Two encryption alternatives provide some of the same features as a cryptosystem:

  • Steganography (stego): A message is hidden in a larger file, such as an image file, a video, or sound file. Done properly, this technique can be as effective as encryption.
  • Watermarking: A visible (or audible) imprint is added to a document, an image, a sound recording, or a video recording. Watermarking is a potentially powerful deterrent control because it indicates that some other party owns the object.

Emerging issues in cryptography

Encryption is not a magic sleeping pill. Instead, there are numerous worries ranging from new types of attacks to official government misbehavior. Let's take a look at what's keeping networking professionals awake at night.

  • Man-in-the-middle attacks: Many attacks on cryptosystems involve a man-in-the-middle attack at the onset of a so-called secure communications session. Flaws in session initiation and key exchange can result in the attacker being able to easily read all encrypted communications between two endpoints.
  • Improper uses of cryptography: Cryptography, like any tool, is useful when used properly. Used improperly, cryptography gives us a false sense of security. Two examples are failing to salt (mixing in random numbers when calculating the hash of a plaintext message) when hashing passwords and failing to adequately protect an encryption key.
  • Brute-force attacks: Advances in distributed computing are making it easier for adversaries to build massive parallel computing machines to attack cryptosystems. A brute-force attack employs fast computers to guess every possible combination until the correct one is found (see the arithmetic sketch after this list). To stave off these attacks, key lengths are getting longer and longer. However, these longer key lengths require more computing power when performing legitimate encryption and decryption. It's a never-ending game of cat and mouse (in this case, we are the mouse).
  • Precompromised encryption algorithms: In 2012–2013, revelations made it plausible that various government spying organizations have been able to subvert the development or implementation (or both) of certain encryption algorithms and cryptosystems. The result is a serious crisis of trust in the cryptosystems used to protect sensitive information from adversaries.
  • Persistent use of compromised cryptosystems: Encryption algorithms have a limited shelf life, after which some technique for compromising them is revealed. Organizations that continue to rely on a compromised algorithm leave their data exposed.
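
To see why longer keys and passwords matter, here's a back-of-the-envelope Python calculation of worst-case brute-force effort. The guess rate is an assumed figure for a large cracking rig, not a measurement.

  CHARSET = 26 + 26 + 10 + 32    # lowercase, uppercase, numerals, common symbols
  GUESSES_PER_SECOND = 1e10      # assumed attacker capability
  SECONDS_PER_YEAR = 60 * 60 * 24 * 365

  for length in (8, 12, 16):
      keyspace = CHARSET ** length
      years = keyspace / GUESSES_PER_SECOND / SECONDS_PER_YEAR
      print(f"{length} characters: {keyspace:.2e} combinations, about {years:.3g} years at worst")
  # 8 characters falls within days; 16 characters takes on the order of 1e14 years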

Computing Architecture and Design

Every successful networking professional is familiar with the basics of computing architecture and design: how computers are architected internally, and the ways they are used, including virtualization and cloud computing. This is networking's bread and butter.

Common terms in computing architecture and design

bus

central processing unit (CPU)

cloud computing

dedicated public cloud

guest

hybrid cloud

hypervisor

infrastructure as a service

main storage

operating system (OS)

platform as a service

private cloud

process

secondary storage

software as a service

trusted platform module (TPM)

virtualization

Basic concepts in computing architecture and design

Networking professionals must understand how computers are designed and how they function. This applies to computers used on-site and computers that are a part of cloud-based services.

Computer hardware architecture

Networking professionals need to understand how computer hardware functions, so that they can ensure that the hardware is properly managed and used. Modern computers are made up of the following components:

  • Central processing unit (CPU): The component where computer instructions are executed and calculations performed.
  • Main storage: The component where information is stored temporarily. Often known as RAM (random access memory), main storage is usually volatile and its contents lost if power is removed or the computer turned off.
  • Secondary storage: The component — typically a hard drive or a solid-state drive (SSD) — where information is stored permanently. Information stored here is persistent even when the computer is switched off. Secondary storage is often organized into one or more file systems, which are schemes for the storage and retrieval of individual files.
  • Bus: The component through which data and instructions flow internally among the CPU, main storage, and secondary storage, and externally to peripheral devices and communications adaptors. Popular bus architectures include SCSI (small computer systems interface), SATA (serial ATA), IEEE 1394 (also known as FireWire), and USB (universal serial bus).
  • Firmware: Software stored in persistent memory, generally used to store initial instructions that are executed when the computer is switched on.
  • Communications: Network adaptors (for Ethernet, Wi-Fi, or Bluetooth) and display adaptors (for computers with human interfaces). Most computers include one or more communication components — otherwise, how would you get problems into it and results out of it?
  • Security hardware: Some computers include specialized components for security functions, such as a Trusted Platform Module (TPM), which is used to generate and store cryptographic keys, as well as smart card readers and fingerprint scanners.

Computer operating system

A computer operating system (OS) consists of the set of programs that facilitate the operation of application programs and tools on computer hardware. The components of an OS include the kernel (the core software that communicates with the CPU, memory, and peripheral devices), device drivers (which facilitate the use of bus devices and peripheral devices), and tools (used by administrators to manage resources such as available disk space).

The main functions performed by an operating system are

  • Process management: Processes are the individual programs that run on a computer. The OS starts and stops processes and makes sure they do not interfere with each other.
  • Resource management: The OS allocates and manages the use of main storage, secondary storage, communications, and attached devices.
  • Access management: The OS manages authentication as well as access to resources such as files and directories in secondary storage.
  • Event management: The OS responds to events such as the insertion or removal of media and devices, keystrokes, or mouse movements.
  • Communications management: The OS manages communications to ensure that incoming and outgoing communications are handled and routed properly.

An operating system can run directly on computer hardware or through a scheme called virtualization, in which many separate copies of operating systems can run simultaneously on a computer. In virtualization, the main controlling program is called the hypervisor, and each running OS is called a guest. The hypervisor's jobs are to allocate computer hardware resources to each guest and to prevent guests from interfering with each other.

Virtualization permits an organization to make better use of its resources. Instead of running one operating system per server, multiple operating systems can run on each server, making better use of hardware investment and rack space.

With commercial virtualization tools, OS instances can be moved from one hardware platform to another (and even from one data center to another), and OS instances can be easily cloned to enable more running copies of a server if demand requires it.

Cloud services

The adoption of cloud services is in full swing despite the fact that many still don't understand how cloud services work. An organization using cloud computing has chosen to use computing or application resources that are owned by another organization and located away from the organization's premises.

The three common types of cloud services follow:

  • Infrastructure as a Service (IAAS): Service providers enable customers to lease virtual machines, servers, storage, network functions, and so on. Examples include Amazon Web Services, Microsoft Azure, and Google Compute Engine (GCE).
  • Software as a Service (SAAS): Service providers enable customers to use software applications managed by cloud service providers. Examples include Salesforce, Office365, and Cisco WebEx.
  • Platform as a Service (PAAS): Service providers run application software with application programming interfaces (APIs) to which customers can connect their application. Examples include Engine Yard, Google App Engine, and Microsoft Azure Web Sites.

An organization can utilize cloud services in the following ways:

  • Public cloud: An organization utilizes cloud services that are operated by and located at a cloud service provider's data center.
  • Private cloud: An organization builds its own cloud computing infrastructure using hardware assets that it owns, and locates it in its own data center. An organization that builds a private cloud wants the logical capabilities of cloud computing but also wants to retain ownership and control of the hardware supporting it.
  • Hybrid cloud: An organization utilizes a combination of public cloud services with its in-house resources. An organization that uses a hybrid cloud generally wants control of specific information or hardware assets.
  • Dedicated public cloud: An organization utilizes public cloud services on hardware dedicated to that organization. An organization that uses dedicated public cloud wants the flexibility of cloud services but is unwilling to share infrastructure with other tenants.

Emerging issues in computing architecture and design

Issues that tend to keep networking professionals on their toes include:

  • Internet of Things (IoT): We worry that insufficient work is put into developing sound security models and designs to prevent attacks on new Internet-connected products.
  • Speed to market: Many organizations, in attempts to get newly developed products to market more quickly, skip security designs, reviews, and controls, thereby leaving products open to attack.
  • Flawed access control: Many organizations lack the skills to implement sound, effective access controls in their systems, resulting in unnecessary exposure of sensitive data.

IT Operations

IT operations is the heartbeat of organizations using computers and networks to support their key business processes. Many networking professionals begin their networking careers in IT operations; some spend their entire careers there.

The concepts in IT operations in this section follow the international standard called ITIL, or IT Infrastructure Library. To learn more about IT operations, get a copy of ITIL For Dummies by Peter Farenden.

Basic concepts in IT operations

Networking professionals spend their time working in one or more IT operations processes, which are summarized in this section.

Common terms in IT operations

antimalware

backup

change management

configuration management

continuous monitoring

data destruction

help desk

incident management

remote access

service desk

vulnerability management

Change management

Change management is a basic IT operations process concerned with the management and control of changes made in IT systems. A proper change management process has formal steps of request, review, approval, execution, verification, and recordkeeping. The purpose of change control is the discussion of proposed changes before they take place, so that risks and other issues can surface.

All processes and procedures, including those in incident management, vulnerability management, and change management, should be formally documented.

Configuration management

Configuration management is a basic IT operations process that is concerned with the management of configurations on servers, workstations, and network devices. Configuration management is closely related to change management. Whereas change management is involved with the process of performing the change, configuration management is involved with the preservation of present and prior configurations of each device and system.

Service desk

The IT service (help) desk is a central point of contact for users who require information or assistance. Users call or send a message asking for help with some matter concerning the use of a computer, network, program, or application.

In many organizations, IT service desk personnel are trained to assist with several common and easily remedied issues, such as forgotten passwords, simple support issues related to office productivity tools, accessing central resources such as file servers, and printing. When service desk personnel are unable to assist directly, they will refer the issue to a specialist who can offer more advanced assistance.

Most organizations utilize a ticketing system to track open issues, the time required to fix issues, and resources used. When used properly, a ticketing system can be a rich source of statistics related to the types of assistance that users are requesting and the resources and time required to fix them.

Continuous monitoring

The velocity of harmful events has accelerated to a point where manual review of incident logs is no longer an effective means of detecting unwanted behavior. Besides, the volume of available log data (gigabytes in the smallest organizations and terabytes in larger ones) is too great for humans to review.

Organizations must centralize their logging. All devices and systems should send their log data to a purpose-built central repository, called a security information and event management (SIEM) system. But more than that, organizations need tools that not only detect serious security events but also alert key personnel and even remediate the incident in seconds instead of weeks or months, as is often the case.

Backup

Hardware problems, software problems, and disasters of many kinds make organizations wish they had a backup copy of critical information. Most organizations routinely perform backups, where important data is copied to backup media or servers in some remote location.

Data destruction

When data is no longer needed, it must be effectively destroyed so that unauthorized parties can't recover it. Some companies pay little attention to this task, but others do a thorough job of data destruction. No, you won't get to smash obsolete printers out in a field, but you might get to operate some cool equipment to destroy unneeded data.

Antimalware

Antimalware, antivirus, and advanced malware protection (AMP) refer to tools designed to detect and block malware infections and malware activities.

Organizations often combat malware by having several layers of control in place (known as a defense in depth), including the following:

  • Workstation antimalware: The front lines of the malware wars, every workstation (and even mobile devices) should have antimalware to block malware.
  • Server antimalware: File servers and other systems used to store programs and files should have antimalware, just in case malware sneaks through a workstation.
  • Email server antimalware: Email is a favored transportation route for malware, so blocking it at email servers is a good way to keep it from reaching workstations.
  • Intrusion prevention systems (IPS): Malware commonly “calls home” upon installation to await further instructions. An IPS can detect and block such communications, which prevents malware from becoming active and harmful.
  • Spam filtering: Because many attacks come in the form of phishing, spam filters can be effective at blocking most or all phishing messages.
  • Website filtering: These appliances block access to websites based on category (generally not having to do with business operations) and block websites known to be compromised with malware.

The recent potency of malware is leading many organizations to enact more controls in the form of intrusion prevention systems, which block the command-and-control traffic associated with malware.

Remote access

Remote access refers to both the business process and the technology that enable an employee to access, from outside the organization's network, information systems that are not reachable from the Internet.

In the business process sense, many organizations permit only a subset of their workers to remotely access the internal environment. In the technical sense, remote access usually includes encryption of all communications, as well as two-factor authentication to make it harder for an attacker to gain access to internal systems.

Because many organizations are opting to use cloud-based systems instead of internal systems, some aspects of remote access are becoming obsolete. Or, to put it another way, cloud-based systems turn everyone into remote workers because their information systems are no longer located on their internal networks.

Incident management

Incident management is an IT operations process used to properly respond to operational and security incidents. Organizations should have written incident response plans along with training and testing to ensure that these plans are accurate and that personnel understand how to follow them. Written playbooks for common incidents should be developed.

Security incidents are just a special type of incident management, requiring a few additional considerations such as regulatory requirements and privacy laws.

The steps in a mature incident management process are

  • Declaration
  • Triage
  • Investigation
  • Analysis
  • Containment
  • Recovery or mitigation or both
  • Debriefing

Not all security incidents require all these steps, but this framework ensures an orderly and coordinated response to reduce the scope and effect of an incident, as well as a debriefing and follow-up activities to reduce the likelihood or effect of a similar incident in the future.

Vulnerability management

Vulnerability management is an IT operations process concerned with the identification and mitigation of vulnerabilities in IT systems. An effective vulnerability management process includes procedures for identifying vulnerabilities, prioritizing them, and making changes (through change management) to eliminate vulnerabilities that could be used by an intruder to successfully attack the environment.

Other IT operations processes

Honorable mentions that we won't describe include the following:

  • Capacity management
  • Problem management
  • Budget management
  • Supplier management
  • Service-level management

Emerging issues in IT operations

Yes, a lot of issues in the world of security operations keep security managers lying awake at night.

Advanced malware

Innovations in malware packaging have rendered antivirus and antimalware software virtually ineffective against advanced malware. Additional layers of detection and prevention tools are needed to combat the threat. These tools have significant costs in terms of capital as well as manpower to maintain them.

The greatest fear is that malware creators will soon develop new ways of circumventing even advanced malware protection (AMP) tools, thereby requiring even greater investment in effective defenses.

Bring your own device (BYOD)

With employees in so many organizations bringing personally owned devices to work for use in their daily job duties, organizations are finding it more difficult than ever to keep track of sensitive data and know when that data is leaving its control.

Physical and Environmental Security

Physical security is concerned with the protection of personnel at work locations, as well as information systems and related media and equipment. Supporting environmental controls and power protection are also a concern.

Although physical and environmental security systems are generally not a part of a networking professional's responsibility, being familiar with these systems will help you do your job better and keep you out of trouble!

Common terms in physical and environmental security

biometrics

digital video recorder (DVR)

electric generator

exterior lighting

fence

fire extinguisher

guard

guard dog

heating, ventilation, and air conditioning (HVAC)

inert gas fire suppression

key card

line conditioner

mantrap

PIN pad

razor wire

smoke detector

sprinkler system

tailgating

uninterruptible power supply (UPS)

video surveillance

visitor log

wall

Basic concepts in physical and environmental security

This section discusses the most common concepts in security measures that are employed to protect personnel and equipment.

Site access security

Organizations should implement a level of site access security commensurate with the value of information and assets in the facility. The following types of controls contribute to the security of a work location, whether a facility is a data center or primarily used by employees:

  • Key cards: Plastic cards with a magnetic strip, an RFID circuit, or an embedded processor and memory. Key cards are assigned to individual workers and are used to activate door locks to permit entry. With a key card system, a building can be divided into zones that restrict entry to specific areas or rooms as needed. Key card systems record successful and unsuccessful access attempts. Lost or stolen key cards can be deactivated in the system so that they will no longer function.
  • PIN pads: Keypads with numbers or letters usually used with key cards. PIN pads reduce the risk associated with a lost or stolen key card: On a door controlled by a key card reader and a PIN pad, both the key card and knowledge of the PIN are required to unlock the door.
  • Biometric access controls: Devices such as fingerprint readers, palm scanners, and iris scanners. These biometric access controls can be used as a more effective site access control than key cards and PIN pads alone because an intruder could steal a key card and obtain a PIN code.
  • Metal keys: Still used for individual offices, but no longer recommended for rooms where several personnel need to routinely enter because there is no way to know which person entered a room.
  • Mantraps: A set of two interlocked doors with a short passage between, to control movement of personnel through a door. A mantrap permits only one person at a time to pass, thereby preventing “tailgating,” where one or more people can follow an authorized person into a room or building.
  • Guards: Personnel with duties to protect facilities and personnel.
  • Guard dogs: An effective deterrent that can assist in searches for persons and in apprehending intruders.
  • Visitor logs: Written or electronic records of visitors to a building. Visitors can also be requested to present a government-issued identification to confirm their identity.
  • Fences and walls: Deterrent and preventive measures to protect the perimeter of a facility or areas of particular interest. A fence or wall at least 8 feet high with strands of barbed wire or razor wire will keep out all but the most determined intruders.
  • Video surveillance: Systems of cameras, monitors, and possibly recording equipment such as digital video recorders (DVRs) used to monitor key locations inside and outside a facility. A video system may include personnel who are observing in real-time, or it may be recording for later viewing when needed.
  • Exterior lighting: Protects a facility by illuminating areas where an intruder would otherwise be able to work in darkness in an attempt to enter a facility.
  • Visible notices: Posted signs and placards informing personnel of the presence of video surveillance, guards, guard dogs, and other controls. Visible notices can also inform visitors of the consequences of entering a facility.

Secure siting

Secure siting, also known as a site survey, is a process of searching for and analyzing a work site for nearby hazards and threats that could pose a risk to the security or safety of a work site and the personnel and equipment within.

Typical hazards that a site survey would identify include the following:

  • Transportation: nearby airports, railroads, and highways
  • Hazardous substances: nearby chemical facilities and petroleum pipelines
  • Behavioral: nearby sites where mass gatherings, riots, and demonstrations could take place
  • Natural: risk of flooding, landslide, avalanche, volcano, or lahar

Equipment protection

Measures need to be taken to protect equipment and personnel in work locations, including the following:

  • Theft protection: Locking doors, video surveillance, and cable locks
  • Damage protection: Earthquake bracing, and tip-over prevention
  • Fire protection: Smoke detectors, heat detectors, sprinklers, inert gas suppression, and fire extinguishers
  • Cabling security: Conduit or better siting to avoid exposure of communications or power cabling
  • Photography: Notices and intervention to discourage photography in sensitive areas

Electric power

Information-processing equipment (computers, network devices, and so on) is highly sensitive to even slight fluctuations in electric power. The following specialized equipment ensures a continuous supply of clean electric power:

  • Line conditioner: Absorbs noise present in utility power, such as spikes and surges.
  • Uninterruptible power system (UPS): Equipped with backup batteries that can supply power to computing equipment from several minutes to an hour or more.
  • Electric generator: Powered by gasoline, diesel fuel, natural gas, or propane and can generate electric power for hours, days, or more.

An electric generator and a UPS are typically used together to ensure continuous power. Because electric generators take several seconds to a minute or longer to activate, a UPS supplies power while the generator is starting up.
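
A back-of-the-envelope runtime estimate shows why even a modestly sized UPS can comfortably bridge a generator start-up measured in seconds or minutes. All figures below (battery capacity, inverter efficiency, load) are made-up examples, not sizing guidance for any real installation.

```python
# Illustrative UPS runtime estimate; figures are examples only.
battery_capacity_wh = 2000     # usable battery energy, in watt-hours
inverter_efficiency = 0.9      # fraction of stored energy delivered to the load
load_watts = 1500              # total equipment draw, in watts

runtime_minutes = battery_capacity_wh * inverter_efficiency / load_watts * 60
print(f"Estimated runtime: {runtime_minutes:.0f} minutes")   # about 72 minutes
```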

Many UPSs have built-in line conditioners, so standalone line conditioners are uncommon, except in environments where electric power is reliable but noisy.

Heating, ventilation, and air conditioning (HVAC)

People and information-processing equipment operate best within a narrow temperature and humidity range. (Humans tolerate a wider temperature range than equipment does.)

Heating, ventilation, and air conditioning (HVAC) systems regulate temperature and humidity in buildings containing personnel, computers, or both. HVAC systems are especially important in data centers, which generate a considerable amount of waste heat that must be continuously moved away from computers to prevent overheating and premature failure.

Many newer data centers rely on circulation of outside ambient air (with particulate filtering) as opposed to refrigeration to provide cooling at a significantly lower cost.
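
To see why waste heat drives HVAC sizing, note that essentially all the electrical power drawn by IT equipment ends up as heat that the cooling system must remove. The conversion below is a simple illustration; the rack power figure is a made-up example.

```python
# Simple conversion only: roughly all electrical power drawn by IT equipment
# becomes heat that the HVAC system must remove. The rack figure is made up.
rack_power_watts = 10_000                  # one densely populated rack
btu_per_hour = rack_power_watts * 3.412    # 1 watt is about 3.412 BTU/hr
tons_of_cooling = btu_per_hour / 12_000    # 1 ton of cooling = 12,000 BTU/hr
print(f"{btu_per_hour:,.0f} BTU/hr, roughly {tons_of_cooling:.1f} tons of cooling")
```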

Redundant controls

Many facilities incorporate redundant controls to ensure continuous availability of environmental needs. Redundancy allows for continuous protection in the event of equipment failure as well as routine maintenance. Examples of redundant controls follow:

  • Utility power feeds
  • UPSs
  • Generators
  • HVAC systems

Emerging issues in physical and environmental security

Issues in the physical and environmental security realm that keep security professionals awake at night include the following:

  • Use of cloud services: Organizations that adopt cloud services give up a large measure of control and visibility into the physical controls protecting equipment that stores and processes their data. Some cloud service providers do not readily provide detailed information that some organizations may need for compliance purposes.
  • Increased equipment density outpacing available environmental controls: Newer servers and storage systems are built in smaller form factors, allowing more servers and storage to be installed in a given area. Sometimes, however, data centers are unable to supply adequate power and cooling for this higher-density equipment.

Regulations, Investigations, and Compliance

Because of their integral role in supporting business processes, information systems are in the crosshairs of laws and regulations. Computers are frequently involved in civil and criminal investigations, requiring forensic procedures when collecting evidence from computers and other electronic devices. Some networking professionals will have the opportunity to work in these areas.

Basic concepts in regulations, investigations, and compliance

Even though networking professionals are not attorneys, it is helpful for them to understand the laws, regulations, and other legal requirements that drive compliance efforts in organizations. It's also helpful for networking professionals to understand how security investigations should be conducted. This is fun stuff!

Computer crime laws

Many countries have enacted computer crime laws that define trespass, theft, and privacy in the context of information systems. In the history of law, computers are still new, so the related body of law is still developing and changes frequently.

This rapid pace of change in laws, regulations, and legal standards challenges information security and legal professionals, who must remain compliant with these laws while also recognizing cybercrimes when they occur.

Industry regulations

Regulations on many topics have been enacted for various industries. In the information technology world, HIPAA (Health Insurance Portability and Accountability Act) requires the protection of healthcare-related information, and GLBA (Gramm-Leach-Bliley Act) requires the protection of customer information in the financial services industry.

Common terms in regulations, investigations, and compliance

chain of custody

COBIT

COSO

forensics

GLBA

HIPAA

ISO27002:2013

Managing compliance

Compliance is a matter of adhering to laws, regulations, contractual obligations, and policies. It takes a determined effort to know all compliance obligations in an organization, and more effort to achieve compliance. Many organizations develop or adopt a framework of controls to track compliance on an ongoing basis. Suitable frameworks include

  • COBIT (Control Objectives for Information and Related Technology): Developed by ISACA, COBIT is a highly regarded framework for IT operations.
  • COSO (Committee of Sponsoring Organizations of the Treadway Commission): Developed in response to fraudulent financial reporting scandals, COSO provides guidance for IT control frameworks for U.S. publicly traded companies.
  • ISO27002:2013: The international standard of practice for information security controls, which establishes a process for developing and managing controls.

Security investigations and forensics

Security investigations are an organization's response to isolated security incidents that have little direct effect on business operations. Still, the events requiring investigation can be important in other ways because they can have significant legal implications.

Any computer-related event in an organization that could lead to future legal action may require an investigation conducted under forensic rules of evidence. These rules include

  • Evidence collection and preservation
  • Evidence chain of custody
  • Evidence collection recordkeeping
  • Evidence examination recordkeeping

For an organization to prevail in any related legal proceedings, a trained individual with dedicated tools and hardware must carry out these forensic procedures.
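
One concrete element of evidence preservation is computing a cryptographic hash of each collected item (for example, a disk image) at collection time, so that later examination can show the item has not changed. The sketch below is a minimal Python illustration; the file name and record fields are hypothetical, and real forensic work relies on dedicated, validated tooling.

```python
import hashlib
from datetime import datetime, timezone


def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks so that
    large evidence images do not have to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Hypothetical chain-of-custody style record for one collected item.
evidence_record = {
    "item": "laptop-disk-image.dd",        # hypothetical evidence file name
    "collected_by": "J. Examiner",
    "collected_at": datetime.now(timezone.utc).isoformat(),
    "sha256": None,   # set at collection time, re-verified before examination
}
# evidence_record["sha256"] = sha256_of_file("laptop-disk-image.dd")
```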

Emerging issues in regulations, investigations, and compliance

Following are two issues keeping networking professionals on their toes:

  • Rapid onset of new laws and regulations: New laws on computer operations, security, and privacy are enacted and updated at a rate that makes it hard to keep up with their details, never mind figure out how to comply with them.
  • Jurisdictional issues: Many new laws have greater jurisdictional reach than in the past. For example, privacy laws in many U.S. states have jurisdiction across state lines, and international privacy laws affect many organizations located outside the countries that passed them. These jurisdictional issues center on cross-border privacy: each country passes laws requiring the protection of private data associated with its citizens, regardless of where the organization holding the data is located. This issue has many corporate counsels on a steady diet of coffee and Rolaids.

Factoring Nontechnical IT Issues

A number of trends in business affect jobs in networking. These trends go beyond the technical aspects already covered and include

  • Outsourcing
  • Telecommuting

Outsourcing IT

It is human nature for a company to want formal possession and control of the assets on which it relies. Historically, companies have wanted to own everything involved in the creation of their products and services. Some companies have taken this approach to its logical extreme, seeking to own everything from mining the raw materials to operating the stores where the products are sold.

A more modern approach is to own the parts of the process where you have your best differentiation from the competition, either in lower price or better quality, and to rely on suppliers for commodities and other undifferentiated parts of your solution. For example, the Ford Motor Company began refining its own steel in 1917. Decades later, the company thought better of this approach and sold its steelmaking assets to focus on the parts of car making it could do better.

This approach can be applied to information technology and, by extension, networking. A given company could look at its data connections as a commodity, then its computer network, and then its data centers. What the heck, it could look at everything in IT, including application development and systems integration, as a commodity.

Companies can change their approach as circumstances in their business evolve. One year they can choose to outsource. Another year, they may choose to bring a part of IT back in-house and discontinue working with outsourcing vendors.

No single strategy for the right balance of in-house management versus outsourcing applies at all times. The best you can hope for is that companies will be judicious in adjusting their outsourcing strategy. If a company is not judicious, the result will be repeated cycles of layoffs and rehiring between the company and its vendors.

Employees un-telecommuting

The last major trend under consideration is a company's policy on telecommuting. Many cities and regions have offered companies incentives to encourage telecommuting, because providing these incentives is less expensive than building more roads.

Many employees prefer doing their job while wearing their jammies. Plenty of organizations have embraced this approach because it allows them to operate in a smaller office space. But it puts a burden on the network to ensure proper security and adequate resources.

The shift towards telecommuting has not been without problems. Primary among them is a loss of camaraderie among employees, resulting in a loss of collaboration opportunities. This is particularly a problem in companies that produce creative products and services and in engineering-oriented companies.

As a result, some senior executives are reversing telecommuting policies. Some employees must now work in the office a certain number of days per week, and other positions formerly filled by telecommuters must be performed in the office.

Re-accommodating employees who once telecommuted and now work in the office is not the hardest challenge to address; it mostly involves reinstalling standard communication services. But some policies are as fickle as fashion, and the networking team is expected to adapt instantly to these policy changes.
