To successfully begin and navigate a career in networking, you need to understand a great deal about how various aspects of computing and networking technologies work. You don't need to understand everything in this chapter on day one, but the more you know about the concepts behind the technologies, the easier it will be to learn more and do more as you increase your expertise and advance in networking.
At its heart, access control is all about who (or what) is allowed to access something. In business, users are required to identify themselves — and prove their identity — before they can use workstations and applications. This section explains the concepts behind access control.
The basics of access control can be divided into two major categories:
Common terms in access control
access review
accumulation of privileges
authentication
biometrics
FTP, FTPS, and SFTP
hashing
key logger
multifactor authentication
password
password quality
password recovery
phishing
rainbow table
replay attack
salting
session hijacking
single-factor authentication
social engineering
telnet
token
user ID
watering hole attack
Authentication is the process of asserting one's identity — including required proof such as a password, token, or biometric — to a system to access its resources. The identity takes the form of a user ID, which is a value assigned to a person or machine.
Single-factor authentication generally involves the presentation of a user ID and a password. This common form of authentication is more vulnerable to attack by adversaries due to its simplicity. The phrase “what you know” is associated with single-factor authentication because in this simplest form of authentication, the user has identified herself by presenting her user ID. The user then authenticates by stating something that she knows which is tied to her user ID (such as the current password).
A password is a secret word, phrase, or string of random characters used as part of single-factor authentication. The quality of the password is an important factor that helps resist some forms of attack. Characteristics of password quality include length (how many characters), complexity (whether the password must contain lowercase letters, uppercase letters, numerals, and special characters), expiration (how much time may elapse before a user is required to select a new password), recovery (the process to follow when users forget their password), and reuse (whether previously used passwords may be used again).
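As a rough sketch of how a password-quality policy might be enforced in code (the length threshold and character classes here are hypothetical, not a recommendation for any specific organization):

```python
import re

def check_password_quality(password, min_length=12):
    """Check a candidate password against a simple quality policy.

    The thresholds and character classes are illustrative only.
    """
    problems = []
    if len(password) < min_length:
        problems.append(f"shorter than {min_length} characters")
    if not re.search(r"[a-z]", password):
        problems.append("no lowercase letter")
    if not re.search(r"[A-Z]", password):
        problems.append("no uppercase letter")
    if not re.search(r"[0-9]", password):
        problems.append("no numeral")
    if not re.search(r"[^A-Za-z0-9]", password):
        problems.append("no special character")
    return problems  # an empty list means the password passes

print(check_password_quality("Tr0ub4dor&3"))    # fails only the length check
print(check_password_quality("correct horse"))  # fails the uppercase and numeral checks
```

A real policy would also check expiration and reuse against stored history, which requires data this sketch doesn't model.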
The concepts in single-factor and multifactor authentication are sometimes difficult to understand. Three phrases are often used to simplify these concepts:
Passwords are typically stored in hashed form. Hashing is an irreversible cryptographic function that creates a large number representing the password without exposing the password. The hash value then facilitates the confirmation of a correct password during the login process but prevents the extraction of passwords. Hashing is explained in more detail in the “Cryptography” section, later in the chapter.
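A minimal sketch of salted password hashing and verification, using plain SHA-256 for brevity (real systems use deliberately slow functions such as PBKDF2, bcrypt, or scrypt). The random salt ensures that identical passwords hash to different stored values, which defeats precomputed rainbow tables:

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Return (salt, digest) for storing a password in hashed form.

    Toy construction for illustration: production systems should
    use a slow key-derivation function, not plain SHA-256.
    """
    if salt is None:
        salt = os.urandom(16)  # a fresh random salt per password
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt, digest

salt, stored = hash_password("s3cret!")

# At login, recompute the hash with the stored salt and compare;
# the plaintext password itself is never stored anywhere.
_, candidate = hash_password("s3cret!", salt)
print(candidate == stored)   # True: correct password
_, candidate = hash_password("guess", salt)
print(candidate == stored)   # False: wrong password
```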
Multifactor authentication generally involves the presentation of a user ID, a password, and a token or biometric. This type of authentication is generally stronger than single-factor authentication. A token is a hardware device (or sometimes a smartphone application) used in multifactor authentication; it represents a far stronger form of authentication than a single factor alone. The phrase "what you have" is associated with token-based authentication. Multifactor authentication can also use some form of biometric, such as a fingerprint, a palm scan, an iris scan, or a voiceprint. The phrase "what you are" is associated with biometric authentication because you're using a part of your body to authenticate your presented identification.
Getting access control technology right is a challenge, but it's not the biggest concern. The business processes supporting access controls are critical. If not implemented and managed correctly, the best access control technology is of little use — similar to owning a car with the best burglar alarm and then parking the car unlocked with the keys in the ignition switch.
Key processes in access control are collectively known as identity and access management (IAM) and include the following:
Adversaries who are attempting to access resources in a target system frequently attack access controls. Methods of attack include the following:
Issues that keep networking professionals up at night include these:
Networks are the lifeblood of every organization that utilizes computing in support of business processes. There's a lot to know about internal network technologies such as TCP/IP, Ethernet, and Wi-Fi. It's important that you also understand telecommunications technologies, including synchronous optical networks (SONET), multiprotocol label switching (MPLS), and wireless technologies such as WiMAX and LTE.
Understanding the security aspects of networking is vital as well. It's not just the security manager's job to protect networks — it's a networking professional's job as well.
You need to understand the important concept of encapsulation because it is used throughout almost all network technologies. In encapsulation, messages of one protocol are placed in messages of another protocol. For example, SMTP (Simple Mail Transport Protocol) messages are placed in TCP (Transmission Control Protocol) datagrams, which are placed in IP (Internet Protocol) messages, which are placed in DS-1 frames, which are placed in OC-48 frames.
Here's an analogy. You write a message on a sheet of paper (SMTP message), place it in an envelope (TCP message), and place the envelope in a mailbox (IP message), where a mail truck (switch) picks it up and delivers it to a distribution center (router). There, the envelope (TCP message) is placed in a bin (IP message), which is driven to another distribution center (router). There, the bin (IP message) is placed in a larger bin (DS-1 frame) and driven to an airport (DACS), where the larger bin (DS-1 frame) is placed on an airplane (OC-48 frame) that flies through the air (optical fiber). At the other end of the flight, the process is reversed, and the recipient receives the note on the sheet of paper.
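The nesting in the analogy can be sketched in code. Each layer simply wraps the previous layer's message with its own header, and the receiver unwraps the layers in reverse order (the header fields below are illustrative placeholders, not real protocol formats):

```python
def encapsulate(payload, header):
    """Wrap a payload in one more layer, like a letter in an envelope."""
    return {"header": header, "payload": payload}

# An application-layer message is wrapped successively in TCP, IP,
# and link-layer framing before it goes on the wire.
smtp  = "MAIL FROM:<alice@example.com>"
tcp   = encapsulate(smtp, {"src_port": 52311, "dst_port": 25})
ip    = encapsulate(tcp,  {"src": "203.0.113.5", "dst": "198.51.100.9"})
frame = encapsulate(ip,   {"type": "DS-1"})

# The receiver strips one layer at a time until the original
# message emerges.
inner = frame["payload"]["payload"]["payload"]
print(inner)  # MAIL FROM:<alice@example.com>
```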
A plethora of network technologies exist; we discuss the important ones in this section.
Common terms in telecommunications and network security
ATM
CAT-6 cable
denial of service
DLP
DMZ
DOCSIS
DS-1
E-1
encapsulation
firewall
frame relay
IP address
IPS
ISDN
MAC address
MPLS
packet header
payload
POTS
PSTN
QoS
router
routing table
SONET
T-1
TCP/IP
VoIP
VPN
watering hole attack
WEP
WiMAX
WPA
WPA2
Wired telecom networks connect homes, businesses, schools, and governments through technologies that use copper or fiber optic cabling to carry many types of signals. These signals include the following:
Older technologies you don't need to be too concerned with anymore (unless you're a technology history buff) include ISDN, ATM, frame relay, X.25, and PSTN.
Wireless telecom networks connect individuals, homes, and businesses through the use of several technologies, including the following:
Other notable technologies include CDPD, CDMA, and packet radio.
Although many standards have been used for wired network technologies, the long-term trend has been a general migration to TCP/IP on Ethernet over copper cabling or fiber optic cabling. The dominant technologies follow:
Has-been cabling types include Thinnet, Thicknet, Cat-3, and Cat-5.
Wireless network technologies are wildly popular in a number of typical settings. The technologies in use are
Runners-up include IrDA (the infrared point-to-point technology, which has all but disappeared) and wireless USB (up and coming, and possibly a force in the future).
Software-defined networking is an emerging technology that is simplifying network architectures and helping to reduce costs. Instead of purchasing a lot of separate specialized network devices such as load balancers, web application firewalls, intrusion prevention systems, routers, and switches, organizations can purchase a few OpenFlow-enabled switches and a centralized controller, and then build a network fabric that exists virtually in the switch, performing all those functions in harmony instead of separately.
As of this writing, 90 percent of IT workers (including a lot of senior-level people) don't understand SDN, so you should consider this a ground-floor opportunity. If you're fortunate enough to get some SDN training or work alongside someone who is doing SDN, stay with it as long as you can.
Developed in the 1970s for a robust military communications network that had some self-healing properties and resilience, TCP/IP has formed the basis for virtually every home, business, and commercial network, as well as the global Internet itself.
TCP/IP is a packet-based technology in which messages are bundled into packets that are routed to their destinations. A single packet has a source address, a destination address, a protocol number, and payload (the contents of a message).
The source address and destination address follow a numbering convention, with a global authority that assigns addresses to organizations. In TCP/IP version 4, the form of an address is

xxx.xxx.xxx.xxx

where each xxx (octet) is commonly portrayed as a decimal value and can range from 0 through 255. In TCP/IP version 6, the form of an address is

xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx

where each xxxx (hextet) is a hexadecimal value that can range from 0000 through FFFF.
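Python's standard ipaddress module applies these same addressing rules, which makes it handy for experimenting with the two formats:

```python
import ipaddress

# Addresses below come from the documentation ranges reserved
# for examples (192.0.2.0/24 and 2001:db8::/32).
v4 = ipaddress.ip_address("192.0.2.17")
v6 = ipaddress.ip_address("2001:db8:0:0:0:0:0:1")

print(v4.version)  # 4
print(v6.version)  # 6
print(v6)          # 2001:db8::1  (runs of zero hextets compress to ::)

# Out-of-range octets are rejected.
try:
    ipaddress.ip_address("192.0.2.300")
except ValueError as e:
    print("invalid:", e)
```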
In TCP/IP, routers process packets as they make their way from their source to their destination. Think of a router as a traffic cop in a street intersection who tells you which way to turn. A router examines a packet's destination address, consults its routing table, and then sends the packet in the correct direction to get it closer to its ultimate destination.
Routers exchange routing information so that each router has a better idea about which direction to send each packet. They exchange this information through several routing protocols, such as RIP, BGP, IGRP, OSPF, or IS-IS. These routing protocols each have best uses; some are used by Internet backbone routers, and others are better suited for routers inside a company. Some become obsolete as newer and better ones are developed.
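Longest-prefix matching — the rule a router follows when several routing-table entries match a destination — can be sketched as follows (the prefixes and interface names are invented for illustration):

```python
import ipaddress

# A toy routing table mapping prefixes to next-hop interfaces.
# A real router picks the longest (most specific) matching prefix,
# falling back to the default route.
ROUTES = {
    ipaddress.ip_network("10.1.0.0/16"): "eth1",
    ipaddress.ip_network("10.1.5.0/24"): "eth2",    # more specific
    ipaddress.ip_network("0.0.0.0/0"):   "uplink",  # default route
}

def next_hop(destination):
    dest = ipaddress.ip_address(destination)
    matches = [net for net in ROUTES if dest in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTES[best]

print(next_hop("10.1.5.99"))   # eth2 (longest prefix wins)
print(next_hop("10.1.200.1"))  # eth1
print(next_hop("8.8.8.8"))     # uplink (default route)
```

Routing protocols such as OSPF and BGP exist to keep tables like this one populated and up to date across many routers.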
TCP/IP has two basic protocols, on top of which nearly all the others are used through encapsulation (the nesting process explained earlier). These two basic protocols are
On top of the TCP and UDP protocols run hundreds of established protocol standards, a few of which are well known and frequently used, such as
To understand concepts such as encapsulation, you should become familiar with the OSI Network Model.
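For a concrete taste of the transport layer, here is a minimal UDP exchange over the loopback interface. Each sendto() is an independent datagram with no delivery guarantee, which is exactly the trade-off that distinguishes UDP from TCP:

```python
import socket

# Bind a UDP "server" socket; port 0 lets the OS pick a free port.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
addr = server.getsockname()

# Send one datagram from a second socket; no connection is set up.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"ping", addr)

# Receive the datagram along with the sender's address.
data, peer = server.recvfrom(1024)
print(data)  # b'ping'

client.close()
server.close()
```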
Several network-centric security devices protect systems, networks, and information from intruders. We describe the more common types in this section.
Firewalls are inline devices placed between networks to control the traffic that is allowed to pass between those networks. Typically, an organization places a firewall at its Internet boundary to prevent intruders from easily accessing the organization's internal networks.
A firewall uses a table of rules to determine whether a packet should be permitted. The rules are based on the packet's source address, destination address, and protocol number. Traditional firewalls do not examine the contents of a message.
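A toy packet filter illustrates rule-based filtering: rules are evaluated top to bottom, the first match wins, and an implicit final rule denies everything else (the addresses and rules here are invented for illustration):

```python
import ipaddress

RULES = [
    # (source network, destination network, protocol, action)
    ("0.0.0.0/0",       "203.0.113.10/32", "tcp/443", "allow"),  # HTTPS to web server
    ("0.0.0.0/0",       "203.0.113.10/32", "tcp/80",  "allow"),  # HTTP to web server
    ("198.51.100.0/24", "0.0.0.0/0",       "any",     "deny"),   # known-bad network
]

def filter_packet(src, dst, proto):
    for rule_src, rule_dst, rule_proto, action in RULES:
        if (ipaddress.ip_address(src) in ipaddress.ip_network(rule_src)
                and ipaddress.ip_address(dst) in ipaddress.ip_network(rule_dst)
                and rule_proto in (proto, "any")):
            return action       # first matching rule wins
    return "deny"               # implicit default deny

print(filter_packet("198.51.100.7", "203.0.113.10", "tcp/443"))  # allow
print(filter_packet("192.0.2.9",    "203.0.113.10", "tcp/22"))   # deny
```

Note that rule order matters: because the first-match rule fires before the deny rule, even the "known-bad" network reaches the web server on port 443 — a common real-world configuration pitfall.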
Firewalls are used to create a demilitarized zone (DMZ), a network that is neither fully inside nor fully outside the organization's network, in which organizations place Internet-facing systems such as web servers. This strategy helps protect the web server from the Internet and protects the organization in case an adversary compromises and takes control of the web server. Figure 4-1 depicts a typical DMZ network.
An intrusion prevention system (IPS) is an inline device that examines incoming and outgoing network traffic, looking for signs of intrusions and blocking such traffic when it detects an intrusion.
Unlike a firewall, an IPS examines the contents of network packets, not just their source and destination addresses. This approach is based on the principle that malicious traffic may be characterized by its contents, not merely its origin or destination.
Like a firewall, an IPS is typically placed at or near Internet entrance and exit points, so that all Internet incoming and outgoing traffic, respectively, can be examined and any malicious traffic blocked.
A data loss prevention (DLP) system primarily examines outgoing traffic, looking for evidence of sensitive data being sent out of the organization inappropriately. A DLP system is configured to look for specific patterns in outgoing data and to send alerts or just block traffic meeting certain criteria.
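A sketch of the pattern-matching idea behind DLP. Real products use far richer detection, such as data fingerprinting, exact data matching, and machine learning; the patterns below are simplistic illustrations:

```python
import re

# Outgoing content is scanned for patterns that look like
# sensitive data; a match triggers an alert or blocks the traffic.
PATTERNS = {
    "possible SSN":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "possible card number": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def inspect(message):
    """Return the labels of all patterns found in an outgoing message."""
    return [label for label, pattern in PATTERNS.items()
            if pattern.search(message)]

print(inspect("Invoice attached, thanks!"))           # []
print(inspect("My SSN is 078-05-1120, please file"))  # ['possible SSN']
```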
A web-filtering system examines the websites that an organization's personnel are visiting. The web-filtering system logs all web access by personnel and can also be configured to block access to various categories of websites (for example, pornography, gambling, weapons, or social media sites) as well as specific sites. The purpose for web-filtering systems is generally twofold: to prevent access to sites that are clearly not work related and to protect the organization from accessing sites that may be hosting malware.
A virtual private network (VPN) is a technique used to encapsulate network traffic flowing between two systems, between a system and a network, or between two networks. Typically, encryption is employed along with encapsulation so that the contents of the traffic cannot be read by anyone who intercepts the traffic.
VPNs are most commonly used for remote access, as well as to protect information flowing over the Internet between specific organizations.
Intruders are incredibly efficient at finding ways to break into an organization's networks. They do so to steal valuable data that they can easily monetize. The techniques used and the defensive countermeasures include the following:
New developments that keep networking professionals on edge include the following:
Cryptography is the art and science of hiding data in plain sight, and plays a key role in protecting data from onlookers and adversaries. In this section, you examine this mysterious craft and discover how it's used to protect sensitive data.
block cipher
certificate authority (CA)
ciphertext
cryptanalysis
cryptosystem
decryption key
digital certificate
digital signature
encryption
encryption algorithm
encryption key
hashing
key management
key length
message digest
nonrepudiation
plaintext
pseudorandom number generator
public key cryptography
steganography
stream cipher
watermarking
Although cryptography is often used as part of a complex system, it's often easier to think of cryptography in isolation, in the simple-use case of a message sent in plain sight from a sender to a receiver.
Encryption is the process of transforming plaintext into ciphertext through an encryption algorithm and an encryption key. Decryption is the process of transforming ciphertext back into plaintext, again with an encryption algorithm and the encryption key. In part, the strength of encryption is based on the key length (the number of characters that make up the encryption key) and the complexity of the encryption key.
An implementation of encryption and encryption keys is known as a cryptosystem. An attack on a cryptosystem is called cryptanalysis.
Most encryption algorithms employ a pseudorandom-number generator (PRNG), which is a technique for deriving a random number for use during encryption and decryption.
The two basic ways to encrypt data are by block cipher and by stream cipher. Details follow:
Block ciphers are most often used to encrypt Internet-based streaming services. On the Internet, everything is transmitted in packets, which are individually encrypted using block ciphers.
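The stream-cipher idea — combining plaintext with a key-derived keystream — can be illustrated with a toy construction. This is for illustration only; it has none of the analysis behind real ciphers such as AES or ChaCha20:

```python
import hashlib
import itertools

def keystream(key):
    """Yield an endless pseudorandom byte stream derived from a key.

    Toy construction: hash the key with a counter, block by block.
    """
    counter = itertools.count()
    while True:
        block = hashlib.sha256(key + next(counter).to_bytes(8, "big")).digest()
        yield from block

def xor_cipher(data, key):
    """Stream-cipher style: XOR each data byte with a keystream byte.

    XOR is self-inverting, so the same function both encrypts
    and decrypts.
    """
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

key = b"shared secret"
ciphertext = xor_cipher(b"attack at dawn", key)
plaintext  = xor_cipher(ciphertext, key)
print(plaintext)  # b'attack at dawn'
```

A block cipher differs in that it transforms fixed-size blocks (say, 16 bytes at a time) rather than a continuous byte stream.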
Hashing is used to create a short fixed-length message digest from a file or block of data; this is something like a fingerprint: Message digests are unique and difficult to forge. Hashing is often used to verify the integrity of a file, or the originator of a file, or both. Common hashing algorithms include the following:
A digital signature is a hashing operation carried out on a file. Depending on the implementation, the digital signature may be embedded in the file or separate. A digital signature is used to verify the originator of the file.
A digital certificate is an electronic document that consists of a personal or corporate identifier and a public encryption key, and is signed by a certificate authority (CA). The most common format for a digital certificate is X.509. The use of digital certificates and other tools such as strong authentication can make it impossible for an individual to plausibly deny involvement with a specific transaction or event. This property is known as nonrepudiation.
The two main types of encryption keys in use today are
Private keys and symmetric keys must be jealously guarded from adversaries. Anyone who obtains a private or symmetric encryption key can decrypt any incoming encrypted message. The management and protection of encryption keys is known as key management.
Software programs often employ passwords to protect encryption keys. Hence, the strength of the cryptosystem is only as strong as the password protecting its keys.
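The dependency of key strength on password strength can be seen in key derivation: a key-derivation function such as PBKDF2 stretches a password into an encryption key, so the key is only as hard to guess as the password protecting it. A minimal sketch using Python's standard library:

```python
import hashlib
import os

def derive_key(password, salt, iterations=200_000):
    """Derive a 256-bit key from a password with PBKDF2-HMAC-SHA256.

    The iteration count slows down brute-force guessing; the value
    here is illustrative and should follow current guidance.
    """
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = os.urandom(16)
key1 = derive_key("correct horse battery staple", salt)
key2 = derive_key("correct horse battery staple", salt)
key3 = derive_key("wrong password", salt)

print(key1 == key2)  # True: same password and salt yield the same key
print(key1 == key3)  # False: a different password yields a different key
print(len(key1))     # 32 bytes, i.e., a 256-bit key
```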
Two encryption alternatives provide some of the same features as a cryptosystem:
Encryption is not a magic sleeping pill. Instead, there are numerous worries ranging from new types of attacks to official government misbehavior. Let's take a look at what's keeping networking professionals awake at night.
Every successful networking professional is familiar with the basics of computing architecture and design: how computers are architected internally, and the ways they are used, including virtualization and cloud computing. This is networking's bread and butter.
Common terms in computing architecture and design
bus
central processing unit (CPU)
cloud computing
dedicated public cloud
guest
hybrid cloud
hypervisor
infrastructure as a service
main storage
operating system (OS)
platform as a service
private cloud
process
secondary storage
software as a service
trusted platform module (TPM)
virtualization
Networking professionals must understand how computers are designed and how they function. This applies to computers used on-site and computers that are a part of cloud-based services.
Networking professionals need to understand how computer hardware functions, so that they can ensure that the hardware is properly managed and used. Modern computers are made up of the following components:
A computer operating system (OS) consists of the set of programs that facilitate the operation of application programs and tools on computer hardware. The components of an OS include the kernel (the core software that communicates with the CPU, memory, and peripheral devices), device drivers (which facilitate the use of bus devices and peripheral devices), and tools (used by administrators to manage resources such as available disk space).
The main functions performed by an operating system are
An operating system can run directly on computer hardware or through a scheme called virtualization, in which many separate copies of operating systems can run simultaneously on a computer. In virtualization, the main controlling program is called the hypervisor, and each running OS is called a guest. The hypervisor's jobs are to allocate computer hardware resources to each guest and to prevent guests from interfering with each other.
Virtualization permits an organization to make better use of its resources. Instead of running one operating system per server, multiple operating systems can run on each server, making better use of hardware investment and rack space.
With commercial virtualization tools, OS instances can be moved from one hardware platform to another (and even to one data center to another), and OS instances can be easily cloned to enable more running copies of a server if demand requires it.
The adoption of cloud services is in full swing despite the fact that many still don't understand how cloud services work. An organization using cloud computing has chosen to use computing or application resources that are owned by another organization and located away from the organization's premises.
The three common types of cloud services follow:
An organization can utilize cloud services in the following ways:
Issues that tend to keep networking professionals on their toes include:
IT operations is the heartbeat of organizations using computers and networks to support their key business processes. Many networking professionals begin their networking careers in IT operations; some spend their entire careers there.
The concepts in IT operations in this section follow ITIL (the IT Infrastructure Library), a widely adopted framework of IT service management practices. To learn more about IT operations, get a copy of ITIL For Dummies by Peter Farenden.
Networking professionals spend their time working in one or more IT operations processes, which are summarized in this section.
Change management is a basic IT operations process concerned with the management and control of changes made in IT systems. A proper change management process has formal steps of request, review, approval, execution, verification, and recordkeeping. The purpose of change control is the discussion of proposed changes before they take place, so that risks and other issues can surface.
All process and procedures, including those in incident management, vulnerability management, and change management, should be formally documented.
Configuration management is a basic IT operations process that is concerned with the management of configurations on servers, workstations, and network devices. Configuration management is closely related to change management. Whereas change management is involved with the process of performing the change, configuration management is involved with the preservation of present and prior configurations of each device and system.
The IT service (help) desk is a central point of contact for users who require information or assistance. Users call or send a message asking for help with some matter concerning the use of a computer, network, program, or application.
In many organizations, IT service desk personnel are trained to assist with several common and easily remedied issues, such as forgotten passwords, simple support issues related to office productivity tools, accessing central resources such as file servers, and printing. When service desk personnel are unable to assist directly, they will refer the issue to a specialist who can offer more advanced assistance.
Most organizations utilize a ticketing system to track open issues, the time required to fix issues, and resources used. When used properly, a ticketing system can be a rich source of statistics related to the types of assistance that users are requesting and the resources and time required to provide it.
The velocity of harmful events has accelerated to the point where manual review of incident logs is no longer an effective means of detecting unwanted behavior. Besides, the volume of available log data (gigabytes in the smallest organizations and terabytes in larger ones) is too great for humans to review.
Organizations must centralize their logging. All devices and systems should send their log data to a purpose-built central repository, called a security information and event management (SIEM) system. But more than that, organizations need tools that not only detect serious security events but also alert key personnel and even remediate the incident in seconds instead of weeks or months as is often the case.
Hardware problems, software problems, and disasters of many kinds make organizations wish they had a backup copy of critical information. Most organizations routinely perform backups, where important data is copied to backup media or servers in some remote location.
When data is no longer needed, it must be effectively destroyed so that unauthorized parties can't recover it. Some companies pay little attention to this task, but others do a thorough job of data destruction. No, you won't get to smash obsolete printers out in a field, but you might get to operate some cool equipment to destroy unneeded data.
Antimalware, antivirus, and advanced malware protection (AMP) refer to tools designed to detect and block malware infections and malware activities.
Organizations often combat malware by having several layers of control in place (known as a defense in depth), including the following:
The recent potency of malware is leading many organizations to enact more controls in the form of intrusion prevention systems, which block the command-and-control traffic associated with malware.
Remote access is both the business process and the technology that facilitates an employee's ability to remotely access information systems that are not accessible from the Internet.
In the business process sense, many organizations permit only a subset of their workers to remotely access the internal environment. In the technical sense, remote access usually includes encryption of all communications, as well as two-factor authentication to make it harder for an attacker to gain access to internal systems.
The fact that many organizations are opting to use cloud-based systems instead of internal systems makes some aspects of remote access obsolete. Or, to put it another way, cloud-based systems turn everyone into remote workers because their information systems are no longer located in their internal networks.
Incident management is an IT operations process used to properly respond to operational and security incidents. Organizations should have written incident response plans along with training and testing to ensure that these plans are accurate and that personnel understand how to follow them. Written playbooks for common incidents should be developed.
Security incidents are just a special type of incident management, requiring a few additional considerations such as regulatory requirements and privacy laws.
The steps in a mature incident management process are
Not all security incidents require all these steps, but this framework ensures an orderly and coordinated response to reduce the scope and effect of an incident, as well as a debriefing and follow-up activities to reduce the likelihood or effect of a similar incident in the future.
Vulnerability management is an IT operations process concerned with the identification and mitigation of vulnerabilities in IT systems. An effective vulnerability management process includes procedures for identifying vulnerabilities, prioritizing them, and making changes (through change management) to eliminate vulnerabilities that could be used by an intruder to successfully attack the environment.
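One way to sketch the prioritization step is to rank findings by a severity score weighted by asset exposure. The field names, scores, and weighting below are hypothetical, not from any real scanner:

```python
# Toy vulnerability findings; CVE identifiers are placeholders.
findings = [
    {"host": "web01", "cve": "CVE-0000-0001", "cvss": 9.8, "internet_facing": True},
    {"host": "db01",  "cve": "CVE-0000-0002", "cvss": 7.5, "internet_facing": False},
    {"host": "hr03",  "cve": "CVE-0000-0003", "cvss": 4.3, "internet_facing": False},
]

def priority(finding):
    # Weight Internet-facing systems higher; tune weights per policy.
    return finding["cvss"] + (2.0 if finding["internet_facing"] else 0.0)

# Fix the riskiest findings first.
for f in sorted(findings, key=priority, reverse=True):
    print(f["host"], f["cve"], round(priority(f), 1))
```

Remediation of the top-ranked items then flows through the change management process described earlier.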
Honorable mentions that we won't describe include the following:
Yes, a lot of issues in the world of security operations keep security managers lying awake at night.
Innovations in malware packaging have rendered antivirus and antimalware software virtually ineffective against advanced malware. Additional layers of detection and prevention tools are needed to combat the threat. These tools have significant costs in terms of capital as well as manpower to maintain them.
The greatest fear is that malware creators will soon develop new ways of circumventing even advanced malware protection (AMP) tools, thereby requiring even greater investment in effective defenses.
With employees in so many organizations bringing personally owned devices to work for use in their daily job duties, organizations are finding it more difficult than ever to keep track of sensitive data and know when that data is leaving its control.
Physical security is concerned with the protection of personnel at work locations, as well as information systems and related media and equipment. Supporting environmental controls and power protection are also a concern.
Although physical and environmental security systems are generally not a part of a networking professional's responsibility, being familiar with these systems will help you do your job better and keep you out of trouble!
Common terms in physical and environmental security
biometrics
digital video recorder (DVR)
electric generator
exterior lighting
fence
fire extinguisher
guard
guard dog
heating, ventilation, and air conditioning (HVAC)
inert gas fire suppression
key card
line conditioner
mantrap
PIN pad
razor wire
smoke detector
sprinkler system
tailgating
uninterruptible power supply (UPS)
video surveillance
visitor log
wall
This section discusses the most common concepts in security measures that are employed to protect personnel and equipment.
Organizations should implement a level of site access security commensurate with the value of information and assets in the facility. The following types of controls contribute to the security of a work location, whether a facility is a data center or primarily used by employees:
Secure siting, also known as a site survey, is a process of searching for and analyzing a work site for nearby hazards and threats that could pose a risk to the security or safety of a work site and the personnel and equipment within.
Typical hazards that a site survey would identify include the following:
Measures need to be taken to protect equipment and personnel in work locations, including the following:
Information-processing equipment (computers, network devices, and so on) is highly sensitive to even slight fluctuations in electric power. The following specialized equipment ensures a continuous supply of clean electric power:
An electric generator and a UPS are typically used together to ensure continuous power. Because electric generators take several seconds to a minute or longer to activate, a UPS supplies power while the generator is starting up.
Many UPSs have built-in line conditioners, so standalone line conditioners are uncommon, except in environments where electric power is reliable but noisy.
People and information-processing equipment operate best within a narrow temperature and humidity range. (Humans are more tolerant of a wider range in temperature.)
Heating, ventilation, and air conditioning (HVAC) systems regulate temperature and humidity in buildings containing personnel, computers, or both. HVAC systems are especially important in data centers, which generate a considerable amount of waste heat that must be continuously moved away from computers to prevent overheating and premature failure.
Many newer data centers rely on circulation of outside ambient air (with particulate filtering) as opposed to refrigeration to provide cooling at a significantly lower cost.
Many facilities incorporate redundant controls to ensure continuous availability of environmental needs. Redundancy allows for continuous protection in the event of equipment failure as well as routine maintenance. Examples of redundant controls follow:
Issues in the physical and environmental security realm that keep security professionals awake at night include the following:
Because of their integral role in supporting business processes, information systems are in the crosshairs of laws and regulations. Computers are frequently involved in civil and criminal investigations, requiring forensic procedures when collecting evidence from computers and other electronic devices. Some networking professionals will have the opportunity to work in these areas.
Even though networking professionals are not attorneys, it is helpful for them to understand the laws, regulations, and other legal requirements that drive compliance efforts in organizations. It's also helpful for networking professionals to understand how security investigations should be conducted. This is fun stuff!
Many countries have enacted computer crime laws that define trespass, theft, and privacy in the context of information systems. In the history of law, computers are still new, and the laws governing them continue to develop and change frequently.
These frequent changes in laws, regulations, and legal standards present a challenge to information security and legal professionals, who must remain compliant with these laws while also recognizing cybercrimes when they occur.
Regulations on many topics have been enacted for various industries. In the information technology world, HIPAA (Health Insurance Portability and Accountability Act) requires the protection of healthcare-related information, and GLBA (Gramm-Leach-Bliley Act) requires the protection of customer information in the financial services industry.
Common terms in regulations, investigations, and compliance
chain of custody
COBIT
COSO
forensics
GLBA
HIPAA
ISO/IEC 27002:2013
Compliance is a matter of adhering to laws, regulations, contractual obligations, and policies. It takes a determined effort to know all compliance obligations in an organization, and more effort to achieve compliance. Many organizations develop or adopt a framework of controls to track compliance on an ongoing basis. Suitable frameworks include
Security investigations are an organization's response to isolated security incidents that have little direct effect on business operations. Still, the events requiring investigation can be important in other ways because they can have significant legal implications.
Any event that takes place in an organization in the context of computers where possible future legal action is involved may require an investigation with forensic rules of evidence in play. These rules include
For an organization to prevail in any related legal proceedings, a trained individual with dedicated tools and hardware must carry out these forensic procedures.
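One common forensic practice supporting chain of custody is computing a cryptographic hash of each evidence file at the time of acquisition, so that its integrity can be verified later in legal proceedings. The following Python sketch illustrates the general idea; the function name and parameters are illustrative, not part of any standard forensic toolkit:

```python
import hashlib

def evidence_hash(path: str, algorithm: str = "sha256",
                  chunk_size: int = 65536) -> str:
    """Compute a cryptographic hash of an evidence file.

    Recording this value when evidence is collected lets an examiner
    later prove the file has not been altered: recomputing the hash
    must yield the same digest.
    """
    digest = hashlib.new(algorithm)
    # Read in chunks so large disk images don't exhaust memory.
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()
```

In practice, a trained examiner records the digest alongside the who, what, when, and where of each handoff, which is the documentation that makes the chain of custody defensible in court.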
Following are two issues keeping networking professionals on their toes:
A number of trends in business affect jobs in networking. These trends go beyond the technical aspects already covered and include
It is human nature for a company to want to have formal possession and control of the assets on which the company relies. Historically, companies have wanted to own everything involved in the creation of their products and services. Some companies have taken this approach to the logical extreme and seek to own everything from mining the materials to owning the stores where the products are sold.
A more modern approach is to own the parts of the process where you have your best differentiation from the competition, either in lower price or better quality, and rely on suppliers to provide commodities or other undifferentiated parts of your solution. For example, the Ford Motor Company refined its own steel starting in 1917. In the 1960s, the company thought better of this approach and sold its steelmaking assets to focus on the parts of car making that it could do better.
This approach can be applied to information technology, and by extension, networking. A given company could look at data connections as a commodity, then look at their computer network as a commodity, and then look at their data centers as a commodity. What the heck, they could look at everything in IT, including application development and systems integration, as a commodity.
Companies can change their approach as circumstances in their business evolve. One year they can choose to outsource. Another year, they may choose to bring a part of IT back in-house and discontinue working with outsourcing vendors.
No single strategy as to the right degree of in-house management versus outsourcing applies at all times. The best you can hope for is that companies will be judicious in adjusting their outsourcing strategy. If a company is not judicious, the result will be repeated layoffs and rehiring as work shifts between the company and its vendors.
The last major trend under consideration is a company's policy on telecommuting. Many cities and regions have offered companies incentives to encourage telecommuting because providing these incentives is less expensive than building more roads.
Many employees prefer doing their job while wearing their jammies. Plenty of organizations have embraced this approach because it allows them to operate in a smaller office space. But it puts a burden on the network to ensure proper security and adequate resources.
The shift toward telecommuting has not been without problems. Chief among them is a loss of camaraderie among employees, which reduces opportunities for collaboration. This is a particular problem in companies that produce creative products and services and in engineering-oriented companies.
As a result, some senior executives are reversing policies on telecommuting. Some employees are now required to work in the office a certain number of days per week, and some positions formerly held by telecommuters must now be filled on-site.
Re-accommodating employees who once telecommuted and now work in the office is not the hardest challenge to address; it mainly involves reinstalling standard communication services. The real difficulty is that some policies are as fickle as fashion, and networking is expected to adapt instantly to these policy changes.