CHAPTER 5: INFORMATION SECURITY

5.1 Context

“I think computer viruses should count as life. I think it says something about human nature that the only form of life we have created so far is purely destructive. We’ve created life in our own image.” – Professor Stephen Hawking, theoretical physicist and author.

Information is precious and we should guard it well. For example, while writing this chapter I knocked on an elderly neighbour’s door only to discover a Post-it note stuck to the porch window: “Dear Postman, I am not at home today. Please leave the parcel I’m expecting at the depot”. The note may have saved the courier a wasted journey, but it also advertised an empty house to every passer-by.

This chapter therefore examines threats to one of our most critical types of asset: information. An employee’s ability to readily access any necessary company information is a vital ingredient in the running of a fast-moving, inspirational enterprise. Yet security and IT departments must also ensure that some, or most, of the information held by the company is kept as safe and secure as possible. This is not just for reasons of legal compliance, broadly covered in Chapter 3: Legislation and Regulations; it is also because an unguarded release of important information – whether accidental or deliberate – can lead to such knowledge being lethally exploited by our adversaries. Stark US propaganda posters produced during World War Two famously warned: “Loose Lips Might Sink Ships”. This was mainly a warning to civilians and resting soldiers against idle chat and the disclosure of sensitive information in public places. Nevertheless, in the digital media era, many companies are waking up to a stark corporate reality: idle security planning around information assets will, equally, threaten individual and corporate survival.

When most of us depend on ICT to make a living, how can we begin to mitigate critical information loss? This chapter will also outline some core procedures and concepts designed to protect our business-critical information and provide ICT systems with continuity. Concepts such as intelligence, espionage and insider threat will be examined. What do they mean? Are they realistic? How can we adapt to diverse, multilateral threat vectors in order to achieve the right balance between corporate defence (security and IT) and market-facing business functions such as sales, business development and marketing? Finally, this chapter offers some recommendations to allow readers to plan for a proportionate and integrated set of countermeasures. The chapter is broken down into several sub-sections:

5.1  Context

5.2  Why target our information?

5.3  Intelligence and espionage

5.4  Insider threats

5.5  Counterfeiting and IP

5.6  Technical security countermeasures

5.7   Cyber security

5.8   Mitigation: developing a security policy and standards

5.9   Wrap-up: why consultants appear to be succeeding or failing in the emerging cyber security market

5.2 Why target our information?

The philosopher Francis Bacon is usually credited with one of history’s most unforgettable maxims: “knowledge is power”. Another, lesser-known truism was reported by accountancy company Deloitte: “Many companies lack a clear understanding of exactly what their intellectual property is …” (1). This conclusion is unsurprising, but it also serves as a stark warning to security managers. Consider, in the widest possible terms, where your company’s most important corporate information is located. Only then can we prioritise protections across the relevant domains of people, property and assets.

Case study: Who might target our information?

The following five real-world scenarios demonstrate just how diverse threats and risks to personal and company information can be:

•   Under-pressure company researchers in the Pakistan defence sector downloaded counterfeit anti-virus software produced by a neighbouring Chinese company. They were under severe time constraints while trying to fix a pre-contract delivery design fault, but they failed to carry out due diligence or to plan for continuity issues that could have prevented sensitive research data being sent abroad.

•   A government special adviser from a NATO member state lost their BlackBerry in a suspected ‘honey trap’ incident while visiting China. The unencrypted phone included sensitive diplomatic, professional and personal data. Many emails and texts related to senior government ministers and other NATO member-state political leaders (2).

•   A tracking device was reportedly found attached to the car of an England football captain. The player believed that family and friends were leaking his whereabouts to the media (3).

•   A leading television comedian and script writer had his main laptop stolen. Script drafts and vital editorial comments for the latest hit television series had not been backed up (4).

•   A cyber-attack group, known as Anunak or Carbanak, reportedly sent malicious software (malware) to hundreds of US-based banking staff, enabling the group to steal more than £300m from customer bank accounts. The same group had previously attacked major US retail companies. The malware installed a variety of network reconnaissance tools, including key loggers and other ‘bespoke attack software kits’. Video footage was used to establish bank employee work patterns. Further ‘painstaking research conducted by the attackers places it on a comparable level with cyber attack campaigns attributed to nation states’, reported business security group CSARN (5).

In its popular annual Data Loss Barometer publication, accountancy company KPMG point to information loss now being a way of life for anybody with an online profile: “Over the past five years, more than one billion people globally have been affected by data loss incidents” (6), researchers found. Securing our valuable information has perhaps become the most kaleidoscopic of all tasks: whichever way we tilt the lens when we look at risk, a host of other interconnected risks fall into play. For example, handwritten scripts by a treasured TV producer could easily burn in a house fire or fall into the hands of competitors or burglars. Moreover, they would still need typing up and emailing to all of the production team, including the actors!

Two common challenges face security practitioners as they engage with information and cyber security issues. First, as with the hazard identification stage of any risk assessment, it remains extremely difficult to identify threats and vulnerabilities. Second, the naked dependency of most organisations upon ICT, and the sheer speed and agility of organised cyber attackers, mean that – at the time of writing – the initiative rests very much with the aggressors, who are often mercurial in structure and motivated in purpose. Companies and security contractors are increasingly turning to ISO27001, the international standard for information security management systems (ISMS), in order to introduce proportionate and sensible controls across ICT platforms and the work domains that any high-tech machinery is supposedly there to serve (7). Compliance with the international standard is also becoming a prominent benchmark by which customers assess the suitability and credibility of contractors within their supply chain. Because information security risks are so overarching, familiarity with ISO27001 is strongly recommended for any security practitioner.

What assets need protection?

According to expert Alan Calder, information security can be defined as the “preservation of confidentiality, integrity and availability of information”. Calder also notes that information security “cannot be achieved through technology alone” (8). Intellectual property assets that typically require some level of planned protection include: customer data (including bank card and personal contact details); contracts, deals and negotiations; passwords; staff information; research and development findings; business/market intelligence; designs and processes, including the organisation’s own business continuity and crisis management plans; and IT systems and infrastructure. We may also need to plan further protections around certain key people or sites within an organisation. In turn, property and assets held by key persons, or at core facilities, such as vehicles, mobile phones and computers, are particularly attractive targets for adversaries. The knowledge and information chain extends well into family networks, friends, suppliers, and even routine locations used by those holding precious information, or who are targets for prying eyes. For example, several newspapers reported that a hidden listening device had been found at a favoured restaurant, and preferred table, used by Premier League football manager José Mourinho (9).
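
To make Calder’s three qualities tangible, the hypothetical Python sketch below shows how a simple information asset register might record confidentiality, integrity and availability impacts and rank assets for protection. The asset names, owners and ratings are illustrative assumptions only, not examples drawn from this chapter.

```python
# Illustrative sketch only: a minimal information asset register recording the
# three qualities named above (confidentiality, integrity, availability).
# All asset names, owners and ratings are hypothetical.
from dataclasses import dataclass


@dataclass
class InformationAsset:
    name: str
    owner: str
    confidentiality: int  # impact of disclosure, rated 1 (low) to 5 (critical)
    integrity: int        # impact of corruption or tampering
    availability: int     # impact of the asset being unavailable

    @property
    def priority(self) -> int:
        # A simple prioritisation rule: the worst of the three impact ratings.
        return max(self.confidentiality, self.integrity, self.availability)


register = [
    InformationAsset("Customer card and contact data", "Head of Retail", 5, 4, 3),
    InformationAsset("R&D findings", "Research Director", 5, 5, 2),
    InformationAsset("Business continuity plans", "Security Manager", 4, 5, 5),
    InformationAsset("Staff HR records", "HR Director", 4, 4, 2),
]

# List assets in descending order of priority so that protection effort follows impact.
for asset in sorted(register, key=lambda a: a.priority, reverse=True):
    print(f"{asset.priority}  {asset.name}  (owner: {asset.owner})")
```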

In sum, the success of a security management function depends heavily upon its contribution to data protection issues and cultures within any project or organisation. This can only be done by evaluating and anticipating weaknesses and then closing the gaps. Details matter. The loss of a principal’s smartphone, or the failure to recover an unencrypted memory stick containing sensitive data, can nowadays be just as embarrassing for a security function as a serious physical security breach.

Nevertheless, if ICT security procedures are too constrictive, the enterprise might stall, customer response times may lengthen, and investigations and prosecutions may grind to a halt. Disproportionate access restrictions can therefore invite the law of unintended consequences, where compliance actually deteriorates because exhausted and frustrated employees seek to ‘cut corners’. They may even parachute themselves out of the gridlocked office window into friendlier and more relaxed corporate climes. According to accountancy company Deloitte: “Employees without access to flexible IT policies are less satisfied with their job. Only 62% of employees without access to flexible IT policies report feeling satisfied at work. Up to 83% of employees with access to flexible IT policies (such as social media access) report feeling satisfied at work” (10).

Calculating the value of information

Understanding the value of information to an organisation is by no means an easy task; indeed, some critics say that it cannot sensibly be done. The Information Assurance Advisory Council (IAAC), however, provides a wide array of useful advice in this sphere to help companies and contractors manage risk and provide ‘assurance’ in delivering governance, security and process integrity to information and data systems (11).

Some companies go to great efforts, and lay on convoluted processes, to protect sensitive data. Moreover, many clients won’t realistically want to share critical information with incoming security contractors, at least in the first instance. Despite a client’s reluctance to share the type of information that can positively inform risk assessment and security planning, security contractors must crack on with the job and be content to make some informed assumptions. For instance, pharmaceutical or drinks recipes, foreign bank account numbers, embarrassing complaints against senior employees, luxury assets and company heirlooms are often stowed away for safe keeping by company owners and chief executives. Even if the client keeps a security contractor ignorant of such important details, the disappearance of such assets will hardly endear the security function to the executive team afterwards!

The discipline of valuing company information is sometimes referred to as infonomics. Information analyst Doug Laney has developed several models for calculating the value of information, which can be of particular help to chief information officers (CIOs). Data can be separated into two categories, argues Laney. The first, non-financial data, may not carry an actual price tag but can be prioritised and ranked according to its ‘intrinsic value’ (12). The second category is tangible financial data, which can be modelled along the lines of established accounting practices in order to arrive at a numerical cost and/or quantity of loss. This cost-value approach was applied by many companies after the 9/11 terrorist atrocities in New York: a value is given to the data “by measuring lost revenue and how much it would cost to acquire the data”, reports Laney (13).

If you’re struggling to do this, then adopt the approach of any decent private investigator. Step into the shoes of a potential adversary and look at the scenario through their eyes. What are they seeking to take from your organisation? What are they seeking to achieve? What do they believe they will gain? Loss, or ‘shrinkage’, an American retail idiom, can be calculated in financial denominations. But other denominations can include loss of market share, profit or revenue, or customer lifetime value (CLV). Moreover, reputational loss could even be equated to a percentage loss of support, perhaps for an NGO or political party. (This may, in turn, cause a future revenue loss due to a reduction in subscribers.) A variety of predictive analysis techniques – such as financial modelling, data mining and decision trees – may already be in use by the client in order to establish information assurance; one would hope that this is the case with any medium-sized or larger corporation. Security contractors should make themselves aware of any information assurance scoping work and policies undertaken by a client: a company’s approach to information assurance is often reflective of its wider culture and management approaches.
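
As a rough illustration of the two categories described above, the hypothetical Python sketch below pairs a non-financial ‘intrinsic’ ranking with a simple cost-value estimate (what it would cost to re-acquire the data, plus revenue lost while doing so). All figures, weightings and asset details are assumptions made for the purpose of the example, not a method prescribed by Laney.

```python
# Illustrative sketch only: two simple ways of expressing the value of an
# information asset, loosely following the two categories described above.
# All figures, names and weightings are hypothetical assumptions.

def intrinsic_rank(accuracy: float, completeness: float, scarcity: float) -> float:
    """Non-financial score (0-1): how correct, complete and rare the data is."""
    return round((accuracy + completeness + scarcity) / 3, 2)


def cost_value(acquisition_cost: float, lost_revenue_per_day: float, days_to_recover: int) -> float:
    """Financial cost value: re-acquisition cost plus revenue lost while rebuilding."""
    return acquisition_cost + lost_revenue_per_day * days_to_recover


# Example: a customer database that cost roughly £250,000 to build and would
# take 30 days to reconstruct, losing £10,000 of revenue per day in the meantime.
print("Intrinsic rank:", intrinsic_rank(accuracy=0.9, completeness=0.7, scarcity=0.8))
print("Cost value (£):", cost_value(250_000, 10_000, 30))
```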

Predicting loss calculations can be a straightforward exercise in some cases. For example, the cost of researching and developing a new medical drug for the global market can be around $5bn (14). Some information can be described as ‘perfect information’ for businesses, because there is a large degree of certainty around costs and reasonable benchmarks for comparison (15). For instance, the theft of assets or the destruction of property can be calculated by insurers and accountants with brutal efficiency. Yet in any of these scenarios, the loss of information and intellectual property can cause incalculable further damage, perhaps unseen in the aftermath and only more visible with the passage of time. We therefore cannot always reasonably anticipate every business loss, or count the cost beforehand, because there are often additional human and operating-environment impacts that can reverberate unpredictably for many years afterwards. For example, can we ever really know how many government contracts a large company lost solely because its employees had an inconvenient habit of leaving unencrypted memory sticks and company laptops containing client details in restaurant bars and on commuter trains? There may have been many other reasons for losing potential future contracts, such as the rise of competitors or a change in procurement policy. Prolonging the search for perfect, quantifiable information – in contrast to accepting imperfect data – may well delay strategy, or direct us down a path of inquiry that is not central to our task: holistic information security. Don’t get buried by maths and algorithms. Elevating the overall principle of information security to a consistent priority among security practitioner teams is, perhaps, the safest overall calculation to make from the outset.

5.3 Intelligence and espionage

In a security context, the words intelligence and espionage are often used interchangeably, although to do so is not always accurate.

For many, both terms conjure up images of clandestine operations, or a murky undercover world of mutual assured deception. When intelligence and espionage activities become publicised they can attract a high degree of profile, alarm, apprehension and incredulity, or admiration and amazement, such as the revelations in the 1950s that British and American spies had tunnelled under Soviet-controlled East Berlin and were reading diplomatic telegrams to and from Moscow.

Critics have seized on the many obvious ethical and operational pitfalls of gathering intelligence, and of practising the tradecraft known as espionage. Yet many security practitioners, business organisations and policy-makers see significant value in supporting and deploying intelligence work and operations. This willingness stems, in part, from the fact that the overwhelming majority of the data upon which intelligence is built appears to be open source (OSINT) and freely available. As with so many topics that attract mass-media hyperbole, the narrative is often far more frightening than the rather mundane reality!

Moreover, many of those who depend on, or deploy, intelligence may feel closer to the frontline of a risk scenario than their detractors, who may be more ignorant of the perceived ground truth. Consumers and producers of intelligence may well perceive themselves and their colleagues as the last line of defence in mitigating a threat. They might even be in clear and present danger themselves. They might also seek to ‘get ahead’ of unfolding events that will directly impact them, and therefore to learn more about their operating environment and develop a better level of knowledge than any market competitor. While some or all of this may be the case, clients and recipients may not necessarily wish to be too closely associated with the sources of information. One former head of the British domestic security service testified (Rimington: 190) (16):

“Ministers for their part, may well have thought that although the intelligence services were essential, they were a potential embarrassment to the government and the less they knew about them the better.”

It is against this sensitive, publicity-shy backdrop that those tasked with directing, collecting, processing and disseminating intelligence must usually operate.

What is meant by the terms intelligence and espionage?

In a security context, intelligence suggests the gathering, collating, synthesising and secure dissemination of information. Intelligence is refined information of a supposedly advanced quality, and it must fulfil a somewhat abstract set of criteria: it should be timely, accurate and relevant. Intelligence products are therefore actionable for the client: they stimulate activity and decision making, even if that decision is, in security parlance, an ‘NFA’ (no further action).

Espionage is a method of gathering intelligence: a trade involving clandestine and covert spying to obtain information which can then be processed into intelligence. Put simply, espionage is getting information from somebody covertly (17). Despite the end of the Cold War (circa 1947-91), the conduct of economic, diplomatic and military espionage by many governments remains widespread. Espionage can conjure up exciting images of static and mobile surveillance, or of moles undertaking slow-burn ‘sleeper’ roles in which companies or public authorities are penetrated over a long period of time. Moreover, many cases of espionage are conducted by insiders; usually, but not always, disgruntled or avaricious employees who pass on critical information to adversaries.

Why do organisations gather intelligence?

•   To achieve competitive advantage

•   To target scarce resources more effectively and/or reduce duplication

•   To gain prior warning of a major incident, which could save lives, prevent injuries or avert huge financial losses

•   To identify threats and exposure to risks

•   To detect and prosecute criminals and terrorists

•   To defend the realm and prevent subversion

•   To acquire sensitive knowledge in order to ‘control’ the subject and/or carry out blackmail

•   To gain material or financial benefit.

All of these reasons boil down to one single, overall aim for gathering intelligence: to reduce uncertainty.

How is the intelligence cycle organised?

As with most conceptual applications in the security sphere, there is no single, authoritative model that illustrates the end-to-end development of an intelligence product. Given the kinetic nature of security management scenarios, and the vastly different operational conditions from task to task, there simply cannot be a one-size-fits-all approach. But the CIA model outlined below does provide a convenient and plausible template that is often emulated by other organisations:

The ‘Action-on’ CIA Intelligence Model


Source: CIA website (18)

Intelligence cycles tend to include at least four core components:

Direction: The beginning of the process. The client informs the intelligence manager what they need or wish to know. A recurring challenge is the ‘information void’, whereby decision makers do not know enough about what they need to know to give clear direction in the first instance, because they begin the task with a negligible amount of useful existing intelligence. This stage can also be used to issue feedback and change tactical direction in response to prior intelligence shortcomings.

Collection: Raw data and information are collected from a variety of sources which may help intelligence analysts at the next stage of the cycle. Sources can include: Geospatial (GEOINT), Human (HUMINT), Imagery (IMINT), Intercepted Communications (SIGINT), Measurement and Signature (MASINT) and Open Source (OSINT).

Processing: The human analysis and assessment of available information, collated, synthesised and processed into timely, relevant and accurate intelligence. Successful assessment and interpretation of data by intelligence analysts is akin to correctly piecing together available jigsaw pieces, within a given timeframe, to establish the most accurate situational picture possible for the client. The intelligence product is also sanitised before being disseminated, in order to remove details which may compromise sources or methods used to gather information or expose the client. For example, an intelligence agency may insert aliases to protect the identity of sources, or those under observation.

Dissemination: The process of moving and communicating the actionable intelligence reports to the client and/or any given end-user. This must happen in a secure manner which will not compromise any of the three prior phases of the intelligence cycle.
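
The four stages above can be pictured as a simple pipeline. The hypothetical Python sketch below is a minimal illustration only: the sources, sanitisation rule and report format are invented for the example, and a real cycle is iterative and far less tidy.

```python
# Illustrative sketch only: the four-stage cycle described above expressed as a
# simple pipeline. Sources, sanitisation rule and report format are hypothetical.
from dataclasses import dataclass


@dataclass
class Requirement:
    """Direction: what the client needs or wishes to know."""
    client: str
    question: str


def collect(requirement: Requirement) -> list[dict]:
    # Collection: raw items from different source types (dummy data here).
    return [
        {"source": "OSINT", "detail": "Press reports of attacks on bank networks"},
        {"source": "HUMINT", "detail": "Contact 'ALPHA' reports phishing of branch staff"},
    ]


def process(raw_items: list[dict]) -> str:
    # Processing: collate and assess, then sanitise (here, by withholding sources).
    assessed = "; ".join(item["detail"] for item in raw_items)
    return f"ASSESSMENT (sources withheld): {assessed}"


def disseminate(requirement: Requirement, product: str) -> None:
    # Dissemination: deliver the finished product to the client/end-user.
    print(f"TO {requirement.client} RE '{requirement.question}':\n{product}")


task = Requirement(client="Head of Security", question="Are our branch networks being targeted?")
disseminate(task, process(collect(task)))
```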

Information hierarchy

Of course, the production of relevant information and appropriate knowledge sharing can be very useful to business and government decision makers within any security operating environment. But merely gathering useful information, or employing company researchers to do so, still falls far short of the specific benefits that disciplined business intelligence processes can usher in. Business intelligence is the impartial pursuit of insightful and highly refined information that will reduce operational uncertainty for organisations. For example, reports in daily newspapers may tell us that hackers have been attacking IT networks run by commercial banks. This will indeed be very useful for a security manager in Manhattan, but they might not be able to action this information other than by issuing a general warning to staff, perhaps urging them to be more vigilant and report anything suspicious. More specific and insightful knowledge – such as prior methods, trends and timings, the most popular targets, the scale of expertise and research used by perpetrators, or whether there was a suspected underlying reason why some banks or staff networks were prioritised as targets – could all fairly be construed as actionable business intelligence.

As such, intelligence is considered by many to sit at the apex of knowledge management models (see Figure 20).


Figure 20: The knowledge hierarchy

Knowledge hierarchy explained

Data: Raw information gathered or received from a third party which has not been analysed or subject to any form of processing.

Information: This could be data that has been sifted, or been subject to a basic to moderate degree of processing, such as translation, cross-referencing, formatting, comparative analysis and double-checking.

Knowledge: Supporting information, or pre-existing understanding and comprehension, which is required to help assess, evaluate, organise and process incoming data and information into actionable intelligence.

Intelligence outcomes

We have seen that intelligence products are principally designed to provide critical insights into defined ‘subjects’ or target areas. Demands for intelligence often come in the form of priority information requirements (PIRs) or slightly less urgent information requirements.

As we have established, intelligence is information which has been refined, assessed and disseminated, usually with the intention of triggering or assisting certain action by the client/end-user. Conversely, intelligence can also be used to keep a watching brief on potential adversaries, or to monitor certain threats and risks in order to confirm NFA. Intelligence gathering therefore helps to provide context and clarity for our decision making. It can help to clear a logical pathway through an operating environment riddled with uncertainty; a domain that the great Prussian military strategist Carl von Clausewitz, in his posthumously published treatise On War, famously characterised as the ‘fog of war’ (20). Forms of intelligence product include:

•   Identifying incumbent organisational and personnel weaknesses and possible courses of action (COA)

•   Identifying risks to employees and the organisation through threat assessments

•   Target group profiling (TGP) and monitoring

•   Issuing early warnings to decision makers

Why is business intelligence becoming more popular?

Companies are usually looking to allocate their resources sensibly and proportionately in order to achieve strategic objectives. In striving for these outcomes, against ferocious competition, they may ask employees and business divisions to work to SMART objectives: specific, measurable, achievable, relevant and time-bound. Management is often implemented through individual employee targets established as key performance indicators (KPIs), against which specific individual and team performance can be measured and recorded. Such methods are used by companies to drive forward momentum and a sense of corporate interoperability, and to reinforce quality controls and a sense of fairness, because everybody supposedly fits into one clear system of corporate rules. KPIs and corporate management quantification techniques are on the rise, asserted management author G.T. Doran, and that was back in the early 1980s (21)!

The growth of opportunities for security contractors to develop extra revenue lines from business intelligence products corresponds to these increasingly scientific business management cultures. Information research departments are nowadays expected to directly support, and align with, core business goals in any company. Moreover, there is little tolerance for the random ‘background briefings’ and snippets from in-house subject matter experts which may have impressed and reassured executive boards in the past. Significant reputational loss, punitive legal penalties, market sanctions and bottom-line cash loss are the consequences that any business can expect if it stumbles haphazardly into a security-related crisis.

Several influencing factors have converged, at the time of writing, to establish a global, market-wide desire for intelligence-led decision making. The quantification of business management approaches, ICT convergence, the expansion of corporate litigation, and the spending squeeze wrought by the latest international financial crisis have all driven executive boards and corporate governance committees to insist that they are equipped for decision making with timely, accurate and relevant information. The idea of an ‘activity trap’ has been posed by crisis management academic Geary Sikich and others, who drew isomorphic lessons from the 2010 Deepwater Horizon oil disaster, mentioned earlier in this book, which was caused by an accidental and explosive blow-out at an offshore well operated by a subsidiary of the oil major BP. Management academics noted that in some large companies the overall commercial goal becomes eclipsed by an entrenched adherence to traditional process. Companies become automated and inflexible, and unable to adapt to changing conditions within their operating environment. Poor ground intelligence has been identified as a contributor to many corporate disasters, but this is often not the real problem; it is the inability of a big organisation to adapt fast enough to new information or incoming ‘intelligence’ that is so often at fault (22).

In response to such concerns, many companies and senior executives now seek specific business intelligence reporting, either in-house or contracted in. The Business Innovation Council has adopted “intelligence-led” security approaches, in which a blend of “current, estimative and research” intelligence products are woven together to help inform executive decision makers about security strategy and the courses of action to take, reports Computer Weekly magazine (23). Some CEOs report that their entire corporate agenda-setting is intelligence-led, because they could not possibly fix medium- to long-term objectives without the most appropriate, up-to-date insights to facilitate the best possible decision-making cultures.

Where does intelligence fit into the risk management process?


Figure 21: Risk management model

In contrast to merely utilising intelligence at the beginning of the risk management process (Identify & Analyse Exposures), intelligence activity and reporting can play a critical role throughout the entire risk management process and its five core phases (see Figure 21). Organisations will wish to stick to strategy and objectives. But those that are more adaptive – and are therefore able to avoid or mitigate emerging risks, or exploit emerging opportunities – tend to be those that have embraced and embedded business intelligence techniques within their own organisational cultures. Just as people who take regular exercise and health checks tend to enjoy greater longevity, so it is in the sphere of organisational resilience: active knowledge-gathering about upcoming problems, followed by exercising and adaptive techniques to evade the risk, are core ingredients for corporate survival and prosperity. “Facts do not cease to exist because they are ignored”, as the Brave New World author Aldous Huxley poignantly wrote (25).

Integrity and compliance

Gathering intelligence is controversial. The integrity of information, source protection and legal compliance are all critical responsibilities that help safeguard business intelligence functions and an organisation’s overall reputation. If information is gathered or handled illegally, or source identities become compromised, the consequences for a corporate body or government organisation can be calamitous; for the individuals directly affected, intelligence failures can lead to tragedy. Before undertaking any intelligence function you will need to be aware of the different local, regional, national and intergovernmental protocols, rules and legislation which may support or constrain your initiative (see Chapter 3: Legislation and Regulations).

In some sectors, such as banking and insurance, business intelligence functions are ubiquitous cantilevers that underpin and influence almost every other sphere of work. Business intelligence functions can include financial risk monitoring, anti-fraud detection and due diligence around new depositors; these measures are warmly welcomed by many consumers and stakeholders, including the media and political decision makers, especially since the 2007/8 banking crash, caused in part by too much bad debt poisoning the wells of the banking sector. However, extra tiers of red tape for customers, and the use of private investigators by private companies, have also attracted public ire. Moreover, the term intelligence can often provoke unsettling perceptions of sinister, Big Brother-esque controls, whether in workplace or public domains.

Because of the negative connotations that intelligence gathering can carry, the terminology used by an organisation to describe its intelligence functions often avoids a straightforward, brazen identification of a particular unit’s purpose; namely, ‘intelligence’. For example, one security manager explained to me: “The current organisation that I work for is a large retail organisation and we use intelligence gathering and management, often referred to as horizon scanning, in a number of ways in order to anticipate future threats and plan the response. The company have a dedicated team of six analysts who look at horizon scanning and the information is then managed by the business continuity and executive protection managers” (26). Conversely, some organisations may prefer greater transparency, or even a public demonstration of force projection, by stating that they do have intelligence units; this may dissuade and deter various security risks, including fraud, money laundering and cyber attacks. One major bank recently hired a former head of the UK’s MI5 onto its board and remains one of many to operate a ‘financial intelligence unit’ (27).

Corporate espionage

Corporate espionage, sometimes referred to as industrial, commercial or economic espionage, is a covert method of business intelligence gathering carried out to provide companies with information that they hope will give them a competitive advantage over their rivals. The practice, unlawful and unethical in many but not all domains, is usually driven by a desire to reduce research and development costs and time.

Case studies of alleged corporate espionage

Company-on-company espionage

In 2012, manufacturer Dyson took a case to London’s High Court, alleging that designs and product assembly information for the high-speed motors and microchip technology in its brushless vacuum cleaners had been sold by an internal employee to German competitor Bosch (28). Product research and development had reportedly cost Dyson around £100m, and the corporate know-how for these valuable products had been built up over 15 years. Bosch denied the claims.

During 2008, the US State Department was so concerned that American business travellers would suffer the theft of valuable corporate information while visiting the Olympic Games in Beijing that it authorised formal warning messages from the Overseas Security Advisory Council. In response, Washington DC duly received the diplomatic wrath of China, the US’s largest bilateral trading partner, reported Forbes magazine (29).

Government-on-company espionage

China is deemed by US defence officials to be the world’s biggest state sponsor of ‘economic espionage’, according to the Pentagon’s annual report on the PRC’s military (30). Recent allegations levelled at Chinese state-owned companies for buying trade secrets include the Pangang Group allegedly acquiring DuPont’s titanium dioxide pigment technology, also known as ‘whiteners’, reported the Asia Times in 2012 (31). Computer security company ESET claim that computer worms originating in China targeted AutoCAD software used by architects and designers in Peru, and that thousands of technology blueprints were transferred to two China-based internet service providers. China’s government-run National Computer Virus Response Centre was reported to have helped ESET by blocking compromised email accounts. But this was not before the company concluded: “From our analysis of all the used email accounts we can derive the scale of the attack and conclude that tens of thousands of AutoCAD drawings (blueprints) were leaked” (32).

Employer-on-employee espionage

Employers appear to be increasingly turning to monitoring practices that could be defined by some as surveillance or espionage.

In the UK and many similar legal jurisdictions, covert spying on employees is lawful so long as specific limitations and conditions are satisfied. Several cases have hit the headlines. In 2002, one British corporate intelligence agency was reportedly employed to monitor the activities of senior staff at law firms, to establish whether lawyers were working for competitors and to uncover other details, reported the International Intelligence and Law Gazette (33). Two European supermarket companies, Lidl and Aldi, were exposed by the Daily Mail in 2013 for allegedly deploying secret cameras in staff lockers at some European locations. Staff alleged that personal information was then used to pressure certain employees into leaving their jobs. The companies firmly denied that information relating to private details was sought (34).

Monitoring of employees in the UK may only be carried out for the following reasons:

– To comply with wider legal requirements, including health and safety and the prevention of violence or other crimes, including those that could expose the organisation to a corporate culpability lawsuit

– To identify staff training, customer service and productivity improvements

Small business adviser Lesley Furber writes that UK employers need to comply with the following laws when they choose to carry out monitoring or surveillance of employees:

•   The Regulation of Investigatory Powers Act 2000

•   The Telecommunications (Lawful Business Practice) (Interception of Communications) Regulations 2000

•   The Data Protection Act 1998 (including the 2003 Code, Monitoring at Work) – Employers must act in accordance with the Act and its eight key principles (see Chapter 3).

Nevertheless, consideration of human rights law is also crucial. According to Furber: “The [UK] Human Rights Act 1998 also plays an important role here as it gives individuals a right to privacy and the UK’s laws try to recognise that employees may feel that monitoring by their employer at work is intrusive”. Furber continues: “… employers need to find a balance between an employee’s legitimate expectation to privacy and the employer’s interests when they monitor their staff, in any way” (35).

5.4 Internal risks

The European Network and Information Security Agency reported in 2009 that human error is consistently the most likely cause of an information breach (36). This capacity for self-sabotage is unlikely to alter in the short term. A lack of risk management awareness among non-security-minded employees can therefore be a relatively easy business case for security managers to make. This is borne out by a plethora of reports illustrating ignorance of, or lassitude towards, information management in the approaches of many employees and, indeed, in systems access controls. PricewaterhouseCoopers (PwC) found that 75% of large companies “allow staff to use smartphones and tablets to connect to their systems”. According to the firm, one public sector organisation found that it was leaking sensitive information through the front door via social media platforms, rather than through any previously suspected back-door skulduggery (37).

Nevertheless, security community practitioners are hardly immune from perpetrating inadvertent, or even deliberate, losses of critical information. For example, a security guard at weapons manufacturer BAE Systems offered to sell research and design secrets from the company’s Stanmore centre to a Russian buyer, who actually turned out to be an MI5 investigator. During the 2001 court case, it transpired that documents, reports and printouts had mainly been pilfered from office desks. According to a court reporter: “One document referred to specifications for Prophet Asic, an advanced electronic warfare surveillance system designated a NATO secret. Two documents were marked ‘UK Eyes Only’” (38). When the names, addresses, national insurance numbers and personal bank details of 25 million people – kept on disks by UK HM Revenue and Customs – were reported missing in 2007, a leading politician was able to say that the incident was “another blow to people’s trust in this government”. The organisation’s reputation was dealt a further blow when a national newspaper revealed that the missing data included both the original and newly assigned names of approximately 350 people sheltered by police witness protection programmes (39).

The risk of failing to comply with strict information protection laws, such as RIPA 2000, the Data Protection Act 1998 or their historical antecedents, can also bring security practitioners into disrepute. Such cases are not so much attacks upon internal information systems, but they can reasonably be construed as internal threats to information security, due to the grave consequences that any such compromise brings, including subsequent embarrassment, notoriety, legal punishment and the public airing of an organisation’s entire operating processes in a courtroom. A pre-eminent modern example is the successful prosecution of private investigators and journalists who unlawfully intercepted and listened to voicemail messages on behalf of the now-defunct weekly tabloid the News of the World. More than 100 reporters, private investigators, security contractors and police officers were arrested for offences associated with the Computer Misuse Act 1990, payments to and bribery of public officials, conspiracy to intercept communications, and breaches of the Data Protection Act and RIPA. At least one investigator illegally accessed subscriber information from telecommunications company databases (40).

History also reminds us of ‘pressure cooker’ operating environments in which internal security functions can be driven into disreputable conduct. In America, President Richard Nixon was forced to resign after ordering a mole hunt to find out who had leaked the Pentagon Papers: detailed defence department accounts drawn from national intelligence estimates and survey reports gathered by US and Vietnamese officials during the Vietnam War (1963-75). Nixon’s own chief of staff sanctioned the burglary of several office buildings by private investigators, including, in 1972, the Watergate complex, home to the opposition Democratic National Committee. The whole saga ended for Nixon’s administration as a self-destructive farce, but it remains a salutary lesson for all security professionals who can become consumed by the urgency of the here and now. The ultimate irony is that the papers were always intended for public consumption.

Several years before, Defence Secretary Robert McNamara had personally authorised the compilation of the Pentagon Papers so that future American policy makers could ‘learn the lessons’ of Vietnam, which he hoped would never be repeated (41). Moreover, Nixon was caught out by his own, recently beefed-up, surveillance systems. The same President had personally authorised tape-recording within the White House’s famous Oval Office and across the executive branch in order to catch out any so-called ‘moles’. Audio tapes eventually released to Congress revealed that the President had sanctioned covert listening operations against his own staff, elected members of Congress and other political opponents (42). The abuse of power was vast, and Nixon was at his career peak. He had just won re-election in 1972 by a record majority, and his successful negotiations with both the USSR and Communist China had helped to end the Vietnam War. Despite this ascendancy, the President was forced to quit by 1974. Congress concluded that he and his team (some of whom were jailed) had clearly broken their own country’s constitution and laws, including the prohibition of hostile surveillance on US soil (43). On the eve of Congressional impeachment proceedings, Nixon resigned his office. Yet he argued passionately, until his death, that such investigations served a higher purpose: to prevent the publication of material useful to the enemy. This purpose may well, in his view, have been grounded in a genuine moral perspective. The problem was that, in terms of legal compliance, Nixon and his team had no lawful defence.

The overweening power of those in authority (or of those acting without public permission) to conduct surveillance on organisations’ information systems is generating an interesting new line of opportunity for cyber security educationalists and consultants; namely, how to become as anonymous and bulletproof as possible. Invasive information-gathering techniques, from both governments and less legitimate gatherers of data, are to some extent fuelling a revised approach to online security which is more guarded and less confident in specific or bespoke mitigation countermeasures. At least one organisation has dubbed this new phenomenon a ‘zero trust’ era. IT network management specialists Cryptzone say that: “IT today is in the middle of a paradigm shift … It used to be taken for granted that an IT department could draw a line between trusted and untrusted environments. Now those boundaries are becoming blurred – a development that has exposed organisations to countless new cyber attack patterns” (44).
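
The ‘zero trust’ idea can be illustrated with a small, hypothetical sketch: rather than trusting anything simply because it sits inside the network perimeter, every request is re-authorised on its own merits (identity, device health and multi-factor authentication). The roles, resources and rules below are assumptions invented for the example, not a description of Cryptzone’s product or of any real system.

```python
# Illustrative sketch only: a per-request access decision in the 'zero trust'
# spirit. Users, resources and rules are hypothetical.

ACCESS_POLICY = {
    # resource: roles permitted, regardless of where the request originates
    "payroll_db": {"hr_admin"},
    "design_archive": {"engineer", "hr_admin"},
}


def authorise(user_role: str, device_compliant: bool, mfa_passed: bool, resource: str) -> bool:
    """Grant access only if identity, device health and MFA all check out for this request."""
    if not (device_compliant and mfa_passed):
        return False  # no implicit trust, even for 'internal' devices
    return user_role in ACCESS_POLICY.get(resource, set())


# The same user asking for the same resource is re-checked on every request.
print(authorise("engineer", device_compliant=True, mfa_passed=True, resource="design_archive"))   # True
print(authorise("engineer", device_compliant=True, mfa_passed=False, resource="design_archive"))  # False
print(authorise("engineer", device_compliant=False, mfa_passed=True, resource="payroll_db"))      # False
```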

5.5 Cyber security

The US Department of Homeland Security’s cyber security division called it right when it wrote that: “Today’s world is more interconnected than ever before. Yet, for all its advantages, increased connectivity brings increased risk of theft, fraud, and abuse” (45). The UK Government, in its Cyber Strategy document, speaks in similarly sober tones: “There is a growing realisation that technologies contain vulnerabilities that are being attacked and exploited and as awareness of this problem spreads, the dependence of modern economies and societies on internet technologies has become alarmingly clear” (46). During March 2015, UK police forces arrested more than 50 individuals on charges related to cyber crime. Attempts had been made to attack several prominent websites, including those of Police Scotland, the US Department of Defense and a popular search engine. “Further arrests were made in connection with various cyber fraud campaigns, the theft of intellectual property from a financial services company in London and the purchase, use and distribution of various cyber attack tools” (47). Given that there are more than 200 national police jurisdictions and two billion internet users in the world, it is safe to assume that the cases above represent no more than the visible tip of an iceberg!

In the US, the Secret Service maintains responsibility for several electronic crimes task forces. The agency runs a cyber intelligence section which, it states, has “directly contributed to the arrest of transnational cyber criminals responsible for the theft of hundreds of millions of credit card numbers and the loss of approximately $600 million to financial and retail institutions” (48). This is an impressive track record indeed. But, again, the cases above can fairly be described as the visible tip of an iceberg. Towards the end of this chapter, we take a closer look at mitigation methods and management frameworks for repelling the type of cyber attacks that are fast becoming defined as top-priority security risks for corporations, governments and individual citizens alike. Indeed, the British Government’s Department for Business, Innovation and Skills has begun to keep a close eye on cyber-related threats. Its 2013 report, the Information Security Breaches Survey, found that attacks against small businesses had increased by ten percent in just one year, costing some six percent of turnover (49). US-based financial security experts at Javelin Strategy and Research found that identity theft occurred every two seconds in America in 2013 (50). Experts attribute the upward trend in online fraud to several contributors, including a surge in data and money transfers carried out using applications (apps) and mobile devices. For example, at the London 2012 Olympics, some 50% of web users were using mobile platforms (51). There is a perception, driven by western media organisations, that most cyber threats emanate from the eastern hemisphere and are inflicted upon victims in the western hemisphere. In truth, the risks are far more complex and pervasive. For example, in preparing for the 2014 Sochi Winter Games, Russia’s finance ministry declared that one in five malware attacks was targeted against financial services, and that one in five Russian nationals had experienced fraud as a consequence of mobile app usage (52). This is something for security managers responsible for operations and personnel in any domain to bear in mind.

So what other types of cyber attack might security managers want to keep an eye out for? Towards the end of 2014, one of the first significant cases of cyber terrorism occurred, when Sony Pictures Entertainment became a lightning conductor for alleged North Korean wrath over plans to release a movie depicting a plot to assassinate the country’s leader, Kim Jong-un.

Case study: Sony Pictures entertainment hack and terror alert

On 24 November 2014, a group of hackers calling themselves the ‘Guardians of Peace’ released confidential data belonging to Sony Pictures Entertainment. The data included personal details such as salaries, communications between employees and unreleased film scripts. One news organisation reported that the haul also included some 47,000 social security numbers which, among other uses, can be used for bank account verification in the US. One of the most vivid leaked emails was from a movie producer who described actress Angelina Jolie as ‘a minimally talented spoilt brat’. After eight tranches of ‘data dumps’, the scenario took a sinister twist. The Guardians of Peace issued a threat on 8 December 2014:

“We will clearly show it to you at the very time and places ‘The Interview’ be shown, including the premiere, how bitter fate those who seek fun in terror should be doomed to. Soon all the world will see what an awful movie Sony Pictures Entertainment has made. The world will be full of fear. Remember 11 September 2001. We recommend you to keep yourself distant from the places at that time. (If your house is nearby, you’d better leave.) … Whatever comes in the coming days is called by the greed of Sony Pictures Entertainment. All the world will denounce the SONY.”

This was the first reference to a movie titled The Interview, a comedy depicting an attempt to assassinate the leader of North Korea. The two lead actors, Seth Rogen and James Franco, cancelled media appearances relating to the film, and many cinema chains declined to screen it. On 18 December, the White House confirmed that it was treating the hack as “a serious national security matter” (53). Sony set aside $15m to deal with the business disruption that followed. The method of attack was well planned and took more than a year to accomplish. Malicious software, dubbed ‘wiper’ because it destroys data on a target’s hard drive, had been implanted into Sony’s network infrastructure. Some computers were rendered inoperable and several related social media accounts were hijacked. Monetary compensation had been demanded from the outset, but the emails had been missed or ignored by executives. Responsibility for the attack is, so far, unproven.

The latest attack upon Sony is an exemplar of just how detailed cyber attack planning can become. After all, cyber crime is a very lucrative business. Researchers at Group-IB in Moscow found that Russian-speaking hackers accounted for one third of the world’s cyber crime market in 2011, making some $4.5bn (54). One tenth of this revenue came from mobile internet banking fraud, and spamming campaigns, particularly for counterfeit products, raised a similar amount. For hackers, who attack both domestic and international markets, and who are often subcontracted by clients via the dark web to target specific institutions or sectors, the risks are far lower and the rewards far higher than in traditional crime. Cyber crime groups organise into a centralised ‘management system’ and outsource work to ‘specialist teams of hackers’, report Group-IB. “This trend leads to the merging of the two criminal worlds with the subsequent resource allocation from the mafia’s traditional areas of control – prostitution, drug and arms trafficking and so on – in favour [sic.] of cyber crime (55).”

IT services provider IBM produces an annual Cyber Security Intelligence Index, which reported in 2014 that US companies were attacked, on average, 16,856 times during the previous year (56). IBM also point to under-reporting issues: “some victims aren’t even aware they’ve been compromised”, the report concedes (57). This will certainly be the case for the several thousand US citizens who, during 2014 and 2015, had their online tax-return accounts hacked by cyber criminals who filed inaccurate data on their behalf and pocketed millions in fraudulent tax rebates well before the victims or the Internal Revenue Service became aware. Although one in five businesses told IBM that cyber attacks caused them ‘lost productivity’, many companies, conversely, fear that taking further loss prevention measures to address cyber-related risks might actually harm productivity to an even greater extent, and thus engender a law of diminishing returns (58).

Moreover, the case of Edward Snowden, the American systems administrator who leaked more than one million classified intelligence documents to a variety of media organisations during 2013, demonstrated that perhaps the biggest single point of failure in any IT system remains the ‘insider threat’. Snowden, who is arguably more famous than his own country’s president in virtually every country outside the US, has caused many to change the way ‘we view security’, writes security company Panda Security (59). The Canada-based Centre for International Governance Innovation found in its comprehensive survey of international internet users that “some 39% of respondents claimed they regularly change their passwords, and that they do so more frequently than in the previous year” (60). A humdinger of a dilemma for the security consultant follows from this feeble statistic: what about the other 61%? How do we wake them up to the clear and present dangers posed by public cyber security inertia?

Insider threats and risks

In Maroochy Shire, Queensland, millions of gallons of raw sewage were pumped into the local environment via a computerised waste management system. “Marine life died, the creek water turned black and the stench was unbearable for residents”, reported the Australian Environmental Protection Agency (61). Police later found that a contractor at the sewage plant, Vitek Boden, held software on his hard drive that enabled him to control the sewage management system. According to the UK Centre for the Protection of National Infrastructure (CPNI), Boden had used the internet, wireless radio and his inside knowledge to carry out an attack over a remotely controlled SCADA system, something he had attempted 46 times over the preceding weeks (62). Boden was jailed for two years in 2001 for his crime. His motivation: revenge for being turned down for a permanent job with the local council.

In 2008, five men were jailed for the largest known cash heist in British criminal history. Four gangsters, dressed as police officers and disguised by facial prosthetics, had kidnapped the cash depot manager, his wife and his young son at gunpoint. They then raided the facility in Tonbridge, Kent, locked its terrified employees in cages, and escaped with more than £53m in bank notes. Attention soon came to focus upon a security contractor, Emir Hysenaj, who had worked at the depot. CPNI reported that Hysenaj was accused of providing information to the criminal network ahead of the raid, using a hidden camera to film the inside of the depot. Shift patterns and interior protective security arrangements were recorded. It was only after the event that colleagues purportedly recollected Hysenaj’s heightened interest in the security arrangements at the depot (63). Hysenaj was sentenced to 20 years’ imprisonment on three criminal counts.

These two selected cases highlight just how devastating the risks from inside security lapses can be. In each scenario, the resilience of entire organisations, and human lives, were at stake. CPNI carefully defines ‘insider risk’ as “The potential damage that can be caused to an organisation from an ‘insider’ within their workforce who uses their legitimate access for unauthorised purposes ...” (64). Henrik Kiertzner, at the UK defence research company QinetiQ, explains that managing the risk from information security violations should also take into account acts which are not always deliberate. Insider threat can be summed up as the “Threat of compromise of your internal systems from an internal source ... generally speaking it falls into two categories: malicious and non-malicious” (65).

Network and cyber security specialists had been working with the concept of insider threats for decades before revelations from Wikileaks (founded 2006) and Edward Snowden exploded into the public domain. The SANS Institute began its mission in 1989 as a co-operative education and research organisation for computer security professionals. It has since grown to be the largest information security certification body in the world. Perhaps some of its most prominent work has been to establish 20 so-called Critical Security Controls, published in 2013 following collaborative meetings of US and international experts, agencies and corporations (66). This menu of information security protections has been particularly useful in assisting American companies and agencies to respond, both in strategy and practice, to the US President’s Executive Order 13636 on ‘Improving Critical Infrastructure Cyber Security’ (67). SANS also offer a clear definition of an internal threat: “An insider is a trusted member of your organisation such as an employee or a contractor who intentionally causes harm ... such as infecting our computers, causing our network to crash or stealing confidential information” (68).

Motivations are almost countless, and the same can be said for the range of potential targets. For example, Rory Byrne, a security adviser to humanitarian organisations, explains that his “experience has uncovered that insider threats – like disgruntled employees or paid cover sources like cleaners or security guards – are becoming a common intelligence tactic used against human rights NGOs by governments” (69).

Is there anything we can do to anticipate insider risks?

According to information security specialist Ramkumar Chinchani, self-taught ‘insider’ perpetrators are often more difficult to defend against because the nature of their attack is more subtle; the targeting pattern is more often a soft probing of system or personal/organisational vulnerabilities than the kind of ‘brute force’ targeting that network defences and modelling may pick up beforehand (70). In order to bring a sense of process control and security strategy to companies and critical infrastructure, government agencies including CPNI advise dedicated risk assessment processes around specific insider risks (71). The further risk assessment models outlined by this author in Chapter 6 (6.2) will also assist security planners to identify, analyse, evaluate and (hopefully) mitigate internal risks.
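
To make the distinction concrete, below is a minimal sketch of the kind of baseline monitoring that can help surface subtle insider probing before it becomes an incident. It assumes an audit log of per-user daily resource-access counts is already available; the log values, threshold and function names are illustrative assumptions, not a prescription for any particular monitoring product.

from statistics import mean, stdev

def flag_unusual_activity(daily_counts, today_count, threshold_sigmas=3.0):
    """Flag a user's activity for review if today's access count sits far
    above their own historical baseline (mean + N standard deviations)."""
    if len(daily_counts) < 5:            # too little history to judge fairly
        return False
    baseline = mean(daily_counts)
    spread = stdev(daily_counts) or 1.0  # avoid a zero-width threshold
    return today_count > baseline + threshold_sigmas * spread

# Illustrative audit data: files or records touched per day by one user.
history = [12, 9, 14, 11, 10, 13, 12, 8, 11, 10]
print(flag_unusual_activity(history, today_count=57))   # True  -> review
print(flag_unusual_activity(history, today_count=13))   # False -> normal

Real deployments layer many such signals (off-hours logins, unusual data volumes, access outside a user’s normal role) and, as CPNI advises, feed them into a documented risk assessment rather than treating any single alert as proof of wrongdoing.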

When contemplating, addressing and treating insider risks, it is vital that legal, cultural, environmental and situational factors are taken into account at all times. For the manifestations of insider threats are seldom as clearly and neatly cut as newspaper copy writers, prosecutors and airtime producers might have us believe. The lines of distinction between strange practice and malpractice in the office are very often blurred. Security management staff can be at their most vulnerable if they make a bad ‘call’ about any threat, let alone one from the inside. A falsely accused employee, or clumsily monitored manager, will justifiably feel embarrassment and anguish if they – or others – come to realise they are under suspicion. So before we close this section, let’s pause and reflect on the ‘grey areas’ of insider risk management. Employees and colleagues whose behavioural patterns on network systems raise alarm bells may not be criminals at all: they could be uncovering malpractice which is not being properly addressed internally; quietly carrying out permitted tasks with senior approval, such as secretly testing business continuity; over-enthusiastic researchers; bored and underworked; or suffering from mental health issues.

5.8 Mitigation: Developing a security policy

“Cyber risk is not so much about the vehicle, but more about the individual using it.” – City of London Police Commissioner, Adrian Leppard, to a CSARN business conference, July 2014 (72)

Developing any form of mitigation strategy can seem daunting, particularly with insider threats, where the identifiable lead-in time (reconnaissance) before any attack is usually minimal to non-existent. The scale of the problem seems vast, and in most modern civilian workplace environments there is an employee assumption that access and convenience have commercial primacy over security and controls. Upwards of 80% of adults in your workplace use the Internet every day. On average, an employee uses two to three mobile data devices. More than four out of five employees use smartphones – often their own, or their families’ – to access work documentation (73). Authorities in the UK estimated that some £27bn was lost from the national economy due to cyber crime, including malicious attacks (74). In relation to global data, Britain’s plight is merely the tip of a never-ending iceberg: IT security specialists McAfee reported in 2013 that the world economy was damaged by anything from $300bn to $1tn by cyber-enabled attacks (75). Complexity around data protection policies grows when one considers the emerging trend of ‘remote workers’. Over the decade to 2009, the UK saw an upsurge from almost 2.3 million remote workers to 3.7 million (76). Britain’s mobile employee culture is similar to dozens of other jurisdictions; in fact the term ‘jurisdiction’ is almost becoming dormant for many employees and businesses. The growth of international business travel, covered in Chapter 8, and of executives who depend on ease of access to company data in order to win new contracts or drive forward new investments, adds at least one extra layer of vulnerability to information security risks.

Therefore, as US company Cryptzone envisages, an era of risk mitigation based upon zero trust (of anything or anyone) may well be arriving. Security managers will already be aware that some military, police and government environments are, perhaps, at an advantage when it comes to establishing sensible processes and policies around information security.

This is because there is a keen public expectation that sensitive national information – which can also include one’s own personal details, such as medical, tax, criminal and passport records – should be protected. Yet, as we have noted previously in this book, civilian business environments have, if anything, become less hierarchical and more relaxed in recent times. Some specific sectors or premises, such as luxury fashion or jewellery retailers, may well be able to enforce so-called ‘spot checks’ upon employees. But could such random IT testing, which might include email and social media scanning, ever become acceptable in a civilian work environment? It may be an inconvenient proposal for future employers and employees to consider together.

Access controls – A fun test of reverse psychology

Perhaps one task for security contractors is to ask busy colleagues to pause and reflect on rights and responsibilities for a moment. For it may be that we need to reverse the psychology of IT access entitlement pervasive in some work cultures! If colleagues are so happy for the company to grant them open-ended access to ICT, then, in turn, would they consider granting their company and executives unlimited access to their own personal devices and data? What do I mean? Well, if you think for a moment that this author has become adversarial, just consider the following case:

During 2014, brute force attacks levelled by hackers against an iCloud provider enabled the perpetrators to access and distribute very private photos of celebrities, including nude and other deeply personal images. Many victims were unaware that their phone provider automatically backed up their photo albums to the cloud. (Very helpful, no doubt, if the device gets lost or stolen.) In the first instance, the photos were circulated to a controlled audience of gleeful hackers within the dark web. Several days later, however, they were re-leaked onto a publicly accessible online bulletin board. Thousands of sensitive images immediately went viral. The world woke up to an inconvenient fact, one that many security practitioners grasped a long time ago: there is literally no such thing as private electronic data. Not in the national security arena. Not in a film star’s bathroom. Not anywhere. The question, then, to put to staff who use their own mobile data devices on workplace systems, or who oppose any policy prohibiting this, might be whether they are comfortable with their own most personal files being uploaded for all to see in the workplace domain, or even onto the Internet. Because any hacker targeting their workplace will hardly have the scruples to differentiate between bland corporate data stored on the company hard drive and the very personal data of employees who may have inadvisably or inadvertently plugged in a smartphone or a tablet. In fact, they and their online friends may well like your personal photos more!
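
The brute-force element of such attacks is one of the easier threat vectors to blunt on the service side. The sketch below shows the general idea of throttling repeated failed logins with a lockout window; the limits, account name and data structures are illustrative assumptions, not a description of any particular provider’s controls.

import time
from collections import defaultdict

MAX_ATTEMPTS = 5           # failures allowed before the account is locked
LOCKOUT_SECONDS = 15 * 60  # cooling-off period after too many failures

failures = defaultdict(list)   # account -> timestamps of recent failures

def allow_login_attempt(account, now=None):
    """Return True if a login attempt may proceed, False if locked out."""
    now = now if now is not None else time.time()
    # Keep only failures that fall inside the lockout window.
    recent = [t for t in failures[account] if now - t < LOCKOUT_SECONDS]
    failures[account] = recent
    return len(recent) < MAX_ATTEMPTS

def record_failure(account, now=None):
    failures[account].append(now if now is not None else time.time())

# Illustrative use: six bad guesses in quick succession lock the account.
for _ in range(6):
    if allow_login_attempt("jane.doe"):
        record_failure("jane.doe")
print(allow_login_attempt("jane.doe"))   # False until the window expires

Rate limiting does not, of course, close off the phishing and password-reuse routes into the same accounts, which is why it belongs alongside, rather than instead of, user awareness.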

How do we assess vulnerability?

The risk assessment process for identifying information or cyber security threats is no different to other forms of security risk assessment. Some basic modelling is shown in our next chapter (6.2), including a basic risk management wheel. For business continuity purposes, security practitioners will seek to work with colleagues across the business to both identify the threats and minimise (or eradicate) the impact. The core security risk assessment phases involve:

1.   Identify the threats and hazards

2.   Decide who or what may be harmed and how

3.   Evaluate the risks and decide on treatment and/or exposure

4.   Implement safeguards

5.   Monitor and review

When considering appropriate safeguards, the Institution of Engineering and Technology and the Centre for Protection of National Infrastructure recommend the following options (a short scoring sketch follows this list):

Avoidance: perhaps by deciding not to pursue the deployment of a vulnerable piece of equipment

Reduction: take steps to minimise the likelihood of any risk, or to lessen the impact of a threat scenario

Sharing: spread the risk with other partners, contractors or insurance

Retention: retain the risk in-house but put in place contingency measures, in case it does occur (77)
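
The phases and treatment options above lend themselves to a very simple scoring model. The sketch below multiplies likelihood by impact on a 1–5 scale and maps the score to one of the four IET/CPNI treatment options; the bands, scenarios and scores are illustrative assumptions chosen to show the mechanics, not recommended values.

def risk_score(likelihood, impact):
    """Classic qualitative model: both inputs scored 1 (low) to 5 (high)."""
    return likelihood * impact

def suggest_treatment(score):
    # Illustrative bands only; each organisation sets its own risk appetite.
    if score >= 20:
        return "Avoidance: do not deploy / withdraw the vulnerable element"
    if score >= 12:
        return "Reduction: add controls to cut likelihood or impact"
    if score >= 6:
        return "Sharing: transfer part of the risk (partners, insurance)"
    return "Retention: accept the risk with contingency measures in place"

# Illustrative register entries: (scenario, likelihood, impact)
register = [
    ("Unencrypted laptop lost off-site", 4, 5),
    ("Contractor retains network access after leaving", 3, 4),
    ("Short power outage in server room", 2, 2),
]
for scenario, likelihood, impact in register:
    score = risk_score(likelihood, impact)
    print(f"{scenario}: score {score} -> {suggest_treatment(score)}")

The value of even a crude register like this is that it forces the ‘monitor and review’ phase: scores and treatments are written down, revisited and challenged, rather than held in one manager’s head.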

Mitigation approaches – CIA triad

Ascertaining precisely what key building blocks an information security policy requires has stimulated widespread debate within the information security sphere. By far the most prominent security model is the memorably named ‘CIA triad’, and not solely because it shares its acronym with a certain famous US-based government agency. The triad is recommended by many experts, and by the ISO27001 information security management system standard, for its balance and ease of applicability on the one hand, and its rather robust security management framework on the other. Here’s how it works:

Confidentiality: Keeping information stored in a manner that is secure and only accessible by authorised individuals who have permission and purpose to access it. Information is often discretely separated and should in no way be accessible to unintended parties. Typical examples in HR domains could be health records, personal addresses, and spouse and next of kin details.

Integrity: Information is complete and, so far as possible, is unalterable by unintended or unauthorised parties. Such information could be credit and store card records with retailers, health records held by doctors, and academic achievement records held by schools, colleges and universities. (For example, a university student in the UK was jailed in 2015 for hacking into his departmental academic records and altering his marks from 57% to 73%.)

Availability: Policies to achieve maximum up-time and to withstand power outages, cyber attacks and hard drive failures; protecting business continuity and access to information that the business may need during periods of downtime.

Security practitioners should also consider the limitations of the CIA model. First, it provides only a framework for protecting and safeguarding information; it does not consider changes to your specific physical or personnel environment, which need to be captured by additional empirical risk assessments. Furthermore, the CIA framework is a distillation of various information security management models that have been debated and bandied about for several decades. In summary, the CIA triad offers three great building blocks towards a robust information security policy, but it should probably not be considered the finished information security policy product.
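
One practical way to put the triad to work is to record, for each information asset, how much confidentiality, integrity and availability it actually needs, and to let those ratings drive handling rules. A minimal sketch follows; the asset names, rating scales and derived controls are illustrative assumptions rather than a standard classification scheme.

from dataclasses import dataclass

@dataclass
class InformationAsset:
    name: str
    confidentiality: int  # 1 (public) to 3 (highly restricted)
    integrity: int        # 1 (tolerant of error) to 3 (must be exact)
    availability: int     # 1 (can wait days) to 3 (needed continuously)

    def handling_rules(self):
        rules = []
        if self.confidentiality >= 3:
            rules.append("encrypt at rest and in transit; named access only")
        if self.integrity >= 3:
            rules.append("change control and audit logging required")
        if self.availability >= 3:
            rules.append("replicate and include in continuity testing")
        return rules or ["standard baseline controls"]

assets = [
    InformationAsset("Staff health records", 3, 3, 2),
    InformationAsset("Public price list", 1, 2, 3),
]
for asset in assets:
    print(asset.name, "->", "; ".join(asset.handling_rules()))

Even this toy register makes the point that ‘security’ is not one dial: a public price list needs almost no confidentiality but high availability, while HR records demand the reverse emphasis.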

Mitigation approaches in medium and higher risk environments

Security contractors will be cognisant of the many features used in secure zones often operated within, or around, government, police and military establishments. Features to promote information security can include: restricted access areas, rigid network information firewalls, ‘clean’ zones and buildings where electronic devices are prohibited, strict procedures to prohibit entry by outside visitors, and provisions to prevent the movement of furniture. Moreover, since the 2013 revelations by renegade US defence contractor Edward Snowden, peer observation of network users handling classified materials is being considered across many secure domains. The difficulty is that such cases only become famous because they impact high-profile security establishments, such as the CIA or the US Pentagon. At street level, most employees simply don’t absorb how such cases of information insecurity relate to them, as evidenced by a reporter at the IT Security Blog:

“Many people fail to appreciate that value of the data they have gathered. They fail to appreciate the value of a strict IT policy mainly because all they care about is a workstation to use and opening files (both internal and external) as they please. So if you put all these things together, you can imagine the problems that an IT guy has to work with.” (78)

Another difficulty with location-based protective security arrangements is that such a ‘ring of steel’ can hardly extend beyond the perimeter walls of any installation. The introduction of such physical information security measures can leave employees much more vulnerable to physical hostile surveillance, burglaries, extortion and even ‘honey traps’ before and after working hours.

Companies should have little to fear from learning lessons available from other cyber security practitioners who are experienced in operating in complex and higher physical threat environments. In such environments, discipline around data protection does undoubtedly save lives and livelihoods. In the mitigation methods section below, we draw together some cyber and information security mitigation measures which have been deployed by a number of contractors in complex and hostile operating environments:

Technology

Biometrics: Access controls and authentication can be predicated on a range of options: finger and palm prints, DNA swabs, retinal scans, facial recognition and photographing.

Computer and network access: Establish induction and regular training courses, and an internal ‘driving permit’ for network computer use. From the outset, use this process to explain, detail and commit staff (perhaps by way of a users’ code of conduct) to acceptable practices and to embed the overall cyber security strategy. Be clear on network access firewalls: who has access to which areas and why. Explain and regularly reinforce messages as to ‘why’ data security is so critical (79). Emphasise that all employees own the process and are responsible for its overall success. Start out by issuing equipment where floppy drives and USB ports are either disabled or missing, so that access has to be granted by the IT or security function (80).

Encryption: By using widely available software, companies are able to encode information so that it cannot be easily understood by unauthorised persons. This process can be made more secure by the use of key code words, known only to those within an organisation, to reinforce data security against outsiders.
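
As an example of the ‘widely available software’ referred to above, the sketch below uses the open-source Python cryptography package’s Fernet recipe, which provides authenticated symmetric encryption. The plaintext is illustrative; in practice the key must itself be stored and shared securely (for instance via a hardware token or key vault), which is where organisation-specific procedures come in.

# Requires the third-party 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet

# Generate a key once and store it somewhere far safer than the data itself.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"Tender pricing - board eyes only"
token = cipher.encrypt(plaintext)    # safe to store or transmit
recovered = cipher.decrypt(token)    # only possible with the key

assert recovered == plaintext
print(token[:20], "...")             # unreadable without the key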

E-purge: Reset mobile data devices to factory settings before they are moved off-site or reassigned.

Radios: Deploy those with frequency-hopping capabilities (81).

Notepads and tablets: Prohibit multiple use and mixed social/business use of work-issue mobile devices. No personal mobile devices, including notepads and tablets, should be permitted to record confidential work notes. Use a personal unblocking code (PUC) on every work device.

Passwords: Introduce and embed single use and personal ownership of passwords: devices are not to be shared and default allocated passwords must be changed immediately. Establish dual or triple authentication layers with different, user-unique passwords for: protecting the hard drive, individual network access and various software and databases.
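
On the storage side of the same password advice, systems should never keep passwords in a recoverable form. Below is a minimal sketch of salted password hashing using only Python’s standard library; the iteration count and example passwords are illustrative assumptions, and production systems would normally rely on a maintained authentication library or directory service rather than hand-rolled storage.

import hashlib, hmac, secrets

ITERATIONS = 200_000   # illustrative work factor; tune to current guidance

def hash_password(password: str):
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest            # store both; discard the password itself

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("password123", salt, digest))                   # False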

Moving devices between sites: Consider prohibiting the removal of work-based devices and memory sticks (thumb drives) from work premises, except by authorised individuals and in an audited and documented manner.

Thumb drives: Can be signed for on being issued, and encrypted before and after use (82).

People and processes

Clear desk policy: Those who are too busy to keep a tidy desk are, unwittingly, the best friends of more hostile parties who may be equipped with audio bugs, have a penchant for stealing important files, or be looking to place mini video cameras.

Filing cabinets: Can be locked at all times, with designated employees as key holders. Original documents to be retained within the cabinet, with sign-in procedures for access and for making copies. Further security barriers to this information will be locked inner and outer doors within the site.

Interaction: Work topics – including descriptions of the operating environment and access controls – should not be discussed beyond the workplace, particularly in public or social situations.

Hard-copy and paper disposal: Burn bags and shredding devices should be available to all employees, and their daily use encouraged to prevent the leakage of confidential or important documents. Burn bags should be incinerated, with the process supervised by trained individuals (83).

Shared work and rest areas: Enforce the locking of workstations and devices while the employee is away from their desk. Be aware that many data leakages stem from internal sources, accidental and deliberate eavesdropping, and accidental and deliberate disclosure.

Telephone conversations: If sensitive, take calls in private offices or spaces where doors can be closed. Be aware that in today’s highly litigious and voyeuristic working environments, it is not uncommon for people to record phone conversations for all manner of motivations.

Awareness and training

Familiarisation and orientation: From the outset, show employees around accessible zones and actively identify any inaccessible zones where access controls are in place.

Cultural awareness: Help to develop emotional intelligence, team reassurance and knowledge of any cultural differences that may exist among employees. Without clear exposure to them from the outset, work or social cultural differences can nurture difficulties in the longer run. Such communication gaps may lead to the non-reporting, or misreporting, of security risks.

Personal development: Carry out a skills gap analysis from the outset to fully challenge and examine potential employee weaknesses and vulnerabilities which could expose the organisation to risk. Provide reassurance and training to address any skills gaps.

Values: Explain and reinforce the critical importance of collective security, put drily by former UN Secretary General Kofi Annan: “We all share responsibility for each other’s security, and only by working to make each other secure can we hope to achieve lasting security for ourselves” (84).

Embed and follow-up: Create an open and inclusive culture where colleagues are allowed to openly and informally provide ‘feedback’ to other team members, if they believe that they see a lapse in security.

Auditing and testing: Risks can only be minimised by testing in unison all three domains of cyber security: people, processes and technology.

Management mindsets: Information security policy

RAF: Risk management framework for information security controls

The following three core management principles – which conveniently form the mnemonic RAF – can be applied when devising an information security plan for medium to higher risk environments:

Robust: carry out a thorough assessment of vulnerabilities and design robust, realistic responses that can absorb or repel all threat vectors. Far better to set the bar high and adjust – if necessary – to a lower notch later.

Awareness: be clear to employees and network users around acceptable rules and processes. Be open and transparent, and provide regular training and reminders for users. Create a positive culture around reporting accidents, risks and misuse. Remind people of the Computer Misuse Act 1990 (85).

Facilitate: “If you don’t want to be replaced by a computer, don’t act like one”, so the saying goes. Remember: the IT network and you are both there to serve the business – not the other way around!

Source: Antoni D Bick and Richard Bingley, 2014

Further learning and support

As we have discovered, cyber and information security planning is one of the key growth areas for security management professionals. Many companies and public authorities have chosen to merge the information technology and security management functions entirely, with either the chief security officer, CIO or CISO now taking the lead in bringing the blended issues of security and information protection into the executive boardroom. Increasingly, security practitioners are being asked to manage, or work in close collaboration with, a myriad of information technology job roles including: information security and risk analysts, IT security managers and network security consultants. Those security practitioners who can articulate reasonable information risk management strategies, and also interface proficiently with this sphere’s subdisciplines – such as identity theft, network security and cryptography – are perhaps set to become corporate security’s new aristocracy. With so much at stake – possibly one quarter of the world’s GDP, says leading global management consultancy McKinsey – there can be little doubt as to the future of security management tasking (86).

Hundreds, if not thousands, of websites and recruitment agencies make an income from advertising cyber security related jobs, with such diverse vacancy titles as senior penetration tester, security architect, information assurance lead practitioner, senior information security specialist, information security team leader … the list of cyber pageantry has become endless (87). Clients’ expectation that security contractors have a handle on the brave new world of cyber threats is emerging as a ‘desirable’ or even ‘essential’ criterion for employment and contracts. Such is the dependency of people and property upon the security and continuity of their ICT systems that the security sector will continue to move inexorably towards becoming more technologically driven. Therefore, this book will now bring together a series of sources for further information and support, in order to assist the reader to further develop their knowledge and skills:

Academic centres of excellence in cyber security research: There are 13 universities in the UK that are formally recognised as ACEs by CESG, the UK Government’s technical authority for information assurance. At the time of writing these are: University of Bristol, Imperial College London, University of Kent, Lancaster University, University College London, Queen’s University Belfast, Royal Holloway – University of London, University of Southampton, University of Surrey, University of Birmingham, University of Cambridge, University of Oxford and Newcastle University (88).

Bruce Schneier website and blog: A chief technology officer and academic, Schneier has been a prolific and prominent author and soothsayer on IT security since his first monthly newsletter went live in 1998 (89).

Buckinghamshire New University: Foundation degrees in cyber security will be delivered from 2015 in classroom taught sessions and with flexible and distance learning options available (90).

CESG: The UK Government’s technical authority for information assurance which describes itself as “the Information Security arm of GCHQ”. CESG, which is an acronym for Communications-Electronics Security Group, advises government departments on information security but also works “with industry to ensure that appropriately assured products, services and people are available”. CESG provides training and awareness briefings, policy and guidance, and other products and services (91).

CPNI: The UK Centre for Protection of National Infrastructure produces various guidance on carrying out due diligence and background checks around ‘insider’ threats. Among several excellent guides are the Pre-Employment Screening: A Good Practice Guide (Edition 5: 2015), The Secure Procurement of Contract Staff: A Good Practice Guide for the Oil and Gas Industry (2011), and Resilience and Cyber Security of Technology in the Built Environment (2013), produced by the Institution of Engineering and Technology in partnership with CPNI (92).

CREST: CREST is a not-for-profit organisation that serves the technical information security marketplace. According to the organisation’s website, “CREST provides organisations wishing to buy penetration testing services with confidence that the work will be carried out by qualified individuals with up-to-date knowledge, skill and competence of the latest vulnerabilities and techniques used by real attackers” (93).

Graham Cluley: Cluley is a fast-emerging and prominent internet security expert who runs a popular newsletter named GCHQ (not to be confused with the British Government’s own Government Communications Headquarters). Cluley is independent, topical and well-informed, and offers practical mitigation advice, just like fellow authors Schneier, Krebs and Hunt (94).

Information asset protection and pre-employment screening group: This influential group of practitioners is run by ASIS International, formerly the American Society for Industrial Security. The forum advises on and produces protection of assets manuals, and coined the phrase ‘information asset protection’ in its 2008 PoA manual (95).

Information Assurance Advisory Council: The IAAC is a non-profit group that brings together UK policy makers, corporate leaders, law enforcement and researchers, in order to create and maintain “a safe and secure information society” (96).

ISO 27001:2013: The International Organization for Standardization’s response to what it describes as the ‘plague’ of ‘cyber threats’ against businesses and governments around the world. This excellent guidance document provides a coherent management framework for “assessing and treating risks, whether cyber-oriented or otherwise, that can damage business, governments and even the fabric of a country’s national infrastructure”, explains lead author and ISO convenor, Professor Edward Humphreys (97).

Krebs on security: Brian Krebs is a former Washington Post reporter whose home network was attacked by cyber criminals in 2001. This incident made him “intensely interested in computer security”. Krebs now runs one of the most successful blogs on cyber crime under the banner ‘In-depth security news and investigation’ (98).

NIST Special Publication 800-53 (Revision 4) ‘Security and Privacy Controls for Federal Information Systems and Organizations’: published by the National Institute of Standards and Technology, at the US Department of Commerce. This is a comprehensive catalogue of standards for information security professionals to apply in the workplace (99).

Open University: The OU offers a number of undergraduate and postgraduate programmes in computer and network security topics. The institution also offers, at the time of writing, a free ‘Introduction to Cyber Security’ workshop for those wishing to learn about protecting their ‘digital life’ at home and work (100).

Troy Hunt: Slightly more technical and with a bent for software developers, Hunt describes his website as “Observations, musings and conjecture about the world of software and technology”. Hunt has launched a security-related newsletter Security Sense and provides entertaining advice: step into the shoes of your adversary and ‘hack yourself first’ (101).

UK cyber security: The role of insurance in managing and mitigating the risk is a report by HMG and Marsh, one of the world’s leading insurance companies, which highlights the exposure of companies to cyber risk, including from within their own supply chain (102).

Chapter 5: Wrap-up

In closing this chapter on Information and Cyber Security, we reflect on some of the approaches and attributes that will help your company gain the competitive edge. These include:

1.   Companies and security contractors are increasingly turning to the information security management system (ISMS) standard ISO27001, in order to introduce proportionate and sensible controls across ICT platforms and the work domains that any high-tech machinery is supposedly there to serve.

2.   The entire success of a security management function is very much dependent upon its contribution to data protection issues and cultures within any project or organisation. This can only be done by evaluating and anticipating weaknesses and then closing the gaps. Details matter. The loss of a principal’s smartphone, or failure to recover an unencrypted memory stick which contains sensitive data, can nowadays be just as embarrassing for a security function as a serious physical security breach might well be.

3.   Many security practitioners, business organisations and policy-makers see significant value in supporting and deploying intelligence work and operations. This willingness arises, in part, because the overwhelming majority of data upon which intelligence is based seems to be Open Source (OSINT) and freely available.

4.   The risk assessment process for identifying information or cyber security threats is often no different to other forms of security risk assessment. For business continuity purposes, security practitioners will seek to work with all colleagues across the business to both identify the threats and minimise (or eradicate) the impact. Because it is not the technology that we need protection from, but the end-user.

5.   Companies should have little to fear from learning lessons available from other cyber security practitioners who are experienced in operating in complex and higher physical threat environments. One software company feels that we are moving into an era of ‘zero trust’ information security strategies.

6.   Clients’ expectation that security contractors have a handle on the brave new world of cyber threats is emerging as a ‘desirable’ or even ‘essential’ criterion for employment and contracts.

7.   Those security practitioners that can articulate reasonable information risk management strategies, and also interface proficiently with this sphere’s subdisciplines – such as identity theft, network security and cryptography – are in all likelihood set to become corporate security’s new aristocracy.

References

(1)   Shaw, E, (2014), address to the CSARN Corporate Espionage Conference, City of London, 10/07/2014

(2)   Daily Telegraph (20/07/2008), ‘Downing Street aide in Chinese Honeytrap sting’, accessed and downloaded on 09/03/2015 at: www.telegraph.co.uk/news/politics/labour/2437340/Downing-Street-aide-in-Chinese-honeytrap-sting.html

(3)   BBC News (10/06/2011), ‘John Terry’s car ‘had tracking device attached’’, accessed and downloaded on 09/03/2015 at: www.bbc.co.uk/news/uk-england-surrey-13734330

(4)   BBC News (18/10/2009), ‘Enfield comedy show ideas stolen’, accessed and downloaded on 09/03/2015 at: http://news.bbc.co.uk/1/hi/entertainment/8313116.stm

(5)   CSARN (06-19/02/2015), The Monitor, accessed and downloaded with kind permission from CSARN

(6)   KPMG (2012), ‘Data Loss Barometer 2012’, accessed and downloaded on 09/03/2015 at: www.kpmg.com/US/en/IssuesAndInsights/ArticlesPublications/Documents/data-loss-barometer.pdf

(7)   ISO27001 (2013), accessed and downloaded on 09/03/2015 at: www.iso.org/iso/home/standards/management-standards/iso27001.htm

(8)   Calder, A, (2012), ‘Implementing Information Security based on ISO27001/ISO27002’, Zaltbommel: Van Haren

(9)   Evening Standard (14/09/2006), ‘Mourinho’s favourite restaurant is bugged’, accessed and downloaded on 09/03/2015 at: www.standard.co.uk/sport/mourinhos-favourite-restaurant-is-bugged-7209704.html

(10)   Deloitte (2013), ‘The Connected Workplace: War for Talent in the Digital Economy’, accessed and downloaded on 20/03/2015 at: www2.deloitte.com/content/dam/Deloitte/au/Documents/finance/deloitte-au-fas-connected-workplace-2013-240914.pdf

(11)   Information Assurance Advisory Council can be accessed at: www.iaac.org.uk/

(12)   Techtarget online (n.d), ‘Six Ways to measure the value of your information assets’, accessed and downloaded on 09/03/2015 at: http://searchcio.techtarget.com/feature/Six-ways-to-measure-the-value-of-your-information-assets

(13)   Ibid.

(14)   Op. Cit., Shaw

(15)   Kirkwood, (n.d.), ‘Chapter 3: The Value of Information’, accessed and downloaded on 20/03/2015 at: www.public.asu.edu/~kirkwood/DAStuff/decisiontrees/DecisionTreePrimer-3.pdf

(16)   Rimington, S, (2002) Open Secret: The Autobiography of the Former Director-General of MI5, London: Random House

(17)   Leppard, A, (2014) Keynote address of CSARN Corporate Espionage Conference, City of London; 10/07/2014

(18)   CIA website (2014) teaching Intelligence Analysts in the UK, accessed and downloaded on 14/11/2014 at: www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/csi-studies/studies/vol-52-no-4/teaching-intelligence-analysts-in-the-uk.html

(19)   Knowledge Management (KM.com), accessed and downloaded on 13/06/2013 at: www.skyrme.com/kmbasics/ktypes.htm#hierarchy

(20)   Von Clausewitz, C, 1973, On War, London: N. Trubner, accessed and downloaded on 02/03/2013 at: www.clausewitz.com/readings/OnWar1873/TOC.htm

(21)   Doran, GT (1981), ‘There’s a S.M.A.R.T. way to write management’s goals and objectives’. Management Review, Volume 70, Issue 11(AMA FORUM), pp. 35-36

(22)   Sikich G, (2010), ‘Enterprise risk management lessons from the BP Deepwater Horizon catastrophe’: Continuity Central: www.continuitycentral.com/feature0790.html

(23)   Wenham, P, ‘Security Think Tank: Intelligence Led Security is About Risk Management’, Computer Weekly: downloaded on 9 November 2012 at: www.computerweekly.com/opinion/Security-Think-Tank-Intelligence-led-security-is-about-risk-management

(24)   Striking Project Management, Qualitative Risk Analysis, accessed and downloaded on 11/11/2014 at: http://strikingprojectmanagement.com/qualitative-risk-analysis/

(25)   Huxley, A, (1932) Brave New World, London: Chatto & Windus

(26)   Interview with Security manager at leading UK retailer conducted on 25/11/2014

(27)   International Business Times (31/05/2015), ‘HSBC Hires ex-MI5 Spy Jonathan Evans to Help Fight Financial Crime’, accessed and downloaded on 24/03/2015 at: www.ibtimes.co.uk/hsbc-jonathan-evans-mi5-spy-money-laundering-473324

(28)   Daily Telegraph, (24 October 2012), ‘Vacuum maker Dyson claims a spy was selling secrets to a German rival’, downloaded from: www.telegraph.co.uk/finance/newsbysector/retailandconsumer/9631492/Vacuum-maker-Dyson-claims-a-spy-was-selling-secrets-to-German-rival.html

(29)   Forbes Magazine online (04/08/2008), ‘Commercial espionage ‘Travelers Beware’: 4 August 2008 downloaded from: www.forbes.com/2008/04/08/viator-corporate-espionage-oped-cx_slw_0408viator_print.html

(30)   FT.com: ‘China linked to ‘industrial Espionage’’, downloaded from: www.ft.com/cms/s/0/92d6032a-a108-11e1-9fbd-00144feabdc0.html#

(31)   Asia Times online, ‘China Tangled up in industrial espionage’, 11 February 2012, downloaded from: www.atimes.com/atimes/China/NB11Ad01.html

(32)   Zweinenberg, R, in ‘Helpnet Security Online’, (21/06/2012), ‘AutoCAD worm steals blueprints, sends them to China’, accessed and downloaded on 14/11/2014 at: www.net-security.org/malware_news.php?id=2153

(33)   International Intelligence article on UK law firms employing corporate intelligence agency was accessed and downloaded on 12/11/2012 at: www.international-intelligence.co.uk/media/law-gazette-september-2002/ And: Law Gazette, Law Firm Call in ex-SAS Personnel: 5 September 2002 was accessed and downloaded at: www.international-intelligence.co.uk/media/law-gazette-september-2002/

(34)   Mail Online (06/01/2013), ‘Aldi hid cameras to spy on staff’, accessed and downloaded on 14/11/2014 at: www.dailymail.co.uk/news/article-2258161/Aldi-hid-cameras-spy-staff-Detective-claims-company-wanted-details-employees-relationships-finances.html

(35)   Furber, L, in Crunch Love Accounting (2014), ‘Workplace surveillance: can your employer spy on you at work?’, Accessed and downloaded on 14/11/2014 at: www.crunch.co.uk/small-business-advice/2012/10/31/workplace-surveillance-can-your-employer-spy-on-you-at-work/

(36)   European Network and Information Security Agency (2009), Information security awareness in financial organisations guidelines and case studies: Luxembourg: Luxembourg publications office. Full report accessed and downloaded on 03/09/2014 from: www.enisa.europa.eu/publications/archive/ar-book-09/at_download/fullReport.

(37)   Potter C, and Waterfall, G, (2012) Information Security Breaches Technical Report: London: Pricewaterhouse Coopers. Full report accessed and downloaded on 03/09/2014 from: www.pwc.co.uk/audit-assurance/publications/uk-information-security-breaches-survey-results-2012.jhtml

(38)   Norton Taylor, R, (2001) ‘Guard stole secret weapons papers’, The Guardian (18/12/2001), accessed and downloaded on 03/09/2014 from: www.theguardian.com/uk/2001/dec/18/richardnortontaylor

(39)   Porter, A. (2007) ‘Lost data discs ‘endanger protected witnesses’’, The Telegraph (05/12/2007), accessed and downloaded on 03/09/2014 from: www.telegraph.co.uk/news/uknews/1571536/Lost-data-discs-endanger-protected-witnesses.html

(40)   The Wire, (2011), ‘A complete list of the arrests and resignations in the News Corp. scandal’, accessed and downloaded on 21/03/2015 at: www.thewire.com/global/2011/07/complete-list-arrests-and-resignations-news-corp-scandal-so-far/40082/

(41)   McNamara, R (1996) ‘In Retrospect: The tragedy and lessons of Vietnam’, New York: Vintage Books: ISBN-10:0679767495

(42)   Bernstein, C, and Woodward, B (2014) ‘All the President’s Men’, New York: Simon & Schuster reissue edition: ISBN-10 1476770514

(43)   Gaddis, J, L, (2007) ‘The Cold War: The deals, the spies, the lies, the truth’, London: Penguin: ISBN-10 0141025328

(44)   Cryptzone (2015), ‘Preventing Cyber Attacks with a Layered Network Security Model: Risk mitigation based on the principles of Zero Trust’, can be accessed by applying via the organisation’s website at: www.cryptzone.com/forms/preventing-cyber-attacks-layered-network-security-whitepaper

(45)   US DHS online, Cyber Security division, accessed on 26/03/2015 at: www.dhs.gov/combat-cyber-crime

(46)   UKTI (2012), ‘UK Cyber Security: A Strategic Approach to Exports’, UK HMG Stationery Office.

(47)   CSARN Monitor: 5: 12: (20/03/2015)

(48)   Op. Cit., US DHS online

(49)   BIS (2013), ‘Information Security Breaches Report’, accessed and downloaded on 26/03/2015 at: www.gov.uk/government/publications/information-security-breaches-survey-2013-technical-report

(50)   Javelin Strategy and Research, (2014), ‘A New Identity Fraud Victim Every Two Seconds in 2013 According to Latest Javelin Strategy and Research study’, accessed and downloaded on 26/03/2015 at: www.javelinstrategy.com/news/1467/92/A-New-Identity-Fraud-Victim-Every-Two-Seconds-in-2013-According-to-Latest-Javelin-Strategy-Research-Study/d,pressRoomDetail

(51)   Bingley, R, (08/02/2014) ‘Information Security Threat Assessment’ for client at Sochi Winter Olympics, redacted from wider publication

(52)   Ibid.

(53)   Entertainment Weekly, (18/12/2014), ‘White House is treating Sony hack as ‘serious national security matter’’, accessed and downloaded on 26/03/2015 at: www.ew.com/article/2014/12/18/white-house-sony-interview-north-korea

(54)   Bloomberg Business, (24/04/2012), ‘Russian hackers Gain Third of Global Cybercrime Market, IB Says’, accessed and downloaded on 26/03/2015 at: www.bloomberg.com/news/articles/2012-04-24/russian-hackers-made-4-5-billion-last-year-vedomosti-says

(55)   Ibid.

(56)   IBM Security Services, (2014), ‘Data Breach Statistics’, accessed and downloaded on 26/03/2015 at: www-935.ibm.com/services/us/en/it-services/security-services/data-breach/

(57)   Ibid.

(58)   Ibid.

(59)   Panda Security (18/12/2014), ‘The Snowden effect: Has cyber-espionage changed the way we view security?’, accessed and downloaded on 26/03/2015 at: www.pandasecurity.com/mediacenter/security/snowden-effect-cyber-espionage-changed-way-view-security/

(60)   Ibid.

(61)   The Register (31/10/2001), ‘Hacker jailed for revenge sewage attacks: Job rejection caused a bit of a stink’, accessed and downloaded on 26/06/2015 at: www.theregister.co.uk/2001/10/31/hacker_jailed_for_revenge_sewage/

(62)   CPNI: THE SECURE PROCUREMENT OF CONTRACTING STAFF: A GOOD PRACTICE GUIDE FOR THE OIL AND GAS INDUSTRY: April 2011. Downloaded on 13/3/2013 from: www.cpni.gov.uk/documents/publications/2011/2011012-gpg_contracting_staff-oil_and_gas.pdf?epslanguage=en-gb

(63)   Ibid.

(64)   Ibid.

(65)   Kiertzner H, (26/3/12), ‘Qinetiq Cyber Security Industry Expert Insights’ downloaded on 14/03/13 from: www.youtube.com/watch?v=5XbSNeVsQYc

(66)   SANS Critical Security Controls – Version 5: was accessed and downloaded on 26/06/2015 at: www.sans.org/critical-security-controls/

(67)   Executive Order 13636 (12/02/2013), ‘Improving Critical Infrastructure Cybersecurity’, was accessed and downloaded on 26/06/2015 at: www.gpo.gov/fdsys/pkg/FR-2013-02-19/pdf/2013-03915.pdf

(68)   SANS ‘Insider threats and Need for Fast and Directed Response’ can be accessed and downloaded at: www.youtube.com/watch?v=GQnueERQ31c

(69)   Byrne, R, (2014) in ‘Communications Technology and Humanitarian Delivery: Challenges and Opportunities for Security Risk Management’, (EISF: p.16), can be accessed at: www.eisf.eu/library/communications-technology-and-security-risk-management/

(70)   Ramkumar Chinchani et al., (2005), ‘A Target-Centric Formal Model For Insider Threat and More’, Department of Computer Science and Engineering State University of New York at Buffalo

(71)   Op. Cit., CPNI. For an example risk assessment matrix please read p.29

(72)   Leppard, A, (2014) Keynote address of CSARN Corporate Espionage conference, City of London; 10/07/2014

(73)   Cisco and Forester (2013), Enterprise IT Guide, Managing your mobile devices: accessed and downloaded on 11/09/2014 from: www.enterpriseitguide.com/connectivity/managing-your-mobile-devices/

(74)   Op. Cit., Leppard

(75)   Ibid.

(76)   Office of National Statistics (2009), ‘Labour Force Survey’ and UK National Crime Agency (2014) Presentation at the CSARN Corporate Espionage conference, City of London; 10/07/2014

(77)   CPNI and IET (2013), ‘Resilience and Cyber Security of Technology in the Built Environment’, was accessed on 28/03/2015 at: www.cpni.gov.uk/documents/publications/2013/2013063-resilience_cyber_security_technology_built_environment.pdf?epslanguage=en-gb

(78)   IT Security Blog (2010), accessed and downloaded on 20/03/2015 at: www.it-security-blog.com/it-security-basics/implement-a-strict-it-policy/

(79)   Bick, A (2013): ‘Information Security’ formative paper, Buckinghamshire New University (redacted)

(80)   Op. Cit., IT Security Blog

(81)   Op. Cit., Bick

(82)   Ibid.

(83)   Ibid.

(84)   Kofi Annan, quoted from: www.quotationsource.com/q-52-Collective-responsibility.htm

(85)   Computer Misuse Act (1990), to find out more about this UK act go to: www.legislation.gov.uk/ukpga/1990/18/contents

(86)   Op. Cit. IHLS

(87)   Cybersecurityjobsite.com was accessed on 20/03/2015 at: www.cybersecurityjobsite.com

(88)   A full list and contact details of Academic Centres of Excellence in Cyber Security Research can be accessed via the CESG website at: www.cesg.gov.uk/awarenesstraining/academia/Pages/Academic-Centres.aspx

(89)   Bruce Schneier’s website was accessed at: www.schneier.com/

(90)   Wood P, (09/07/2014), Buckinghamshire New University Cyber Security Foundation Degree, was accessed on 25/03/2015 at: https://buckssecurity.wordpress.com/2014/07/09/new-foundation-degree-in-cyber-security/

(91)   CESG’s website was accessed on 20/03/2015 at: www.cesg.gov.uk/Pages/homepage.aspx

(92)   CPNI, (2015), ‘Pre-Employment Screening: Good Practice Guide’, was accessed on 28/03/2015 at: www.cpni.gov.uk/documents/publications/2015/pre-employment%20screening%20edition%205%20-%20final.pdf?epslanguage=en-gb CPNI, (2011), ‘The Secure Procurement of Contract Staff: A Good Practice Guide for the Oil and Gas Industry’, was accessed on 28/03/2015 at: www.cpni.gov.uk/documents/publications/2011/2011012-gpg_contracting_staff-oil_and_gas.pdf?epslanguage=en-gb CPNI and IET (2013), ‘Resilience and Cyber Security of Technology in the Built Environment’, was accessed on 28/03/2015 at: www.cpni.gov.uk/documents/publications/2013/2013063-resilience_cyber_security_technology_built_environment.pdf?epslanguage=en-gb

(93)   CREST website was accessed on 20/03/2015 at: www.crest-approved.org/

(94)   Graham Cluley’s website and newsletter can be accessed at: https://grahamcluley.com/

(95)   The Information Asset Protection and Pre-Employment Screening Group website was accessed on 20/03/2015 at: www.asisonline.org/Membership/Member-Center/Councils/iapps/Pages/Members.aspx?rpage=1&k=

(96)   Information Assurance Advisory Council website was accessed on 20/03/2015 at: www.iaac.org.uk/

(97)   Humphreys, E, (09/10/2013), ‘The new cyber warfare’, accessed and downloaded on 26/03/2015 at: www.iso.org/iso/home/news_index/news_archive/news.htm?refid=Ref1785

(98)   Krebs on Security was accessed at: http://krebsonsecurity.com/

(99)   NIST Special Publication 800-53 (Revision 4) ‘Security and Privacy Controls for Federal Information Systems and Organisations’: was accessed and downloaded on 26/06/2015 at: http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r4.pdf

(100)   Open University Cyber Security introductory information and courses can be accessed at: www.futurelearn.com/courses/introduction-to-cyber-security

(101)   Troy Hunt’s website was accessed at: www.troyhunt.com/

(102)   UK Cabinet Office and Marsh (March 2015), ‘UK Cyber Security: the role of insurance in managing and mitigating the risk’, was accessed and downloaded on 28/03/2015 at: www.gov.uk/government/news/cyber-security-insurance-new-steps-to-make-uk-world-centre
