© Raymond Pompon 2016

Raymond Pompon, IT Security Risk Control Management, 10.1007/978-1-4842-2140-2_10

10. Talking to the Users

Raymond Pompon

(1)Seattle, Washington, USA

There is nothing more difficult to take in hand, more perilous to conduct, or more uncertain in its success, than to take the lead in the introduction of a new order of things.

—Niccolò Machiavelli, The Prince

Despite their great importance to the organization, users are best known to security people for just one thing: trouble. Users download malware along with their questionably funny cat videos. Users hate to patch their software. Users click links in e-mail. Users choose passwords based on the name of their dog. Users click through security warnings so they can get their job done. Users resist new security programs. Users break security, along with many other things.

When trying to take over a system, you want to exercise control at the element with the greatest variety of behavioral responses. In most organizations, this element isn’t the technology but the users. Both the defenders and the attackers know this. Attackers take advantage of this by tricking and subverting the users. Security professionals are faced with attempting to build controls for the unwilling and the unaware. It’s a Sisyphean struggle.

Users are not passive objects that do what you command. How the system functions can affect their daily workflow significantly. Users need to get their jobs done just like everyone else. On top of all the problems they already have with their jobs, they don’t want to struggle with the security system as well. Consider the very real example of my spouse, a research scientist involved in human-subject testing. To access some of her research systems, she has to use three different username/password combinations as well as a token. Think about a non-techie, in a hurry to get work done, trying to remember which authentication codes to use at which gateway without locking herself out. It’s no wonder the users hate security.

Specific Challenges for the Users

My intent in this chapter is to show that the problem—and more importantly, the solution—lies not with the users but with the security team. Here are some specific hurdles that users encounter when dealing with security systems.

Complexity

The use of security systems, as designed by IT security professionals, can easily overwhelm the average person. Consider the common security maxim of using passwords of at least eight characters with a combination of uppercase and lowercase letters, rotated every three months. Oh, and you should always use a different password for each system you use. And never write them down. The very fact that password manager software exists tells you how daunting this task can be. Yet we subject the users to this every day. But hey, picking on passwords is just low-hanging fruit. There are other unreasonable things that we expect users to do, such as the following:
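To see just how much we ask of users, here is a minimal sketch of a typical corporate password policy in Python. The thresholds (eight characters, mixed case, a digit, 90-day rotation) come from the common maxims above; the function names and the digit requirement are illustrative assumptions, not rules from any specific standard.

```python
from datetime import date

# Sketch of a typical corporate password policy. The thresholds are
# illustrative, not a recommendation.
def password_ok(candidate, history):
    """True if the candidate meets the (illustrative) complexity rules."""
    return (
        len(candidate) >= 8
        and any(c.isupper() for c in candidate)
        and any(c.islower() for c in candidate)
        and any(c.isdigit() for c in candidate)
        and candidate not in history  # no reuse of old passwords
    )

def rotation_due(last_changed, today):
    """True once the 90-day rotation window has elapsed."""
    return (today - last_changed).days >= 90
```

Now multiply these checks by a separate password for every system, with nothing written down, and the users’ frustration becomes easy to understand.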

  • Transferring confidential information securely over the Internet. Can they e-mail it? No, not unless it’s encrypted. How can they tell if it’s encrypted properly? In many cases, the user can’t know because some mail encryption happens within the infrastructure. Can they upload it? It depends where and how. Can they use a file-drop service? No, not unless we say that it’s safe.

  • Knowing what is confidential and what isn’t. Remember the long descriptions of PII from the chapter on scope? Now imagine a user having to keep track of that. I hope that everything that is confidential is correctly marked confidential.

  • Keeping software patched. It’s one thing to approve the system patches and let them run. It’s another when secondary apps like browsers and browser plug-ins need patching. Browsers are complex; patching may also break useful functionality. On top of that, there are evil web sites that spoof patch update messages to trick users into getting infected with malware.

  • Understanding what software is supposed to be running on their computer and when a program should be considered suspicious. Even better, being able to tell if their computer is running slow because there’s a system problem vs. when it’s infected.

  • Being able to tell if an e-mail or web site (complete with the company logo) is real or a phish. Meanwhile, attackers are actively working on new and better ways to impersonate and trick people.

  • Being aware of when encryption, antivirus, firewalls, and backups are working properly. All of these services work invisibly to the user. Most of the time the services work fine, except when they don’t. But users assume that they are always being protected.

Different Paradigm, Different Goals

Users come from a different world than IT and security. They have a different perspective based solely on how the technology presents itself to them and their daily workflow. You should never expect users to understand how systems work or the limitations of infrastructure.

The user’s perspective may not necessarily include working to keep data private and free from corruption. They are likely to assume that security is someone else’s problem (probably yours) and that they are just there to do their job. Some users forget security advice and work around the controls if they need to get something done. There is a behavioral concept called diffusion of responsibility that describes the tendency of people not to take helpful action when they assume someone else will. Normal people implicitly assume that they are not responsible for anything not explicitly assigned to them, so they just move on. Writer Douglas Adams nailed this concept brilliantly in his Hitchhiker’s Guide to the Galaxy, by harnessing this power for a sci-fi camouflage screen called Somebody Else’s Problem.1

Since users have different motivations, they tend not to notice security or risk requirements. If a user does notice security controls, the controls often feel like impediments to useful work. This is where the sentiment that security gets in the way of business comes from. Users have limited time and resources to deal with what they are tasked with finishing. Pop-up security warnings and authentication prompts can be distractions at best, roadblocks at worst. Users quickly learn to see the controls designed to protect them as obstacles that need to be worked around. This tends to happen in systems with rigid, unworkable, or vague rules imposed by security without context. Unfortunately, some security administrators see this rule-beating behavior as rebellion. They respond by trying to overpower the resistance with even stronger enforcement and tighter rules, which only creates more resentment. Often the problem is that the user’s goals and security’s goals are inconsistent and work against each other.

Culture Clashes

When the security team redesigns a business process, they run the risk of damaging the value of that process. This risk is more likely when the change is made without careful and thorough analysis of the goals and legacy of that process. There is a concept called Chesterton’s fence,2 which warns that if you don’t understand the purpose of a fence, don’t pull it down; a bull could run you over. The next step beyond that is the cobra effect,3 which refers to a poorly conceived change that makes things worse. This does not mean you should never alter or remove a process. It means you should be careful and make sure that the new process still achieves the same goal as before. Usually this simply means working with the users and taking the time to observe the process in action.

Tools for Helping Users

Given the gulf between our expectations of the users and the reality, we need some tools to bridge the gap.

Empathy

Empathy builds trust, and trust is your lubricant for working with users. The goal of empathy is not to feel bad for the other person. The goal is to understand the other person. It’s about understanding and finding a connection. Remember your listening skills: don’t interrupt, let them talk, and summarize back what they’ve told you. At this point, it’s not about solving the problem or making the pain go away. It’s about acknowledging what they’re telling you.

Don’t blame the victim. When someone gets phished, infected with malware, or has a laptop stolen, the first reaction from some security people is to berate the user for being careless or stupid. People don’t set out to be tricked or robbed. Think about how you’d feel if it happened to you. So don’t judge right off the bat. You can always educate later.

It’s easier to empathize if we can experience the behavior itself. When trying out new security processes, take away your elevated system access and special tools (like password managers and non-company-standard browsers). You can also try timing yourself to see how long the process takes. If it’s a new authentication system, try locking yourself out. What does the messaging look like? How does the reset process function?

One last trick for understanding the user experience of a process is to document it, step-by-step, for a complete newbie. Every user action should become a bullet point in your how-to list. You’ll find that a number of familiar things that were invisible to you now jump out as difficult or confusing. Sometimes even the sheer length of the process, once written down, can seem daunting for an unfamiliar user. All this information can feed back into your final design.

Let the Work Flow Smoothly

Redesigned processes can gain additional leverage by following the natural flow of existing systems or task sequences. Take advantage of the existing plumbing and make the secure way the most natural way to do things. The progression of decisions and arrangement of tasks can have a huge effect on how a system operates. Other times, you can redesign an entire process from scratch, especially if the old system was based on a paper-driven workflow. This is where you can take advantage of automation to auto-fill forms, enforce task sequences, and monitor for bottlenecks in the flow. Another technique is to push decisions and power to the edges of the system, to give users more control over the process. Here’s an example that embodies these ideas:

Old manual process

  1. Manager e-mails a note to IT for new user access.

  2. IT receives request and forwards to HR for approval.

  3. HR checks records and sends back approval to IT.

  4. IT responds back to manager with note saying approved and ETA for change.

  5. IT creates new user account and temporary password.

  6. IT e-mails manager with new account name and asks them to schedule new user training.

  7. Manager contacts new user to tell them to call IT for training.

  8. New user contacts IT, schedules training, and receives temporary password.

  9. Auditors reconcile paper trail of e-mails with existing user database.

All right, this looks like a process built on an old paper memo workflow. It’s thorough but slow and cumbersome. Maybe we can automate it with a simple workflow.

New automated process

  1. Manager enters request for new user in user management system (UMS). UMS automatically notifies HR for approval.

  2. HR approves in UMS and IT is automatically notified.

  3. IT creates account and temporary password, and enters into the UMS that it is done.

  4. UMS notifies manager to schedule training.

  5. Manager contacts new user to tell them to call IT for training.

  6. New user contacts IT, schedules training, and receives temporary password.

  7. Auditors receive a report from UMS to reconcile with existing user database.

Perhaps we can use automation to create the account as well and shave off more back-and-forth work. We’ll make sure that the system is keeping detailed records and sending notifications to keep the auditors happy.

Improved automated process

  1. Manager enters request for new user in user management system (UMS). UMS automatically notifies HR for approval.

  2. HR approves in UMS and account is automatically created with temporary password. Notification is sent back to manager to schedule training. UMS also notifies IT about a new user and training that needs to be done.

  3. Manager contacts the new user to tell them to call IT for training.

  4. New user contacts IT, schedules training, and receives temporary password from UMS.

  5. Auditors receive a report from UMS to reconcile with existing user database.
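The improved process above can be sketched as a small event-driven workflow. Everything here is hypothetical, assumed for illustration: the class, the notification channel, and the temporary-password generator. The point is only to show how one HR approval triggers account creation, notifications, and an audit trail in a single motion.

```python
import secrets

# Hypothetical sketch of the improved UMS workflow; names are
# illustrative, not from any real product.
class UserManagementSystem:
    def __init__(self):
        self.audit_log = []      # feeds the auditors' reconciliation report
        self.notifications = []  # stand-in for e-mail or alert delivery

    def _notify(self, who, message):
        self.notifications.append((who, message))

    def request_user(self, manager, new_user):
        """Step 1: manager enters the request; HR is notified automatically."""
        self.audit_log.append(("requested", manager, new_user))
        self._notify("HR", "Approve new user " + new_user)

    def hr_approve(self, new_user):
        """Step 2: one approval creates the account with a temporary
        password and notifies both the manager and IT."""
        account = {"name": new_user, "temp_password": secrets.token_urlsafe(9)}
        self.audit_log.append(("approved-and-created", "HR", new_user))
        self._notify("manager", "Schedule training for " + new_user)
        self._notify("IT", "New user " + new_user + " needs training")
        return account

    def audit_report(self):
        """Step 5: auditors reconcile this against the user database."""
        return list(self.audit_log)
```

A real system would add authentication, error handling, and persistence; the sketch just shows why the automated version removes so much back-and-forth between manager, IT, and HR.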

Sometimes the best control for a risk is inconspicuous. When faced with a task, consider looking at a policy control instead of a technical control. Maybe it’s really expensive and error-prone to do automated security source code scanning on every developer submission. A new policy that requires another developer to sign off on a peer code review for each submission is cheaper and can be more useful. Or maybe not. Why not test out both and see which works better? Vendors are often willing to offer proof-of-concept tests of their tools. See if they can beat a manual system.

Work with the Users

If you can’t walk in the user’s shoes, then walk alongside them: do user acceptance testing. User acceptance testing has additional benefits beyond security and is often a key part of the software development process. Sometimes this process can be formal and done in a lab environment with recording equipment. Other times, you can do it more informally. It is important to ensure that all aspects of the working system are explored in testing. You can get a lot of good feedback from handing your instructions to some key users and asking them to walk through them while you observe. Not only does this give you good feedback on what’s understandable and what’s clunky, but it also fosters better user relations. When users can give you direct feedback, it makes them feel more comfortable about being included in the design process.

As with many things in security, one of the key characteristics of a good system is simplicity. Simple things are easy for users to understand and easy for auditors to review. Simple things have clarity, where security decisions and warnings are relevant and apparent. Security warning banners are one area where things can become cluttered and confusing. Banners are powerful tools if used judiciously. Banners work best when crossing key barriers, like moving onto a scoped system or logging in from the Internet. A warning banner on every single system login results in users blindly clicking through them without reading. Worse, it trains users to click through banners in general, so when they change or something important is added, they do not see it.

Users need to be given the clear and relevant information they need to make security decisions. Perhaps the most interesting example of this has been the evolution of web browsers’ warnings for insecure HTTPS certificates over the years. The early messages were full of useful information, but confused users:

Data you send to this site cannot be viewed or altered by others. However, there is a problem with the security certificate. The name on the certificate is invalid or does not match the name of the site. Do you want to proceed?

Over time, things were simplified and included some direct advice:

There is a problem with this certificate - it is not trusted. We recommend you close this page and do not continue.

Now the message is dead simple:

Your connection is not secure - [Go back] or [Advanced]

Any extra information, as well as the option to continue, is underneath the Advanced button that should (legitimately) deter inexperienced users from moving forward.

One thing you can do to help user adoption is to provide resources. This can include short training videos (use a screen recorder) that aren’t mandatory but simply available. An intranet-posted frequently asked questions (FAQ) list is useful as well. You should always include a note that provides a way for the users to ask for help or send feedback. It is critical to make it possible for users to get help without feeling stupid for asking, or risking being belittled. When they do provide feedback, if it is sarcastic or angry, don’t take it personally. Even with the best of design work, some processes are just going to be difficult and slow. Just let them vent and do what you can to make things easier.

The alternative to a counter-intuitive, ineffective process is to remove the control. This sounds like heresy, but it is occasionally worth considering. You can shift your resources from enforcement and training to a compensating control somewhere else. Perhaps a combination of detective and corrective controls is a better use of energy. Maybe it’s worth revisiting whether the risk could be accepted or the assets moved out of scope.

Get Users on Your Side

One of the most powerful things you can do in security is to align the user’s goals with security. Sometimes management is willing to help with this by leading by example or providing incentives. Things like free coffee cards and periodic security awards given to users who exemplify good security behavior are useful. They do cost money and may not work for some users. There are other powerful techniques that can help enlist people to fight the good fight, as described next.

Explain Why You’re Doing This

It may seem obvious, but actually demonstrating the risks and dangers of hacking can drive home the importance of what we’re doing. Make the examples tangible and personal, showing exactly how an attacker would phish the user and how their stolen credentials would be used. Explain to them exactly how antivirus software works and its limitations. Show them how cyber-crooks buy and sell private information on darknets. Go into detail about how auditors will be reviewing their actions and what a finding write-up will look like. In many cases, the success of an audit may directly affect sales and users’ employment status, so you can mention this as well. You may scare people a little. That too can help make the lesson stick.

Explain the intention and vision of controls, so that people understand what you are trying to prevent. There is also a tendency for users to do things simply because a control allows them. Explaining the intention of a control can help keep them on the right path if the control doesn’t work as intended. For example, explaining why credit card numbers are scrambled with encryption signals to users that if they see credit card numbers being stored unencrypted, something is wrong. You can also explain how the control and risk fit into the overall business objective, so they can tie the security activity back to their own work and livelihood.

Nothing like a Good Analogy

Engaging the users to help you with technical security matters is difficult. If they could be made to understand in some manner, things would be easier. Analogies are a powerful tool for explaining things.

A good example is confidential data leakage, which I covered in the chapter on assume breach. Confidential data can be copied everywhere invisibly throughout an organization. As you read in the chapter on scope, whatever system is holding confidential information immediately falls into scope. There are also issues of confidential data left behind on old media, which needs to be properly deleted. How do we convey this non-intuitive concept to users?

Well, you can use the analogy of fecal matter. Tell users that confidential data is like dog flop. It stinks, it smears, and no one wants it outside of a sealed container. Anything that it gets on needs to be scrubbed thoroughly. If it’s on your shoe (or your laptop), you can unwittingly spread it around the carpets and floors. You don’t want a nugget of it rolling around with all of your mail. Even the tiniest speck of it will contaminate whatever it’s adhered to. If you even think you smell it, you want to find it quickly and contain it. You don’t want to be near it. You sure as heck do not ever want to touch it. If you spot it, you need to call the special people (security or IT) to come clean it up quickly. Do you see how you can build the imagery to make confidential data so repulsive that users will want it as far away from them as possible? That’s the power of analogy.

Contextual Shift

Another way to harness the user’s own mindset is to do a contextual shift. Change or clarify their perspective on security and their obligations to the organization. For example, a standard security policy statement may say:

All employees will endeavor to protect the confidentiality, integrity, and availability of all critical information and stored customer data.

Yawn. Sure, whatever. The users will get right on that. How about flipping that statement around and making it clear as to what they’re actually doing?

  • Our customers decide who sees their data

  • Our customers decide who gets to modify their data

  • Our customers have access to their data when they want it

Although this says the same thing, we’ve now underscored the duty of data custodianship for the entire organization. This isn’t just a policy statement, it’s a mission statement. It’s an expectation, which is easier to meet than a performance objective. It’s a reminder to IT and all the data handlers of their responsibilities. We are all working to serve our customers by protecting their data.

Security Awareness Training

Now let’s pull this all together for some user awareness training. First, user awareness training is a powerful tool for security, but not necessarily as a means to prevent users from being tricked by hackers. Security awareness training is your first, best chance to get in front of all of your users and explain everything we’ve talked about so far. It’s your chance to break the ice and get them onboard with the security program. Soon they will be dealing with controls and security procedures, so here’s your opportunity to show you care about them and really sell them on the program. Use analogies, explain why, and put things in context. Lead them through the dangerous waters of the Internet and show them how to be safe.

Security awareness training is about affecting user decisions. However, there is only so much a user can remember and only so many decisions that you can significantly affect. So choose your battles carefully. What are the few key decisions you want users to make? Think about what’s useful and feasible. Take into account their mindset, incentives, average daily attention span, and technical ability. Here are three things I think I can reasonably expect most non-technical users to grasp and internalize:

  • The threats are real and security can’t be everywhere, so it’s everyone’s job to protect the organization. We will all work together on this.

  • If you see something suspicious or out of place, call security and we can help.

  • When we ask you to do something, there’s a good reason why.

Whenever you can, make the training relevant and personal with real examples from the organization. Use threats and impacts that tie directly back to what the organization operates. As a bonus at the end of the training, it’s a good idea to provide them with some home-based security advice. Not only does this maintain their security mindset when they’re off the job but it makes them feel like they’ve gotten a little gift from you. This is also why I don’t mind giving users home security advice when asked, as long as it’s succinct.

Lastly, some regulatory environments mandate security awareness training, so take attendance. If you’re not doing training live in a classroom, then consider using a security awareness quiz. Not only does this create an audit trail for later, but it also encourages participants to pay attention. Just don’t go overboard on the quiz questions. I’ve used quizzes of fewer than a dozen questions; if users don’t get enough answers correct, they can retake the quiz as many times as they want. I figure if they didn’t learn the material during the training, they’re learning it during the quiz.
