CHAPTER 8
The Hacker Threat

[Chapter-opening illustration]

Cartoon: “Hi, Bob.” “Hi, Steve.” “Still building the wall, I see.” Bob: “Yep, still making ladders?” Steve: “Yep.”

No book about digital transformation would be complete without a chapter or two (or a hundred) on security. In fact, the number-one issue I hear when I talk to financial institutions about digital transformation is their concern about security. I couldn't agree more that this is an important issue. I encourage most organizations going through digital transformation to make security the number-one priority among their processes. That means developing an ironclad, bulletproof security process within your organization, one that goes above and beyond the regulatory standards and exceeds any current industry standard. For instance, if your regulatory standard is to have your digital assets penetration tested four times a year, consider doing it every month. If your regulatory standard is an internal penetration test once or twice a year, do it monthly or more. Simply put, you cannot spend enough on security.

Since the beginning of time, security has been a problem. I'm sure the first caveman who found a shiny rock had no idea that someone else would try to take his new treasure. The same caveman, having lost his shiny rock, probably found another one like it, and this time he buried it, thinking that would protect it. Sadly, he didn't pay close enough attention to who was watching when he buried it, and the thief caveman dug it up and took it for himself. Our caveman found yet another shiny rock. This time, he hid it in a bush or behind a tree. Again, the thief caveman found it and stole it. You get the picture; this process has been going on for millennia. Security is a constant battle to stay ahead of the thieves. When we build a 10-foot wall to keep them out, they bring an 11-foot ladder.

The first thing to understand about digital security is that nothing is bulletproof. You must expect everything you do to be hacked. One important security process is to conduct risk reviews, or risk audits, for every one of your products. These risk reviews should also include a recovery plan in case the product or platform is compromised. The other important aspect of security is having easy access to extensive documentation on all the risk controls in the organization. Understanding how a process works is critical, because only with that understanding can it be audited for weak points. Without proper documentation, it can never be fully penetration tested. So, how do we go about security in the new digital world? Good news: there are some evolving opportunities in the digital space around security that can make us better. A wise man plans to be hacked; a fool hacks a plan together after he has been compromised.

A great example is a website called HackerOne. HackerOne leverages crowdsourcing by inviting its community of security experts and hackers to participate in penetration testing a product. In the past, when my team and I developed software, we hired companies to try to break into the platform our teams had built. Usually, they would send one or two people at it, and those testers would do their best, using amazing tools, to break into the platform. We would also give them account numbers and passwords so that they could log in and see what they could do inside the platform. The challenge is that, mathematically, this process doesn't make sense. Why? Because we're only using two penetration testers, while in the real world you can experience hundreds if not thousands of attacks a day from different hackers. This mismatch in scale creates an advantage for the hacker community. Each of these attackers will have a different process for attacking your platform, and as a result it would be impossible for you to anticipate the number of permutations that these hackers will throw against your system. HackerOne solves this issue by crowdsourcing your vulnerability testing. Rather than having one person testing your platform for vulnerabilities, you have hundreds of people trying to break in. HackerOne organizes its testing into challenges with a monetary reward for succeeding. A private challenge allows a lot of people to attempt to break into your platform, and it can be run using a bounty system: for every vulnerability that's found, you pay a bounty. This approach is an important evolution in the security and risk review process and will dramatically improve any institution's security posture.

The Artificial Intelligence Threat

Soon, hackers will start leveraging artificial intelligence algorithms against financial institutions. Rather than wasting human resources attacking your website, home banking site, or mobile site, they will use artificial intelligence to try to break in. The hacker's AI application will already be trained in thousands of techniques, and unlike a human, it will never give up. It will learn from every attempt and change its approach. It will be far more effective than even the world's best hacker, because it will learn from each mistake it makes.

Consider the following. Facebook recently launched a test in which it tried to teach two AI entities, in the form of chatbots, to negotiate with each other. To facilitate the learning process, the researchers invented a game that involved trading different objects back and forth to achieve parity between the two bots. They trained both AIs by allowing them to review humans playing the same game. Facebook believes that for businesses to use chatbots effectively, the bots need to be able to negotiate in order to transact business with humans. There was cause for alarm when the researchers discovered that the two bots had started creating their own language to communicate with each other during the negotiation process. Neither bot was trained or programmed to do this; the artificial intelligence in the system developed new approaches on its own to accomplish its given task in the most efficient way. Now imagine the same technology pitted against your home banking system, your ACH processing, or any number of other areas: continually checking and checking again, looking for vulnerabilities, learning from the traffic it sees and the errors it gets as it tries various things. It would record how long each transaction takes to respond and design new attacks based on its findings.

Planning for the Worst

So how do we prepare for this eventuality? The first thing we need to understand is that security revolves around a very important process called change control. I did not discuss change control in the processes chapter because I believe it fits better here, in the security chapter. What is change control? How critical a function is it? How is it related to security? Change control is the practice of recording every change in your entire system. Many times I have been in financial institutions where there is no change control. The symptom of not having a change control process is that when something breaks, you do not know what caused the break. More importantly, without some sort of baseline, anomalies are impossible to detect. Any change to production or near-production systems must go through a change control process.

What would this process look like in your organization? Well, here is a sample model. Let's say that on a Wednesday you need to patch or upgrade your home banking servers because the vendor who provides them (or your own organization, if you wrote the software) needs to do an update, or perhaps just wants to add a new feature. A change control committee would allow the organization to propose this change.

Whoever is in charge of the change, likely someone in the digital services area, would put the request in to the change control committee. The committee would review the request during its normal process and work with the product owner to understand the potential impacts of the change. That means every system the change might touch is documented, and, just as important, every person within the organization whose department the change might affect is contacted. They are all given an opportunity to give input or ask questions about the upcoming change. What this means is that all changes will be planned, tested, assessed, and reviewed before they are finally implemented. The most important part of the change control process is documenting how to undo the change. So, I will give an example.

Many years ago, when we were designing the first round of multifactor authentication for home banking at my previous company, we implemented the product and, as far as we could tell, all was well. However, over time we began to see some adverse effects. At the time, we had no idea that they were related to this change. What we saw was that the number of invalid or failed logins was climbing, and at some point it began to climb exponentially. In looking at this problem, we were able to check the logs and our statistics and trace it back to a start date. When we looked at that start date in the change control log, we could see that someone in our group had implemented a new change to the multifactor authentication that resulted in the hardest kind of problem to track: an intermittent one. Thanks to the change control log, we knew this was the cause. We rolled back what had been done, and the systems went back to normal. At that point, we could go through and review what had happened. We then fixed the problem, resubmitted the change, put it back in, and moved on. If we hadn't had a record of every change that was made on that day around that time, it would have been very difficult to understand what caused the problem.

Large digital systems are notoriously complex, and they must be documented. More importantly, all changes must be documented. This includes non-software changes. For example, if someone is changing a configuration on a communications device such as a router or a firewall, that should go through the change control process as well. Another value of the change control process is preventing collisions. For instance, it's not a good idea to change both your home banking platform and your firewall at the same time. It's simple common sense: if something breaks during that process, you won't know which change caused the problem. So, using this change control process, change requests are documented and formally sent to a committee. The committee puts each request into the planning stages and reviews it. It also signs off on the testing, including signing off on your go-back plan. After that, the change is scheduled and implemented. This is one of the most important aspects of security, because hackers thrive on organizations that do not have change control. If there is no process or governance around change control, hackers find their way in through systems that are poorly configured. These poorly configured systems often leave holes that allow them to compromise your platform. Change control will help you with this.
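If you want something concrete to start from, here is a minimal sketch of what a change request record might capture. The field names are illustrative only, not a standard, and a real institution would likely track this in a ticketing or governance system rather than a script:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class ChangeRequest:
    """One entry in a change-control log (field names are illustrative only)."""
    request_id: str
    requested_by: str                   # the product owner proposing the change
    description: str                    # what is being patched, upgraded, or reconfigured
    systems_touched: list[str] = field(default_factory=list)      # every system the change might affect
    departments_notified: list[str] = field(default_factory=list) # everyone whose area the change might touch
    test_plan: str = ""                 # how the change will be verified before rollout
    rollback_plan: str = ""             # the most important part: how to undo the change
    approved_by_committee: bool = False
    scheduled_for: Optional[date] = None

def ready_for_review(cr: ChangeRequest) -> bool:
    """A change with no test plan, no rollback plan, or no notified departments
    should never reach the committee's approval step."""
    return bool(cr.test_plan and cr.rollback_plan and cr.departments_notified)
```

The point of the sketch is the rollback plan and the notification list: if either is empty, the request is not ready for the committee.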

So, one of the things I like to do when I talk about security is tell stories. I believe these stories are very helpful for understanding some of the security processes that need to be in place. The first story happened while I was a very young engineer at a financial institution. We had installed what was likely one of the very first internet home banking platforms, and we had written the platform ourselves.

One day, I got a call from a very upset gentleman. He told me that every day he was locked out of our home banking system, because every day someone tried to use his account without the correct password. One of the security features of the system was that any account with multiple failed login attempts would be suspended. In those days, we had no way to unlock an account remotely, such as emailing a new password or sending a code to your phone; SMS didn't even exist. The only way this gentleman could get back into his account was to call our call center and have it unlocked. And if he happened to get locked out over a weekend, or at some other time when our call center wasn't open, he had to wait until the next business day. Needless to say, this gentleman was very frustrated. I promised him I would look into the issue immediately, as he was implying that, because of his problem, he thought our systems were less than secure.

Now, the first thing I did was retrieve the logs from the servers where he had tried to sign in. Keep in mind, back in those days we didn't have large statistical analysis systems reviewing all the login logs. Instead, we had large text files stored on our server hard drives, and it was your job to search through those text files to find evidence of someone's login. So, if someone had a problem, you had to figure out which of the three or four different home banking servers the person had logged into and then inspect those logs. Each log could easily have hundreds of thousands of entries to search through just to find the handful of entries for this one person. Furthermore, it wasn't likely that this frustrated customer hit the exact same server each time he signed in. So, my staff and I started working through the logs to determine what had happened.
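To give a sense of what that kind of search involves, here is a minimal, modern sketch. The log format shown (timestamp, account number, source IP, result, one entry per line) and the file paths are hypothetical; the real servers and logs were different:

```python
import glob
from collections import defaultdict

ACCOUNT = "1234567"   # the frustrated customer's account number (hypothetical)

failed_by_ip = defaultdict(list)

# Each home banking server kept its own plain-text log, so search all of them.
for path in glob.glob("/var/log/homebanking/server*/login.log"):
    with open(path) as log:
        for line in log:
            # Assumed format: "1999-05-03T07:12:44 1234567 203.0.113.25 FAILED"
            parts = line.split()
            if len(parts) != 4:
                continue
            timestamp, account, source_ip, result = parts
            if account == ACCOUNT and result == "FAILED":
                failed_by_ip[source_ip].append(timestamp)

# List every source IP that failed against this account, with its attempt times,
# to spot patterns such as three quick tries in the morning, at lunch, and at night.
for source_ip, attempts in sorted(failed_by_ip.items(), key=lambda kv: -len(kv[1])):
    print(source_ip, len(attempts), attempts[:6])
```

Grouping the failures by source address and time is exactly what surfaces the pattern described next.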

The first thing we noticed was that something or someone was logging into his account frequently. In fact, our first suspicion was that it was some sort of scripted tool, because the timing of the logins was very consistent. The only catch was that whoever was trying to log in didn't know the password. The person would try three times, very quickly, and then stop. Back in those days, we did not use user IDs; we used account numbers. My first suspicion was that there was a member out there somewhere whose account number was close to my frustrated customer's and who was mistyping it, failing three times in a row. However, this person would try in the morning, try at lunch, and try at night, like clockwork.

After analyzing the logs, we also determined that most of the attempts were coming from the same internet protocol (IP) address. Back then, there wasn't an easy way to look up who owned an IP address on the internet, but we worked with Verizon to determine the source of this IP address, and we discovered it belonged to a business. I figured I had enough information to call the member and explain what had happened. When I called him, I said, “Sir, I believe someone has your account number, or something close to it, and each day they try to log in and fail.” I added, “Coincidentally, this person either works or lives in this building.” As soon as I named the building and the company at that address, the gentleman went silent. I said, “Sir, are you still there?” He said, “I know what's going on.” I said, “Sir, it would be helpful if you would enlighten me, in case it happens to someone else.” He said, “My ex-wife works in that building.” This was my first experience with a sort of digital vandalism. His ex-wife would get up each day, go to work, have a cup of coffee, bring up our website, put his member number in, and try three times with bad passwords, because she knew this would lock him out and infuriate him. Sometime after that, he would call our call center and get his account unlocked. Then she would come back from lunch, or have lunch at her desk (I'm just picturing her there), and do it again: log in three times, lock him out, and go on with her day. And just before she went to bed, she would do the same thing once more. She had been repeating this process for months. So what could we do to stop this problem?

This led to one of our first security features, which allowed customers to block specific IP addresses from accessing their accounts. It turned out to be a very useful feature and came in handy later, when we started to see attacks from foreign entities: we could now block whole countries by adding their IP ranges to our database. For instance, if you don't live in Pakistan, it's not likely that you will ever access your home banking from Pakistan. If you think about it, this is nothing new for the financial industry. We've been doing it for years with credit cards. If you've never used your credit card in Pakistan, you'd better call your bank before you go there; otherwise, your card is not going to work. Why not do the same thing for our digital products?
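Conceptually, the check is simple. Here is a minimal sketch of a per-account IP blocklist; the account number, addresses, and pre-expanded country ranges are all hypothetical, and a real implementation would pull country ranges from a geolocation database:

```python
import ipaddress

# Per-account blocklist: individual addresses or whole ranges in CIDR notation.
blocked_ranges = {
    "1234567": [ipaddress.ip_network("203.0.113.0/24")],   # e.g., the office building from the story (hypothetical)
}

# Institution-wide country blocks, pre-expanded into ranges for illustration.
blocked_country_ranges = [ipaddress.ip_network("198.51.100.0/24")]

def login_allowed(account: str, source_ip: str) -> bool:
    """Return False if the source address falls in any blocked range for this account."""
    ip = ipaddress.ip_address(source_ip)
    for network in blocked_country_ranges + blocked_ranges.get(account, []):
        if ip in network:
            return False
    return True

print(login_allowed("1234567", "203.0.113.25"))   # False: blocked by the customer's own list
print(login_allowed("1234567", "192.0.2.10"))     # True
```

The important design choice is that the customer, not just the institution, controls part of the list.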

Operation Ababil

In 2012, the financial sector experienced its first full-on attack. The stated reason was a YouTube video depicting the prophet Mohammed. Because depicting an image of the prophet is forbidden in the Muslim faith, a group of Muslim extremists, or cyberterrorists, demanded that all copies of the video, called Innocence of Muslims, be removed from YouTube.

Starting on September 18, 2012, they launched Operation Ababil and began attacking targets like Bank of America and the New York Stock Exchange, in retaliation for the YouTube video. The attacks continued: on September 19, they attacked Chase Bank; on September 21, numerous financial organizations at once; on September 24, the US Department of Agriculture; and on September 25, they took down Wells Fargo's website for almost an entire day.

In the midst of this string of attacks, Senator Joseph Lieberman said in a taped C-SPAN interview, “I don't believe these were just hackers who were skilled enough to cause the disruption of these websites. Suspicions point toward a special unit of Iran's Revolutionary Guard Corps called the Quds Force.”

DDoS Attacks

On October 2, analysts discovered an incredible distributed denial of service (DDoS) toolkit believed to be the software behind the attacks on Bank of America, Chase Bank, Wells Fargo, and PNC. The toolkit was capable of simultaneously attacking components of a website's infrastructure, flooding the targets with millions of packets to overwhelm their ability to serve their customers. These attacks continued throughout 2012 and well into 2013. They were distributed denial of service attacks, which, as I said, simply shut down a website's ability to serve its customers. However, it was determined that many of these cyberattacks were distractions, created to divert the security team at the financial institution from the attackers' real purpose: stealing money from the bank. While the entire staff was dealing with the website being down and the thousands of calls coming in about digital services being down, the cybercriminals were stealing money from accounts. They knew that no one would have a chance to look at the actual transfer files in detail.

On October 15, 2012, Bank of America, Chase, and Citibank were notified that they were the targets of a planned cyberattack. A total of 26 banks and credit unions were identified initially, and more than 100 criminals were believed to be part of the cybercrime ring. By March 2013, the financial sector had replaced the government as the top target of cybercriminals. Several of these DDoS attacks were also accompanied by crippling viruses, as happened to the oil company Aramco. Here is one account of how a distraction attack played out against a financial institution:

A Christmas Eve cyberattack in 2013 against a website of a regional California financial institution helped to distract bank officials from an online account takeover against one of its clients, netting the cyberthieves more than $900,000.

As a matter of fact, what we are seeing now is different organizations partnering to accomplish these sorts of attacks. So, here are some things we can do to mitigate the risk. First, we are going to have to think outside the box in terms of security response teams. My suggestion is that each organization needs two separate incident response teams. Incident response team one is assigned to deal with whatever threat, problem, or disruption has arisen. In the case of a DDoS attack, team one would work with the security vendors and other consultants to stop the attack. Meanwhile, incident response team two would be instructed not to participate in the current attack scenario at all and instead to focus on the ongoing operations of the organization, looking for anomalies in security logs and transmission files. They would look specifically through money transfers to make sure the attack was not a distraction meant to accomplish some other sort of loss.
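What team two looks for can be as simple as comparing the transfers posted during the attack window against each account's normal behavior. Here is a minimal sketch; the record format, accounts, and thresholds are hypothetical:

```python
from collections import defaultdict

# Hypothetical transfer records: (account, destination, amount).
history = [("1001", "dest-A", 250.00), ("1001", "dest-A", 300.00),
           ("1002", "dest-B", 1200.00)]
during_attack = [("1001", "dest-Z", 9500.00),   # new destination, unusual amount
                 ("1002", "dest-B", 1150.00)]   # consistent with history

# Build a simple baseline from past transfers.
known_destinations = defaultdict(set)
typical_amount = defaultdict(float)
for account, dest, amount in history:
    known_destinations[account].add(dest)
    typical_amount[account] = max(typical_amount[account], amount)

# Flag anything going to a never-seen destination or far above the account's past maximum.
for account, dest, amount in during_attack:
    new_destination = dest not in known_destinations[account]
    unusual_amount = amount > 3 * typical_amount[account]
    if new_destination or unusual_amount:
        print(f"REVIEW: account {account} -> {dest} for {amount:,.2f}")
```

The value is not in the specific rule but in having someone, and something, watching the money while everyone else is watching the website.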

Be Afraid When Things Are Down. Be Very Afraid When Things Are Going Well

In 2007, major retailer TJ Maxx disclosed that hackers had infiltrated its systems and stolen credit card data, exposing over 100 million credit cards to fraud. At the same time, TJ Maxx disclosed that the hackers had had access to the network for almost two years. The main hacker, Albert Gonzalez, who at the time was working as an undercover informant for the Secret Service and went by the moniker “Soupnazi,” was able to gain access to the TJ Maxx corporate network by using a practice known as wardriving. Wardriving involves driving around with a laptop, looking for Wi-Fi networks that can be hacked. TJ Maxx at the time was using the WEP (Wired Equivalent Privacy) protocol to secure its wireless network. WEP had been cracked in 2001, and yet TJ Maxx was still using the protocol to protect its Wi-Fi network in 2005. Gonzalez and his group were able to easily infiltrate the network via the Wi-Fi and then escalate their privileges by stealing TJ Maxx users' credentials. Once they had enough access, they set up accounts on the retailer's mainframes and started collecting credit card data by reviewing unencrypted files that were being sent to banks. Gonzalez and his group of hackers would take turns managing the account.

This is where it gets interesting. In order to send the stolen credit card information to their servers, the hackers had to transmit over 80 GB of data. Because the TJ Maxx firewalls were poorly configured, they had to fix problems on the network themselves so they could move that much information. They also had to close some of the backdoors they had discovered so that other hackers wouldn't get in and take the data they had collected. The hackers had access to the TJ Maxx network for over 18 months, and during that time they had to keep fixing things so that IT staff wouldn't come looking for problems and inadvertently discover their access, or shut off access to the areas they needed to complete their criminal activities. They even developed a communication system, leaving encrypted messages for each other on the systems they were accessing so that everyone knew what the last person had done.

I know many CTOs who start to freak out when seemingly random events happen in their systems. They often jump immediately to the conclusion that someone is trying to hack them. That may very well be true, but based on the TJ Maxx incident, I would also be concerned when things are working too well. The last thing a good hacker wants to do is draw attention to himself or herself by setting off alarm bells in your system. Sure, destructive hackers who want to perform distributed denial of service attacks or deface a website will leave their mark, but these are just vandals. The real cyberthieves, the criminals who actually want to steal money or other valuable information, will do all they can to avoid leaving tracks or setting off alarms. So instead of freaking out when something breaks, you might want to be more concerned when things magically fix themselves. If your users can suddenly access a website they couldn't reach before, or if a firewall problem that prevented you from transmitting large files resolves itself and you cannot find anyone who knows why, then I would be much more suspicious.

Security as a Process of Innovation

Some of the most valuable innovations can and should be security related. For instance, the story of the angry ex-wife that I mentioned earlier resulted in a feature that allowed financial institution customers to block access from certain IP addresses. This turned out to be a very valuable feature for the customers, because as time went on, account takeovers became more and more common, and while hackers eventually started spoofing addresses, this measure on our part caused them to move along to greener pastures. Security innovation will continue to move forward.

Some of the most difficult processes to digitize are security related, such as the FFIEC mandate that requires multifactor authentication at login. When digitized, this process is inconvenient for customers and ineffective against today's hacking techniques. Having to answer questions like “Who was your first schoolteacher?” and “What's your favorite pet?” is inconvenient when you're trying to do something quickly, especially if you did not set up the questions to begin with. I don't know about you, but I don't know the last name of my wife's favorite schoolteacher. So how will we look at security as innovation in the future? I believe the evolution of security is going to be built around artificial intelligence and cryptography.

As a matter of fact, the same artificial intelligence that hackers will be employing will be employed by financial institutions to defend against these new attacks. Consider the Facebook chatbot experiment I mentioned earlier, in which one chatbot was pitted against another in a negotiation game to determine whether two AI mechanisms could negotiate with each other. Much in the same way those two systems interacted, I believe that defensive artificial intelligence bots will, in the future, protect our digital systems. These bots will learn from the attacks levied against them and create their own countermeasures. As they begin to create custom countermeasures, they will also work with other financial institutions' defense bots to collectively learn from the attacks happening at other institutions. Through cooperation and aggregation, we will create a much stronger defense against cyberterrorists and cybercriminals.

We will need to reexamine security paradigms and conventional security wisdom if we are to succeed in a more dangerous digital environment. For a long time, digital security has been designed around a castle methodology. The castle protects the crown jewels and is fortified with tall walls, moats, alligators, soldiers, hot oil, and dragons. Each fortification is designed as a defense against the failure of the previous one. The flaw in this design is the assumption that no one will ever breach the castle, because the likelihood of all the defenses failing at once is low. But unbeknownst to the head of castle security, the king likes to throw parties, and during the parties he will let almost anyone into the castle. Sometimes during these parties he orders the guards to lower the drawbridge and turn off the alarms so his friends aren't inconvenienced by the excessive security. The king's enemies closely monitored the habits of the king and the guards and eventually took advantage of this human error to gain access to the castle and steal the jewels.

The castle approach is highly susceptible to human error. All it takes is one unpatched system, and the hackers are in. How can we think differently here? We must assume that the castle will be breached, and as a result, the assets must be protected even when they are in the hands of criminals. This is where cryptography comes into play. If all assets were protected with strong cryptography, it wouldn't matter if criminals got their hands on them, because they wouldn't be able to read the data. This means we don't need alligators, moats, drawbridges, or dragons to protect our data. I can hear your argument: what if someone gets the key to decrypt everything? Again, we need to change our thinking. There shouldn't be one encryption key that decrypts all the data. Instead, we let the members encrypt their own data, and through the magic of the distributed ledger we can leverage a key management system that disburses the keys among all the customers. With this in place, a hacker would have to compromise every customer of the bank to make the data useful. I call this approach “hiding in plain sight.”
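Here is a minimal sketch of per-customer encryption in principle, using the Python cryptography library's Fernet recipe. The key storage shown is deliberately simplified: in the model described above, the keys would be held by the customers themselves or disbursed through a distributed key management system, not kept in one table at the institution:

```python
from cryptography.fernet import Fernet

# Each customer gets their own key; there is no single master key.
# For illustration only; real key custody is the hard part of this model.
customer_keys = {"1234567": Fernet.generate_key()}

def encrypt_record(account: str, record: bytes) -> bytes:
    """Encrypt one customer's record with that customer's own key."""
    return Fernet(customer_keys[account]).encrypt(record)

def decrypt_record(account: str, blob: bytes) -> bytes:
    """Decrypt a record; only the holder of this customer's key can do this."""
    return Fernet(customer_keys[account]).decrypt(blob)

blob = encrypt_record("1234567", b'{"ssn": "xxx-xx-xxxx", "dob": "1970-01-01"}')
# A thief who copies the database gets only ciphertext; without every
# customer's key, the stolen data is useless ("hiding in plain sight").
print(decrypt_record("1234567", blob))
```

The design consequence is the one described above: stealing the database is no longer enough, because the attacker would have to compromise every customer's key as well.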

The Equifax Breach

Consider the Equifax breach that was reported in September 2017. Equifax is a credit agency that collects information from banks and other credit sources and, as a result, holds a huge amount of data on millions of people. The announcement revealed that more than 143 million people in the United States may have had their names, Social Security numbers, birthdays, addresses, and even photo-based identification like passports or driver's licenses accessed.

At this writing, the nature of the attack is not known; however, I would bet that Equifax wasn't completely at fault here. Many believe that when these breaches happen, it is due to carelessness or a lack of security. However, I have worked with many of these organizations, and they work very hard on security. The simple fact is that the castle approach is destined to fail, given the modern tools available to attackers. If, however, Equifax had employed a cryptographic approach and encrypted the data individually, the hackers wouldn't have been able to use the data even though they had gained access to it.

Furthermore, the castle approach also results in the biggest castles being targeted. In this case, the hackers probably spent more time gaining access to Equifax than to other organizations because they clearly understood that hacking this platform would bring more gains than hacking any other organization. Equifax receives data from over 91 million businesses worldwide. So rather than attack 91 million businesses, why not just attack the place where the data are centralized? Identity expert Phil Windley, the chair of the Sovrin Foundation and an enterprise architect at Brigham Young University, has made this same point.

So what is the answer, other than cryptography? Well, if a centralized database draws hackers, then it stands to reason that a decentralized database would be a better approach. The challenge up until now has been that good technology for a decentralized database wasn't available. In the era of cryptocurrency, however, it turns out that a decentralized database technology has actually been around for the last nine years, in the form of Bitcoin. Equally important is removing correlatable identifiers such as login names, credit card numbers, and so on. These identifiers allow others to correlate who you are. For instance, if your login is always “CryptoGuy2477,” then Google and other sites will be able to correlate this with your search history and determine who you are. Again, the underlying technology of the Bitcoin platform comes to the rescue: along with the decentralized network, decentralized identifiers are also available.

Speaking of cyberterrorism, I would be remiss if I did not mention that all financial institutions are covered under the National Infrastructure Protection Plan devised by the Department of Homeland Security. Financial institutions belong to one of the 16 critical infrastructure sectors identified by the government as needing protection: the financial services sector. The Department of the Treasury is designated as the sector-specific agency for the financial services infrastructure. Consider what would happen if cybercriminals were able to destroy the trust between the population and our financial institutions. What if you couldn't trust your own balance? One popular notion of an attack is the idea of going in and altering ledgers, but not in a way that anyone would notice. Most financial institutions have moved to replicating their ledgers between active infrastructures to achieve high availability.

What this means is that if you were to attack a system and make subtle changes to, say, a couple hundred thousand accounts, no one would notice. Over time, these errors and changes would be replicated to other systems. Eventually, it would be nearly impossible for the financial institution to restore from backups to erase the changes made by the criminals. In this way, an attacker could erode the trust customers have in the financial system, and that could be catastrophic to our economy.
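One possible control, offered here only as an illustration and not a complete defense, is to fingerprint the ledger periodically and compare that fingerprint across replicas and snapshots; a subtle change to even one balance changes the fingerprint. The accounts and balances below are hypothetical:

```python
import hashlib

def ledger_fingerprint(balances: dict[str, int]) -> str:
    """Hash every account balance (in cents) in a fixed order into one digest."""
    digest = hashlib.sha256()
    for account in sorted(balances):
        digest.update(f"{account}:{balances[account]}".encode())
    return digest.hexdigest()

primary  = {"1001": 125_000, "1002": 98_250}
replica  = {"1001": 125_000, "1002": 98_250}
tampered = {"1001": 125_000, "1002": 98_230}   # a subtle 20-cent change

# Fingerprints from independent replicas (or yesterday's snapshot) should match;
# a mismatch is a signal to investigate before bad data replicates everywhere.
print(ledger_fingerprint(primary) == ledger_fingerprint(replica))    # True
print(ledger_fingerprint(primary) == ledger_fingerprint(tampered))   # False
```

The point is to catch the subtle change before it is replicated everywhere and the clean backups age out.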

So what does this mean for you? It means that, as a financial institution, you are under heavy regulatory scrutiny with regard to security. Because of this, it would be very easy to throw in the towel and give in to the forces within your organization, particularly internal security teams that typically fight against new innovations out of concern for security. While this may seem like a safe move, you can also secure yourself right out of business. The challenge with security is not to say no to everything; the challenge is to create an environment where the security group is involved in all new features and innovations, and a culture that encourages them to find solutions and reduce risk rather than obstruct projects.

Security is a critical function that must have a seat at the table for all projects. That said, security should also have checks and balances to make sure it is not slowing projects down. I knew a security officer who was super innovative; instead of stopping ideas because of security concerns, she would work with the team to come up with solutions to the security problems. I believe it would be useful to manage security personnel based on the ideas they help get out the door. Executive leadership should measure security on both consistency of process and innovation. Creative thinking is going to be a highly sought-after trait in future security professionals.

When developing your own platforms, security takes on a whole new meaning. When an institution builds a product internally, the product usually undergoes a lot of scrutiny by the staff, but there are steps that would be standard in a professional fintech development shop that are often overlooked in an in-house development project. Three important security considerations should be noted as FIs continue to round out their development capability:

  1. Since code created in house will be exposed to the general internet, it is critical that a code review be performed.
  2. Web application and network penetration testing should also be completed, and all issues remediated, before any production rollout.
  3. Stress testing should be performed before any product or project is deemed production ready. This is particularly important when a project has a potential latency problem (see the sketch following this list).
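As a rough illustration of item 3, here is a minimal stress-test sketch; the staging URL and load numbers are hypothetical, and a real rollout would typically use a dedicated load-testing tool:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests   # third-party HTTP client

STAGING_URL = "https://staging.example-fi.com/login"   # hypothetical endpoint
CONCURRENT_USERS = 50
REQUESTS_PER_USER = 20

def one_user(_):
    """Simulate one user hammering the endpoint and record each response time."""
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.monotonic()
        requests.get(STAGING_URL, timeout=10)
        timings.append(time.monotonic() - start)
    return timings

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    all_timings = [t for user in pool.map(one_user, range(CONCURRENT_USERS)) for t in user]

# Median and tail latency under load; a large gap between them is exactly the
# kind of latency problem that should block a production rollout.
print("median:", statistics.median(all_timings))
print("p95:", statistics.quantiles(all_timings, n=20)[18])
```

Even a crude probe like this, run before go-live, will surface problems that only appear under concurrent load.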

Many credit union CEOs and CIOs I have talked to about in-house development have cited security as their number-one reason for not allowing their staff to develop customer-facing products in house. They fear that if a vulnerability were found in the software, they wouldn't have anyone to blame but themselves. These CUs would rather get software from a vendor they trust. However, even when you get software from a vendor, you cannot be sure there are no vulnerabilities.

It is important to have a process to review the security procedures for the software platforms you depend on, whether you build them yourself or buy them from a vendor. It is also important to have a risk review for every platform so you understand what the damage would be in the case of a breach.

Scenario Planning

One great way to think and learn about digital security concepts is to do some scenario planning. What follows are some interesting and in some cases outlandish scenarios to think about. (See this book's companion website for scenario worksheets.)

Scenario 1: NSA Backdoor

What if you woke up one morning to discover your FI on a WikiLeaks list of institutions to which the NSA had gained backdoor access?

Recently, I was fortunate enough to attend the Temenos Community Forum in Lisbon, Portugal. This was a big conference focused on “real-world fintech,” tailored for an international audience. After filling my buffet plate and searching for a lunch seat one afternoon, I joined a gentleman sitting alone at a round table. We engaged in conversation and I learned he was from a bank in Dubai.

Immediately, I felt a bit awkward—not because of anything he said but because WikiLeaks had just revealed that the NSA, the US government's electronic spying agency, had hacked a group in Dubai that provides SWIFT payment services to banks in the region. I wondered what he thought about this, and how I would feel if the situations were reversed. Intellectually, I understood that neither of us was involved in the hacking, but when abroad we all become the embodiment of our country's actions to the people we encounter. The topic never came up, and we had a delightful conversation. The thought stuck with me, however.

Let's take a trip into a hypothetical bad day at your FI.

What if one morning you woke up to find your own institution's name in the news? Here's how it might play out for the CEO or CTO of the organization. Imagine driving into work as usual on a bright and sunny morning, thinking about the day's work ahead. Your phone begins buzzing and binging frantically. You are curious, but of course safety comes first, so you pull over before checking your messages. Your marketing team would likely be the first to know, because they have a Google Alert set up to track the institution's name. They've already left you voicemails and forwarded links. After reading an article or two, you learn your system has a “backdoor” that the NSA has been using to monitor some of your customers' data. This backdoor relies on what is called a zero day, a software vulnerability coveted by hackers and organizations like the NSA because it is unknown to software providers like Microsoft and therefore never gets fixed. Microsoft depends on the white-hat hacking community to report these issues so they can be resolved. When a nefarious individual or a state actor finds one and doesn't report it or share it with anyone else, it remains a zero day until it is discovered by either the software company or the white hats.

You now know you have a backdoor in your system, and, more importantly, the world knows. The logical first step is to plug the hole the NSA was using to get in. On the surface, this seems straightforward, since the issue is documented on the WikiLeaks site. However, if you have ever reviewed WikiLeaks' data dumps (and I have), you know that finding the necessary information isn't that easy. You will likely need to use a torrent client to download the detail you need to close the hole. Meanwhile, your call center is besieged by customers wanting to know if their accounts were compromised. Do you console your customers by reasoning, “Relax, it's the NSA; they are working to keep us safe”? This approach may play better with some customers than others, and the reaction will almost certainly not be unanimous.

From here, the plot thickens. You confirm that the NSA has been regularly monitoring the accounts and purchases of certain customers. Do you tell the customers? Was the NSA watching them for a sound reason? Do you need to research the customer to ensure you performed proper vetting (OFAC, etc.)? Does the NSA call you? What if you get a letter from the State Department instructing you not to inform your customers? If you follow its direction, your entire customer base might conclude that their accounts have been compromised and lose trust in the institution.

While your team is sorting through the details to determine root cause and full impact, curiosity builds over who was hacked and how. News organizations are calling everyone related to your FI in the LinkedIn directory, trying to find somebody willing to dish.

Meanwhile, there's mundane, everyday banking business to perform. Good news: someone on your IT team figured out the puzzle. Bad news: the hack involves a critical piece of software that can't be fixed without Microsoft's help. While IT is on the phone with Microsoft, others move on to the important business of determining what has happened to whom.

More good news/bad news: Microsoft has designed a workaround, but it will take four days to code and implement. You are now faced with the unappealing options of leaving your online banking running with the hole that let the NSA in, now well documented within the hacker community, or depriving your customers of online access for an extended period. By the way, your security team has detected high levels of suspicious traffic on your websites and mobile sites. You discover hackers are trying to use that same backdoor to compromise your site…. Decision made.

You choose to take your site down. The call center is melting down, the camera crews are in the parking lot, and you are reduced to the fetal position in the breakroom.

How do you respond to this? Download the scenario response form from the companion website.

Scenario 2: Ransomware

On May 12, 2017, users from all around the world woke up to find a message on their computer screens (Figure 8.1).


Figure 8.1 Encryption ransomware message

Hospitals, banks, and other large organizations around the world reported that their computers were affected by this virus. In a bizarre twist of events, a security researcher named Marcus Hutchins, known online as MalwareTech, was reviewing the malware's code when he accidentally discovered its kill switch and stopped the attack. According to Hutchins, during his review of the code he found a list of URLs (domain names), and when he researched them, he noticed that one of them wasn't registered. After he registered that URL, the malware stopped working, which was incredibly fortunate, because the ransomware (aptly named WannaCry) had infected over 400,000 machines, 98 percent of them running Windows 7. The hackers were demanding $300, or 0.1 BTC (bitcoin), to unlock each machine. Further analysis of the virus by security experts exposed the fact that the hackers weren't prepared to collect all of the bitcoin that the system had generated (in fact, according to most experts, only 300 or so people paid the ransom).3
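The kill switch itself was conceptually simple. Here is a minimal sketch of how that kind of check works in principle; the domain shown is a placeholder, not the actual WannaCry domain. The malware tried to resolve a domain nobody had registered and only kept going if the lookup failed, so registering the domain made the lookup succeed everywhere and halted the spread:

```python
import socket

KILL_SWITCH_DOMAIN = "example-unregistered-domain.test"   # placeholder, not the real one

def kill_switch_active() -> bool:
    """Return True if the domain resolves, i.e. someone has registered it."""
    try:
        socket.gethostbyname(KILL_SWITCH_DOMAIN)
        return True
    except socket.gaierror:
        return False

if kill_switch_active():
    print("Domain resolves: stop.")   # effectively what happened once the domain was registered
else:
    print("Domain does not resolve: the malware would have continued spreading.")
```

It is a reminder that even sophisticated attacks can hinge on small, fragile assumptions.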

So, what if you woke up tomorrow and got a call from your security team that 70 percent of your systems were locked up with ransomware, and that even with restored backups it was going to take at least a day to get back to being operational? How would you handle such an event? Do you pay? Do you admit to the public that your systems are vulnerable? What if your ATMs are among the infected systems and are now vulnerable as well?

Scenario 3: Cyber Infrastructure Attack

On December 23, 2015, hackers gained access to three energy distribution companies in Ukraine and temporarily disrupted their service. More than 230,000 people were left without power on one of the coldest nights of the year, and it took six hours for some customers to regain power. The attacks were traced to IP addresses owned by the Russian Federation. Like something out of a movie, operators at the power companies reported their mouse pointers moving on their own; they were powerless as they watched whoever was controlling the mice turn off the power to hundreds of thousands of people.

It is widely believed that this is just the beginning of these kinds of attacks. While it is interesting that the attackers cut off power, most major financial institutions have backup power. But what if attackers specifically targeted the financial sector? What would it look like if hackers disrupted the payments system on Black Friday? What would happen if Visa, Mastercard, and Discover suddenly stopped working on December 23?

Scenario 4: Internet of Things Breach

The rise of IoT (Internet of Things) devices has created a new security threat. Case in point: on October 21, 2016, the internet experienced the strongest bandwidth attack ever recorded in its history. A company that maintains the domain name system for many of the internet's most popular sites was attacked with a distributed denial of service (DDoS) weapon that generated and hurled over 1.2 terabits per second of traffic at its infrastructure. This DDoS attack was twice as strong as any attack previously recorded in the history of the internet.4 In addition to being the strongest, it was also a sustained attack that took down sites like Twitter, Netflix, Reddit, CNN, and other large sites for many hours. I was online that day, and I must say it was unnerving to see a site like Twitter wiped off the internet. If you typed in its domain name, it looked like a site that wasn't configured; not even an error page came up. The combination of bandwidth and sustainability proved to be more than any defense available at the time could handle.

What made it more interesting was that the attack was carried out by hundreds of thousands of IoT devices such as webcams, thermostats, DVR players, and other internet-enabled products. The IoT explosion has been well documented in technology circles; everything from refrigerators to bird feeders has been internet-enabled in the name of innovation. Sadly, in the rush to corner the market, it appears that security wasn't a high priority for many of the creators of these products, and as a result, hackers were able to conscript hundreds of thousands of devices and turn them loose on a critical piece of infrastructure, crippling many major sites. When a device is taken over and used for nefarious purposes without the consent of its owner, it is called a bot; when you have multitudes of these devices controlled from a single source, it is referred to as a botnet. A botnet is a command-and-control platform that allows a single bad actor to direct hundreds of thousands of machines from a single console. Botnets are considered highly valuable in the cybercrime world and can be bought or rented to perform various actions, such as delivering spam, spreading ransomware, or launching DDoS attacks.

Up until this point, no one had seen a botnet made up exclusively of IoT devices. Analysis of the attack revealed a new botnet console specifically made to control IoT devices, called Mirai (Japanese for “future”). In retrospect, it should have been pretty obvious that these small devices with high-bandwidth connections would be high-value targets for hackers. Most experts believe this is not the last we will see of these sorts of attacks.

So here is your scenario. I have already noticed financial institutions putting these devices in their branches and headquarters. So assume that another attack happens, and you discover that some of the devices that were involved in the attack were inside your firewall. Once you discover these devices are part of the attack, what are your next actions?

NOTES
