CHAPTER 14
Digital Governance

Let me start this section by saying I am not a big believer in creating governance for the sake of saying we have it. I believe one of the great enemies of an innovative and productive culture can be too much governance. That said, I do believe that the right amount of governance done for the right reasons can be very effective and improve every aspect of your organization.

To begin with, let's cover a couple of kinds of governance that people often quote in books like these. ITIL, which is an acronym for Information Technology Infrastructure Library, is a detailed set of practices for IT service management (ITSM), the discipline of delivering services such as ACH or bill pay in a way that aligns them with business objectives. It's full of checklists, best practices, tasks, and procedures that are not organization specific. It has its roots in BS 15000 (a British Standard), which was eventually superseded by the international standard ISO/IEC 20000; ITIL itself was revised in 2011. It is a valuable tool for an organization, especially if you are exceedingly large, and by large I mean that your IT department is over 1,000 people. If you are reading this and your organization is at this level, then this is a good standard for you to look at—and chances are, if you are that large you already have something like this in place. However, if you are an organization that is more medium sized, then trying to implement something like this is like using a sledgehammer to pound in a penny nail. There are great things in it, but it has a lot of bloat that is specifically in place to deal with large organizations. This is likely not the official ITIL position, and I am not an ITIL-certified professional, but these are my thoughts after being asked to consider several different ways to implement governance in medium-sized financial institutions. So, bottom line: ITIL is great, and parts of it are very useful.

The second one that I am often asked about is called COBIT (Control Objectives for Information and Related Technologies). COBIT was created by ISACA (Information Systems Audit and Control Association), and like ITIL it is a practical collection of procedures, tasks, checklists, and other processes designed to ensure best practices in IT (think security or storing data properly), formally document business requirements and map them to procedures and processes, measure performance, and document how all systems interact. In my experience, this framework also provides good tools that can be used to improve organizations. I still feel that implementing it in a small organization would be overkill, but there are elements of it that could really help an organization in a growth stage.

The final approach I have considered is the plan, build, and operate framework. McKinsey (www.mckinsey.com) envisioned the original plan-build-run model, and groups I have worked with in the past have enhanced this framework by adding security as a fourth major tenet of the approach. Traditional tower-based or segmented approaches to governing organizational infrastructure and services have reached their limits, and in the increasingly digital future, an agile approach to governance is necessary.

So what to do? Let's start by taking a quick quiz to determine what you have in place already:

  1. We have a group of people who meet regularly to discuss digital services that is made up of middle management and key system experts. (YES/NO)
  2. We have a services catalog that I can refer to that helps me to determine how systems interact with each other and who to call in the event of a problem. (YES/NO)
  3. When we have a new project, it is reviewed by the group mentioned in question number 1. (YES/NO)
  4. We have regular audits of digital services. (YES/NO)
  5. We have regular stand-up meetings to review upcoming digital services and offerings. (YES/NO)

Chances are, you were able to answer “YES” to many of these things, but in the back of your mind you were thinking, “well, we meet on these things, but it's not formally organized, or it's not regularly.” In any case, almost every organization could use some help in these areas. If you have a group that is meeting, but you haven't formalized its charter, then now is the time. Another one of my beliefs is that names are important. When I am brainstorming an idea with my team, I like to name it very early, even if the name we come up with isn't the eventual name of the product, group, or service. It's good to label it early on so that people can easily identify the concept during conversations.

The same is true of governance groups—labeling your groups helps teams to understand their purpose and communicates to others the objectives of the group. The digital governance group is just that—a group that is responsible for governing the digital services that your organization provides to its consumers and its staff. Your CDO or closest facsimile should be the chair of this group. The group should have a core and include leadership from each of the major divisions, but also should be flexible to expand to others when necessary and shrink when necessary. The priority of the group is to document your organization's services and create a services catalog.

What I am about to take you through is what I usually recommend to organizations just starting to implement governance. It creates natural processes and encourages open and candid communication regarding digital services within the organization. It represents the parts of the ITIL, COBIT, and McKinsey "plan, build, operate" paradigms that I believe are valuable, but it excludes some of the more tedious processes that tend to slow down organizations and are perceived by the staff as pointless red tape. This approach should be considered a starting point for governance; as the organization matures, these processes should be evaluated and reviewed, and organizations can adopt more of the guidelines and practices outlined in the methodologies described above as necessary.

A services catalog is a common element among these approaches and an important tool for the organization that wishes to transition to digital services. The services catalog is a list of services that the organization provides to either customers or staff. The list includes who in the organization is ultimately responsible for the service, exactly what the service does, the service level agreements, service dependencies, emergency contacts, service escalation points, and much more. On the website you will find a sample services catalog. Have you ever been using your home banking or mobile application and discovered something wasn't working—for instance, bill payment? You call your CTO, he checks it out from his side, and he says, "Well, it appears to be working for me" (this is what I call the mechanic effect: the car won't make the funny noise in the presence of the mechanic). After a bit you discover that it fails only in the mobile application, and after a few hours of research it is determined that there is a dependency within mobile that allows bill pay to work that isn't present in the home banking (or web) platform.

Having a services catalog would expedite finding these problems by providing a list of dependencies for anyone to check, or possibly this service would've been identified by the digital governance committee as critical, and as a result it would be monitored or even duplicated in an effort to build resilience into the infrastructure. Unfortunately, this scenario is very common in the world of financial institutions. It is a reactive approach to redundancy, adjusting for adversity only in the face of a problem as opposed to planning for failures and building redundancies ahead of time. In the postmortem, you discover that the bill-paying platform is full of redundancies, but this single point of failure wasn't identified: it was part of the mobile application and wasn't considered in the design phase because mobile was added after home banking. This is exactly what a digital governance committee or group would be tasked with resolving. The governance committee would've been involved in the mobile implementation, and during the project the services catalog would've been updated to include the dependencies of the services and the features that were aggregated through this channel.
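A services catalog lends itself to a simple machine-readable form. The sketch below shows one way to record catalog entries and answer the question from the scenario above ("what depends on bill pay?"). The service names, owners, and contact addresses are hypothetical, invented for illustration only.

```python
# A minimal sketch of a services catalog, using hypothetical
# service names, owners, and contacts for illustration.
CATALOG = {
    "bill_pay": {
        "owner": "VP of Digital Services",
        "description": "Lets customers schedule and send bill payments",
        "sla": "99.9% availability",
        "dependencies": ["core_banking", "payment_gateway"],
        "emergency_contact": "ops-oncall@example.org",
    },
    "mobile_app": {
        "owner": "Mobile Product Manager",
        "description": "Customer-facing mobile banking application",
        "sla": "99.5% availability",
        "dependencies": ["bill_pay", "core_banking"],
        "emergency_contact": "mobile-oncall@example.org",
    },
}

def dependents_of(service):
    """Return every catalog entry that depends on the given service."""
    return [name for name, entry in CATALOG.items()
            if service in entry["dependencies"]]

# When bill pay breaks, the catalog shows which channels are affected.
print(dependents_of("bill_pay"))  # prints ['mobile_app']
```

Even a flat structure like this turns the hours of research in the scenario above into a single lookup, and it gives the governance committee something concrete to update during each project.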

The digital governance group ideally should meet once a month, but also should be flexible and meet when necessary during important stages of projects or in an emergency situation (like a security breach or outage). The responsibilities of the digital governance committee are discussed in the following section.

Review Proposed Products and Integration

Any product or feature that is being proposed for the organization needs to be vetted by this group. The group should look at whether the product or feature can be integrated, what it will take to support it, and what the top-line revenue will be. Does it need to be reviewed by the data governance group? What are the SLAs? What is the risk if the product is breached or out of service for a significant amount of time?

Change Control

Change control should be reviewed in the monthly meeting by the team. The team should review changes and make sure that all protocols are followed, including rollback plans. Changes and change approvals should be recorded so that if a change has a systemic effect, the logs will show when the problem started. An example might be a change made to the login process on the mobile application. Afterward, the team might see that failed logins have been rising, and upon further examination find that the rise began on the date of the login process change. When this is discovered, the committee may insist that the change be rolled back by following the procedures set forth in the rollback plan for the change. High-risk changes should be identified, and the committee should question all facets of the systems involved, including whether backups are working and whether a penetration test has been performed; when necessary, the committee can recommend stress testing.
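The failed-login example above can be automated once changes and metrics are both logged with dates. The following sketch compares the average of a metric before and after each change and flags changes followed by a sharp rise; the change IDs, dates, and counts are hypothetical.

```python
from datetime import date

# Hypothetical change log and daily failed-login counts.
changes = [
    {"id": "CHG-101", "date": date(2023, 3, 1), "summary": "new login flow"},
]
failed_logins = {
    date(2023, 2, 27): 40, date(2023, 2, 28): 42,
    date(2023, 3, 2): 95, date(2023, 3, 3): 110,
}

def suspect_changes(changes, metric, threshold=1.5):
    """Flag changes where the average metric after the change date
    exceeds the average before it by more than `threshold` times."""
    flagged = []
    for chg in changes:
        before = [v for d, v in metric.items() if d < chg["date"]]
        after = [v for d, v in metric.items() if d > chg["date"]]
        if before and after:
            if sum(after) / len(after) > threshold * sum(before) / len(before):
                flagged.append(chg["id"])
    return flagged

print(suspect_changes(changes, failed_logins))  # prints ['CHG-101']
```

A flagged change is not proof of cause, but it tells the committee exactly which rollback plan to pull off the shelf first.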

Review Security

Results of all security tests should be reviewed by the committee to determine if the tests are still effective and whether the testing procedures need to be changed. Digital security is the most important aspect of a digital transformation. It is where the trust between your organization and the consumer lies, and this relationship must be treated with the utmost care. A single breach or a public incident can erode the trust of even the organization's most ardent supporters. I have always followed a philosophy that you can never have enough security. In that regard, I have always advocated for exceeding any security regulations rather than just meeting them. This will be discussed in more detail in the security sections in chapters 7 and 8.

Accountability

The committee will share accountability with the service owners, which is an important aspect of the committee. Often in organizations where accountability for digital systems is centralized to IT, other departments tend to turn their back or place blame when systems fail or a system update causes errors. Since the committee is made up of leadership across the organization, there is no excuse for a lack of communication regarding digital services. This is why it is so important to have diversity on the digital governance committee.

Business Continuity

As financial institutions become increasingly dependent on digital services, it is vitally important that all systems in the portfolio have a business continuity plan and that these plans are regularly tested. I often find that organizations are behind on their business continuity testing or that their testing doesn't include newer systems that have become critical components of the organization's infrastructure. My favorite example of a business continuity system that doesn't get tested enough is backups. In his book Creativity, Inc., Ed Catmull tells the story of how someone accidentally erased the entire file system for Toy Story 2. The person was upset but knew that these files were regularly backed up, so they resigned themselves to reloading the backups. Sadly, they learned that the backups had stopped working more than a year before. This loss would've put the movie two years behind schedule. The movie was saved by the fact that someone had copied the entire movie to his local system to work from home. In the financial world, a local copy of a database or file system on an employee's laptop would be expressly prohibited.

I believe in backup fire drills. It's a good idea to have a schedule of backups to be restored so that every backup is tested within six-month intervals. This should also include failover systems: if you have an active/backup system, then I would recommend failing over to the backup system, running on it for a period of time, and then failing back. Another aspect of the drill is making sure you can return to normal operations once the systems are restored. At one of my jobs we offered BCP services within our data center. One of our clients declared a disaster and failed over to the systems they housed at our shop. During this time our operations people took over the job of caring for the system and running the nightly financial processes. A few weeks went by, and I asked our data center manager if the client had restored their systems (as it turned out, the disaster was declared due to bad memory in their mainframe). He responded that they were still trying to figure out how to fail back to their normal operations. It is a mistake to think that IT can handle all of this by themselves. BCP is the responsibility of the entire organization, and the digital governance committee is a great place for the team to come together and acknowledge these responsibilities as a group.
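The six-month fire-drill schedule above reduces to a simple report the committee can review at each meeting: which systems have gone too long without a restore test. The system names and dates below are hypothetical.

```python
from datetime import date

# Hypothetical record of when each backup was last restore-tested.
last_restore_test = {
    "core_banking_db": date(2023, 1, 15),
    "document_imaging": date(2022, 2, 1),   # over a year ago
    "mobile_config": date(2023, 5, 20),
}

def overdue_restores(tests, today, max_age_days=182):
    """Return systems whose last restore test is older than
    roughly six months (182 days)."""
    return sorted(name for name, tested in tests.items()
                  if (today - tested).days > max_age_days)

print(overdue_restores(last_restore_test, date(2023, 6, 1)))
# prints ['document_imaging']
```

The same pattern extends to failover drills: record the date of the last successful fail-over-and-back exercise per system and flag anything past its interval.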

Schedule Approval

From time to time, there will be collisions in the organization between two scheduled tasks. Perhaps there is a need to perform an upgrade on the ATM fleet, and the vendor has given a deadline to do this on a certain date, but that date is also being used to update another system. It is always best practice to avoid two major changes or updates on the same date. The committee should vet these changes ahead of time and make a decision based on the risk, cost, and urgency of the updates.

Build versus Buy

Building software internally versus buying it from a vendor is undoubtedly one of the most important decisions that the committee will make. The committee should also be intimately involved with the development group. Many times, a group will decide that they need to work with IT to build a product or service to satisfy a need within the organization. Building software needs to be carefully considered, especially if the organization hasn't built software before. I have seen many organizations build a great service or product without considering the long-term care and feeding of the product, and as a result, over time the product or service isn't viewed as valuable. If the care and feeding of the product had been considered in the planning stages, this degradation could've been avoided; or perhaps, due to the cost of care and feeding, the organization might have chosen to buy a product rather than build it. It is also important to distinguish between building and integrating. Integration is far different from building an entire product. For instance, let's say the team has decided on a new feature or service, and this service needs to be installed throughout the organization. The service needs to be delivered via the branch, home banking, and mobile. The team may decide that integrating this service into the delivery channels by using internal resources is the best path. Even integrations need an evaluation of their long-term care and feeding. If the home banking provider or the mobile provider makes changes, then there is a high likelihood that the integration will need to be tested afterward to make sure that the changes haven't broken any part of it. Here is an interesting build example from my past. At one of my jobs, we had an issue with address synchronization.
When a customer would call in and change an address, the customer service representatives had to update the address in three different systems. This problem was brought to the committee, and a search was started to find a solution that would update addresses across several systems. One of the requirements was to use the NCOA (national change of address) service to check addresses before updating the systems. After an exhaustive search, no product that fit the specific requirements of the organization could be found. It was decided that a solution would be built in-house. A simple website was built that the entire organization could use to update addresses. The addresses would be checked in real time against the NCOA service, and then the website would update all of the necessary systems in real time. Over the years, the software had to be updated and more features were added, and as far as I know, to this day it is still in use. The factors that led to the build decision were that the system would bring a lot of value to the organization by reducing errors and speeding up a process that was taking over five minutes per request; that there wasn't anything on the market that would satisfy the requirements of the organization; and finally, that the scope of the build was within the organization's capabilities.

Final Approval on Recommended Vendors

When the decision has been made to implement a new feature or service and a new vendor or system will be implemented as a result, the committee must have final say on the vendors. This is extremely important because only the committee can properly weigh technology versus features in a platform. Time and time again I have reviewed institutions where a service or feature is not present in their digital offerings because of a decision to purchase a product with a limited ability to be integrated. When the decision is left to departments or groups that are not thinking of the solution from a digital perspective, they will often choose a solution for features over the ability to be integrated. Older systems tend to have more features because of their maturity; however, it is often very hard for these systems to be retrofitted with the latest technology that allows for integration into newer systems such as mobile platforms.

If this process is run through the committee, then there is a chance for each of these aspects to be reviewed, and a balance can be struck between features and the ability to integrate the service into the organization's digital ecosystem. This is also a chance to make sure that the service's underlying platforms fit in with the rest of the organization's infrastructure. For instance, if your internal expertise is in Microsoft software and the preferred service runs on a Linux platform, then a decision must be made whether to acquire the appropriate resources to support the service, which would mean additional cost, or to keep searching for a service that runs on the organization's preferred platform and takes advantage of the in-house expertise. No matter how good the system is, if it can't be maintained properly, then it will ultimately be a failure in the eyes of the end users.

As you can see, this group is very important and provides a valuable service in the form of checks and balances for an organization going through digital transformation. The group should not be determining business strategy, but rather enforcing it by aligning the digital services and processes with the strategies of the organization.

Data Governance

In the evolving digital world, an organization's ability to transform data into information will be the difference between surviving and disappearing. So if data is money, then it too must be governed and curated. Although this might seem like something the digital governance committee would handle, it's a completely different discipline that involves understanding regulatory and privacy laws around data and should be given its own group.

Here is why you need a data governance committee. Imagine that one day your team purchases a new service that uses social media to help determine creditworthiness. The team does the right thing, puts the system through the newly created digital governance committee, and follows the process. However, during the discovery process it is missed that the system is capturing and storing social security numbers—more importantly, these numbers are being stored in clear text. As luck would have it, this system is compromised by hackers, and as a result, your customers' social security numbers are now being sold on the dark web. Customers notice a trend, and the breach is tracked back to your organization. This creates a reputation issue that cannot be easily resolved.

Another example would be if a new system had data that was duplicated in other systems, or conflicted with your system of record. The system of record is the authority or source of truth in your data ecosystem—in other words, if you had a conflict between two pieces of data, the system of record would be the one you trust. It ties all the other systems in your ecosystem together (see the center of the mind map illustration below). Having data duplicated on external systems is costly and inefficient; it also can lead to errors in processing. The data governance group is responsible for identifying, classifying, and organizing data within your organization. The data governance group will determine if data are duplicative or must be treated specially, such as personal data. This group will also be responsible for determining data quality. As each new system is implemented in the organization, more and more data are being introduced, and if there isn't governance to make sure the new data sources are accounted for then your organization could wind up playing cleanup for years or, even worse, leaking data without even knowing it.

The data governance group should be chaired by your CAO (chief analytics officer) or a reasonable facsimile. The group should include subject matter experts from each department. It is important that the group is made up of people who can work together to make decisions and who have the expertise to identify the data in your organization and what that information means to other processes. In every department, there is a "go-to person" for reports—you know the one: she or he is frequently mentioned in meetings when someone on the team asks for data. These are the people that you want on your data governance committee.

For instance, let's say your group implements a new lending system and as a result, several reports need to be changed because the current source that the data is drawn from will be going away. It might seem like a simple solution, just retool the report to get the data from the new source, but this would only work if the data between the two systems exactly match. What if the new system has new features that split up payments or allow partial payments? Then the due date or the balance could be affected. This would mean that the reports would have to be retooled to support the data. However, without oversight from a group like the data governance committee, these issues could easily be overlooked, and either the reports would be wrong until someone noticed or the problem would surface during the project, and since it was unaccounted for in the project plan, it could increase the cost or extend the project timelines. There are many nuances to data governance, and it is a necessary ongoing process for your organization.

As we move into a future that will include artificial intelligence and business intelligence, data will be the fuel that drives these important services. A data governance committee should start by finding all the organization's data and reviewing it. This may take some time, and it may be worthwhile to hire an organization that specializes in mapping data. The initial data map will become the baseline for your organization and will be invaluable going forward. The second step for the committee is to review the security of the data discovered during phase one. Do all the data storage and transmission paradigms meet regulatory compliance standards? If not, what is the plan to bring the data into compliance? At the same time, the committee will be developing policies and standards to be applied to any new product or service and added to the master data dictionary. As mentioned above, this will reduce future problems and make implementations go more smoothly.

Figure 14.1 describes the continuous cycle that the data governance group will engage in. Because of the amount of data that we are continually collecting, this cycle will be important to your future. The data mapping cycle is the process of understanding the location of data in your organization, and what its relationship is to the rest of the data ecosystem. It will involve creating a document that is like the services catalog. Each data element is cataloged, with its attributes noted. Table 14.1 represents the most common data elements that are captured during the data mapping phase.

Schematic illustration describing the continuous cycle of a data governance group.

Figure 14.1 Data governance cycle

Table 14.1 Data-mapping key concepts

Attribute: Definition
Data type: Is the information a number, string, or computation?
Data permanence: Will data be provided in real time or be updated by another system using an offline process?
Data retention: How long should the data be kept before being destroyed or moved to an offline archive?
Data security: Should the data be encrypted, and if so, what regulations, such as PCI or Sarbanes–Oxley, apply to the data?
Data controls: Who has the authority to look at the data?
Data management: Who is accountable for the data?
Metadata: Will other data be used to describe or derive the data?
Data relationships: Are the data elements duplicated or mapped elsewhere in the system?
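The attributes in Table 14.1 map naturally onto a data-dictionary record, and a small check can report which elements are still missing required attributes. The element name, field values, and attribute keys below are hypothetical stand-ins for whatever your dictionary actually captures.

```python
# Required attribute keys, paraphrased from Table 14.1.
REQUIRED = {"data_type", "permanence", "retention", "security",
            "controls", "management", "metadata", "relationships"}

# A hypothetical data-dictionary entry for a credit-score element.
dictionary = {
    "credit_score": {
        "data_type": "number",
        "permanence": "updated weekly by the lending platform",
        "retention": "7 years",
        "security": "encrypted at rest",
        "controls": "lending staff only",
        "management": "VP of Lending",
        "metadata": "score model version",
        # "relationships" is deliberately missing to show the check
    },
}

def missing_attributes(dictionary):
    """Report which required attributes each element still lacks."""
    return {name: sorted(REQUIRED - set(entry))
            for name, entry in dictionary.items()
            if REQUIRED - set(entry)}

print(missing_attributes(dictionary))
# prints {'credit_score': ['relationships']}
```

Running a check like this after each project keeps the data map from quietly drifting out of date, which is the committee's whole battle.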

In Figure 14.2 one could drill down into each of the systems and see the data associated with them. The data then could be tracked back to processes, services, features, and projects within the organization. As you might imagine, having such a tool will be invaluable in the future.

Schematic illustration of a basic data map that explains the high-level overview of a master system.

Figure 14.2 Basic data map: High-level overview

Data Quality

Data quality is the process of evaluating each data element to determine whether it is viable for reports and analytical functions. Data points are checked against norms within the industry to see if any fall out of bounds. For example, how many times have you eyeballed a spreadsheet or report and seen something that seemed off? The number was too large, or too small. Usually, a report error like this is related to a data quality issue: the data that the report was derived from was flawed in some way. Testing data against norms is a valuable process for determining if you have processing issues or other errors causing inconsistencies in your organization.
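The "checked against norms" step above can be a simple bounds scan. The sketch below flags any value outside an expected range; the field names and the bounds (a 300 to 850 credit-score scale, a 36 percent rate cap) are illustrative assumptions, not a statement of your regulatory limits.

```python
# Hypothetical norms: acceptable (low, high) bounds per field.
NORMS = {"credit_score": (300, 850), "loan_rate_pct": (0.0, 36.0)}

def out_of_bounds(rows, norms):
    """Return (row_index, field, value) for every value outside its norm."""
    problems = []
    for i, row in enumerate(rows):
        for field, (low, high) in norms.items():
            value = row.get(field)
            if value is not None and not (low <= value <= high):
                problems.append((i, field, value))
    return problems

rows = [
    {"credit_score": 712, "loan_rate_pct": 5.4},
    {"credit_score": 7120, "loan_rate_pct": 4.1},  # entry error: extra digit
]
print(out_of_bounds(rows, NORMS))  # prints [(1, 'credit_score', 7120)]
```

Catching the extra digit here, before the number lands in a board report, is exactly the kind of quiet win a data quality process delivers.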

Data Security

Data security is the act of evaluating data against the organization's policies and procedures, as well as regulatory policies, to determine if the data is being appropriately stored, handled, and transmitted. This is critical in a world where there are new breaches and cyber threats every day. Usually, a hacker will obtain access to a database by elevating the privileges of a hacked user or by exploiting a known flaw in the software. But if the data are encrypted, their job becomes that much harder, because now they must decrypt the data before they can use it. Encryption is going to be one of the most important services in your organization soon.

Data Duplication

Data is often duplicated in systems, usually for convenience. For instance, it is easier to copy the credit score data into a field on the central system so that the customer service representatives have access to it, rather than build an interface into your lending platform. This convenience can often cause problems in an organization. While the purpose of the duplicated data was to give customer service representatives the ability to share the score with customers, sometimes a new process might find the data and use it for something else. This is a bad approach if you are not aware of the source of the data.

In this scenario, let's say the lending system's credit score data gets updated once a week, but the scores on the central system are only updated once a month. There is a chance that the data the new process is using will be stale. Instead, the process should use the data from the lending platform to make sure it has the most current values. Therefore, it is important to understand and document duplicated data elements in the system. Another common scenario is that a new system or process has a placeholder for data also stored on the central system, and again, rather than create a costly integration (usually because, without a data governance group in place, these issues are identified after the fact and the cost was not considered in the budgeting phase), someone will come up with the clever idea of writing a job to update these fields from the central system. Again, this can be a ticking time bomb. It is important that the end users of this data understand where the source is and what the update frequencies are.
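The stale-copy problem above is easy to detect once each copy of a duplicated element carries an as-of date. This sketch picks the freshest copy and flags copies lagging behind it; the system names, values, dates, and the seven-day tolerance are all hypothetical.

```python
from datetime import date

# Hypothetical copies of the same credit-score element. The lending
# platform is the source; the central system holds a convenience copy.
copies = {
    "lending_platform": {"value": 705, "as_of": date(2023, 6, 5)},
    "central_system":   {"value": 688, "as_of": date(2023, 5, 1)},
}

def freshest(copies):
    """Pick the most recently updated copy of a duplicated element."""
    name = max(copies, key=lambda n: copies[n]["as_of"])
    return name, copies[name]["value"]

def stale_copies(copies, max_lag_days=7):
    """Flag copies lagging the freshest one by more than max_lag_days."""
    newest = max(c["as_of"] for c in copies.values())
    return sorted(n for n, c in copies.items()
                  if (newest - c["as_of"]).days > max_lag_days)

print(freshest(copies))      # prints ('lending_platform', 705)
print(stale_copies(copies))  # prints ['central_system']
```

Wiring a report like this into the data governance cycle makes the "ticking time bomb" visible long before an end user builds a process on the wrong copy.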

Data Engineering

Data engineering is the act of determining the attributes of the processes that extract, update, delete, or archive the data throughout the ecosystem. How often should data be updated? Are there circumstances when the data should be purged? Is the data transient? Should it be cleared on a regular basis? If the data will be encrypted, how are the keys stored, and how strong should the encryption be? You might think that's a stupid question—why not always use the strongest encryption? But if you took that approach, you could slow down your entire platform or, worse yet, bloat your processing and power bills, because high-level encryption consumes extra CPU cycles and power. These are all factors to be considered when working through data engineering; this is especially important when introducing a new product, feature, service, or project that will bring foreign data elements into play.

Once data mapping is complete, the data policies are in place, and remediations are in progress for current data issues, it's time for the real magic to happen. While all this activity is valuable on its own, the real value is when you begin to put your data to work. Now you can start to shift your organization toward a data-driven results paradigm. Any new project, any new service, or undertaking can benefit from using your CAO to validate the assumptions associated with each undertaking. For instance, when we market without data-driven decisions, we make assumptions based on our intuitions about the community or segments of the population we serve.

Good marketers are reasonably successful at making these assumptions, but what if you could make your marketing plan 50 percent better by using your data? Consider recent Finnovate winner AlphaRank (www.alpharank.com). AlphaRank provides a service that allows you to identify the influencers in your population of customers, and it has been shown that marketing to these influencers directly will dramatically increase adoption of any product or service. As expected, before you can engage a company like AlphaRank, you must be able to collect your data, anonymize it, and send it to their team in a safe manner. If you have ever been involved in an effort like this, you know what a trial it can be to get the necessary data from your organization. It often requires a cross section of the entire organization and a full-on project plan. This process is often costly and time consuming. Another byproduct of data governance is that it will become much easier to identify the success metrics for each product, project, or service that your organization engages in, and by monitoring these metrics the organization can determine when to give up, when to dig in, or perhaps how to pivot to create success. Data is the fuel that will drive your organization forward.

Data also drives innovation. I will cover this further in the innovation chapter but data is the most important ingredient to drive innovation within your organization. Innovation by its very nature is risky, but no one knows how risky. Data can help you drive your innovation by helping to validate your assumptions.

When I was starting BIG in 2014, I had an idea for a product. I was in a big meeting at a credit union in one of my first speaking gigs. It was a meeting with all the management staff of the organization, and they were all sitting around big round tables listening to me talk. I was talking about mobile wallets and it occurred to me I could do a quick straw poll with the audience I had to validate something that had been on my mind. My idea was that non-name-brand banks and financial institutions didn't have a huge penetration into the emerging digital lifestyle applications like Netflix, Hulu, Amazon, and iTunes. So, I asked the group a quick question. I asked everyone who had a Netflix account to raise their hands. I wasn't sure what to expect. My gut told me that this should be a large number but I wasn't expecting the reality, as almost the entire room raised their hands. I quickly said “Put your hands down,” and then I said the following: “Please raise your hand if you have THIS financial institution's (the one you work at) debit or credit card in your Netflix account to pay for the service.” Again, I wasn't sure what to expect, I had a theory, but it had all come down to this moment. Less than five hands went up in response to my query. I was blown away. These were managers of this financial organization and if they didn't have their workplace's tender in these accounts, how could they expect the customers to do the same? I repeated the process for iTunes, Amazon, and Hulu with the same results. After this, I worked with 10 medium-sized financial institutions to run reports to see if my straw poll could be validated with larger numbers.

This process wasn't easy for a lot of the institutions I worked with. Creating these reports proved to be difficult, which is why I am recommending that all organizations start a data governance committee as soon as possible. In this instance, I had an idea for an innovation that would increase penetration into these accounts that became SetitCredit.com, but before I embarked on this journey, I validated all my assumptions with real data. This is how data can support innovation in your organization.
