1. Sensible Organizations: Sensors, Big Data, and Quantifying the Unquantifiable

What if I told you that changing when you take a coffee break could make you more productive? Or that one of the biggest decisions a company makes revolves around the size of its lunch tables? These are things that traditional theory never looked at, indeed couldn’t look at, because there was no way to measure them.

If people have learned anything over the past few decades, it’s that using data to build organizations is better than following instinct. There is a reason the first “moneyball” team in Major League Baseball, the Oakland Athletics, performed so well with a paltry budget in a now well-publicized example of data versus instinct: they used data to drive their decisions. Sidestepping convention, they relied on player metrics to assemble the best team possible within their budget, a strategy that resulted in a 20-game winning streak and a trip to the playoffs, even though they had the third-lowest payroll in the league.

Life is a game of small percentages. The difference in baseball between an average player and an all-star can be as small as five percentage points of batting average (between a .250 hitter and a .300 hitter, for example). If someone developed a method to raise his individual performance by those five points, it would cause a tectonic shift in the way baseball players are evaluated.

The same is true in business. Research has shown that companies that use data to drive their business decisions perform 5% better than their peers.1 Consider that large and diverse industries—from insurance companies to retail department stores—have profit margins of less than 5%. A 5% performance increase in one of these companies would result in a profit roughly double that of its competitors.
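The arithmetic behind that claim is worth making concrete. The sketch below uses hypothetical revenue and margin figures (not data from the cited study) to show why a 5% performance edge roughly doubles profit in a low-margin business:

```python
# Hypothetical retailer: $1B in revenue at a 5% profit margin.
revenue = 1_000_000_000
margin = 0.05
baseline_profit = revenue * margin  # $50M

# A 5% revenue lift at roughly constant costs drops straight to profit.
lift = revenue * 0.05
improved_profit = baseline_profit + lift  # $100M

print(improved_profit / baseline_profit)  # roughly 2x the baseline
```

Because the margin and the performance gain are both about 5%, the extra profit is about as large as the entire baseline profit.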

As a result, many companies now try to use data-driven decision making throughout their operations. Probably the best example of this comes from big-box retailer Target.2

Target is a master at using analytics in their business. They have a dedicated statistics department whose sole purpose is to mine the mountains of data they’ve assembled across their stores to find new insights they can use to sell more products. This data isn’t just a list of the things you’ve bought from Target. It’s also demographic information: age, gender, marital status, number of children, home address, and so on. Even activity from their website is incorporated into customer models.

The big issue for Target is that in most cases our purchasing patterns don’t change very much. We have a regular routine of going to the grocery store for food, going to the mall to buy clothes, and so on. Target, however, sells everything from electronics to food to furniture. They needed to change customers’ habits so that when they think of buying any of those items, their first impulse is to head to Target.

Unfortunately for retailers, influencing someone’s shopping behavior is incredibly difficult. These behaviors only change at a few key times throughout our lives, and other than that they are essentially locked in. These key times center on a few major events, such as moving to a new city or having a baby. From the data retailers collect, it’s hard to predict if a customer is going to move. Maybe a customer buys some luggage or some bungee cords to strap it to his car, but even if that purchase indicates to retailers that the person is moving (which it probably doesn’t), they still wouldn’t know where the customer was moving. This means retailers can’t give the person coupons to get him to come to their store, because they have no idea if the store is 1 mile or 100 miles away from his new home.

Instead, most retailers focus on the easier problem of predicting births, which means recognizing that a customer is pregnant. The reason Target in particular took this approach is that as soon as you have a kid, your mailbox is inundated with coupons from virtually every retailer within a 20-mile radius because birth records are public. Retailers constantly poll those records and use them to send out mailings in the hopes that their coupon will be the one to bring you into their fold. Of course, with so many mailings the chance that a particular retailer will be picked is quite slim. Instead, retailers want to get ahead of the game. If Target could figure out, months before their competition, that you were going to have a child, they would be assured of a captive audience for their products.

So, Target’s statistics department dove headfirst into this challenge. It was a relatively straightforward analytics problem, because the public birth records delivered hard data on what the statisticians were trying to predict. They found that analyzing purchases in 25 product categories provided extremely high predictive accuracy, with their algorithm’s estimates coming very close to actual due dates. After developing this model, Target could then offer coupons throughout a woman’s pregnancy that were targeted for her specific trimester (lots of vitamin supplements during the first trimester, for example), in addition to the all-important behavior-changing coupons that were sent right before the baby was due.

In fact, these models actually became too good. Some people, for various reasons, wanted to keep their pregnancy a secret. Target learned this lesson the hard way when their algorithm identified a particular individual as pregnant, triggering the flow of coupons to her house. Unfortunately, she was still in high school. After receiving the mailing, her father visited the local Target. He was not pleased.

“My daughter got this in the mail!” he said. “She’s still in high school, and you’re sending her coupons for baby clothes and cribs? Are you trying to encourage her to get pregnant?”

The manager apologized and was able to defuse the situation. He even called the father a few days later to apologize again. This time, however, the father had quite a different tone.

“I had a talk with my daughter,” he said. “It turns out there’s been some activities in my house I haven’t been completely aware of. She’s due in August. I owe you an apology.”

The algorithm Target had developed was so good that it could recognize pregnancy even better than a family member who saw that girl every day. This is the power of analytics—providing a nearly superhuman ability to understand and change the world around us. Although these analytics aren’t perfect, they are orders of magnitude more effective than the overly simplistic methods of the past. This is why so many companies, from Target to Netflix to Amazon, make analytics a central part of their corporate strategy.

This data-driven approach, however, isn’t regularly applied inside companies. There just hasn’t been a good way to measure how people actually work. Surveys and interviews are the fall-back approaches, but they become wildly inaccurate in a wide variety of circumstances—for example, when an employee has a bad day, forgets something, or has just eaten lunch.

The beauty of Target’s approach is that they used data on behavior (in this case, purchasing decisions) to make predictions. To extend this approach to the workplace, you need data on how people are actually behaving. As Target showed, digging into this real-world data can open up amazing opportunities.

Telescopes, Microscopes, and “Socioscopes”

New data can fundamentally change the way people view the world. When we look at the world through any particular lens, we are bound to create theories that deal with reality at that level of detail. When our ancestors saw points of light in the sky, they assumed those lights orbited above on a complex set of spheres. Once they were able to look through a telescope and see that some of those points of light are in fact larger bodies, some with other celestial bodies orbiting them, they had to reconsider their model of reality.

New methods of observation have systematically reinvented fields across the scientific landscape; for example, the telescope revolutionized the study of astronomy, and the microscope revolutionized the study of biology and chemistry. However, social science has never experienced a revolution of this sort. Researchers still use pen and paper surveys, human observers, and laboratory confederates to attempt to untangle the myriad phenomena that constitute our society.

The lack of effective measurement tools isn’t unique to social science. Nearly every scientific field has at some point dealt with a paucity of data. Astronomy is a great example. For thousands of years people looked up at the sky to observe the stars, but there was astoundingly little hard data on celestial movement until a few hundred years ago. The primary issue was that the people who actually wrote down their observations would only do so for short periods of time, using dead reckoning to estimate the movement of the stars.

As astronomers tried to develop models to explain celestial movement, the lack of data meant that testing any model was essentially impossible. Traditional astronomical models always broke down because of the phenomenon of apparent retrograde motion—the appearance of certain “stars” (actually the planets in our solar system) moving in one direction on some nights only to switch direction on following nights. Today we know why: all the planets orbit the sun, and because each orbits at a different speed, from Earth they can appear to change direction.

Aristotle created a model that seemed to fit with these observations: one with the Earth at the center of the universe and with stars arrayed on a series of circular “celestial spheres” that spun around our planet. It took almost 2,000 years for an alternative model to be proposed. Interestingly, it was data, not the relative simplicity of these new models, that changed our view of the universe.

Brahe and Kepler

In the mid-sixteenth century, Copernicus unveiled his new theory of planetary motion, placing the sun at the center of our solar system. This was a drastic change from the previous model, which had Earth at the center, but the theory found little acceptance because Copernicus’s hypothesized circular orbits didn’t fit the reality of elliptical orbits.

Soon after, a Danish nobleman named Tycho Brahe took it upon himself to assemble the most complete astronomical record in history. Tycho spent years of his life in a custom-built observatory that allowed his assistants to record his nightly observations of stellar and planetary positions. This massive compendium provided Johannes Kepler, one of Tycho’s assistants, with a dataset that allowed him to show clearly that the planets trace elliptical orbits around the sun. Kepler’s model demonstrated unprecedented accuracy in predicting the movement of celestial bodies, which led to broad acceptance of the heliocentric model.

It’s no coincidence that Kepler’s model emerged directly from Tycho’s observations. Earlier astronomical observations had been limited to sparse records from a smattering of individuals in different parts of the world at different times, and never in an observatory that allowed for much more than educated guessing.

These problems are similar to the ones social scientists grapple with today: Studies are qualitative, observational, and limited to the small pockets of activity that individual researchers can directly observe. Many researchers point to the rigorous training they undergo before making those observations as evidence of the strength of social science research. That argument, however, sidesteps the problems confronting the advancement of work in this field.

Unbiased?

Social science has given organizations some powerful tools over the last century. Its findings have become cornerstones in much of the world of work, from product development to organizational design. Social science measurement tools, however, were initially devised decades ago. Surveys, human observers, aptitude tests, and controlled laboratory experiments are the tools of the social science trade. While certainly useful, each of these tools has some fundamental weaknesses.

Everyone is familiar with surveys, and you’ve probably been asked countless times by store employees to fill out an online survey about your experience. How often do you actually fill out those surveys—once a month, once a year, never? Maybe only people thrilled with their experience answer the survey. Maybe only people who had a terrible experience or people who just feel a sense of responsibility to answer respond. This sample is clearly biased, because stores are only getting data from a small fraction of their customers, and the vast majority of typical experiences are left out. However, even responses from typical consumers can be biased. If you had a bad day, your answers tend to be more negative. If it’s a beautiful day outside, your responses are more positive.

Researchers try to correct this bias problem by using observational data. Highly trained ethnographers and anthropologists integrate themselves into an environment and collect unbiased data on the activity they observe. This method runs into two major issues: individual differences and scale.

Different observers naturally see different things. Even with thousands of hours of training, observers can differ in something as simple as classifying what constitutes a conversation. On top of this challenge, having more than a few observers in one environment at the same time is impractical, so anyone trying to understand the behavior of thousands or millions of individuals would be out of luck.

Until recently, this methodological approach was essentially the end of the story. Simply no tool existed that would enable the understanding of human behavior on a massive scale at any fine-grained level of detail. Ironically, a revolution in social science data collection didn’t come from a desire to collect data. It came from a new communication tool: e-mail.

Digital Breadcrumbs

We all leave vast trails of digital breadcrumbs on our computers: the contents of documents, program usage, and most notably the information sent to other people through e-mail messages. This is a treasure trove for researchers, because it essentially constitutes a log of a person’s activity throughout the day. Accessing most of this information, however, would require installing a program on each individual’s computer to constantly log every keystroke and program action and upload that information to a server.

E-mail is different. When you send an e-mail from Gmail, for example, your message first goes to Google’s outgoing e-mail server, which then sends it on its way across the Internet. This activity is logged by the server and is how Google is able to save your e-mail in your Sent folder. When you receive an e-mail, it is also first logged on Google’s servers before you even see it. In most e-mail setups, messages remain on the server even after you download them.

Think about what this information represents: a digital contact list that contains information on everyone with whom you’ve ever communicated, and even information on what you communicated about. Researchers have recently begun to capitalize on this data and show its true value. For now, suffice it to say that because people frequently collaborate using electronic communication technologies, analyzing this e-mail data is critical for understanding how organizations really work.

“Socioscope”

Electronic records have only one flaw: They are completely disconnected from reality. The people whom you talk to and spend time with are not necessarily the people you e-mail. Most important events happen in the real world. Corporate mergers don’t happen through IMs (instant messages). People don’t take coffee breaks with colleagues a continent away. These moments are central to everyone’s daily lives, and they are completely absent in digital breadcrumbs.

Soon after the advent of e-mail, however, a technological explosion of a different kind enabled a similar lens into the real world: the massive proliferation of sensors.

When most people think of sensors, they imagine being strapped into an EEG with electrodes attached to their head, or full body suits with cumbersome helmets that track every movement. What they often forget is that everyone already carries around dozens of sensors every day.

Most people carry some ID cards in their wallet. Regardless of whether the cards were issued by a company or school, most modern ID cards have an embedded RFID (radio frequency identification) chip. This chip allows people to tap their card onto a reader to open a door. This same sensor could also be used to track a person’s location by placing RFID readers throughout an office. The readers would constantly send out requests for the RFID card to send its ID, and by observing which reader detected the card, a computer could recognize where the person was located.
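The location logic described above reduces to a lookup: each reader sits in a known room, so whichever reader last detected a card tells you where its owner is. Here is a minimal sketch, with a hypothetical office layout and hypothetical detection records:

```python
# Hypothetical office layout: each RFID reader is installed in a known room.
READER_LOCATIONS = {
    "reader-01": "lobby",
    "reader-02": "conference room",
    "reader-03": "cafeteria",
}

def locate(detections):
    """Given (timestamp, reader_id, card_id) detection records,
    return each card's last-known room."""
    last_seen = {}
    for timestamp, reader_id, card_id in sorted(detections):
        last_seen[card_id] = READER_LOCATIONS.get(reader_id, "unknown")
    return last_seen

detections = [
    (1, "reader-01", "alice"),
    (2, "reader-03", "alice"),
    (2, "reader-02", "bob"),
]
print(locate(detections))  # {'alice': 'cafeteria', 'bob': 'conference room'}
```

Real deployments must also handle readers that miss a tap and cards detected by two readers at once, but the core idea is no more complicated than this.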

In sensing terms, this RFID device is very simple. It has just one radio, and it provides a rough estimate of location. What if you augmented this ID card with some additional sensors? What things could you learn?

Infra-red (IR) Devices

In the mid-1990s, scientists began to experiment with sensing devices to answer these questions. Most of these applications focused on enabling people to keep a personal record of whom they interacted with at large events such as conferences or company meetings. This tracking was accomplished mainly by using a basic infra-red (IR) transceiver to recognize when two people were facing each other.

An IR transceiver is a common device that functions in essentially the same way as a TV remote control. If one person wearing an IR transceiver faces another person wearing the same device, a detection registers. Seeing enough of these detections indicates that two people are likely talking to each other (mostly because standing and facing someone for a few minutes and not at least saying hello would be awkward).
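That inference can be sketched as a simple windowed count: flag a likely conversation once enough mutual IR detections pile up within a short span. The window and threshold below are illustrative values, not the researchers’ actual parameters:

```python
def likely_talking(detections, window=60, threshold=5):
    """detections: sorted timestamps (in seconds) at which badge A saw
    badge B face-to-face. Returns True if any `window`-second span
    contains at least `threshold` detections."""
    for i, start in enumerate(detections):
        in_window = [t for t in detections[i:] if t - start <= window]
        if len(in_window) >= threshold:
            return True
    return False

print(likely_talking([0, 10, 22, 31, 45, 300]))  # True: a dense burst
print(likely_talking([0, 120, 240, 360, 480]))   # False: isolated glances
```

A few scattered detections (people walking past each other) stay below the threshold, while a sustained face-to-face burst crosses it.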

Accelerometers

Other scientists used ID badges in the medical field. Instead of IR transceivers, they added an accelerometer (a motion sensor) to the traditional RFID badge to look at how the movement of people changed over time. For example, research with accelerometers has been used to study people with degenerative diseases, such as Parkinson’s or ALS (Lou Gehrig’s disease). These diseases cause physical tremors and a decline in motor function. Data from an accelerometer allows for precise measurement of disease progression and can gauge the effectiveness of different treatments.

Different accelerometers work in different ways, but the general idea is that they have a chip with three microscopic weights inside, one for each of the three spatial dimensions (x, y, z). Acceleration causes the weights to shift, and the degree to which they shift indicates how fast you’re accelerating. If you’ve heard of an accelerometer before, it’s probably because it’s the same sensor that the iPhone and other devices use to let you interact with the device by moving it around. The accelerometer tells your phone that it’s been tipped on its side, shifting the screen to a landscape view.
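The portrait-versus-landscape decision can be sketched from a single three-axis sample: at rest, gravity (about 9.8 m/s²) dominates whichever axis points down. This is a simplified illustration, not any phone vendor’s actual algorithm:

```python
def orientation(x, y, z):
    """Classify device orientation from one (x, y, z) accelerometer
    sample, in m/s^2. Assumes the device is roughly at rest, so
    gravity dominates one axis: y runs along the screen's long edge,
    x along its short edge."""
    if abs(x) > abs(y):
        return "landscape"
    return "portrait"

# Held upright: gravity along y. Tipped on its side: gravity along x.
print(orientation(0.2, 9.7, 0.5))  # portrait
print(orientation(9.6, 0.3, 0.4))  # landscape
```

Production code adds filtering and hysteresis so the screen doesn’t flicker between orientations while the device moves, but the underlying signal is just this comparison.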

Microphones

Microphones have also been added to ID badges, especially in the medical field. Vocera Communications’ communicator allows physicians to reach other medical staff immediately using voice-dialed phrases or names, which is especially helpful in settings where locating other people is difficult. Of course, microphones can also be used to record what people say, but more recently scientists have used microphones to analyze sound in real time. Researchers from Dartmouth used audio data from cellphone microphones to recognize everyday locations, such as cafeterias, offices, or the inside of your car. The idea is to listen for distinctive sounds, such as the clattering of plates or the click of a car’s turn signal, to determine where a person is.

Sociometer

In the early 2000s, researchers at MIT’s Human Dynamics group began combining multiple sensors into a single device. The idea was to make a general ID badge that would be able to measure all the different signals—IR, motion, and sound—at the same time. This kind of badge could do things that no device had been able to do before. For example, if you want to know exactly when two people are talking to each other, you really need an IR transceiver, a microphone, and a proximity sensor.

This general-purpose sensing device became known as the Sociometer. Originally, it contained only an IR transceiver, microphone, and two accelerometers. The Sociometer was essentially a gray box the size of a paperback book strapped across your chest—needless to say, not something you would want to wear through airport security. Despite its shortcomings in form factor, this device was the first of its kind—one system that incorporated the critical sensors necessary to understand many aspects of human behavior. As with most prototype devices, using it outside of tightly controlled settings was difficult. Initial experiments took place in the lab and then transitioned to limited field trials within MIT. Sandy Pentland’s book Honest Signals describes these experiments in great detail, but I summarize the most relevant ones in this chapter.

Predicting Speed-Dating Outcomes

Researchers at MIT initially used the audio processing technology of the Sociometer platform to examine behavior in controlled environments to demonstrate the future potential of this technology. A microphone recorded high-quality audio so that researchers could determine what aspects of speech are most important when trying to predict different outcomes.

Perhaps stereotypically for nerdy engineering types, the MIT researchers studied a situation that was most challenging to them: dating. Knowing the right words to say doesn’t guarantee a date with someone; rather, the mood and the chemistry between people are important. Can these things be quantified?

Researchers took these sensors to local speed-dating events to answer that very question. For those who aren’t familiar with the concept, speed dating operates on the principle that after five minutes people know whether they’re compatible. Central to a speed-dating event is the seating arrangement. Women sit at tables arranged in rows, and men rotate from table to table every five minutes. At the end of the event, the romantic hopefuls check off boxes to indicate who they would like to go out with on a date and hand these slips to the organizers. When there is a match, the organizers send both people an e-mail with their date’s contact information.

In this experiment researchers recorded dozens of these five-minute interactions and attempted to predict whether people would choose to go out on a date. Researchers didn’t look at the content of the conversation, only how people were talking—their “social signals.” Social signals are the unconscious messages that people pass to one another when they’re talking. Things as subtle as a slight change in tone, an interruption, or a raised eyebrow are all social signals that convey important information beyond the content of conversations. Using complex algorithms, researchers were able to automatically calculate the tone of voice, changes in speaking volume, and speaking speed of study participants. It turns out that with only these features, not looking at any content, the researchers were able to predict who would go on a date with about 85% accuracy. Incidentally, it turned out that only the woman’s voice features were predictive, probably because the men seemed to be interested in everyone—perhaps not the most startling discovery, but one for which we now have scientific backing.
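The kinds of prosodic features involved can be sketched in a few lines. The sample values below are hypothetical, and these three features merely stand in for the richer set (tone, emphasis, interruptions) the actual studies extracted:

```python
import statistics

def social_signal_features(volume_samples, words, seconds):
    """Compute simple content-free prosodic features from a conversation.
    volume_samples: per-interval loudness estimates (arbitrary units);
    words and seconds give a crude speaking-rate proxy."""
    return {
        "mean_volume": statistics.mean(volume_samples),
        "volume_variation": statistics.stdev(volume_samples),
        "speaking_rate": words / seconds,  # words per second
    }

features = social_signal_features([3.0, 5.0, 4.0, 6.0], words=750, seconds=300)
print(features["speaking_rate"])  # 2.5 words per second
```

Feature vectors like this one, computed for each five-minute date, would then feed a standard classifier trained against the matched/not-matched labels from the organizers’ slips.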

These results were encouraging because they surpassed the state of the art in behavioral measurement up to that point, which mostly consisted of researchers’ painstakingly coding recorded audio by hand. At the time, however, it was unclear if the experiment result was an aberration or representative of the power of this method of data analysis. Also, although speed dating is of interest to many people, it’s somewhat separated from the larger impact that researchers envisioned for this technology. So they turned their attention to something that’s of vital economic importance: salary negotiation.

Negotiations Broken Down

In general, salary negotiation is difficult to navigate effectively. From a simplified viewpoint, employers want to pay the lowest amount possible but keep prospective employees from walking away, while employees want to maximize their salary while keeping the employer from balking at a ridiculous wage.

Traditional theory holds that the most important advantage in a negotiation is information. For example, if you’re an employer who wants an employee to start on June 1 and you learn that she wants to start on June 1 as well, you have an advantage. You can say you want her to start on a different date, and then “concede” that she can start on June 1, but she’ll have to give up some of her signing bonus.

This theory vastly discounts the effects of social signals. In theory, a dominant individual and a shyer person armed with the same information should reach roughly the same outcome. Real-world experience, of course, dictates otherwise, and this study set out to examine precisely this issue.

To control as many variables as possible, researchers at MIT set up a laboratory study on salary negotiations. The experiment duplicated what you would see at most companies across America. A job candidate meets with a recruiter to negotiate a compensation package, with eight key areas on the table: starting date, salary, job assignment, company car, signing bonus, vacation days, moving expense reimbursement, and insurance provider. For each area, the researchers assigned participants a specific target number as well as a number of points for different outcomes. For example, a candidate who got a 10% signing bonus would receive 4,000 points while the recruiter would receive 0 points. On the other hand, a 2% bonus would translate into 0 points for the candidate and 1,600 points for the recruiter. Participants saw only their own point schedules, and if they came to an agreement, they were paid at the end of the study based on the number of points they received.
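A payoff schedule like the signing-bonus one can be modeled as a linear interpolation between the two endpoints given above. The in-between values here are my interpolation for illustration, not necessarily the study’s exact table:

```python
def bonus_points(bonus_pct):
    """Points awarded for a signing bonus between 2% and 10%,
    linearly interpolated from the endpoints described in the text:
    10% -> (candidate 4000, recruiter 0)
     2% -> (candidate 0,    recruiter 1600)."""
    frac = (bonus_pct - 2) / (10 - 2)  # 0.0 at a 2% bonus, 1.0 at 10%
    candidate = 4000 * frac
    recruiter = 1600 * (1 - frac)
    return candidate, recruiter

print(bonus_points(10))  # (4000.0, 0.0)
print(bonus_points(2))   # (0.0, 1600.0)
print(bonus_points(6))   # (2000.0, 800.0)
```

Note the asymmetry: a point of bonus is worth more to the candidate (500 points per percent) than to the recruiter (200), which is what makes the negotiation a mixed-motive game rather than purely zero-sum.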

When the participants sat down at the table, they turned on a small recording device. Researchers automatically extracted social signals from this audio data so they could study its predictive power. They even raised the stakes. These negotiations typically went on for about 40 minutes, but researchers wanted to test whether the conversational style at the beginning of the negotiation would predict the results at the end. Astoundingly, the social signals (here, specifically modulation in volume and speaking rate) from the first 5 minutes accounted for about 30% of the variation in the final salary.

This result was another powerful illustration of the importance of these social signals and of the ability of wearable sensors to capture them. To get a sense of the magnitude of this result: if you are looking for an entry-level software engineering position, just changing the way you talk to your prospective employer could make the difference between pulling in $90,000 and $65,000.

Overcoming the instinctive belief that the content of the conversation is what matters most can be difficult. Certainly, if President Obama delivered a speech expounding on last night’s episode of Survivor instead of one on health-care reform, the public would be puzzled to say the least. However, as long as people stay on topic, the non-content cues are influential.

A great illustration of the impact of social signals is how people can appreciate foreign language films. Imagine that you’re watching a film in another language and you turn off the subtitles. You can’t understand what they’re talking about, but you get the sense from their tone of voice, posture, and gestures that one character doesn’t like another, or that these people are having a heated discussion. The signals you’re paying attention to are exactly the same ones that these sensors are measuring.

Overall, the results from these initial studies were considered astonishing. A computer and a few clever algorithms had managed to predict the outcome of complex situations with startling accuracy, previously thought to be solely the province of human beings. Overstating how important these findings were is difficult. For the first time, human behavior could be objectively measured outside of the laboratory.

Enter the Badge

This leap forward in measurement technology embodied in the Sociometer platform enabled researchers to ask fundamentally new questions about how people work: Do bosses dominate conversations with their subordinates? What does corporate culture really mean? Scale and level of detail ceased to be limiting factors when observing behavior.

With this new technology, data collection is no longer the bottleneck for social science. Within a few months of deploying Sociometers and their technological descendants, researchers collected more data than had been assembled throughout centuries of observational methods. Instead, the limiting factor for sensor-based data collection became technology adoption.

The original Sociometer was heavy and awkward to wear. The Sociometer also didn’t provide study participants with any direct benefit beyond the general social good they would accrue for contributing to the advancement of science. It would also be months before they could receive feedback on their own behavior. Combined, these drawbacks put a heavy burden on participants, and made it difficult to envision widespread acceptance of this technology.

To combat these drawbacks, researchers integrated display functionality into the next version of the Sociometer system, dubbed the UberBadge. The UberBadge vastly improved the form factor of the Sociometer, taking the same sensing devices and packing them into a badge the size of a wide wallet. The badge also included an LED display on the front, allowing researchers to display scrolling messages. This helped users get some benefit out of the badges—such as displaying the length of a conversation or how many people they met at an event—and being able to wear it around the neck made them much easier to use.

By changing the way people wore the badges, researchers had found an avenue to broad adoption. With these early sensing devices, however, privacy was a major concern. The badges collected the actual content of conversations. It’s safe to say that most people don’t want to wear a device that records everything they say. Recording people without their consent is also against the law in many states.

The rapid reduction in sensor size and power consumption, as well as gains in battery life, provided the solution to this privacy problem. With extra power, future versions of the badge could process audio data in real time, recording only features of conversations—volume, pitch, and emphasis—a few times a second instead of the content.

This newest device, the Sociometric Badge, is the size of a deck of cards and weighs about as much as five U.S. quarters (see Figure 1.1). The badge incorporates all the sensors of the previous devices—a microphone, IR transceiver, and accelerometer—plus a Bluetooth radio. The badge can collect data continuously for about 40 hours, or one work week, without needing to be recharged. With the data analysis algorithms built into the badge, it can save the equivalent of one work year of behavioral data on a 4GB SD card.


Figure 1.1. The Sociometric Badge
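A rough sanity check on those storage figures, assuming 40-hour work weeks and about 50 working weeks per year (my assumptions, not stated badge specifications), shows what the on-badge feature extraction must achieve:

```python
# Back-of-the-envelope check on the badge's storage claim.
card_bytes = 4 * 1024**3   # 4GB SD card
hours_per_year = 40 * 50   # one "work year" of 40-hour weeks
bytes_per_hour = card_bytes / hours_per_year

print(round(bytes_per_hour / 1024**2, 1))  # ~2.0 MB of features per hour
```

Raw audio alone would run to tens of megabytes per hour, so fitting a work year on the card is only plausible because the badge stores extracted features rather than recordings.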

You can think of the Sociometric Badge as the natural evolution of the company ID badge. No longer just a tool to open doors, this new kind of ID badge enables you to understand yourself and your company at large through data-driven reports and feedback. As the examples in this chapter show, this sensor technology has amazing potential. From just five minutes of badge data, you could figure out not only whether you’ll win a negotiation, but how well you’ll do. Imagine what we could learn about helping people collaborate more effectively and creating better organizations if this badge were deployed across millions of individuals at different companies in countries all over the world, not for minutes but for years or decades.

The badge technology not only can revolutionize our understanding of organizations and society at large, but can also be used to create organizations where privacy is a thing of the past, with managers watching every movement and every conversation, looking for inefficiencies. Ironically, this means that data abundance, rather than scarcity, becomes the biggest hurdle to overcome.

Big Data = Big Brother?

Sensing data can be a major threat to privacy. Whether the data is from cellphones or web browsing histories, the potential abuse of this massive trove of data is an important concern. At the same time, awareness of this issue is alarmingly low, which means that people often don’t understand the power of the data that they make available.

This problem is magnified with the Sociometric Badge. Companies can already legally

• Watch employees via CCTV

• Log keystrokes

• Take screenshots of employees using their company computers

• Read employee e-mails

Exposing additional sensitive information from the badges, such as location and who you talked to, could lead to egregious abuses. This data could allow companies to determine when you’re in the bathroom, how much time you “wasted” talking to your friend in another department, and so on. Under current U.S. law, this kind of monitoring is completely legal.

This is a major failing of the U.S. legal system. Overreaching corporate monitoring should not only be morally distasteful, it should also be illegal. Many countries in Europe and Asia do ban this activity, but they go too far in the other direction, preventing even most legitimate analysis of this kind of data. To reach a productive middle ground, individuals and companies need to agree on steps to take when dealing with this extremely sensitive data.

The projects described in this book adhere to the “new deal on data” championed by MIT Professor Sandy Pentland. The core concepts of this new deal boil down to these three points:

• Data collection is opt-in and uses informed consent.

• Individuals control their own data.

• Any data sent to third parties must be aggregated.

The following sections examine each of these rules individually to help you understand why they’re necessary for reasonable application of sensing technology and “big data” analytics in general.

Opt-In

As mentioned previously, companies already collect a lot of data without your informed consent. By contrast, when the organizations I've been part of collect data with the Sociometric Badge, we spend weeks answering questions from participants, explaining what data we collect, and even giving them consent forms that show our actual database tables. If people don't want to participate, we also hand out "fake" badges that don't collect data but otherwise look and act just like normal badges. This prevents those uncomfortable with the technology from being singled out and, in general, makes everyone more likely to participate. Participants can also opt out at any time. In practice this happens very rarely, because after a few days people essentially forget that they're wearing the badges.

Taken together, these steps help assuage people’s concerns and help us consistently achieve more than a 90% participation rate in all of our projects. Compare that to surveys, where researchers are ecstatic to get a 50% response rate.

With such high participation rates, the data itself becomes even more valuable; and whenever something is that important, everyone is going to try to stake their claim to it.

Data Control

Modern companies are extremely protective of their data—as they should be. Google’s entire revenue stream, for example, is dependent on the data created by its users. This protectiveness extends to corporate e-mail, where courts have continually reaffirmed the rights of companies to read their employees’ e-mails as long as they are accessed through company servers.

The sensitive nature of Sociometric Badge data points to the necessity of a change in this model. Without individual control of data, companies would be free to use your data any way they saw fit. For example, this kind of data could predict health risks (depression can be predicted from changes in communication patterns) or your likelihood of leaving the organization (people getting ready to quit start to withdraw socially before making the announcement), leading your superiors to pass you over for promotion or diminish your role. If individuals control their own data, they can head off any potential abuse simply by denying access to it. In the projects described in this book, individuals can delete their data at will to prevent access to their information.

There are generally no good business reasons for companies to control the data of individuals. Knowing where Bob is at 2:30 on Tuesday, for example, tells you nothing about productivity. Companies should care much more about the general patterns and aggregate statistics that describe how different teams and divisions are collaborating and what behaviors and interaction patterns make people happy and effective. These aggregate statistics are also the only way to preserve privacy.

Data Aggregation

Anonymizing data from sensors is essentially impossible. Mathematically, it's incredibly unlikely that anyone else goes to the exact same places and talks to the exact same people as you. Even someone who simply jotted down in a notebook a few of the times a target person talked to others could match those notes against supposedly anonymous records and pick out the target's sensor data.

The only way to deal with this problem is to aggregate data. Instead of allowing everyone to see information about each individual, the data is averaged over groups. This allows people to compare different teams and see how their own behavior stacks up in their group, but prevents anyone from identifying a specific person.
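A minimal sketch of this kind of aggregation appears below; the minimum group size, team names, and numbers are all invented for illustration and do not reflect the actual Sociometric Badge pipeline:

```python
from collections import defaultdict
from statistics import mean

MIN_GROUP_SIZE = 5  # hypothetical threshold: never report on tiny groups

def aggregate_by_team(records, min_size=MIN_GROUP_SIZE):
    """Average an individual metric over teams.

    `records` maps each person to a (team, metric value) pair.
    Teams smaller than `min_size` are suppressed entirely, so the
    report can never be traced back to a specific person.
    """
    by_team = defaultdict(list)
    for team, value in records.values():
        by_team[team].append(value)
    return {team: mean(values)
            for team, values in by_team.items()
            if len(values) >= min_size}

# Each person's daily face-to-face interaction minutes (made-up numbers)
records = {
    "p1": ("sales", 95), "p2": ("sales", 110), "p3": ("sales", 80),
    "p4": ("sales", 105), "p5": ("sales", 100),
    "p6": ("legal", 60), "p7": ("legal", 70),  # too few people to report
}
report = aggregate_by_team(records)
```

Notice that the two-person legal team is dropped from the report entirely: averaging over a group that small would be only a thin disguise for individual data.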

In my experience, companies usually aren’t too concerned by this restriction. People still get their individual data and can use it to improve. For example, they could see that they’re not interacting enough with another team on a project, or that compared to the happiest people at their company they tend to go to the coffee machine less frequently. The organization sees the aggregate data and general trends, which it can use to identify behaviors and collaboration patterns that make people and teams happier and more effective. This approach gives everyone what they want, even reducing liability for companies in case their servers get hacked. Because they don’t have individual data, even someone with malicious intentions couldn’t use the data to discriminate or spy on a coworker or employee.

Companies today already struggle with similar anonymization problems on a smaller scale. How does an organization deal with salary information? What happens if you submit a complaint about a coworker? Creating an organization that is open in its approach to these questions is critical not only for gaining widespread acceptance for this technology, but also for building a successful organization.

Trust and Transparency

At the core of the precepts outlined in this chapter are the importance of trust and transparency in organizations. If you don’t trust the people you work with and work for, you’re going to be unhappy, unproductive, and generally looking to jump ship for another job as soon as you can. If organizations instead position data collection policies to increase trust and transparency, employees learn how to improve and be happier, and companies can vastly increase their success.

People also shouldn’t be overly distracted by the privacy concerns associated with the widespread adoption of sensing technologies. As discussed in the coming pages, this technology has the potential to bring about radical, positive change in the way people work, from changing what it means to have an org chart to making management focus on people first. Ethically applying this technology and realizing these amazing possibilities is up to us.

In reality, however, the things that actually make people effective at work aren’t new. They have ancient origins, millions of years in the making. Even the concept of an organization has roots that stretch back for millennia.

Before we start looking at where we’re going, in the next chapter we’ll take a look at where we’ve been.
