5
Beyond the Word “Big”: The Changes

“Data! Data! Data!” he cried impatiently. “I can’t make bricks without clay.”

Arthur Conan Doyle (1892)

5.1. Introduction

For a long time, companies underestimated the importance of data. Now, with changes in the way data is collected and analyzed, they are coming to rely on it daily.

The power of data and the potential of analytics, as well as the resulting opportunities, have changed the way companies view data.

The way in which data, generated from various sources and in different forms and formats, is collected, analyzed and interpreted has become an intrinsic and essential element of a company’s operations, because it is what makes it possible to carry out and operationalize its various activities. Data is now something companies examine more closely than ever before.

Every company, be it large or small, manages a considerable amount of data. Often, it can manage this data with conventional automated analysis tools. However, when the data can no longer be analyzed by traditional tools, it is time to think about Big Data and analytics.

Big Data – the term you have probably seen on television, in magazines and newspapers, or heard at conferences and seminars – is more than just a buzzword. It is used by managers and decision-makers alike, and entire conferences are devoted to it.

Many companies have already taken advantage of the opportunities offered by this exciting and fascinating field in order to improve their performance and sharpen their decision-making process. The impact of Big Data on business practices is therefore clear.

But what lies behind this vague concept? A detailed look at what Big Data analytics encompasses will help you understand its potential and the opportunities that can result from it.

The underlying concept is, in fact, nothing new, because data has always existed. But the advent of new technologies, especially the Internet of Things (IoT), has generated exponential growth in data. In that sense, Big Data is simply a new form of data: old wine in a new bottle.

The first question that arises after reading these few lines probably concerns the changes that have made this phenomenon so important. Why does the term “data” suddenly appear in every conversation, and why are companies so interested in it? In other words, what has changed so much that it justifies such hype?

This is what we will discuss in this chapter, so that you can understand the importance of this phenomenon.

5.2. The 3 Vs and much more: volume, variety, velocity

To answer the above questions, you need to be aware that as you read these lines, thousands of tweets have been exchanged, millions of search queries have been processed by Google, millions of “likes” have been given on Facebook, hundreds of hours of new YouTube videos have been uploaded and hundreds of thousands of hours of Netflix content have been streamed. In total, in less than a minute of reading, a huge amount of data, in different forms, has been created in real-time.

Data here, data over there: we are witnessing the era of Big Data. This phenomenon, which seems ambiguous to some, can be understood in different ways. How it is understood varies from person to person, depending on the perspective adopted when examining it (technological, industrial, commercial, etc.).

This is why many people refer to Doug Laney’s 3 Vs, stated in 2001, when looking for a more complete overview. Laney’s model, adopted 10 years later by Gartner, established a definition of Big Data.

In other words, there are three properties or characteristics that can help you break down the term, nicknamed “the 3 Vs”: volume, variety and velocity. These three aspects are essential to understanding how to measure Big Data and how to differentiate it from the forms of data we know (qualitative and quantitative data).

These 3 Vs give you an overview of the scale of the data and of how quickly its quantity increases. And while some focus on developing analytical tools, others keep refining the 3V model in order to complete a definition that is most often tied to the volume, velocity and variety of data.

So, as the Big Data field matured, more Vs were added. Characteristics such as value and veracity were introduced to improve the understanding of Big Data and enrich its depth.

A fourth “V” was added a posteriori: value, which refers to the goals of companies and the benefits generated by data, in particular by giving it meaning (Sedkaoui and Monino 2016). In essence, when we talk about Big Data, we are not just talking about the amount of data that can be converted into information. We are also talking about analyzing this data in a way that generates value.

However, to give you a starting point from which to examine the details, we will now review the different characteristics of this phenomenon. This will allow you to see how each characteristic shapes the current context, and to understand how real value can be generated from analytics.

5.2.1. Volume

Big Data is a concept that we have regularly come across in recent years. We can even say that it is almost impossible not to have heard the term before. Moreover, a simple search on Google Trends will allow you to see how omnipresent this term has become in the various conversations around the world.

What seems more interesting is to understand what Big Data is. You would probably like to understand this concept, its characteristics, its opportunities and promises, and everything that has made Big Data so “big”. But the most important thing is to know where we are going to start.

Think, for example, of a Boeing jet engine that creates about ten terabytes of data every thirty minutes (10 × 10¹² bytes). Or, a Jumbo jet plane travelling across the Atlantic Ocean, whose four engines can generate approximately 640 terabytes of data (640 × 10¹² bytes) (Rogers 2011). Now, with an average of more than 100,000 flights per day, can you imagine the amount of data produced per day by all the planes that have flown across the sky?
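
To get a feel for these orders of magnitude, here is a back-of-the-envelope sketch in Python. It simply multiplies the figures quoted above; treating every flight as generating as much data as the 640-terabyte transatlantic example is, of course, a deliberately rough assumption.

```python
# Rough, illustrative estimate only: it assumes every flight generates as much
# data as the 640 TB transatlantic example cited above (Rogers 2011).
TB = 10**12                          # 1 terabyte in bytes

data_per_flight = 640 * TB           # bytes generated by one transatlantic flight
flights_per_day = 100_000            # approximate daily flight count quoted above

daily_total = data_per_flight * flights_per_day
print(f"{daily_total:.2e} bytes/day")                 # 6.40e+19 bytes
print(f"= {daily_total / 10**18:.0f} exabytes/day")   # 64 exabytes
```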

Facebook, for example, stores photos. This simple statement may not seem impressive to you, right? But once you realize that Facebook registered more than 2 billion users in 2018 (more than the population of China), and that in total, more than 250 billion photos were stored, this may change your mind.

Yes, 250 billion photos is already a lot, but you haven’t seen anything yet. In fact, more than 250,000 photos were uploaded every minute on Facebook in the same year. What? You think one minute isn’t a lot? Alright, come and take a look at Table 5.1.

Table 5.1. What happens on the Internet in one minute (2018)

Platform                    Activity            Volume per minute
Facebook                    Logins              973,000
Google                      Search queries      3.7 million
Play Store & App Store      Apps downloaded     375,000
Twitter                     Tweets sent         481,000
YouTube                     Videos viewed       3.4 million
Netflix                     Hours watched       266,000
Messaging                   Emails sent         187 million

Thus, every minute in the world, more than 700 people use an Uber, 18 million text messages are sent and 2.4 million snaps are created, 1.1 million profiles are swiped on Tinder, 174,000 images are scrolled through on Instagram, $862,823 are spent online and more (Sedkaoui 2018b).

Do you now see what happens in a minute on the Internet? Do you still believe that one minute is not a lot?

In this case, you can imagine how many thousands of personal and professional transactions are made every minute. We use applications to measure our sports performance, our cars transmit data, just like planes, trains, etc. As such, many industrial and commercial processes are controlled by connected devices. This means that data is transferred from several computers, mobile devices and connected objects around the world.

Big Data therefore involves a huge volume of data. This is probably the first thing that comes to mind when we hear of the concept of Big Data.

5.2.2. Variety

Among all the statistics given in the previous examples, you may have noticed that we talked about photos, tweets, videos, SMS, etc. Various types of data now exist, and are very different from each other. Big Data is therefore much more than just a “huge amount of data”.

In other words, we cannot understand Big Data without understanding its variety, linked to these two notions:

  • – the variety of data types;
  • – the data structure.

Figure 5.1. Nature and types of data. For a color version of this figure, see www.iste.co.uk/sedkaoui/economy.zip

Yes! Photos, tweets, videos, emails, etc. are also data, but in different forms. In addition, the data in Big Data is of a largely unstructured nature, and cannot be collected, stored and analyzed in the old row/column database format. This means that each form of data requires a specific type of analysis that differs from the other forms.

If you take an e-mail, for example, you will realize that its content is not identical to any of the thousands or even millions of other e-mails sent. Each e-mail has its own sender address, is sent to different recipients at different times, contains a message (text), and possibly attachments (documents, photos, etc.).
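
A minimal sketch in Python, using a hypothetical message, helps illustrate this mix: an e-mail’s headers are structured metadata that would fit neatly into rows and columns, while its body is free-form text that calls for a very different kind of analysis.

```python
# Hypothetical message, for illustration only.
from email import message_from_string

raw = """\
From: alice@example.com
To: bob@example.com
Date: Mon, 01 Oct 2018 09:30:00 +0000
Subject: Quarterly figures

Hi Bob, please find the quarterly figures attached. Regards, Alice
"""

msg = message_from_string(raw)

# Structured part: header fields map naturally onto a row/column format.
print(msg["From"], msg["To"], msg["Date"], sep=" | ")

# Unstructured part: the body is free-form text.
print(msg.get_payload())
```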

This diversity doesn’t just involve the variety of forms of data produced, but also the variety of sources from which these data come. This variety, as you can see, is the second property that joins the first, helping you understand how different the nature of data in Big Data is from the data we knew before.

5.2.3. Velocity

Volume and variety are important, but if you really want to understand the context of Big Data, you must pay particular attention to velocity, which is just as important as the two previous aspects: data collection, processing and analysis must increasingly be carried out in real-time.

The Internet and the various connected objects considerably accelerate the generation of large amounts of data (Maheshwari 2019), from e-mails to shared photos on social media, the number of videos viewed or downloaded, and of course, the data from geolocation systems. Data can move quickly.

Just think of our Facebook example and the 250 billion photos stored. Remember that it was clearly indicated that in 2018, every minute, Facebook users posted more than 250,000 photos online. Yes! Every minute. This means that Facebook must manage more than 15 million photos every hour, and process and classify more than 360 million photos per day, to facilitate their retrieval and reuse.
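
A quick check of these per-hour and per-day figures, starting from the 250,000 photos-per-minute rate quoted above:

```python
photos_per_minute = 250_000          # 2018 figure quoted above

photos_per_hour = photos_per_minute * 60
photos_per_day = photos_per_hour * 24

print(f"{photos_per_hour:,} photos/hour")  # 15,000,000
print(f"{photos_per_day:,} photos/day")    # 360,000,000
```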

In this context, velocity therefore consists of measuring the speed with which data are produced, analyzed and stored. However, it should be noted that there is no absolute rule, such as the number of bytes, for example, that we could consider as a threshold to define the velocity of data.

5.2.4. What else?

When we hear the words Big Data, we tend to think first of the three characteristics most often used to define this phenomenon, in other words, the 3 Vs: volume, velocity and variety. Its definition is most often based on these characteristics. But while this model is certainly important and correct, it is now time to add other crucial factors. And you know what? They all start with the letter V.

Definitions built on words starting with V have become so classic that practitioners now want to explain every aspect of Big Data with a new V, such as:

  • value: which refers to the benefits of Big Data, that can be obtained through appropriate analysis;
  • veracity: the reliability of the data source, its context and its importance for the resulting analysis;
  • variability: or the presence of inconsistencies in the data;
  • validity: or the extent to which the data are accurate and correct;
  • volatility: how long data remain valid and how long they should be stored;
  • visualization: the manner in which the results of the data processing (information) are presented, in order to ensure greater clarity;
  • viability, vulnerability, and many others.

Thus, there are several characteristics to help us make data useful and generate value. Each characteristic plays an important role in enriching the context of Big Data. All these additional Vs illustrate other challenges in extracting value from unconventional data-sets.

Veracity, variability and volatility, for example, refer to the problem that Big Data often consists of data whose accuracy is not guaranteed, irregularities in data-sets, an uneven data flow and complicated navigation between components.

Veracity, which often refers to data quality, is an important aspect of Big Data, because not only does data come from everywhere, but it also belongs to everyone. Visualizations are also necessary to make sense of the results.

With all these characteristics, defining Big Data is not so easy, because the term itself refers to many aspects and new features. Certainly, these features are all important. But the most important thing is to understand how to generate value, which is the key to obtaining useful information and to better conducting the decision-making process. This value is only possible by analyzing huge amounts of data (volume) from different sources (variety) in real-time (velocity).

5.3. The growth of computing and storage capacities

The increasing automation of all types of processing and analysis implies an exponential increase in the volume of data, which is now counted in petabytes, exabytes, zettabytes and yottabytes. Do these units mean nothing to you? Well, it is thanks to them that the amount of data produced is measured.

A zettabyte, for example, refers to 10²¹ bytes, in other words, 1,000,000,000,000,000,000,000 bytes. Is that a lot? Are you even able to read that number? Can you imagine the computing and storage power needed to process such a large volume of data? And be aware that this huge figure only represents a fraction of the amount of data produced each year around the world.
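
For reference, here is a small Python sketch printing the decimal (SI) byte units mentioned above:

```python
# Decimal (SI) byte-unit prefixes, in powers of ten.
units = {
    "terabyte (TB)":  10**12,
    "petabyte (PB)":  10**15,
    "exabyte (EB)":   10**18,
    "zettabyte (ZB)": 10**21,
    "yottabyte (YB)": 10**24,
}

for name, size in units.items():
    print(f"1 {name} = {size:,} bytes")
```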

Another thing you should know is that, in one way or another, everyone is a producer and consumer of data today. You may ask yourself: but how is that possible? Just look at all these connected objects (smartphones, computers, tablets, smart watches, GPS, smart cars, etc.) and the various available applications on which we have become heavily dependent.

The Internet of Things (IoT) also contributes to the increasing size of data (Gartner 2017), and has led to an increase in the number of applications based on artificial intelligence (AI), for which Machine Learning is an enabling technology.

These connected objects and applications also generate data, which puts increasing pressure on the computing and storage capacity needed to handle these data flows.

The 3 Vs, mentioned above, are a challenge, because (Sedkaoui and Gottinger 2017):

  • – the volume puts pressure on the storage, memory and computing capacity of an IT system and requires access to the cloud;
  • – the velocity emphasizes the speed at which data can be absorbed and significant responses produced;
  • – whereas the variety makes it difficult to develop algorithms and tools to process this wide variety of input data.

The first two Vs are important from an IT perspective (storage and processing), whereas the last V is important from an analytical perspective (Sedkaoui 2018b). As a result, IT systems must allow for the storage, analysis and extraction of relevant knowledge.

It’s no longer about the word “big”, but about how this large volume of structured, semi-structured and unstructured data must be captured, stored and analyzed in order to generate “value”. The differentiating factor in today’s business is not having or collecting data, but the power to analyze it, transform it into information and extract knowledge from it (see the knowledge pyramid, (Ackoff 1989)).

So, with the ever-increasing volume of data, its variety and velocity, it was necessary to reconsider the storage and processing of this volume, in order to continue to extract useful information.

5.3.1. Big Data versus Big Computing

Being able to process and analyze the large volume of data, available in different formats and coming from different sources, is another answer to the previous questions. This is mainly due to the increase in computing power. Big Data therefore refers to data-sets that are so large and complex that traditional data processing tools cannot handle them.

Understanding the 3 Vs – volume, variety and velocity – is an essential element in understanding the Big Data universe, but it is not the whole story. We do agree that when a mass of data is produced in real-time and arrives in a continuous flow from multiple sources, it qualifies as Big Data.

However, to transform this quantity into value, we need a large amount of computing power, at a lower cost, in order to examine and process this data.

So “big” is, of course, a term relating to the volume, variety and velocity of data, but more importantly, it is also a term relating to the IT infrastructure that is in place, because this “big” volume also calls for large-scale analyses.

The principles of statistics, forecasting, modeling and optimization remain the same. It is this computing capacity that has the potential to monetize data. Today, we can run billions of simulations thanks to the advancements and progress of IT tools, which tend to focus on Big Data technologies. There are now several tools available for solving problems, determining models and identifying opportunities.

Thus, the growth in computing power is the true change that has opened the door to great opportunities.

5.3.2. Big Data storage

Nowadays, companies focus so much on data analysis and processing that they often forget the need for data storage solutions. They are mainly interested in how they will be able to transform all the data collected into value.

However, the accumulated data still needs somewhere to be stored. This means an infrastructure capable of holding large amounts of data.

In 2011, the McKinsey Global Institute proposed its own version of the term Big Data, which it defined as:

Data whose scale, diversity and temporal distribution require new technical architectures and more advanced analyses in order to extract knowledge that represents a new source of value. (Manyika et al. 2011)

If you look back at the number of bytes in a single zettabyte, you will realize that growth is needed not only in computing capacity, but also in storage capacity (Table 5.2).

Table 5.2. The growth of storage capacity

Year    Storage capacity
1992    100 gigabytes/day
1997    100 gigabytes/hour
2002    100 gigabytes/second
2013    28,875 gigabytes/second
2018    50,000 gigabytes/second

Technology is constantly evolving and, as a result, machines and connected objects are producing and consuming more and more data. Table 5.2 shows that in 2018, 50,000 × 10⁹ bytes of data were created per second, which amounts to several billion gigabytes, in the order of exabytes, per day. The volume of data will only continue to grow very significantly.
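
Converting the 2018 rate from Table 5.2 into a daily figure is straightforward:

```python
GB = 10**9                       # 1 gigabyte in bytes (decimal)
rate_per_second = 50_000 * GB    # bytes created per second in 2018 (Table 5.2)

seconds_per_day = 24 * 60 * 60
daily_bytes = rate_per_second * seconds_per_day

print(f"{daily_bytes:.2e} bytes/day")               # ~4.32e+18 bytes
print(f"= {daily_bytes / 10**18:.1f} exabytes/day") # ~4.3 exabytes
```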

It should also be noted that the quantity mentioned in the previous definition may vary from one company to another (small or large) and from one business sector to another (trade, industry, service). Of course, it can also vary depending on the analytical technologies used and the size of the databases available in a particular company or sector.

This is why, in many companies, or in many sectors, the amount of data ranges from a few dozen terabytes to several petabytes, which represent thousands of terabytes (10¹⁵ bytes).

Big Data generally introduces data-sets that have a volume that exceeds the capacity of the IT tools commonly used to capture, manage and process said volume in real-time. Therefore, it is essential that large data storage systems evolve.

5.3.3. Updating Moore’s Law

Simply analyzing the data on our physical activity recorded by connected objects (smartphones, smart watches, etc.), such as our heart rate, can save not only our own lives, but also those of thousands of other people. This is just one simple example of the power of data.

The data generated by a person who jogs for 20 to 30 minutes a day may seem like very little to you. But imagine the amount of data generated by recording the pulses of a billion people who own smart watches or smartphone applications. Now imagine the quantity produced per week, per month and per year.
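
As a purely illustrative sketch (the sampling rate and record size below are assumptions, not figures from this chapter), the order of magnitude can be estimated as follows:

```python
# Illustrative assumptions only: one reading per minute, 8 bytes per reading.
users = 1_000_000_000        # a billion wearers, as imagined above
readings_per_minute = 1      # assumed sampling rate
bytes_per_reading = 8        # assumed storage per reading

per_day = users * readings_per_minute * 60 * 24 * bytes_per_reading
per_year = per_day * 365

print(f"{per_day / 10**12:.1f} terabytes/day")    # ~11.5 TB/day
print(f"{per_year / 10**15:.1f} petabytes/year")  # ~4.2 PB/year
```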

Every sharing or comment on Facebook or Instagram, every video you watch on YouTube, is data. With billions of subscribers, data will continue to grow exponentially. This exponential growth was predicted more than 50 years ago by Gordon Moore.

In his 1965 article “Cramming More Components onto Integrated Circuits”, Gordon Moore indicated that: “The number of transistors per circuit of the same size doubled every eighteen months, with no increase in costs” (Moore 1965).

In his article, Moore observed that the number of transistors on a chip had roughly doubled each year from 1959 to 1965. This observation became known as Moore’s law, which in its later, revised form states that the number of transistors on a chip doubles roughly every two years.
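
A short sketch makes the exponential nature of such a law concrete; the starting count (roughly the scale of an early-1970s microprocessor) and the two-year doubling period are illustrative values, not figures from Moore’s paper.

```python
# Exponential growth under a fixed doubling period (illustrative values).
def transistors(years_elapsed, initial=2_300, doubling_period=2):
    """Projected transistor count after a given number of years."""
    return initial * 2 ** (years_elapsed / doubling_period)

for years in (10, 20, 40):
    print(f"after {years} years: ~{transistors(years):,.0f} transistors")
```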

Table 5.2 suggests that storage and data creation are also increasing exponentially, as if hard disks were applying their own version of Moore’s law. Indeed, computers and the various connected objects of today are much more powerful, and much smaller, than those used 10 or 20 years ago.

This law certainly has an effect on data because the placement of sensors and smart interconnected objects everywhere increases their dissemination. And the bigger this dissemination is, the more the quantity of data produced by these objects increases. As the amount of data increases, so does the need to improve computing and storage capacity at a low cost. The lower the price, the more the dissemination of smart objects and sensors increases, and so on.

Do you see? It is an endless loop, one that drives the use and improvement of machine capacities. We can therefore say that Big Data pushes the limits of Moore’s law. Big Data has brought new ways of doing things and new methods of collecting, analyzing, integrating and visualizing data.

As processing power and storage capacity increase, it will be necessary to explore new computing fields and rely on a number of essential resources, including the cloud, which is one of the main means by which computing can continue to grow.

Cloud computing, which has attracted considerable attention in recent years, is seen as an infrastructure that has revolutionized data storage and computing capabilities. These capabilities have led to advancements in the field of computing, which have improved the power of processing and analysis. This has enabled companies to be more efficient and introduce innovative solutions.

In parallel with these advances, Big Data technologies, which heavily rely on cloud platforms for storing and processing data flows, have been widely developed. They are among the most frequently used technologies in the development of applications or services in various sectors (health, education, energy, web, etc.).

5.4. Business context change in the era of Big Data

Big Data and its various applications have significantly changed the business playground. Technological advancements in analytical tools and algorithms have unlocked the operational potential of both large and small companies, and have had a significant impact on their various activities.

As the collection, analysis and interpretation of data become more readily available, they will have a significant impact on each company in several important ways, regardless of its business sector or size.

The awareness of the importance and potential of data for companies has therefore been raised. Now it’s time for companies to think about how to make the most out of it. Whether it is for the operationalization of projects (revenue growth, cost reduction, etc.), the optimization of different activities, the creation of new services, the improvement of the decision-making process, etc., Big Data analytics has transformed the way companies act.

As Kenneth Cukier, Data Editor of The Economist in London and co-author of Big Data: A Revolution That Will Transform How We Live, Work, and Think, published in 2013, points out: “More isn’t just more. More is new. More is better. More is different.”

Yes! Because more data means new methods of analysis, new ways to glean useful information, more efficient decision-making and more effective operationalization of business strategies and perspectives. Therefore, with more data, companies become more creative.

According to Frizzo-Barker et al. (2016), this phenomenon could potentially change the way companies think about data infrastructure, business intelligence and analysis and information system strategy (Sedkaoui 2018a). By using its different applications, companies will be able to improve their decision-making process and, consequently, their performance (McAfee and Brynjolfsson 2011).

Big Data is becoming a trend and a lever that every company must consider in its culture and in the design of its business model. Its various applications have changed the way companies operate and have created new opportunities for growth. By leveraging this data, companies can identify new opportunities, create more efficient operations, increase profitability and improve their service.

Companies that exploit all available opportunities will not only gain a competitive edge, they will also transform their business models and industries by stimulating growth in new sectors using new technologies.

So, just as Cukier had thought, more doesn’t just simply mean more, because having more data has completely changed the game. Having more data has allowed companies to operate differently and see new opportunities that were not previously visible. Having more data has led us into a completely different situation.

Big Data has therefore changed the business paradigm and will continue to influence its context, which in turn will undoubtedly affect society as a whole. However, the real value of this paradigm is only achieved when companies exploit the full range of opportunities offered by each byte of data; in other words, when what is said is actually put into practice.

5.4.1. The decision-making process and the dynamics of value creation

The different Vs of Big Data challenge the fundamental principles of existing technical approaches and require, as already mentioned, new forms of processing that promote decision-making, knowledge extraction and operations optimization (Curry 2016; Sedkaoui 2018b).

It is well known that the decision-making process is often based on the model of limited rationality – intelligence, design, choice and review – developed by Herbert Simon in 1977. With the ever-growing amounts of data produced every second, this model becomes complex and needs to be improved (Sedkaoui 2018a). Indeed, decision-making, strategy development and anticipation of change have always depended on the availability and quality of data. Information technology experts create algorithms to better manipulate and organize data. These experts are supported by data science experts who are responsible for the initiation, development and application of quantitative methodology.

They collect, store and analyze data, using software and programs, applying the necessary analyses (data analysis algorithms) to generate models or information that subject matter experts, or decision-makers, interpret in order to make strategic decisions and create value.

All these steps form part of a data value chain, from collection to decision-making, with, of course, the contribution of the various stakeholders, supported by the technologies used. The value chain therefore depends on the quantity and quality of the data submitted for analysis.

The decision-making process depends on the process of creating meaning, in other words, the process of creating knowledge. In this respect, decisions are data-driven, that is to say, they contain an important analytical phase, based on the processing of structured or unstructured data, be it from internal (company databases) or external (web, social networks, etc.) sources.

A data-driven decision-making process is therefore a process that includes an analytics aspect. This aspect can help to identify decision-making opportunities in the intelligence phase of Simon’s model, where the term “intelligence” refers to the discovery and extraction of knowledge. In this particular case, this phase consists of identifying the opportunities for which a decision must be made (Simon 1997).

From the decision-maker’s perspective, the importance of the data analysis process lies in its ability to provide valuable information on which to base decisions.

It allows decision-makers to take advantage of opportunities resulting from the wealth of data generated by supply chains, production processes, customer behavior, etc. (Frankel and Reid 2008).

This process includes several distinct phases, which we will discuss in more detail in the next chapter.

5.4.2. The emergence of new data-driven business models

When technologies become cheaper and easier to use, they transform businesses. This is the case with Big Data technologies, with a substantial reduction in data processing and storage costs.

The range of opportunities offered by these technologies allows companies to transform their business model, rethinking the value chain within a vast, fully digitalized ecosystem. This involves new ways of operating, and has led to a reassessment of the foundations of business and to new ways of creating value.

This is what Sergey Brin and Larry Page of Google, Jeff Bezos of Amazon, Jack Ma of Alibaba.com, Travis Kalanick and Garrett Camp of Uber, Reed Hastings of Netflix, Brian Chesky, Joe Gebbia and Nathan Blecharczyk of Airbnb, Frédéric Mazzella of BlaBlaCar, and many others, have done.

You may be wondering how they did it. The answer is simple, they took full advantage of the opportunities of Big Data analytics by deciphering the underlying messages revealed by each data byte. Of course, these innovative models have also understood the importance of the digital ecosystem and digital platforms.

Their success depends on their ability to withstand change and to monetize it in their “ROI” (Return on Investment). They have exploited the potential of analytics not just to differentiate their business models, but also to innovate. And innovating in a business model means exploring new ideas, creating new value propositions and setting up new value chains. It is therefore a question of innovating, but in a different manner.

As Drucker (1994) indicated, it is about how to make a difference. This is what the business model concept itself means. The concept refers to the technique that companies adopt in the face of competition and change, based on the skills and resources available (McKelvey and Zhu 2013). The business model therefore describes how a company can create, deliver and capitalize value.

Value creation, for the benefit of all stakeholders in the business context, is the main element of a company’s success. However, in a generalized digitization ecosystem, be aware that evaluating and evolving a business model is not an option, because everything changes quickly and only the most agile companies will resist.

But when a company makes a strategic decision to develop its project, data is always useful. The experiences cited best illustrate this point. If these companies, among others, managed to surprise us, it is because they set up a series of innovative business models oriented and based on data and analytics, or what is called the data-driven business model.

Creating a model that is oriented and driven by data means making decisions based on the analysis of that data. For any company or start-up wishing to design and implement a data-driven business model of the kind that has changed the business landscape, there are many such experiences that can serve as a reference for capturing new sources of revenue.

5.5. Conclusion

We hope you now have a better answer to the question underlying this chapter: what has changed so much with Big Data?

The purpose of this chapter was to provide an overview of this change. This phenomenon brings together a set of challenges on the exploitation of data, which are continuously produced with new scales in size, form and complexity. That is why when you hear the term Big Data, you often hear the term “analysis” introduced in the same sentence.

In this context, companies must make the most of this vast landscape of data: applying multiple technologies wisely, carefully selecting the key data for specific investigations, and adapting large, integrated data-sets in innovative ways to support specific queries and analyses.

As a result, Big Data analytics has become a major trend in the world of technology and business. It is a great challenge for companies.

Before going into analytics, which will be well detailed in the next chapter, let us summarize what we have been able to understand through the different points that have been developed in this chapter.

TO REMEMBER.– This chapter has taught you that:

  • – Big Data is a generic term for any collection of data that is so large or complex that it becomes difficult to process using traditional data management techniques;
  • – Big Data is often characterized by the 3 Vs:
    • - V for volume (the size of the data): how much;
    • - V for variety (the type of data): how diverse;
    • - V for velocity (the speed of the data): how fast;
  • – often, these characteristics are complemented by other Vs, namely: value, veracity, etc.;
  • – growth in computing power and storage capacity are two other aspects that quickly change as the amount of data continues to increase;
  • – in the Big Data universe, you will encounter different types of data, and each type generally requires different processing and storage tools and techniques;
  • – given its importance, many companies are investing in Big Data in order to better understand their customers, boost their processes and differentiate their business models.