What is Next?

Computers are evolving rapidly, and the form factors of devices are changing along with them. In the past, we would only see computers in offices; now we see them on our home desks, on our laps, in our pockets, and on our wrists. The market is becoming increasingly varied as machines are equipped with more and more intelligence.

Almost every adult now carries a device, and it is estimated that we look at our smartphones at least 50 times a day, whether there is a need for it or not. These machines influence our daily decision-making. Devices now ship with assistants such as Siri, Google Assistant, Alexa, and Cortana, features designed to mimic human intelligence. Their ability to answer almost any query thrown at them can make these technologies appear superior to humans. Behind the scenes, these systems improve continuously by drawing on the collective intelligence acquired from all of their users: the more you interact with a virtual assistant, the better its results become.

Despite these advancements, how much closer are we to actually recreating a human brain in a machine? As of 2019, if science discovers a way to control the neurons of our brain, this may become possible in the near future. Machines that mimic human capabilities are already helping to solve complex textual, visual, and audio problems, tasks that resemble what a human brain carries out every day; to put this into perspective, the human brain makes approximately 35,000 decisions a day on average.

Even if we are able to mimic the human brain in the future, it will come at a cost, and we do not have a cheap solution for it at the moment. The power consumption of a brain simulation limits development efforts when compared with an actual human brain: the human brain consumes about 20 W of power, whereas a simulation program consumes about 1 MW or more. At the same time, neurons in the human brain operate at a speed of about 200 Hz, while a typical microprocessor runs at about 2 GHz, roughly 10 million times faster.

While we are still far from cloning a human brain, we can implement algorithms that make informed decisions based on previous data, as well as data from similar devices. This is where a subset of Artificial Intelligence (AI) comes in handy: with predefined algorithms that identify patterns in the complex data we have, these systems can give us useful information.

When a computer starts making decisions without being explicitly instructed every time, we achieve Machine Learning (ML). ML is used everywhere right now: identifying email spam, recommending the best product to buy on an e-commerce website, automatically tagging your face in a social media photograph, and more. All of this is done using patterns identified in historical data, along with algorithms that reduce unnecessary noise in the data and produce quality output. As more data accumulates, computers can make better decisions.

Mobile phones have become the default consumption medium for most of the digital products being produced today. As data consumption increases, we have to get results to the user as quickly as possible. For example, when you scroll through your Facebook feed, much of the content is selected based on your interests and what your friends have liked. Since the time users spend in these apps is limited, many algorithms run on both the server and the client to load and organize content according to what you prefer to see in your feed. If all of these algorithms could run on the local device itself, we would not need to depend on the internet to load content quickly. This is only possible by performing the processing on the client's device instead of in the cloud.

As the processing capability of mobile devices increases, we will increasingly be able to run ML models on the device itself. Several services are already processed on the client's device, such as identifying a face in a photo (for example, Apple's Face ID feature), which uses ML locally.

While multiple topics are trending, such as AI, augmented reality (AR), virtual reality (VR), ML, and blockchain, ML is growing faster than the others, with clear use cases across all sectors. ML algorithms are being applied to images, text, audio, and video in order to produce the output that we desire.

If you are a beginner, there are multiple ways to start, thanks to the free and open source frameworks that are available. If you are worried about building a model yourself, you can start with ML Kit for Firebase from Google. Alternatively, you can build your own model using TensorFlow and convert it into a TensorFlow Lite model (for Android) or a Core ML model (for iOS), as sketched below.
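As a rough illustration of the second path, here is a minimal sketch of converting a trained TensorFlow model for on-device use. It assumes the standard TensorFlow Lite converter API and a SavedModel directory named saved_model/ (a placeholder); the exact calls may differ slightly depending on your TensorFlow version.

# Minimal sketch (not the book's code): convert a trained TensorFlow
# model into a TensorFlow Lite model for use in an Android app.
# "saved_model/" and "model.tflite" are placeholder paths.
import tensorflow as tf

# Load the trained model from a SavedModel directory (assumed to exist).
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model/")

# Convert it to the TensorFlow Lite flat-buffer format.
tflite_model = converter.convert()

# Write the converted model to disk; this is the file you bundle
# with your Android app.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)

For iOS, the analogous step is to convert the same trained model into a Core ML (.mlmodel) file using a converter such as Apple's coremltools.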

In this chapter, we will cover the following topics:

    • Popular ML-based cloud services
    • Where to start when you build your first ML-based mobile app
    • References for further reading