Expected next actions

Since the news about AlphaGo was featured in the media, the AI boom has clearly accelerated. You may have noticed that the words "deep learning" appear in the media more often these days; the world's expectations of AI have risen that much. Interestingly, the term "deep learning," which was originally a technical term, is now commonly used in daily news, which shows that the public image of AI has been changing. Until just a few years ago, if people heard about AI, many of them would probably have imagined a physical robot, but how about now? The term AI is now often used, without much thought, to refer to software or applications, and this is accepted as commonplace. This is nothing less than a sign that the world has started to understand correctly the AI that has been developed through research. If a technology is taken in the wrong direction, it generates backlash, or some people start to develop it in harmful ways; so far, however, this AI boom seems to be heading in a good direction.

While we are excited about the development of AI, naturally, some people feel fear or anxiety. It is easy to imagine some people thinking that a world where machines dominate humans, as in sci-fi movies or novels, is coming sooner or later, especially after AlphaGo beat Lee SeDol at Go, a game in which it had been said to be impossible for a machine to beat a human; the number of people who feel anxious may well increase. However, although the news that a machine has beaten a human could be taken negatively if you focus only on the fact that "a machine won," this is definitely not negative news. Rather, it is great news for humankind. Why? Here are two reasons.

The first reason is that the Google DeepMind Challenge Match was a match in which the human was handicapped. Not only in Go, but also in card games or sports, we usually research what tactics an opponent will use before a match, building our own strategy by studying the opponent's patterns of play. DeepMind, of course, studied professional Go players' tactics and styles of play, whereas humans could not study the machine's play well enough, because DeepMind kept learning and changing its patterns of play until the last minutes before the match. Therefore, it can be said that there was an information bias, or handicap. It is remarkable that Lee SeDol won one match despite these handicaps. It also indicates that AI will develop further.

The other reason is that we have found that a machine is not likely to destroy the value of humans, but instead to promote humans' further growth. In the Google DeepMind Challenge Match, the machine used a strategy which no human had used before. This fact was a huge surprise to us, but at the same time it meant that we had found something new for humans to study. Deep learning is obviously a great technology, but we shouldn't forget that a neural network is an algorithm which imitates the structure of the human brain; in other words, its fundamentals are the same as a human's patterns of thinking. Simply by adding calculation speed, a machine can uncover patterns that the human brain overlooks because it cannot calculate them. AlphaGo can play games against itself using the study data it has been given, and it learns from the results of those games too. Unlike a human, a machine can keep studying 24 hours a day, so it can acquire new patterns rapidly. Whole new patterns found by the machine during that process can then be used by humans to study Go further. By studying a new strategy that would not have been found by humans alone, the world of Go will expand and we can enjoy Go even more. Needless to say, it is not only machines that learn, but also humans. In various fields, machines will discover new things which humans have never noticed, and every time humans face such a discovery, they too advance.

AI and humans are in a complementary relationship. To reiterate, a machine is good at calculating huge numbers of patterns and finding patterns which have not been discovered yet. This is far beyond human capability. On the other hand, AI can't create a new idea from a completely new concept, at least for now. Conversely, this is the area where humans excel. A machine can make judgments only within the knowledge it has been given. For example, if an AI is given only many kinds of dog images as input data, it can answer what kind of dog an image shows, but if the image is of a cat, the AI will still try its best to answer with a kind of dog, using its knowledge of dogs.
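This limitation is easy to see in code. The toy sketch below (the breed names and random weights are hypothetical, standing in for a real trained model) shows why a softmax classifier trained only on dog breeds must answer with a dog breed, whatever image it receives: its output space contains nothing else.

```python
import numpy as np

# Hypothetical toy classifier: it can only choose among the dog breeds
# it was trained on, so any input -- even a cat -- maps to some breed.
BREEDS = ["beagle", "poodle", "shiba"]

def classify(features, weights):
    """Return the most likely known class via a softmax over scores."""
    scores = weights @ features              # one score per breed
    exp = np.exp(scores - scores.max())      # numerically stable softmax
    probs = exp / exp.sum()
    return BREEDS[int(np.argmax(probs))], probs

rng = np.random.default_rng(0)
weights = rng.normal(size=(len(BREEDS), 4))  # stand-in for trained weights
cat_features = rng.normal(size=4)            # stand-in for a cat image

label, probs = classify(cat_features, weights)
# The classifier still answers with a dog breed, and its probabilities
# still sum to 1 -- it has no way to say "this is not a dog".
print(label)
```

The point is structural, not a bug: unless the model is explicitly given an "unknown" class or an out-of-distribution check, the most likely known answer is the only answer it can give.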

AI is, in a way, an innocent existence: it simply gives the most likely answer based on the knowledge it has gained. Thinking about what knowledge should be given to an AI for it to make progress is a human task. If you give it new knowledge, the AI will again calculate the most likely answer from that knowledge, and at quite a fast pace. People develop different interests and knowledge depending on the environment in which they grow up, and the same is true for AI. In other words, what kind of personality an AI has, and whether it becomes good or evil for humans, depends on the people the AI has contact with. A typical example of an AI raised in the wrong way is Tay (https://www.tay.ai), an AI developed by Microsoft. On March 23, 2016, Tay appeared on Twitter with the following tweet: hellooooooo world!!!

Tay gains knowledge from interactions with users on Twitter and posts new tweets. The trial itself is quite interesting.

However, immediately after it was made public, a problem occurred. On Twitter, users played pranks on Tay by feeding discriminatory knowledge into its account. Because of this, Tay grew to keep posting tweets containing sexually discriminatory expressions. Only one day after it appeared on Twitter, Tay disappeared, leaving the following tweet: c u soon humans need sleep now so many conversations today thx.

If you visit Tay's Twitter account page (https://twitter.com/tayandyou), you will find that its tweets are protected and you can't see them anymore:

The Twitter account of Tay is currently closed

This is exactly the result of an AI being given the wrong training by humans. In the past few years, AI technology has received huge attention, which can be a factor in speeding up its development even further. Now, the next action to be taken is to think about how AI and humans should interact with each other. AI itself is just one of many technologies. Technology can become good or evil depending on how humans use it; therefore, we should be careful about how we control it, otherwise the whole AI field could shrink in the future. AI is becoming particularly good within certain narrow fields, but it is far from overwhelming, and far from what science fiction currently envisions. How AI evolves in the future depends on our use of knowledge and our management of the technology.

While we should certainly care about how to control the technology, we can't slow down the speed of its development. Considering the recent boom in bots, as seen in the news that Facebook is going to launch a Bot Store (http://techcrunch.com/2016/03/17/facebooks-messenger-in-a-bot-store/), we can easily imagine that the interaction between a user and an application will become chat-interface based, and that AI will mingle with the daily lives of ordinary users going forward. For more people to become familiar with AI, we should develop AI technology further and make it more convenient for people to use.

Deep learning and AI have attracted more attention, which means that if you would like to produce an outstanding result in this field, you are likely to face fierce competition. It is highly likely that an experiment you would like to work on is already being worked on by someone else. The field of deep learning is becoming as competitive as the world of start-ups. If you own huge amounts of data, you might gain an advantage by analyzing it; otherwise, you need to think about how to experiment with limited data. Still, if you would like to achieve outstanding performance, it is better to always bear the following in mind:

Tip

Deep learning can only judge things within the knowledge given by training.

Based on this, you might get an interesting result by taking the following two approaches:

  • Experiment with data for which you can easily produce both input data and output data for training and testing
  • Use completely different types of data for training and testing, respectively, in an experiment

For the first approach, you can look at automatic colorization using a CNN. It was introduced in the project published online at http://tinyclouds.org/colorize/ and in the paper at http://arxiv.org/pdf/1603.08511v1.pdf. The idea is to color grayscale images automatically. If you have any color images (and these can be obtained very easily), you can generate grayscale images just by writing a quick script. With that, you have prepared input data and output data for training. Being able to prepare lots of data means you can test more easily and achieve high precision more often. The following is one example of the tests:
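The "quick script" mentioned above can be as short as a few lines. The sketch below uses a random array as a stand-in for a real RGB photo and derives the grayscale input with standard luminance weights; the pairing of grayscale input with the original color target is the whole trick for generating training data cheaply.

```python
import numpy as np

# Manufacture an (input, target) pair for colorization training:
# take a color image, derive its grayscale version, and use the
# grayscale as input with the original color image as the target.
def to_grayscale(rgb):
    """Luminance conversion using the standard ITU-R BT.601 weights."""
    return rgb @ np.array([0.299, 0.587, 0.114])

color_image = np.random.rand(64, 64, 3)  # stand-in for a real RGB image
gray_image = to_grayscale(color_image)   # training input
target = color_image                     # training target

print(gray_image.shape)  # (64, 64): one luminance channel per pixel
```

Run over a folder of photos, a loop of this kind yields as many supervised pairs as you have images, with no manual labeling at all.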

Both are cited from http://tinyclouds.org/colorize/

Inputs to the model are the grayscale images on the left, the outputs are in the middle, and the images on the right are the original color images.

For the second approach, using completely different types of data for training and testing respectively, we intentionally provide data which the AI doesn't know, and make the gap between a random answer and a correct answer interesting or fun. For example, in Generating Stories about Images (https://medium.com/@samim/generating-stories-about-images-d163ba41e4ed), an image of a sumo wrestler was provided to neural networks which had studied only romantic novels, to test what the neural networks would generate. The result is as follows:

This experiment is based on the approach called neural-storyteller (https://github.com/ryankiros/neural-storyteller), but because the given data embodied a new idea, it produced an interesting result. In this way, adding your own new idea to an already developed approach can also yield interesting results.
