CHAPTER 6
Java Programming for Mobile Applications

“It is what you learn after you know it all that counts.”

—John Wooden

6.1 Introduction

Ever since 2007, when Apple first introduced its iPhone, our passion for smartphones has become stronger and deeper. Smartphone sales have rocketed: there were around 2.5 billion smartphones worldwide in 2019, according to Statista (https://www.statista.com/). Smartphones are an excellent example of a modern technology that has fundamentally revolutionized our lives. With a smartphone, you can search the Internet, book flights and hotels, and order food and taxis, all with just a few finger swipes. With mobile payment, in many places you can get by with just a phone, without needing cash or a credit card in your wallet! Smartphones have truly gone from luxury items to daily necessities in the span of about 10 years.

There are generally two main types of smartphones on the market: Google's Android phones and Apple's iPhones. There are an estimated five billion mobile users in the world. The largest market share is held by phones running Google's Android operating system, which accounts for more than 80 percent of the market. The top Android phone makers include Samsung, Huawei, Google, HTC, LG, ZTE, Xiaomi, Oppo, and Vivo. However, the largest single phone company is still Apple, with about 15 percent market share. The traditional mobile phone companies, such as Nokia, Ericsson, Motorola, and BlackBerry (once the favorite of business customers), have all fallen out of favor and largely disappeared from the market.

This chapter first introduces how to use Android Studio to develop mobile phone applications—apps—for Android phones and then introduces MIT App Inventor, another popular way of developing Java Android applications. MIT App Inventor is a web-based, visual programming tool, which allows users to build Java Android applications using visual objects. MIT App Inventor is particularly popular among beginners. Finally, this chapter introduces 5G, the most talked about and most researched next-generation mobile technology. 5G is going to significantly change the way we communicate, and therefore it is beneficial to understand what 5G is and how it works.

6.2 Android Studio

Android is an open source, Linux-based operating system for mobile devices, developed by Google. Android Studio is the official IDE for developing Android programs. Android Studio is built on IntelliJ IDEA, so instead of using Android Studio, you can also use IntelliJ IDEA to develop Android mobile applications.

To develop Android mobile applications, you will need the following:

  • Java JDK (5 or later)
  • Android Studio (or IntelliJ IDEA)

Because you installed the Java JDK in Chapter 2, here you just need to install Android Studio. Figure 6.1 shows the Android developer web site (https://developer.android.com/studio/) where you can download and install Android Studio. During the Android Studio installation, it will also install Android SDK for compiling your programs and Android Virtual Device (AVD) for running simulations of your programs.

Figure 6.1: The Android Studio download web site

Unfortunately, installing and configuring Android Studio is not a completely painless process; there are several tricky places, so you might need to try more than once. Figure 6.2 shows an excellent Android Studio installation tutorial from Tutorialspoint (https://www.tutorialspoint.com/android/android_studio.htm). There is also an excellent seven-part YouTube tutorial series (https://www.youtube.com/watch?v=LN8fBh7LH9k&list=PLt72zDbwBnAW5TU96UHUbLtnivjviIKks) called Android App Development for Beginners (2018 Edition), shown in Figure 6.3.

Figure 6.2: The Android Studio tutorial from Tutorialspoint

Figure 6.3: The YouTube tutorial series for Android App Development for Beginners (2018 Edition)

6.3 The Hello World App

Once you have successfully installed Android Studio, you can start to develop your own Android applications. In our first example, we will create a simple Android app that will display Hello World! on-screen.

First, you will need to create an Android project using Android Studio. When you first run Android Studio, a welcome window will appear, from which you can click Start A New Android Project, as shown in Figure 6.4 (top). A new project window will appear. Choose your project with the default empty activity and click the Next button, as shown in Figure 6.4 (middle). On the following Configure Your Project screen, choose your project name; the default application name is My Application, as shown in Figure 6.4 (bottom). Click the Finish button to finish creating the project.

Figure 6.4: The Android Studio welcome window (top), the Choose Your Project window (middle), and Configure Your Project window (bottom)

Once the project is created, your screen will look as shown in Figures 6.5 to 6.8. This is a blank Android project. The left panel of Android Studio shows the structure of your project, and the right panel shows the file contents. There are several key files in the project, which you can find in the left panel using the following selection sequences:

  • app ➪ java ➪ com.example.myfirstapp ➪ MainActivity.java
  • app ➪ res ➪ layout ➪ activity_main.xml
  • app ➪ manifest ➪ AndroidManifest.xml
  • Gradle Scripts files

The MainActivity.java file is the main Java program of your project. Double-click the file to view its content in a tab on the right panel, as shown in Figure 6.5. Inside the MainActivity class, the onCreate() method is called when the MainActivity class is first created. The setContentView() method specifies the layout resource to display, defined in the activity_main.xml file.

Figure 6.5: The Android project My Application in Android Studio, with the MainActivity.java tab
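
For reference, a newly generated MainActivity.java looks broadly like the following minimal sketch. The package name com.example.myfirstapp matches the project structure shown above; depending on your Android Studio version, the base class may be imported from androidx.appcompat.app or android.support.v7.app.

    package com.example.myfirstapp;

    import android.os.Bundle;
    import androidx.appcompat.app.AppCompatActivity;

    public class MainActivity extends AppCompatActivity {

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            // Let the framework perform its own setup first
            super.onCreate(savedInstanceState);
            // Display the UI layout defined in res/layout/activity_main.xml
            setContentView(R.layout.activity_main);
        }
    }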

The activity_main.xml file defines the user interface (UI) layout of your Java app. Double-click the file to view its content on another tab in the right panel. Inside the activity_main.xml tab, there are also two subtabs at the bottom, Design and Text. The Design tab shows the look and feel of the UI layout (Figure 6.6), and the Text tab shows its Extensible Markup Language (XML) code (Figure 6.7). In this case, the program's UI layout contains only one TextView component, which is used to display the Hello World! text. You can simply modify this XML code to display other text.

Figure 6.6: The activity_main.xml tab in Design view

Figure 6.7: The activity_main.xml tab in Text view

The AndroidManifest.xml file describes the fundamental characteristics of your Java app. Double-click the file to view its content in another tab on the right panel, as shown in Figure 6.8.

Figure 6.8: The AndroidManifest.xml tab

The Gradle Scripts files are used to build the project.

Example 6.1A shows the corresponding code of MainActivity.java, and Example 6.1B shows the activity_main.xml file.

To run the program, just click the green triangle Run button in Android Studio, or press Shift+F10, and your program will run in the Android Virtual Device (AVD), which is an emulator of Android phones. Note that on first setup, Android Studio also requires you to download a disk image for the relevant AVD. The program will show a blank screen with Hello World! in the middle, as in Figure 6.9. When you run your program for the first time, the AVD will take a while to start, because it mimics the boot-up process of a real phone.

Figure 6.9: The output of the Android project My Application in the AVD emulator

Congratulations! You have successfully developed your first Android app!

You can also run the program directly on your smartphone. Doing so allows you to test and debug your Android program on the phone itself over an Android Debug Bridge (ADB) connection. In this case, you first need to change your phone's settings and then run the program through ADB. The following are the steps:

  1. From your phone, open the Settings menu, select Developer Options, and then enable USB Debugging. (On most phones, Developer Options is hidden until you tap the build number seven times under Settings ➪ About Phone.)
  2. Plug your phone into the computer using a USB cable.
  3. Click the Run button in Android Studio to build and run your program on your phone.

For more information on developing Android Apps, see the following web sites:

https://developer.android.com/training/basics/firstapp/

https://www.youtube.com/playlist?list=PLS1QulWo1RIbb1cYyzZpLFCKvdYV_yJ-E

https://www.javatpoint.com/android-tutorial

6.4 The Button and TextView Apps

In this example, you will create an Android app that contains a Button component and a TextView component in the UI. You will also add some action to the Button component, so that when clicked, it changes the text in the TextView.

Create another blank Android project in Android Studio, exactly the same way as in section 6.3, and change the application name to My Application 2. From the activity_main.xml tab, select the Design subtab at the bottom and drag a Button component onto the screen so that there are two components on the screen, a TextView component and a Button component, as shown in Figure 6.10 (top).

Figure 6.10: The Android project My Application 2 in Android Studio, with the activity_main.xml tab in Design view (top), in Text view (middle), and the MainActivity.java tab (bottom)

Select the Text subtab, and notice that the corresponding XML code has also been automatically generated for the Button component. Set the TextView component ID to textview, and set the Button component ID to button, as shown in Figure 6.10 (middle).

Now modify the MainActivity.java code to add some actions to the Button component, as shown in Figure 6.10 (bottom). In this case, when the button is clicked, it will change the text of the TextView component to Button has been clicked! and change the font size to 25. Figure 6.11 shows the output results in AVD emulator.

Figure 6.11: The output of the Android project My Application 2 in AVD emulator

Example 6.2A shows the corresponding code of MainActivity.java, and Example 6.2B shows the activity_main.xml file.
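
A minimal sketch of what that click-handling code might look like follows, assuming the component IDs textview and button set earlier; the exact listing in Example 6.2A may differ in detail.

    package com.example.myapplication2;

    import android.os.Bundle;
    import android.view.View;
    import android.widget.Button;
    import android.widget.TextView;
    import androidx.appcompat.app.AppCompatActivity;

    public class MainActivity extends AppCompatActivity {

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_main);

            // Look up the two components defined in activity_main.xml by their IDs
            final TextView textView = findViewById(R.id.textview);
            Button button = findViewById(R.id.button);

            // When the button is clicked, change the TextView's text and font size
            button.setOnClickListener(new View.OnClickListener() {
                @Override
                public void onClick(View v) {
                    textView.setText("Button has been clicked!");
                    textView.setTextSize(25); // size in scaled pixels (sp)
                }
            });
        }
    }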

6.5 The Sensor App

In this example, you will create an Android app that will display information from a sensor on the screen.

Create another blank Android project in Android Studio, exactly the same way as in section 6.3, and change the application name to My Application 3. From the activity_main.xml tab, select the Text subtab, and set the TextView component ID to textview, as in Figure 6.12 (top right).

Figure 6.12: The Android project My Application 3 in Android Studio (top), with the activity_main.xml tab (middle), and the MainActivity.java tab (bottom)

Now modify the MainActivity.java code to add some actions to get all the sensor information; see Figure 6.12 (bottom). In this case, the SensorManager class is used to handle the sensor service, the List class is used to store all the sensors, and the TextView component is used to display the information. A for loop is used to go through all the sensors and get their name, their vendor, and their version, and then display that information on the TextView component. Figure 6.13 shows the output results in the AVD emulator.

Figure 6.13: The output of the Android project My Application 3 in AVD emulator

Example 6.3A shows the corresponding code of the MainActivity.java file, and Example 6.3B shows activity_main.xml.
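
A minimal sketch of that sensor-listing logic follows, again assuming a TextView with the ID textview; Example 6.3A may differ in detail.

    package com.example.myapplication3;

    import android.hardware.Sensor;
    import android.hardware.SensorManager;
    import android.os.Bundle;
    import android.widget.TextView;
    import androidx.appcompat.app.AppCompatActivity;
    import java.util.List;

    public class MainActivity extends AppCompatActivity {

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_main);

            TextView textView = findViewById(R.id.textview);

            // Get the system sensor service and ask it for every available sensor
            SensorManager sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
            List<Sensor> sensors = sensorManager.getSensorList(Sensor.TYPE_ALL);

            // Build one line per sensor: name, vendor, and version
            StringBuilder sb = new StringBuilder();
            for (Sensor sensor : sensors) {
                sb.append(sensor.getName())
                  .append(", ").append(sensor.getVendor())
                  .append(", v").append(sensor.getVersion())
                  .append("\n");
            }
            textView.setText(sb.toString());
        }
    }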

6.6 Deploying Android Apps

Once you have finished developing your Android app and are ready to distribute it to others or publish it in the Google Play Store, you need to build a signed Android Package Kit (APK) file using Android Studio. APK is the package file format used by the Android operating system for distributing and installing mobile apps. It is similar to Windows EXE files. An APK file typically contains a compiled program's code, resources, certificates, and the manifest file. To sign your app, you need to generate an upload key and key store. In Java, a key store is used to store authorization certificates or public key certificates and is protected by a key store password. Key stores are often used by Java-based applications for encryption and authentication. More details about security, encryption, and authentication are available in Chapter 7.
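
As a side note, a key store can also be inspected from plain Java. The following minimal sketch opens a JKS file and lists its entries; the file name and password are placeholders for whatever you choose when you create your own key store in the next steps.

    import java.io.FileInputStream;
    import java.security.KeyStore;
    import java.util.Enumeration;

    public class ListKeyStoreEntries {
        public static void main(String[] args) throws Exception {
            // Placeholder path and password: replace with your own .jks file and key store password
            String path = "MyApplication3.jks";
            char[] password = "changeit".toCharArray();

            // Load the key store; load() also verifies the password
            KeyStore keyStore = KeyStore.getInstance("JKS");
            try (FileInputStream in = new FileInputStream(path)) {
                keyStore.load(in, password);
            }

            // Print every alias (named key or certificate entry) in the key store
            Enumeration<String> aliases = keyStore.aliases();
            while (aliases.hasMoreElements()) {
                String alias = aliases.nextElement();
                System.out.println(alias + " (key entry: " + keyStore.isKeyEntry(alias)
                        + ", certificate entry: " + keyStore.isCertificateEntry(alias) + ")");
            }
        }
    }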

As an example, you will build an APK file for the MyApplication3 project. To begin, from the Android Studio menu choose Build ➪ Generate Signed Bundle/APK. The Generate Signed Bundle or APK window will appear, in which you can choose to generate a signed bundle or APK. Select APK and then click Next, as shown in Figure 6.14 (top). In the window that follows, for the Key Store Path field choose Create New, as shown in Figure 6.14 (middle).

Figure 6.14: The Generate Signed Bundle or APK window to choose to generate signed bundle or APK (top), to select key store path and password (middle), and the New Key Store window (bottom)

A New Key Store window will appear, as shown in Figure 6.14 (bottom). Complete the fields. For the key store path, you choose a path and a name for your Java KeyStore (JKS) file. In this case, I chose MyApplication3. For the password, you choose your own password. Click OK.

In the next window (Figure 6.15), select one of the signature versions. In this example, I chose V2 (Full APK Signature). Then click Finish.

Figure 6.15: The Generate Signed APK window

You can find your APK file in your Android project folder, under MyApplication3\app\release\app-release.apk. You can then rename the APK file and copy it to your smartphone to install it, or publish it on the Google Play Store.

For more details on generating a signed APK file and publishing Android apps, see the following resources:

http://www.androiddocs.com/tools/publishing/app-signing.html

https://developer.android.com/studio/publish/

6.7 The Activity Life Cycle of an Android App

In the Android system, an activity is a single task that the user can do. Each Android app can have one or more activities. Figure 6.16 shows the activity life-cycle diagram of an Android app, adapted from the Android Developer web site (https://developer.android.com/guide/components/activities/activity-lifecycle). The activity life cycle can be divided into six stages, which the system moves between by invoking seven callbacks: onCreate(), onStart(), onResume(), onPause(), onStop(), onRestart(), and onDestroy().

Figure 6.16: The activity life-cycle diagram of Android apps, from the Android Developer web site

The onCreate() callback is called when the activity is first created, similar to the main() method in standard Java programs. Its opposite is the onDestroy() callback, which is called just before the activity is destroyed by the system. The onStart() callback is called when the activity becomes visible to the user. The onResume() callback is called when the activity starts interacting with the user; this is the state in which the user is actually using the app, and the app stays in this state until something else takes focus away from it. The onPause() callback is called when the current activity is being paused, for example during multiscreen operation or when a transparent app is shown on top. After a pause, the onResume() callback is called again. The onStop() callback is called when the activity is no longer visible. The onRestart() callback is called when the activity restarts after being stopped.
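
A simple way to see the life cycle in action is to override these callbacks in an activity and log a message from each, as in the following minimal sketch (the log tag and messages are arbitrary):

    import android.os.Bundle;
    import android.util.Log;
    import androidx.appcompat.app.AppCompatActivity;

    public class MainActivity extends AppCompatActivity {
        private static final String TAG = "LifecycleDemo";

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_main);
            Log.d(TAG, "onCreate: activity created");
        }

        @Override protected void onStart()   { super.onStart();   Log.d(TAG, "onStart: now visible"); }
        @Override protected void onResume()  { super.onResume();  Log.d(TAG, "onResume: in the foreground"); }
        @Override protected void onPause()   { super.onPause();   Log.d(TAG, "onPause: losing focus"); }
        @Override protected void onStop()    { super.onStop();    Log.d(TAG, "onStop: no longer visible"); }
        @Override protected void onRestart() { super.onRestart(); Log.d(TAG, "onRestart: returning after a stop"); }
        @Override protected void onDestroy() { super.onDestroy(); Log.d(TAG, "onDestroy: about to be destroyed"); }
    }

Run the app and watch the Logcat panel in Android Studio while you press the Home button, rotate the screen, or return to the app; the log messages show which callbacks fire at each transition.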

6.8 MIT App Inventor

In addition to Android Studio or similar IDEs, there are other ways to develop Android applications, such as MIT App Inventor.

MIT App Inventor is an open source, web-based, online Android development environment, developed by the Massachusetts Institute of Technology (MIT) and Google. It is aimed at teaching young children, and newcomers generally, to develop Android apps using Open Blocks visual programming, without the need to write Java code. There are two versions; the current version is App Inventor 2. My first encounter with MIT App Inventor was through one of Google's code camps. Within just a few hours, starting from knowing absolutely nothing, I had managed to develop several apps, including a speech recognition app and a Chinese–English translation app. I was impressed with its simplicity and ease of use. Here are a few examples:

Speech Recognition App

In this example, you will build an app that includes a screen with a label and a button. When you click the button, a speech recognition engine will start. You can then speak an action, such as “open Google Map,” “open BBC weather,” “open YouTube,” or “open Facebook,” and the app will recognize it and run the corresponding apps on your phone.

To use App Inventor 2, go to http://ai2.appinventor.mit.edu/ and log in using your Google account. Create a new project called PerryAsk; Figure 6.17 shows what it looks like when the project is created. In the top-right corner, there are two buttons, Designer and Blocks. The Designer button displays the front end of your app, that is, how your app looks; this is the default view. The Blocks button displays the backend of your app, the block editor, where you write your program using function blocks. App Inventor development is based on this front-end and backend concept, much like LabVIEW, another popular visual programming tool.

Figure 6.17: The front end of a new project called PerryAsk (top) and the backend (bottom), from the MIT App Inventor 2 web site

In the Designer view, you have one or more screens (the default is one screen). From the left palette, drag a label and a button onto the screen, and then drag in five invisible components: TextToSpeech, SpeechRecognizer, OrientationSensor, LocationSensor, and ActivityStarter. These components will not appear on the screen; instead, they will appear in the space below the screen, as shown in Figure 6.17 (top). Next, click the Blocks button, and edit the block code as shown in Figure 6.17 (bottom).

The full source code of the project, zipped in a file named PerryAsk.aia, is available on the web site accompanying this book.

The screen initialization block enables the LocationSensor, as shown in Figure 6.18. This block can also be used to initialize other variables used in the program.

Figure 6.18: The screen initialization block

The button block calls the SpeechRecognizer to get text from your speech, as shown in Figure 6.19. This will trigger the SpeechRecognizer block to run, shown in Figure 6.20.

Figure 6.19: The ButtonAsk Click block

Figure 6.20: The SpeechRecognizer block

Inside the SpeechRecognizer block is a do loop, which first displays the recognized text in the label and then uses a series of if … else if statements to do different things depending on that text. For example, if you say “open map,” the app will use ActivityStarter to start Google Maps and use LocationSensor to get the current address and display it on the map.

It can also search the Web, run your camera, and when you say “hello,” it will use TextToSpeech to say “Hello, human” to you using an artificial voice.

To deploy and run your app on your phone, from the menu select Build ➪ App (provide QR code for .apk). This will compile the program and generate the 2D QR code, as shown in Figure 6.21. Use your phone to scan the QR code to download and install the app.

Figure 6.21: Compiling your program (top) and generating a QR code (bottom)

Translation App

App Inventor 2 also comes with an impressive language translation engine called YandexTranslate. In this example, you will build an app that does speech recognition and language translation at the same time. You press one button and then say something in English, and the app will recognize your speech in English, translate it to Chinese, and speak Chinese back to you. You press another button and say something in Chinese, and the app can recognize your speech in Chinese, translate it to English and speak English back to you. Cool!

Again, from the App Inventor 2 web site, create a new project called PerryTranslate, as shown in Figure 6.22. From the Designer view, drag the following visible components onto the screen: a TextBox for English, a Label for English, a Button for English, a TextBox for Chinese, a Label for Chinese, and a Button for Chinese. Also drag three invisible components onto the screen: TextToSpeech, SpeechRecognizer, and YandexTranslate. Again, the full source code of the project, zipped in a file called PerryTranslate.aia, is available on the web site accompanying this book.

Figure 6.22: The PerryTranslate project's front end (top) and the backend (bottom), from MIT App Inventor 2 web site

Now click the Blocks button to display the block editor. The screen initialization block sets the Language variable to zero. In this case, zero represents English, and one represents Chinese. The ButtonEnglish block resets the Language variable to zero. If the TextBoxEnglish is empty, it will start the SpeechRecognizer to get text from your speech in English. This will trigger the SpeechRecognizer block to run. If the TextBoxEnglish is not empty, it will copy the text into LabelEnglish, as shown in Figure 6.23.

Figure 6.23: The screen initialization block and the ButtonEnglish Click block

The ButtonChinese block sets the Language variable to one. If the TextBoxChinese is empty, it will start the SpeechRecognizer to get text from your speech in Chinese. This will trigger the SpeechRecognizer block to run. If the TextBoxChinese is not empty, it will copy the text into LabelChinese, as shown in Figure 6.24.

Figure 6.24: The ButtonChinese Click block

The SpeechRecognizer block contains a do loop, shown in Figure 6.25. Inside the do loop, if the Language variable is zero, it will set LabelEnglish to the text of SpeechRecognizer and then call the YandexTranslate engine to translate the LabelEnglish text to Chinese. After the YandexTranslate engine has produced the translated text, it will trigger the YandexTranslate GotTranslation block, which will set the LabelChinese text to the translated text, call TextToSpeech to speak the Chinese text, and then set the TextBoxChinese text to the LabelChinese text.

Figure 6.25: The SpeechRecognizer AfterGettingText block (top) and the YandexTranslate GotTranslation block (bottom)

Inside the do loop, if the Language variable has a value of 1, it will do the same process but from Chinese to English. Simple!

Again, to run your app on your phone, select Build ➪ App (provide QR code for .apk) from the menu to compile the program and generate a QR code, as shown in Figure 6.26. Then use your phone to scan the QR code to download and install the app.

Figure 6.26: To compile your program (top) and to generate a QR code (bottom)

For more information about App Inventor and tutorials, see the following resources:

http://appinventor.mit.edu/explore/ai2/tutorials.html

http://www.appinventor.org/book2

6.9 5G

When we talk about mobile applications, we have to talk about 5G, as it is no doubt one of the most talked about and most heavily researched technologies. With all the biggest smartphone companies starting to launch their own 5G-ready smartphones, it is fitting to conclude this chapter by looking at what exactly 5G is and what it means to us.

5G refers to the fifth generation of mobile technology, following the 1G, 2G, 3G, and 4G technologies. The main difference between the generations is the frequency spectrum they use, as all wireless mobile technologies rely on electromagnetic waves to send and receive signals. Unlike previous generations, 5G uses a much higher frequency range, typically from 3 to 86 GHz. Mobile networks work based on the concept of cells, where the mobile phones in each cell are connected to a base station (BS). Each base station is connected to a mobile switch center (MSC), which is then connected to a wired network. Mobile networks are therefore also called cellular networks. Figure 6.27 shows the cellular network architecture and how mobile users communicate with each other. 1G has the largest cell size; 2G, 3G, and 4G have much smaller cells; and 5G will have an even smaller cell size. Figure 6.28 shows the frequency range and cell size of the different generations of mobile technology.

Figure 6.27: The cellular network architecture and how mobile users communicate with each other (MU: mobile user; BS: base station; MSC: mobile switch center)

Figure 6.28: The electromagnetic frequency spectrum (1 to 300 GHz) and the cell sizes of 1G, 2G, 3G, 4G, and 5G technologies

The main benefits of 5G are fast speed, low latency, and better connectivity. This makes 5G a key technology for the future driverless cars, virtual reality, augmented reality, Internet gaming, and Internet of Things (IoT) applications.

  • Fast Speed 5G networks are going to be significantly faster than previous networks. Table 6.1 shows a comparison of download times for a two-hour high-definition (HD) movie. While on a 3G network this takes 24 hours and on a 4G network it takes 7 minutes, on a 5G network it may take as little as a few seconds. (As a rough sanity check, assuming a file size of about 4 GB, which is roughly 32 million kilobits: at 384 Kbps that is about 83,000 seconds, or around 23 hours; at 100 Mbps it is about 320 seconds; and at 10 Gbps only a few seconds.) Even if most users experience speeds that are a fraction of the potential maximum, it will still be impressive.
    Table 6.1: Comparison of Download Speed of Different Networks for a Two-Hour HD Movie
    NETWORK | MAXIMUM DOWNLOAD SPEED | MINIMUM DOWNLOAD TIME
    3G network | 384 Kbps | 24 hours
    4G network | 100 Mbps | 7 minutes
    5G network | 1–10 Gbps or faster | A few seconds
  • Low Latency Latency is the delay between data being sent over the network and it being received at the other end. The 5G network will have a very low latency compared to previous generations of networks, as shown in Table 6.2. (Note that a latency of 1 ms is a goal; current 5G latencies are said to be in the low to mid single digits.)
    Table 6.2: Typical Latency Time of Different Network Generations
    NETWORK GENERATION | TYPICAL LATENCY
    3G | 100 ms
    4G | 50 ms
    5G | 1 ms
  • Better Connectivity 5G will not only connect more people and more devices but also provide more reliable connections, with very little downtime. 5G is claimed to offer always-on connectivity. Table 6.3 shows a comparison of 1G, 2G, 2.5G, 3G, 4G, and 5G technologies.
    Table 6.3: Comparison of 1G, 2G/2.5G/2.75G, 3G, 4G, and 5G
    (Each row lists values for 1G | 2G/2.5G/2.75G | 3G | 4G | 5G.)
    Period: 1970–1990 | 1990–2000 | 2000–2010 | 2010–2020 | 2020–2030
    Bandwidth: 30 KHz | 200 KHz | 20 MHz | 20 MHz | 100 MHz
    Frequency: 800 MHz (analog) | 800 MHz–1.8 GHz (digital) | 1.6–2.1 GHz | 2–8 GHz | 3–300 GHz
    Data Rate: 2 Kbps | 9.6 Kbps (2G), 64–114 Kbps (2.5G), 384 Kbps (2.75G) | 384 Kbps (moving), 2 Mbps (stationary) | 100 Mbps | >1 Gbps
    Standard: MTS, AMTS, IMTS | GSM (2G), GPRS (2.5G), EDGE (2.75G) | IMT-2000, HSDPA (3.5G), HSUPA (3.75G) | Single unified standard | Single unified standard
    Services: Analog voice, no data | Digital voice, SMS, MMS, web browsing (2.5G) | Audio and video streaming, web browsing, IPTV | Enhanced audio and video streaming, web browsing, HD TV | Dynamic information access, wearable devices with AI capability
    Multiplexing: FDMA | TDMA, CDMA | CDMA | CDMA | CDMA
    Core Network: PSTN | PSTN | Packet network | Internet | Internet
    Switching: Circuit | Circuit, packet | Packet | All packet | All packet
    Characteristic: First wireless communication | Digital | Digital broadband | Digital broadband, high speed, all IP | Digital broadband, ultra-high speed
    Technology: Analog cellular | Digital cellular | CDMA, UMTS, EDGE | LTE, WiFi, WiMAX | WWWW, unified IP

1G refers to the first generation of mobile technology, first introduced in the late 1970s. It is an analog technology, for voice communication only. It operated on the 800 MHz band, with 30 KHz bandwidth and a 2 kilobits per second (Kbps) data rate. 1G typically has a cell size greater than 20 km, with each cell allowing up to 395 channels. Because 1G mobile phones need to transmit over long distances, they tend to be bulky. 1G also suffers from poor voice quality and has no security.

2G refers to the second generation of mobile technology, also known as the Global System for Mobile Communications (GSM). It was introduced in the 1990s and is a digital communication technology. It can be used for both voice and text, the latter known as the Short Message Service (SMS). It initially worked on the 900 MHz band and later also on the 1800 MHz band. It has a bandwidth of 200 KHz and a data rate of 64 Kbps. 2.5G is a transition from 2G to 3G, based on General Packet Radio Service (GPRS) technology. It has a slightly higher data rate of 114 Kbps and supports email, web browsing, and the Multimedia Messaging Service (MMS), such as video and photo messages. 2.75G is similar to 2.5G; based on Enhanced Data rates for GSM Evolution (EDGE) technology, it can have a data rate up to 384 Kbps. 2G uses much smaller cells than 1G, which brings several benefits. First, it improves frequency spectrum reuse, which means it can support more mobile users. Second, because the phone does not need a lot of power to transmit over long distances, handsets became smaller. 2G phones also started to include cameras. A 2G network typically has a cell size of several kilometers.

3G refers to the third generation of mobile technology and was introduced in the early 2000s. It mainly uses the 2.1 GHz frequency band with a bandwidth of 20 MHz, although different countries use different frequency bands and slightly different bandwidths. It can have a data rate up to 3 Mbps and supports video calls, chats, navigational maps, mobile gaming, music, movies, and mobile TV. Since the introduction of the iPhone in 2007, 3G mobile phones with touchscreens have also been called smartphones. 3G also introduced significantly stronger security features, such as network access security, network domain security, and application security.

4G refers to the fourth generation of mobile technology and was introduced in 2010. Again, different countries operate at different frequency bands, but mainly at 800 MHz, 1800 MHz, 2.1 GHz, and 2.6 GHz with a bandwidth of 20 MHz. It is based on Long Term Evolution (LTE) and can have a data rate up to 100 Mbps.

5G is the latest generation of mobile technology. It uses much higher frequency bands than the previous generations and can support data rates of 1 Gbps or more. The following are the key technologies used in 5G.

6.9.1 Millimeter Waves

5G uses two frequency ranges: frequency range 1 (below 6 GHz) and frequency range 2 (24–86 GHz). The maximum channel bandwidth in frequency range 1 is 100 MHz, and in frequency range 2 it is 400 MHz. In physics, electromagnetic waves with a frequency between 30 and 300 GHz are called millimeter waves because they have a wavelength ranging from 1 to 10 mm. This is different from the bands below 6 GHz used by previous generations of mobile technology, whose wavelengths are measured in tens of centimeters. Millimeter waves do not travel well; they can be easily blocked by buildings or obstacles, or absorbed by foliage and rain. That is why 5G networks use smaller cells. Also, because of the frequency differences, 5G cells cannot reuse existing 4G/3G/2G base station antennas for transmitting and receiving signals; 5G needs its own antennas and therefore its own infrastructure. All these factors lead to 5G's small-cell technology.

6.9.2 Small Cells

5G's small cells typically have a size of a few hundred meters, compared with the several kilometers of 4G/3G/2G cells. Because they transmit tiny millimeter waves, the antennas on 5G small cells can also be much smaller than traditional antennas. 5G antennas are therefore typically miniaturized and use much less power, which makes them easy to install on lamp posts and the tops of buildings. This new cell structure should provide a more targeted and efficient use of spectrum. However, it also makes 5G harder to roll out in rural areas.

6.9.3 Massive MIMO

The term Multiple-Input, Multiple-Output (MIMO) describes wireless systems that use two or more transmitters and receivers to send and receive more data at once. MIMO is already used in today's 4G networks, in which base stations typically have a dozen antenna ports: eight for transmitters and four for receivers. 5G base stations can support about 100 ports, a capability called massive MIMO. Massive MIMO could significantly increase the capacity of mobile networks by allowing base stations to send and receive signals to and from many more users at once.

6.9.4 Beamforming

The biggest issue for massive MIMO is how to reduce interference while transmitting more information from many more antennas at once. Beamforming is designed for base stations to identify the most efficient data-delivery route to a particular user and to reduce interference for nearby users in the process. Beamforming allows base stations to send individual data packets in many different directions in a precisely coordinated pattern, which allows many users and antennas on a massive MIMO array to exchange much more information at once. Beamforming can also focus a signal in a concentrated beam that points only in the direction of a user to avoid obstacles and reduce interference for everyone else.

6.9.5 Full Duplex

In the current mobile network, a transceiver (transmitter and receiver) of a base station and a mobile phone must take turns transmitting and receiving information over the same frequency or use different frequencies for transmitting and receiving signals at the same time. With 5G, a transceiver will be able to transmit and receive data at the same time, on the same frequency. This is known as full duplex technology and could double the capacity of wireless networks. The main challenge of full duplex is to design a circuit that can route incoming and outgoing signals to avoid collision while an antenna is transmitting and receiving data at the same time. Full duplex also creates more signal interference.

6.9.6 Future 6G and 7G

Believe it or not, even though 5G has yet to fully arrive, there is already research on future 6G and 7G technologies. 6G is not going to replace 5G; instead, it will integrate 5G with the existing satellite network for global coverage. 6G will provide ultra-fast Internet access and will be used in smart homes and smart cities.

7G will go even further, integrating 6G and 5G, providing space roaming, and making the world completely wireless.

For more information about 5G, please visit the following resources:

https://spectrum.ieee.org/video/telecom/wireless/everything-you-need-to-know-about-5g

https://5g.co.uk/guides/how-fast-is-5g/

https://5g.co.uk/guides/5g-frequencies-in-the-uk-what-you-need-to-know/

https://www.qorvo.com/design-hub/blog/small-cell-networks-and-the-evolution-of-5g

http://www.emfexplained.info/?ID=25916

https://www.digitaltrends.com/mobile/what-is-5g/

6.10 Summary

This chapter introduced Java programming for mobile applications. It first introduced the mobile software development tool Android Studio, then provided three mobile example programs, and finally introduced the deployment of mobile application programs. It also introduced MIT App Inventor, another popular way of developing Android applications. Finally, it introduced the next-generation mobile network technology, 5G, and briefly outlined how 5G works and what it can be used for.

6.11 Chapter Review Questions

Q6.1. What is Android?
Q6.2. Compare the market share of Android phones, Apple's iPhones, and other phones.
Q6.3. What are Android SDK and Android AVD?
Q6.4. Write down the steps for downloading and installing Android Studio.
Q6.5. Write down the steps for creating an Android project using Android Studio.
Q6.6. What is an Android Package Kit (APK) file?
Q6.7. What is a Java KeyStore (JKS) file?
Q6.8. Write down the steps for deploying an Android app.
Q6.9. What is the activity life cycle of an Android app?
Q6.10. What is MIT App Inventor? What is the current version?
Q6.11. What is the difference between Designer view and Block view in MIT App Inventor?
Q6.12. What is 5G? What are the main benefits of 5G?
Q6.13. What are the key technologies used in 5G?
Q6.14. What are 6G and 7G?