Chapter 6. A Comprehensive Approach to Testing of Mobile Applications

This chapter will focus on enforcing quality in your mobile applications. It will explore the various aspects of mobile application testing and the environments necessary to facilitate the testing of these applications. It will go into detail on the what, why, and how of testing a mobile application. Assessing and enforcing quality belongs throughout the development lifecycle, and below you will find a description of which types of testing are fruitful at each stage of mobile application development.

Why Is Quality Essential?

As we have discussed so far, the design and implementation choices made when building mobile applications are critical to their overall success. However, a maniacal focus on the quality of the mobile application is equally, if not more, important if the application is to be accepted by your customers. Adopting test-driven development with an agile methodology and automated DevOps (as discussed later in this book) empowers the application owner to focus on the unique selling proposition of the mobile application. Because a low-quality mobile application can be tried quickly and rejected even more quickly, the development team may otherwise find itself constantly addressing concerns over functionality, performance, and feedback from adopters in the app store.

When planning the testing effort for a mobile application, the various tiers of the application should be kept in mind. There is the

• mobile frontend that might be available on various operating systems and form-factors

• middle tier on a server infrastructure, and

• the mobile backend

A comprehensive approach to testing all aspects of the application environment is critical to ensure reliable end to end quality.

Most often, the entire infrastructure for the application environment is not available at all times, which tends to force a waterfall style of development. There are creative ways of getting around this by leveraging virtualization techniques, speeding up development and testing of the application before putting it out for restricted or general use.

A challenge is to make sure that the application works equally well on all supported platforms and form-factors. While it might be impractical to test on each and every configuration and mobile operating system out there, we shall discuss ways in which automation helps reduce the overall cost of assuring application quality. There are several varieties of testing that a team may adopt to ensure overall quality.

This chapter will provide a perspective on testing of mobile applications while demonstrating how overall quality can be ascertained using the IBM Rational toolset.

When Should Quality Be in Focus?

The appropriate time to focus on quality is right from the beginning. If you are developing a modern mobile application, chances are (and it is imperative) that you have adopted an agile method of application development. Previous chapters focused on design and implementation; this chapter focuses on application quality as the application is being built and tested, all the way through maintaining that quality in subsequent releases. We will describe methods of both manual and automated testing.

What Is the Cost of Quality?

The cost of quality has many aspects, and it compounds in proportion to the lack of focus on it. The simplest form is the amount of time (converted to money) spent preventing defects well before the app is released. This is the easy part. Where it compounds is that the cost of not controlling quality during development adds to the cost of formal testing, and the cost of not controlling quality during formal testing can eventually lead to customers rejecting the software for its lack of quality. It is well established that the later a defect is found in software, the more expensive it is to fix and deliver, and that fixing one defect often uncovers other hidden defects that are possibly even more expensive to fix.

In the case of mobile applications, quality drives acceptance or rejection of the application and can create a huge negative impact through bad ratings in the app store. A mobile app with a security vulnerability is exposed to a much higher cost and even more serious consequences.

With accelerated release timelines and the potentially high cost of quality for a mobile application, the challenge is to create an application that is both accepted and adopted. In this chapter, we will discuss some of the quality-related steps that an app developer should take toward this goal.

Automated versus Manual Testing

Some mobile app testing is performed in an automated, unattended manner, while other testing needs to be done in an interactive, manual style. The best comprehensive quality assessment of your mobile app will employ a balanced combination of both automated and interactive testing.

Automated mobile app testing is critical to accelerate delivery of your application and maintain the velocity of your mobile app development lifecycle. There are a wide variety of automated testing techniques that can be applied to mobile apps. Each technique has certain strengths and it is important to strike a balance across the forms of automated mobile app testing.

• Random generated mobile tests (aka “monkey” testing)

• Keyword-based mobile app test scripts

• Programmatic user interface (UI) testing frameworks (UIAutomator/UIAutomation)

• Behavior Driven Development (BDD) testing

• Image recognition-based automation

• Instrumented application object/event-based automation

We will discuss the pros and cons of some of these automated techniques in subsequent sections of this chapter, but next consider how manual interactive mobile app testing fits into your quality regimen.

Automated testing of your mobile application is not sufficient to ensure the best quality app. There are aspects of the quality of your app that cannot be determined using automated testing techniques alone. The “look and feel” of the application, its usability, the logical flow of the user journey through the application function—these are some of the aspects of a mobile app that are more subjective in nature, and therefore, are a better fit for human-executed interactive manual testing and assessment.

A best practice of mobile app testing strategy strikes a balance between automated tests and interactive human based testing (see Figure 6.1). The ideal quality cycle begins by running a battery of automated tests against the output of the continuous integration build process for your mobile app. Once this initial battery of automated tests has verified that the latest build meets minimum quality criteria, the build can be distributed to a group of testers/internal evaluators who will perform the interactive testing.


Figure 6.1 Ideal mobile app quality cycle

After both automated and interactive tests have passed for the mobile app, it is a candidate to be released to production use and distributed to real end users (presumably via a public app store or a private enterprise app store). Even after the mobile app has been released into production, you can continue to obtain quality assessment data about the application.

Preproduction versus Postrelease

The main focus for quality assurance of mobile apps is on the preproduction phases of the development lifecycle. However, quality assessment of the app should not end when the app is released into production. There is still very important data about the behavior of the mobile app “in the wild” that can be obtained and used to help the developers continuously improve the app.

It is not practical to expect that every conceivable defect in a mobile app can be caught and fixed before it is released into production. Even after the app is deployed to the app store and installed on end users’ devices, the best mobile apps continue to capture context information for every crash that occurs and deliver that information back to the app development team. Capturing this data also reveals the most frequently used portions of the application and the user experience around them, which is one way of identifying the application enhancements that would be most effective.

Many times, a crash occurs only under very specific conditions that are difficult or impossible to recreate in the test lab. The problem could arise from the use of an unusual or old mobile device model, from special network conditions, or from the combination of certain other mobile apps running concurrently with your mobile app. These real-world circumstances are impossible to anticipate and impractical to cover in all permutations in preproduction test time frames. So your development organization has to depend on receiving good technical contextual data when crashes occur in the field, so that the root cause can be quickly determined.

Besides outright crashes, it is valuable to solicit feedback from your end users about how they perceive the mobile app. Most popular mobile apps include some kind of “in-app feedback” mechanism so that users who would not take the time and effort to write a review in the app store can at least send the development team a short message about how they view the app. And it is especially important to capture the context of such feedback at the time it is submitted, so that the development team can know if special conditions are contributing to the impression conveyed by that user in their feedback.

Automated Mobile App Testing Considerations

There are several aspects of automated mobile app testing that bear special attention, including the devices used for testing, isolation of the code running on the mobile device, and the specific technique employed to create and execute mobile test automation.

Test Devices

A crucial consideration for automated mobile testing, regardless of the type of testing to be done, is the type of mobile device (or devices) on which to execute the automated tests. The automated tests could be executed on a small number of tethered real physical mobile devices. Or they could be executed on emulator programs running on the developer’s workstation. Several vendors offer a remote “mobile device cloud” as a potential target for test execution. Device emulator programs could even be scaled up in a virtual device cloud to provide elastic compute capabilities as testing load varies between peak activity times.

Emulators and Simulators

Emulators come with all of the native mobile operating system development kits, and simulators are available from several sources (including IBM). Emulators attempt to replicate the actual mobile operating system running on top of some other hardware, such as a PC workstation. Simulators do not attempt to replicate the mobile OS, but instead provide a light-weight simulation of the UI.

Using emulators and simulators for some amount of mobile app testing can be cost-effective, especially in the early stages of code development. The typical code/deploy/debug cycle, particularly for simulators, is much more rapid than for physical devices, and the use of these tools eliminates the need for a developer to have the real physical device in hand.

However, there are subtle differences in behavior between device emulators and real physical devices (even for the very best emulator programs), and simulators do not allow execution of some parts of the application logic flow (only the UI look and flow). So, while emulators and simulators can be used to cut costs and speed development, they are generally not acceptable as the only form of test execution for mobile apps.

IBM offers mobile simulators as part of the development tools for the IBM mobile enterprise solution. Emulators are always supplied directly by the vendors of the mobile operating systems (Apple, Google, Microsoft, RIM, etc.).

Device Clouds

Broad-scale testing on real mobile devices is crucial for any app that will be released into the consumer market or within organizations that have adopted a bring-your-own-device strategy. There are several approaches for on-device testing, including the category called device clouds. What about the problem of the sheer number of different physical mobile devices on the market? There are literally thousands of different device types running different release levels of mobile operating systems and connected to different network carriers and wireless networks. The combinatorial complexity of the universe of possible permutations is almost beyond comprehension, and the cost of owning, setting up, and managing all of those combinations is prohibitive even for very well-funded projects.

A technique that can address this problem is to employ a “device cloud” testing solution. Device cloud is a term used to describe a very large array of real physical devices that have been made remotely available for access across the Internet much in the same way that general compute resources are made available in a generic software “test cloud” solution.

The test organization arranges to reserve some mobile devices for test for a certain amount of time, and deploys the mobile app code to the devices where automated tests are run using whatever “on device” automated test solution is the choice of the test organization. Once the current testing cycle is completed, the reserved devices are relinquished back to the “device cloud” where they are available to be used for other mobile app testing, potentially by a completely different project.

This technique does not eliminate the need for manual interactive testing by humans, and in fact works best in conjunction with those other techniques. What this approach is good at is reducing the cost of ownership for the huge variety of device types that exist and can be expected to be employed by the users of the mobile app once it gets into production.

A test organization can invest in purchasing just a few key mobile devices and “rent” the rest of the combinations from the “device cloud.” The same automated techniques for mobile testing used on the stand-alone physical devices can also be used for the devices in the “cloud,” so results of the automated testing are consistent. While it handles the array of devices to test on, this approach does not provide for testing connectivity states other than turning off the connection. One of the most overlooked testing scenarios is what happens if the device disconnects during various key uses of the app. Some services do offer the ability to select which carrier the device is running on, but ideally tests should be run in as close to real use scenarios as possible.

There are issues related to this kind of testing, whether the resources in the cloud are general compute resources or mobile device resources. Issues of security of the app under test, public or private device cloud, and balancing the cost of the cloud with the potential cost of the defects eliminated are all issues that any cloud testing solution has to address.

IBM does not offer its own device cloud solution. Instead, we offer integration between our overall mobile testing management solution and a variety of business partners who have device clouds.

Crowd-Sourced Testing

Many organizations find it challenging to get feedback from their own internal users for prereleases of their mobile applications. As a result, a number of companies have brought to market solutions that allow an application to be distributed to live testers out in the field.

What is interesting about using these testing services is that you get real user behavioral feedback. You can also test in the regions where you will deploy. Users can even be asked to run through usage scripts to ensure that anticipated use is specifically tested, although in practice this does not always work very well.

Crowd-sourced testing introduces the same security risks mentioned for device clouds, so if the application accesses sensitive production systems or data that should not be public, this approach should not be considered.

Using Service Virtualization to Isolate Mobile Code

Because mobile applications are multi-tier architectures, the process of setting up the infrastructure to support test execution of the code on the mobile device can be time-consuming and costly. All of the middleware servers and services need to be up and available, and typically it is not acceptable to use real production servers for testing purposes.

In addition, many test case failures can occur not because of defects in the code under test, but instead because of problems in the connected components of the application running in other tiers. In other words, if the middle-tier app server has a problem, the mobile device accessing it will fail its test case.

Cost and deployment delays can be eliminated through use of solutions that effectively replicate connecting components of the multi-tier system so that the testing can concentrate narrowly on the code executing on one specific tier of the app. By leveraging solutions such as the service virtualization capability in IBM Rational Test Workbench, test teams can avoid the need to set up complex middleware environments in support of test execution for code running on the mobile devices.

Service virtualization can emulate the middle tier and backend services and protocols so that the test execution can concentrate on the client tier of the mobile app that is running on the device itself. Conversely, the middle-tier components of the mobile app need to be validated also, and service virtualization can emulate the protocols delivered by the mobile device clients so that tests can be focused solely on the middle-tier functions and services, without need of coordinating physical mobile devices.
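As a minimal illustration of the concept (not the IBM Rational Test Workbench mechanism), a virtualized backend can be as simple as a lightweight stub server that answers the same requests the real middle tier would, with the mobile client under test pointed at the stub instead of production services. In the sketch below, the endpoint path and response payload are hypothetical.

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Minimal stand-in for a virtualized backend service: the mobile client under
// test is pointed at this stub instead of the real middle tier, so client-side
// tests do not depend on the availability of production servers.
public class VirtualizedAccountService {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // Hypothetical endpoint; replace with whatever services your app calls.
        server.createContext("/api/accounts/balance", exchange -> {
            byte[] body = "{\"accountId\":\"12345\",\"balance\":250.75}"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });

        server.start();
        System.out.println("Virtual service listening on http://localhost:8080");
    }
}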

Mobile Test Automation Techniques

Staying ahead with mobile apps means frequent iterations with new features. As companies create more updates for their apps, testing quickly gets out of control when trying to do everything manually.

Ultimately there is a requirement to augment and accelerate manual functional verification with some form of automated testing of the code that is executing on the mobile device. This area of the software testing market is new and evolving. There are a variety of approaches that have been created by different vendors. Some of these approaches for automated function test are better suited to typical mobile business applications than others.

Mobile App Programmatic Instrumentation

A typical approach is to place some kind of additional code on the device where the automated testing is to occur. This code acts as a local “on device” agent that drives automated user input into the application and monitors the behavior of the application resulting from this input.

The instructions for telling the agent what to input into the app are typically formatted as either a script or an actual computer program (for instance, written in Java). Creation of these automated test instructions usually requires some proficiency in programming, and many test organizations are short on such skills. Furthermore, creation of these automated test programs is a development effort in its own right, which can delay the delivery of the mobile app into production.
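To make this concrete, the sketch below shows what such a programmatic on-device test might look like, written against Android’s open source UiAutomator API. The package name, activity, button label, and expected screen text are hypothetical placeholders, and this illustrates the general technique rather than any particular vendor’s agent.

import static org.junit.Assert.assertNotNull;

import androidx.test.platform.app.InstrumentationRegistry;
import androidx.test.uiautomator.By;
import androidx.test.uiautomator.UiDevice;
import androidx.test.uiautomator.Until;

import org.junit.Test;

// Sketch of a programmatic on-device UI test using Android's UiAutomator API.
// The package name, activity, button label, and expected text are hypothetical
// placeholders for your own app.
public class LoginFlowTest {

    private static final String APP_PACKAGE = "com.example.mybankapp";

    @Test
    public void loginButtonOpensAccountScreen() throws Exception {
        UiDevice device = UiDevice.getInstance(
                InstrumentationRegistry.getInstrumentation());

        // Launch the app under test from the home screen (activity name is hypothetical).
        device.pressHome();
        device.executeShellCommand("am start -n " + APP_PACKAGE + "/.MainActivity");

        // Drive user input: tap the "Log in" button in the app under test.
        device.wait(Until.hasObject(By.text("Log in")), 5_000);
        device.findObject(By.text("Log in")).click();

        // Verify the resulting behavior: the account screen should appear.
        assertNotNull("Account screen did not appear",
                device.wait(Until.findObject(By.text("Accounts")), 5_000));
    }
}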

IBM’s point of view is that the creation of automated mobile function test scripts should be possible for testers who have no programming skills at all. The tester should be able to put the application into “record mode” and interact with the app normally while the testing solution (i.e., the test agent running on the device) captures the user input from the tester and converts it into a high-level automation script. Once the tests have been “recorded” into a language that is close to natural written instructions, these test scripts can be further edited, organized, managed, and replayed whenever necessary.

Furthermore, since the language employed for the captured test scripts is nearly natural human language, the tester can easily read and modify the script to add elaboration and additional verification points to the instructions. If the language is suitably abstracted from the details of the underlying mobile operating system, these scripts can be executed against real physical devices other than the type used to capture the script in the first place.

IBM has produced just such an automated mobile app test solution and delivers it in both the mobile app development environment (IBM MobileFirst Platform Studio) and a software testers’ solution (IBM Rational Test Workbench). This technique for automating mobile function tests is quite complementary to the other techniques described in this chapter and can be used very effectively in combination with them.

Random Generated Mobile Tests

Random generated (sometimes called “monkey”) tests have the advantage of not requiring any scripting or coding of automation instructions. Instead of executing a precreated automation script or program, this type of testing introspects the mobile app and generates random pathways of interaction with the app. This random input to the app is executed until a fatal error in the app occurs (for instance, a crash or freeze of the app).

The execution of randomly generated automated tests has proven to be quite valuable for quickly uncovering serious defects in mobile apps that would not have been uncovered through typical scripted tests. Any battery of automated tests for a mobile app should include some amount of this kind of testing. It is not uncommon for this kind of testing to uncover defects in the app within just the first few seconds of execution. As the defects in the mobile app get surfaced and fixed, it may take longer and longer to identify problems in the app using the random test method. But that should be considered a good outcome from use of this technique over sufficient time.
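As a rough illustration of the idea, the following sketch fires taps at random screen coordinates using Android’s UiAutomator API and relies on crash capture to flag any fatal error. Real monkey tools additionally introspect the app to generate smarter event sequences, so treat this as a minimal approximation; the tap count and seed are arbitrary assumptions.

import androidx.test.platform.app.InstrumentationRegistry;
import androidx.test.uiautomator.UiDevice;

import org.junit.Test;

import java.util.Random;

// Illustrative sketch of random ("monkey") input: fire taps at random screen
// coordinates and rely on crash capture to flag any fatal error. Real monkey
// tools also introspect the app to generate smarter event sequences.
public class RandomInputTest {

    @Test
    public void randomTapStorm() {
        UiDevice device = UiDevice.getInstance(
                InstrumentationRegistry.getInstrumentation());
        Random random = new Random(42);  // fixed seed so a failing run can be replayed

        int width = device.getDisplayWidth();
        int height = device.getDisplayHeight();

        for (int i = 0; i < 1_000; i++) {
            // Tap a random point; a crash or freeze will fail the instrumentation run.
            device.click(random.nextInt(width), random.nextInt(height));
        }
    }
}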

Image Recognition Automated Mobile Tests

Another form of automated testing for mobile apps employs the display images from the mobile device and pixel locations on the device screen. The device display image can usually be captured and automatically compared to a known good image for verification. Automated input to the app under test is defined as a set of tap events targeted at a pixel location on the device display rather than at an internal programmatic application object.

The advantage of this approach for test automation is that it is completely agnostic to the mobile operating system and to the technology used for internal implementation of the mobile app. It is more similar to how a real human interacts with and perceives the mobile app. And highly skilled programmers are not required to produce the automation scripts.

The downside of this approach to automation is that the scripts are highly susceptible to display changes in the mobile app. If the location of a particular widget is changed by a new build of the app, then any script that depends on the pixel location of that widget will be broken. Some vendors employ advanced algorithms that reduce this “brittleness” in the test scripts. But because this technique is so sensitive to app display changes, it is best used later in the development process, when the amount of anticipated visual change in the app is minimized.
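The following sketch shows the core of pixel-based verification under simple assumptions: a captured screenshot is compared against a known-good baseline image, and the check passes if no more than an assumed 1 percent of pixels differ. Commercial tools use far more sophisticated matching to reduce the brittleness described above.

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

// Sketch of image-based verification: compare a captured device screenshot
// against a known-good baseline, pixel by pixel, allowing a small tolerance.
// The file inputs and the 1% threshold are illustrative assumptions.
public class ScreenComparer {

    public static boolean matchesBaseline(File actualPng, File baselinePng) throws Exception {
        BufferedImage actual = ImageIO.read(actualPng);
        BufferedImage baseline = ImageIO.read(baselinePng);

        if (actual.getWidth() != baseline.getWidth()
                || actual.getHeight() != baseline.getHeight()) {
            return false;  // different resolutions cannot match
        }

        long differing = 0;
        for (int y = 0; y < actual.getHeight(); y++) {
            for (int x = 0; x < actual.getWidth(); x++) {
                if (actual.getRGB(x, y) != baseline.getRGB(x, y)) {
                    differing++;
                }
            }
        }

        long total = (long) actual.getWidth() * actual.getHeight();
        return differing <= total / 100;  // pass if no more than 1% of pixels differ
    }
}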

Making Manual Testing More Effective

Manual testing is the most common approach for mobile testing in use in the industry today. It is an essential element of any quality plan for mobile apps because it is the only technique that currently provides results for the consume-ability of the app. Intuitiveness and consume-ability are crucial aspects of successful mobile apps and, so far, we do not have a mechanism for automating the testing of this aspect of the code.

But manual testing is also the most time-consuming, error-prone, and costly technique for mobile testing. Manual testing can be combined with other techniques such as the aforementioned crowd-sourcing and “device-cloud” techniques so that the costs and time required can be somewhat mitigated. Solutions that organize the manual test cases, guide the tester through execution, and store the test results can substantially reduce the costs associated with manual testing.

IBM offers a hosted, Software-as-a-Service capability designed to make interactive manual testing significantly more efficient and effective. IBM Mobile Quality Assurance (MQA) services begin the process of making your interactive testing efficient with an “over-the-air” app distribution capability that makes it easy for app developers to deliver updates and new mobile app builds to a targeted set of testers directly on their mobile devices.

When a new build of the app is ready (i.e., passes the initial battery of automated tests) the developer can upload the app binary (.apk or .ipa file) to the IBM MQA service and identify the people who should be notified about the availability of this new build. These app evaluators/testers receive an email notifying them about the new build. And when the tester clicks on a link in the notification email, the new app build is automatically downloaded to their mobile device and installed, ready for immediate testing.

The mobile app tester can be confident that they have the correct build of the app to be tested. As they perform interactive manual testing of the app, when they encounter a defect of any kind, they can use IBM MQA “in-app” bug reporting capability to submit a defect right from inside the app being tested on their mobile device.

The tester simply shakes their mobile device and the app being tested will go into “bug reporting mode.” This mode suspends normal behavior of the app and allows the user to capture one or more screen images from the mobile app. The screen image can be augmented with annotations made with the tester’s fingers (lines, circles, arrows, anything that you can draw with your fingers).
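Under the covers, a shake trigger of this kind can be implemented with the device accelerometer. The following sketch shows one generic way to do it on Android; it is not the IBM MQA implementation, and the sensitivity threshold and the onShake callback are illustrative assumptions.

import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// Generic sketch of shake detection used to trigger an in-app bug-report mode.
// This is not the IBM MQA implementation; the acceleration threshold and the
// onShake callback are illustrative assumptions.
public class ShakeDetector implements SensorEventListener {

    private static final float SHAKE_THRESHOLD_G = 2.5f;  // assumed sensitivity
    private final Runnable onShake;

    public ShakeDetector(Context context, Runnable onShake) {
        this.onShake = onShake;
        SensorManager sensorManager =
                (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
        sensorManager.registerListener(this,
                sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
                SensorManager.SENSOR_DELAY_NORMAL);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        float gX = event.values[0] / SensorManager.GRAVITY_EARTH;
        float gY = event.values[1] / SensorManager.GRAVITY_EARTH;
        float gZ = event.values[2] / SensorManager.GRAVITY_EARTH;
        double gForce = Math.sqrt(gX * gX + gY * gY + gZ * gZ);

        if (gForce > SHAKE_THRESHOLD_G) {
            onShake.run();  // e.g., suspend normal behavior and start capturing screens
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // no-op
    }
}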

After the screen image is captured, the tester is presented with a text box to be used to describe the defect in words. Once the description of the problem is entered, the tester taps on the “Report” button and the defect information is sent over the network to the IBM MQA service. Along with the explicit information (screen images and text description) from the tester, rich technical details about the context of the mobile app and the device on which it was running are captured and sent as well.

Some of the rich context for each defect captured includes:

• Mobile device type

• Mobile operating system and release level

• Network in use, including carrier or wireless details

• Memory available and in use on the device

• Logging output up to the point when the defect is reported

• Battery level

This detailed technical information is invaluable for helping the mobile app developers troubleshoot the defect and understand the root cause of the problem.

If you use IBM Bluemix DevOps services for defect and work item tracking and management, you can configure IBM MQA services to automatically open work items for each crash or bug report that comes into the service.

Crash Data Capture and Analysis

In addition to the in-app bug reporting capability of IBM MQA services, every application crash is captured by the service logic. Each time the application crashes, whether during preproduction testing or after the app is released into production, the entire context of the application and the device on which it was running is captured at the moment of the crash. This critical “must gather” data is sent over the network to the IBM MQA services where it is analyzed and made available to the development team.
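To illustrate the general mechanism (not the IBM MQA implementation), a crash-capture agent typically installs a default uncaught-exception handler that records device state at the moment of the crash and queues it for upload, as in the sketch below; the queueForUpload helper and the specific context fields are hypothetical.

import java.util.HashMap;
import java.util.Map;

// Generic sketch of crash-context capture: install a default uncaught-exception
// handler that records device state at the moment of the crash and hands it to
// an upload queue. This illustrates the mechanism only; it is not the IBM MQA
// implementation, and queueForUpload() is a hypothetical helper.
public class CrashReporter implements Thread.UncaughtExceptionHandler {

    private final Thread.UncaughtExceptionHandler previousHandler;

    public static void install() {
        Thread.setDefaultUncaughtExceptionHandler(
                new CrashReporter(Thread.getDefaultUncaughtExceptionHandler()));
    }

    private CrashReporter(Thread.UncaughtExceptionHandler previousHandler) {
        this.previousHandler = previousHandler;
    }

    @Override
    public void uncaughtException(Thread thread, Throwable crash) {
        Map<String, String> context = new HashMap<>();
        context.put("thread", thread.getName());
        context.put("exception", crash.toString());
        context.put("freeMemoryBytes",
                String.valueOf(Runtime.getRuntime().freeMemory()));
        context.put("osVersion", System.getProperty("os.version"));

        queueForUpload(context);  // hypothetical: persist locally, send on next launch

        if (previousHandler != null) {
            previousHandler.uncaughtException(thread, crash);  // let the app still crash
        }
    }

    private void queueForUpload(Map<String, String> context) {
        // In a real agent this would write to local storage and upload to the
        // crash-analysis service when the network is available.
        System.err.println("Captured crash context: " + context);
    }
}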

The crash data capture capabilities of IBM MQA services can be leveraged during the initial automated battery of tests on the app, during the manual interactive phase of testing, and even after the app has been released to the app store.

Additional analytics within the service allow crashes that occur at the same spot in the mobile app to be recognized and aggregated so that you can see how many times a crash occurs at the same location in the app logic. This crash occurrence count is important information that helps your development team prioritize which crashes to fix first. A crash that occurs 1,000 times is more important to fix than one that occurs only once or twice.
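Conceptually, this aggregation amounts to grouping crash reports by the location in the app logic where they occurred and counting the occurrences, as in the simple sketch below. The CrashReport type is a hypothetical representation of one uploaded crash, not a type from the IBM MQA service.

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch of crash aggregation: group incoming crash reports by the location in
// the app logic where they occurred (here, the topmost application stack frame)
// and count occurrences so the most frequent crashes can be fixed first.
// CrashReport is a hypothetical type representing one uploaded crash.
public class CrashAggregator {

    public record CrashReport(String topAppStackFrame, String deviceModel) { }

    public static Map<String, Long> occurrencesByLocation(List<CrashReport> reports) {
        return reports.stream()
                .collect(Collectors.groupingBy(
                        CrashReport::topAppStackFrame,
                        Collectors.counting()));
    }
}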

Performance Testing

Performance testing of mobile apps can be viewed from two distinct directions. One form of performance testing is to scale up the number of mobile client instances running concurrently and drive large loads of requests against the middleware and server components of the app. This load and stress testing is important to ensure that the services supporting the mobile app will be able to absorb the traffic that mobile apps are capable of delivering.

The other dimension of mobile performance testing involves measuring how efficiently the code running on the mobile device makes use of the device resources. There are many ways in which a mobile app can inadvertently consume too much of the device’s resources (memory, CPU, network, battery) and become unwelcome on the end user’s mobile device. A mobile app that is a “resource hog” is likely to be perceived to have poor quality and to be deleted from the user’s phone.

Load and Stress Performance Testing

Performance testing for load and stress is one of the more difficult testing scenarios. Typically this involves setting up a test harness that pours sample transactions into your backend at whatever rate you set. For internal enterprise apps, it is most critical to test peaks and maximums; with consumer apps you need those tests plus spikes and lulls to replicate the natural usage patterns you would see.

Load and stress testing products such as IBM Rational Performance Tester (part of the IBM Rational Test Workbench) include a recording proxy that allows you to capture the traffic patterns between your mobile app and the services it calls across the network. The recorded interactions are stored as a script that can be run in hundreds or thousands of virtual clients in order to apply load to the mobile app backend services.
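The essence of that approach is replaying a recorded interaction from many concurrent virtual clients. The following sketch shows the idea at a very small scale using the JDK’s HttpClient; the endpoint URL, client count, and iteration count are illustrative assumptions, and a product such as Rational Performance Tester manages this with recorded scripts at far larger scale.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch of load generation with virtual clients: replay the same recorded
// request from many concurrent threads against the mobile backend. The URL,
// client count, and iteration count are illustrative assumptions.
public class BackendLoadDriver {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://test.example.com/api/accounts/balance"))
                .GET()
                .build();

        int virtualClients = 200;
        ExecutorService pool = Executors.newFixedThreadPool(virtualClients);

        for (int i = 0; i < virtualClients; i++) {
            pool.submit(() -> {
                for (int call = 0; call < 50; call++) {
                    try {
                        HttpResponse<String> response = client.send(
                                request, HttpResponse.BodyHandlers.ofString());
                        if (response.statusCode() >= 500) {
                            System.err.println("Server error under load: "
                                    + response.statusCode());
                        }
                    } catch (Exception e) {
                        System.err.println("Request failed: " + e.getMessage());
                    }
                }
            });
        }

        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
    }
}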

At the same time that the mobile app backend services are operating under synthetic load generated by the stress testing tool, it is a good practice to execute functional tests against the mobile app itself to see if behavior of the client side changes when the backend services are under heavy load.

IBM’s point of view on performance is that direct integration architectures are brittle and do not enable graceful handling of issues related to volume and stress. Leveraging mobile middleware creates a separate tier that can shield the backends from catastrophic conditions.

Mobile Client Resource Metrics

Client-side performance is affected by many different factors, from the level of graphics used in an app to poor coding and bad practices. Measuring and monitoring critical areas such as battery use, device resource use (such as obtaining GPS coordinates), and how traversal through the screens of an app affects screen load time allows apps to be tuned to remove potential resource hogging.

It is especially effective to correlate the measurements of mobile device resources with the functional tasks within the mobile app. Being able to associate a spike in memory usage with a specific functional activity in the app is invaluable for developers to quickly pinpoint the area of the app code that needs to be addressed in order to resolve excessive resource usage.
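One simple way to establish that correlation is to wrap each functional task in a measurement probe that samples resource use before and after the task, as in the following sketch. The heap-only measurement and console logging are simplifying assumptions; on a real device you would also sample battery, CPU, and network counters and forward the results to your analytics service.

// Sketch of correlating resource use with a functional task: sample JVM heap
// before and after a named operation and report the delta. The task name and
// logging destination are illustrative assumptions.
public class ResourceProbe {

    public static <T> T measure(String taskName, java.util.function.Supplier<T> task) {
        Runtime runtime = Runtime.getRuntime();
        long beforeBytes = runtime.totalMemory() - runtime.freeMemory();
        long startNanos = System.nanoTime();

        T result = task.get();

        long elapsedMillis = (System.nanoTime() - startNanos) / 1_000_000;
        long afterBytes = runtime.totalMemory() - runtime.freeMemory();
        System.out.printf("%s: %d ms, heap delta %+d KB%n",
                taskName, elapsedMillis, (afterBytes - beforeBytes) / 1024);
        return result;
    }
}

A caller would wrap a specific functional activity, for example ResourceProbe.measure("load account list", accountService::fetchAccounts), where accountService is a hypothetical component of your app.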

User Sentiment as a Measure of Quality

Once your app has been released into production, your end users will begin to post reviews in the app store. App store review comments also offer a rich source of quality assessment.

If your mobile app has only a couple dozen reviews in the app store, it is easy enough to read each one and gain insight into the sentiment of your users. But once the number of reviews reaches several dozen or more, you need an analysis tool to effectively and efficiently mine the key insights from that amount of data. This app store review user sentiment analysis can shed light on trends in the perception of your mobile app audience, and can be useful to correlate with the other quality assessment measurements such as crash data captured from production versions of the app.

For example, the IBM MQA service includes an app store review analysis capability that captures all of the review text and searches for a set of special “user sentiment” key words in each review text. Analysis for app store reviews is organized into 10 distinct “attributes” about your mobile app, such as:

• Usability

• Stability

• Performance

• Elegance

You can drill down into each of the user sentiment attributes for your mobile app and see the analysis used to produce the score for that attribute, even going so far as to see the specific list of reviews containing comments about that attribute.
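A much simplified sketch of this style of keyword analysis is shown below: each review is scanned for words associated with a quality attribute, and the mentions are tallied per attribute. The keyword lists and the simple counting scheme are illustrative assumptions, not the scoring algorithm used by the IBM MQA service.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of keyword-based review analysis: scan each review for words
// associated with a quality attribute and tally the mentions. The keyword
// lists and counting scheme are illustrative assumptions.
public class ReviewAttributeScorer {

    private static final Map<String, List<String>> ATTRIBUTE_KEYWORDS = Map.of(
            "Usability", List.of("easy", "confusing", "intuitive"),
            "Stability", List.of("crash", "freeze", "hangs"),
            "Performance", List.of("slow", "fast", "laggy"),
            "Elegance", List.of("beautiful", "ugly", "clean"));

    public static Map<String, Integer> score(List<String> reviews) {
        Map<String, Integer> mentions = new HashMap<>();
        for (String review : reviews) {
            String text = review.toLowerCase();
            ATTRIBUTE_KEYWORDS.forEach((attribute, keywords) -> {
                for (String keyword : keywords) {
                    if (text.contains(keyword)) {
                        mentions.merge(attribute, 1, Integer::sum);
                    }
                }
            });
        }
        return mentions;
    }
}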

This app store user sentiment is invaluable, especially when you correlate the sentiment expressed by the reviewers with hard technical evidence in the crash reports and in-app user feedback records.

Some users will not take the time to post a review in the app store but will comment about their perception of your mobile app in their social media networks, so other sources of user sentiment include comments made by your users on Facebook, Twitter, or LinkedIn. Social media analytics can be used to uncover what your users are saying to their communities about your mobile app.

Summary

Our point-of-view for a comprehensive solution for mobile testing and quality management is an approach that encompasses the full spectrum of mobile test techniques currently in use in the industry. The key elements of this solution are:

1. Run a suite of automated mobile tests against each build of your mobile app.

2. After sufficient automated testing has completed successfully, distribute the app to a group of human testers to perform interactive manual testing on the app.

3. Organize and manage the various test execution tools for mobile app testing (both the mobile frontend and the supporting middleware and backend services) using a test management product such as IBM Rational Quality Manager. Consolidate and link the test results from these multiple execution tools into a single mobile app quality metric, and link data from test case failures back to development work items for defect removal.

4. Use service virtualization, such as that available in IBM Rational Test Workbench, to isolate the various tiers of the mobile app so that testing can be concentrated solely on specific tiers. Test the code on the mobile device without needing to have a complete middle tier up and running. Test the middle-tier mobile app logic without having to coordinate large numbers of mobile device clients.

5. Automate the tests for the code that executes directly on the mobile device using the IBM Rational Test Workbench automated mobile testing capability (either directly or using the IBM MobileFirst Platform Studio development environment). There is no need to hire specialized skilled programmers in order to produce the automated versions of your mobile test cases.

6. Concentrate your investment in real physical devices on only the highest-priority device types and OS release levels. Rent the other permutations from a device cloud vendor that is integrated with your test management solution (such as IBM Rational Quality Manager) so that the same consistent tests can be applied to the “cloud devices” as are used for your in-house physical devices.

7. Use emulators as target test devices for your everyday automated regression testing. Use the same automated mobile test capability to execute emulator test cases as you use for the real physical devices. An automated mobile app testing solution, such as IBM Rational Test Workbench, should work on both emulators and real devices.

8. Organize your manual test cases into logical suites, and reduce the costs and increase the efficiency of your manual test efforts using IBM Rational Quality Manager manual test management capabilities.

9. Instrument your mobile app with IBM MQA code so that your manual interactive testers can submit bug reports directly from within the app being tested on their mobile devices.

10. Employ IBM MQA mobile app crash data capture services to obtain deep technical context information about each crash that occurs in your mobile app, whether during preproduction testing or after the mobile app is released into production use.

11. Leverage “in-app” user feedback to make it easy and efficient for your end users to communicate with your development team about their perception of the mobile app. Capture the context of the app and the device on which it is running every time a user submits feedback.

12. Use an analysis tool to aggregate and gain insight from the app store review comments for your app. Correlate this app store user sentiment analysis with the other quality assessment data that you continue to gather about your mobile app in the field.
