Appendix A

Mobile Cloud Resource Management

This appendix first provides an overview of cloud resource management frameworks and their requirements, and then discusses several existing mobile cloud resource management platforms.

A.1 Overview of Cloud Resource Management

The tasks of cloud resource management include resource allocation, recycling, scheduling, and monitoring. The resources to be managed include computation resources, such as CPU and GPU time; storage resources, such as memory and disks; and network resources, such as the connections between virtual machines and devices. This section presents four major cloud resource management systems: OpenStack, CloudStack, Eucalyptus, and OpenNebula.

A.1.1 OpenStack

OpenStack [220] is a free and open-source software platform for cloud computing, mostly deployed as an IaaS. A high-level view of the OpenStack framework is presented in Fig. A.1. The software platform consists of interrelated components that control hardware pools of processing, storage, and networking resources throughout a data center. Users manage it through a web-based dashboard, command-line tools, or a RESTful API. OpenStack.org released it under the terms of the Apache License. OpenStack began in 2010 as a joint project of Rackspace Hosting and NASA. As of 2016, it is managed by the OpenStack Foundation, a nonprofit corporate entity established in September 2012 to promote OpenStack software and its community. More than 500 companies have joined the project.

Figure A.1 OpenStack. Source: https://www.openstack.org/software/

The OpenStack community collaborates around a six-month, time-based release cycle with frequent development milestones. Its software release history is presented in Table A.1. During the planning phase of each release, the community gathers for an OpenStack Design Summit to facilitate developer working sessions and to assemble plans.

Table A.1

History of OpenStack Releases


A.1.1.1 OpenStack Software Architecture

Fig. A.2 shows the relationships among the OpenStack services. OpenStack consists of several independent parts, named OpenStack services. All services authenticate through a common Identity service. Individual services interact with each other through public APIs, except where privileged administrator commands are necessary.

Figure A.2 Conceptual Architecture of OpenStack. Source: https://docs.openstack.org/admin-guide/common/get-started-conceptual-architecture.html#get-started-conceptual-architecture

Internally, OpenStack services are composed of several processes. All services have at least one API process, which listens for API requests, preprocesses them, and passes them on to other parts of the service. With the exception of the Identity service, the actual work is done by distinct processes.

For communication between the processes of one service, an AMQP message broker is used. The service's state is stored in a database. When deploying and configuring the OpenStack cloud, there are several message broker and database solutions that can be used, such as RabbitMQ, Qpid, MySQL, MariaDB, and SQLite.
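As a generic illustration of this pattern (not OpenStack's internal messaging code), the following Python sketch uses the pika client against a RabbitMQ broker assumed to run on localhost: one process of a service publishes a task message to a queue and another process consumes it. The queue name and message body are made up for the example.

# Illustration of AMQP-based messaging between two processes of one service.
# Generic pika/RabbitMQ sketch, not OpenStack's actual messaging code; the
# queue name and message contents are invented for the example.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="compute.tasks")

# API process: hand a unit of work to a worker process via the broker.
channel.basic_publish(
    exchange="",
    routing_key="compute.tasks",
    body='{"action": "spawn_instance", "instance_id": "abc-123"}',
)

# Worker process: pull the message off the queue and act on it.
method, properties, body = channel.basic_get(queue="compute.tasks", auto_ack=True)
if body is not None:
    print("received task:", body.decode())

connection.close()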

Users can access OpenStack via the web-based user interface implemented by the OpenStack dashboard, via command-line clients, and by issuing API requests through tools like browser plug-ins or curl. For applications, several SDKs are available. Ultimately, all these access methods issue REST API calls to the various OpenStack services.

Fig. A.3 shows the most common, but not the only possible, architecture for an OpenStack cloud. OpenStack has a modular architecture with various code names for its components. The following subsections discuss the core services within the OpenStack service architecture.

Figure A.3 OpenStack Architecture. Source: https://docs.openstack.org/admin-guide/common/get-started-logical-architecture.html

A.1.1.2 Compute – Nova

Nova manages the life-cycle of compute instances in an OpenStack environment. Responsibilities include spawning, scheduling, and decommissioning of machines on demand.

Nova is the instance management component. An authenticated user who has access to a Glance image and has created a network for an instance to live on is almost ready to tie all of this together and launch an instance. The last resources required are a key pair and a security group. A key pair is simply an SSH key pair; OpenStack allows a user to import his/her own key pair or to generate one. When the instance is launched, the public key is placed in the authorized_keys file so that a password-less SSH connection can be made to the running instance.

Before that SSH connection can be made, the security groups have to be opened to allow the connection. A security group is a firewall at the cloud infrastructure layer. The OpenStack distribution has a default security group with rules that allow instances in the same security group to communicate with each other, but rules have to be added for Internet Control Message Protocol (ICMP), SSH, and other connections to be made from outside the security group.
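As a hedged sketch of what opening such a rule can look like through the Networking API, the snippet below uses the requests library to add an ingress rule for SSH (TCP port 22) to an existing security group; the endpoint URL, token, and security group ID are placeholders, not real values.

# Sketch: add an SSH ingress rule to a security group via the Neutron API.
# NEUTRON_URL, TOKEN, and SG_ID are placeholders for a real deployment.
import requests

NEUTRON_URL = "http://controller:9696/v2.0"        # assumed Networking endpoint
TOKEN = "gAAAA..."                                  # token obtained from Keystone
SG_ID = "11111111-2222-3333-4444-555555555555"      # existing security group

rule = {
    "security_group_rule": {
        "security_group_id": SG_ID,
        "direction": "ingress",
        "ethertype": "IPv4",
        "protocol": "tcp",
        "port_range_min": 22,
        "port_range_max": 22,
        "remote_ip_prefix": "0.0.0.0/0",            # allow SSH from anywhere
    }
}
resp = requests.post(f"{NEUTRON_URL}/security-group-rules",
                     json=rule, headers={"X-Auth-Token": TOKEN})
resp.raise_for_status()
print("created rule", resp.json()["security_group_rule"]["id"])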

Once there are an image, network, key pair, and security group available, an instance can be launched. The resources' identifiers are provided to Nova; Nova looks at what resources are being used on which hypervisors and schedules the instance to spawn on a compute node. The compute node gets the Glance image, creates the virtual network devices, and boots the instance. During the boot, cloud-init should run and connect to the metadata service. The metadata service provides the SSH public key needed for SSH login to the instance and, if provided, any post-boot configuration that needs to happen. This could be anything from a simple shell script to an invocation of a configuration management engine.
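The sketch below shows, in hedged form, how these pieces (image, flavor, network, key pair, and security group) come together in a single request to the Compute API; the endpoint and all identifiers are placeholders.

# Sketch: launch an instance through the Nova (Compute) API with requests.
# All IDs, names, and the endpoint below are placeholders for illustration.
import requests

NOVA_URL = "http://controller:8774/v2.1"
TOKEN = "gAAAA..."                                  # token obtained from Keystone

server_request = {
    "server": {
        "name": "demo-instance",
        "imageRef": "IMAGE_UUID",                   # Glance image to boot from
        "flavorRef": "FLAVOR_ID",                   # chosen flavor
        "key_name": "my-keypair",                   # public key injected at boot
        "networks": [{"uuid": "NETWORK_UUID"}],     # Neutron network to attach
        "security_groups": [{"name": "default"}],
    }
}
resp = requests.post(f"{NOVA_URL}/servers",
                     json=server_request, headers={"X-Auth-Token": TOKEN})
resp.raise_for_status()
print("instance id:", resp.json()["server"]["id"])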

Images and instances

Virtual machine images contain a virtual disk that holds a bootable operating system. Disk images provide templates for virtual machine file systems. The Image service controls image storage and management.

Instances are the individual virtual machines that run on physical compute nodes inside the cloud. Users can launch any number of instances from the same image. Each launched instance runs from a copy of the base image, so any changes made to the instance do not affect the base image. Snapshots capture the state of an instance's running disk. Users can create a snapshot and build a new image based on it. The Compute service controls instance, image, and snapshot storage and management.

When launching an instance, a flavor must be chosen, which represents a set of virtual resources. Flavors define the number of virtual CPUs, the amount of RAM, and the size of the ephemeral disks. Users must select from the set of available flavors defined on their cloud. OpenStack provides a number of predefined flavors that a user can edit or add to.
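A small, hedged example of inspecting flavors through the Compute API follows; the endpoint and token are placeholders, and the selection rule is purely illustrative.

# Sketch: list flavors and pick the smallest one meeting minimum requirements.
import requests

NOVA_URL = "http://controller:8774/v2.1"            # assumed Compute endpoint
HEADERS = {"X-Auth-Token": "gAAAA..."}              # token from Keystone

flavors = requests.get(f"{NOVA_URL}/flavors/detail",
                       headers=HEADERS).json()["flavors"]
for f in flavors:
    print(f'{f["name"]}: {f["vcpus"]} vCPU, {f["ram"]} MB RAM, {f["disk"]} GB disk')

# Illustrative policy: smallest flavor with at least 2 vCPUs and 4 GB of RAM.
candidates = [f for f in flavors if f["vcpus"] >= 2 and f["ram"] >= 4096]
if candidates:
    chosen = min(candidates, key=lambda f: (f["vcpus"], f["ram"], f["disk"]))
    print("selected flavor:", chosen["id"])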

Users can add resources to and remove them from running instances, such as persistent volume storage or public IP addresses. The example used here is of a typical virtual system within an OpenStack cloud. It uses the cinder-volume service, which provides persistent block storage, instead of the ephemeral storage provided by the selected instance flavor.

Fig. A.4 shows the system state prior to launching an instance. The image store has a number of predefined images, supported by the Image service. Inside the cloud, a compute node contains the available vCPU, memory, and local disk resources. Additionally, the cinder-volume service stores predefined volumes.

Figure A.4 System state before VM creation.

Fig. A.5 shows the instance creation process. To launch an instance, select an image, flavor, and any optional attributes. The selected flavor provides a root volume, labeled vda in this diagram, and additional ephemeral storage, labeled vdb. In this example, the cinder-volume store is mapped to the third virtual disk on this instance, vdc.

Figure A.5 System state when VM is running. (For interpretation of the colors in this figure, the reader is referred to the web version of this chapter.)

The Image service copies the base image from the image store to the local disk. The local disk is the first disk that the instance accesses; it is the root volume, labeled vda. Smaller instances start faster because less data needs to be copied across the network. A new empty ephemeral disk is also created, labeled vdb; this disk is deleted when the instance is deleted.

The compute node connects to the attached cinder-volume using iSCSI. The cinder-volume is mapped to the third disk, labeled vdc in this diagram. After the compute node provisions the vCPU and memory resources, the instance boots up from root volume vda. The instance runs and changes data on the disks (highlighted in red on the diagram). If the volume store is located on a separate network, the my_block_storage_ip option specified in the storage node configuration file directs image traffic to the compute node.

Fig. A.6 shows the system state after the instance exits. When deleting an instance, the state is reclaimed with the exception of the persistent volume. The ephemeral storage is purged. Memory and vCPU resources are released. The image remains unchanged throughout this process.

Figure A.6 System state after VM termination.

A.1.1.3 Networking – Neutron

Neutron enables network connectivity as a service for other OpenStack services, such as OpenStack Compute, provides an API for users to define networks and the attachments into them, and has a pluggable architecture that supports many popular networking vendors and technologies.

Neutron is the network management component. Keystone authenticates users, and Glance provides a disk image; the next resource required for launch is a virtual network. Neutron is an API frontend (and a set of agents) that manages the Software Defined Networking (SDN) infrastructure in OpenStack. When an OpenStack deployment uses Neutron, each cloud tenant can create isolated virtual networks. Each of these isolated networks can be connected to virtual routers to create routes between the virtual networks. A virtual router can have an external gateway connected to it, and external access can be given to each instance by associating a floating IP on an external network with the instance. Neutron then puts all the configuration in place to route the traffic sent to the floating IP address through these virtual network resources into the launched instance. This is also called Networking as a Service (NaaS), the capability to provide networks and network resources on demand via software.
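The following hedged sketch walks through that sequence against the Networking API: create an isolated network and subnet, attach them to a router with an external gateway, and allocate a floating IP. The endpoint, token, and external network ID are placeholders.

# Sketch: build a tenant network, router, and floating IP via the Neutron API.
# NEUTRON_URL, the token, and EXT_NET_ID are placeholders for a real deployment.
import requests

NEUTRON_URL = "http://controller:9696/v2.0"
HEADERS = {"X-Auth-Token": "gAAAA..."}
EXT_NET_ID = "EXTERNAL_NETWORK_UUID"                # provider/external network

def post(path, body):
    r = requests.post(f"{NEUTRON_URL}{path}", json=body, headers=HEADERS)
    r.raise_for_status()
    return r.json()

net = post("/networks", {"network": {"name": "tenant-net"}})["network"]
subnet = post("/subnets", {"subnet": {
    "network_id": net["id"], "ip_version": 4, "cidr": "10.0.0.0/24"}})["subnet"]

# Router with a gateway to the external network, plus an interface on the subnet.
router = post("/routers", {"router": {
    "name": "tenant-router",
    "external_gateway_info": {"network_id": EXT_NET_ID}}})["router"]
requests.put(f"{NEUTRON_URL}/routers/{router['id']}/add_router_interface",
             json={"subnet_id": subnet["id"]}, headers=HEADERS).raise_for_status()

# A floating IP drawn from the external network; it can later be associated
# with an instance's port to give that instance external access.
fip = post("/floatingips",
           {"floatingip": {"floating_network_id": EXT_NET_ID}})["floatingip"]
print("floating IP:", fip["floating_ip_address"])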

By default, the OpenStack distribution will install Open vSwitch to orchestrate the underlying virtualized networking infrastructure. Open vSwitch is a virtual managed switch. As long as the nodes in the cluster have simple connectivity to each other, Open vSwitch can be the infrastructure configured to isolate the virtual networks for the tenants in OpenStack. There are also many vendor plug-ins that would allow replacing Open vSwitch with a physical managed switch to handle the virtual networks. Neutron even has the capability to use multiple plug-ins to manage multiple network appliances. As an example, Open vSwitch and a vendor's appliance could be used in parallel to manage virtual networks in an OpenStack deployment. This is a great example of how OpenStack is built to provide flexibility and choice to its users.

Networking is the most complex component of OpenStack to configure and maintain. This is because Neutron is built around core networking concepts. To successfully deploy Neutron, a user needs to understand these core concepts and how they interact with one another. In Chapter 5, Network Management, we have spent time covering these concepts while building the Neutron infrastructure for an OpenStack deployment.

A standard OpenStack Networking setup has up to four distinct physical data center networks as shown in Fig. A.7:

Management network is used for internal communication between OpenStack components. The IP addresses on this network should be reachable only within the data center, and this network is considered the Management Security Domain.

Guest network is used for VM data communication within the cloud deployment. The IP addressing requirements of this network depend on the OpenStack Networking plug-in in use and the network configuration choices of the virtual networks made by the tenant. This network is considered the Guest Security Domain.

External network is used to provide VMs with Internet access in some deployment scenarios. The IP addresses on this network should be reachable by anyone on the Internet. This network is considered to be in the Public Security Domain.

API network exposes all OpenStack APIs, including the OpenStack Networking API, to tenants. The IP addresses on this network should be reachable by anyone on the Internet. This may be the same network as the external network, as it is possible to create a subnet for the external network whose IP allocation ranges cover only part of the addresses in an IP block. This network is considered the Public Security Domain.

Figure A.7 OpenStack Networking service placement on physical servers.

Fig. A.8 shows the flow of ingress and egress traffic for the VM2 instance.

Figure A.8 The flow of ingress and egress traffic for the VM2 instance.

A.1.1.4 Object Storage – Swift

Swift [57] stores and retrieves arbitrary unstructured data objects via a RESTful, HTTP-based API. It is highly fault-tolerant, with data replication and a scale-out architecture. Its implementation is not like a file server with mountable directories.

Swift is the object storage management component. Object storage is a simple content-only storage system: files are stored without the metadata that a block file system keeps, simply as content inside containers. Swift has two layers as part of its deployment: the proxy and the storage engine. The proxy is the API layer, the service that the end user communicates with; it is configured to talk to the storage engine on the user's behalf. By default, the storage engine is the Swift storage engine, which performs software-based storage distribution and replication. GlusterFS [96] and CEPH [272] are also popular storage backends for Swift, with distribution and replication capabilities similar to those of the Swift storage engine.
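A hedged sketch of that put/get interaction over HTTP is shown below; the storage URL and token are placeholders for the values that the authentication step returns in a real deployment, and the container and object names are invented.

# Sketch: store and retrieve an object through Swift's HTTP API.
# STORAGE_URL and the token are placeholders (normally returned by auth).
import requests

STORAGE_URL = "http://controller:8080/v1/AUTH_demo_project"   # assumed account URL
HEADERS = {"X-Auth-Token": "gAAAA..."}

# Create a container, then upload an object into it.
requests.put(f"{STORAGE_URL}/photos", headers=HEADERS).raise_for_status()
with open("cat.jpg", "rb") as f:
    requests.put(f"{STORAGE_URL}/photos/cat.jpg",
                 data=f, headers=HEADERS).raise_for_status()

# Download the object back; the response body is simply the stored content.
resp = requests.get(f"{STORAGE_URL}/photos/cat.jpg", headers=HEADERS)
resp.raise_for_status()
print(len(resp.content), "bytes retrieved")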

Fig. A.9 shows the Swift cluster architecture. Large-scale deployments segment off an access tier, which is considered the Object Storage system's central hub. The access tier fields the incoming API requests from clients and moves data in and out of the system. This tier consists of frontend load balancers, SSL terminators, and authentication services. It runs the (distributed) brain of the Object Storage system, the proxy server processes. In most configurations, each of the five zones should have an equal amount of storage capacity. Storage nodes use a reasonable amount of memory and CPU, since metadata needs to be readily available to return objects quickly. The object stores run services not only to field incoming requests from the access tier, but also to run replicators, auditors, and reapers. Object stores can be provisioned with single-gigabit or 10-gigabit network interfaces depending on the expected workload and desired performance.

Figure A.9 OpenStack object storage architecture.

A.1.1.5 Block Storage – Cinder

Cinder provides persistent block storage to running instances. Its pluggable driver architecture facilitates the creation and management of block storage devices.

Cinder is the block storage management component. Volumes can be created and attached to instances. Then, they are used on the instances as any other block device would be used. On the instance, the block device can be partitioned and a file system can be created and mounted. Cinder also handles snapshots. Snapshots can be taken of the block volumes or of instances. Instances can also use these snapshots as a boot source.
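A hedged sketch of that workflow follows: create a volume through the Block Storage API and attach it to a running instance through the Compute API. The endpoints, project ID, and all UUIDs are placeholders.

# Sketch: create a Cinder volume and attach it to a running instance.
# Endpoints, project ID, and server ID are placeholders for illustration.
import time

import requests

CINDER_URL = "http://controller:8776/v3/PROJECT_ID"   # Block Storage endpoint
NOVA_URL = "http://controller:8774/v2.1"              # Compute endpoint
HEADERS = {"X-Auth-Token": "gAAAA..."}
SERVER_ID = "INSTANCE_UUID"

# 1. Create a 10 GB volume.
vol = requests.post(f"{CINDER_URL}/volumes",
                    json={"volume": {"size": 10, "name": "data-vol"}},
                    headers=HEADERS).json()["volume"]

# 2. Wait until the volume is available (simplified polling).
while True:
    vol = requests.get(f"{CINDER_URL}/volumes/{vol['id']}",
                       headers=HEADERS).json()["volume"]
    if vol["status"] == "available":
        break
    time.sleep(2)

# 3. Attach the volume to the instance; it appears there as a new block device.
attach = requests.post(f"{NOVA_URL}/servers/{SERVER_ID}/os-volume_attachments",
                       json={"volumeAttachment": {"volumeId": vol["id"]}},
                       headers=HEADERS)
attach.raise_for_status()
print("attached as", attach.json()["volumeAttachment"].get("device"))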

There is an extensive collection of storage backends that can be configured as the backing store for Cinder volumes and snapshots. By default, Logical Volume Manager (LVM)1 is configured. GlusterFS and CEPH are two popular software-based storage solutions. There are also many plug-ins for hardware appliances.

A.1.1.6 Identity – Keystone

Keystone provides an authentication and authorization service for other OpenStack services. It provides a catalog of endpoints for all OpenStack services.

Keystone is the identity management component. The first thing that needs to happen while connecting to an OpenStack deployment is authentication. In its most basic installation, Keystone will manage tenants, users, and roles, and be a catalog of services and endpoints for all the components in the running cluster.

Everything in OpenStack must exist in a tenant. A tenant is simply a grouping of objects. Users, instances, and networks are examples of objects. They cannot exist outside of a tenant. Another name for a tenant is project. On the command line, the term tenant is used. In the web interface, the term project is used.

Users must be granted a role in a tenant. It's important to understand this relationship between the user and a tenant via a role. Identity Management (IDM) is in charge of creating the user and tenant and associating the user with a role in a tenant. For now, it is important to understand that users cannot log in to the cluster unless they are members of a tenant. Even the administrator has a tenant. Even the users that the OpenStack components use to communicate with each other have to be members of a tenant to be able to authenticate.

Keystone also keeps a catalog of services and endpoints of each of the OpenStack components in the cluster. This is advantageous because all of the components have different API endpoints. By registering them all with Keystone, an end user only needs to know the address of the Keystone server to interact with the cluster. When a call is made to connect to a component other than Keystone, the call will first have to be authenticated, so Keystone will be contacted regardless.

As part of the communication with Keystone, the client also asks Keystone for the address of the component the user intends to connect to. This makes managing the endpoints easier: if all the endpoints were distributed to the end users, it would be a complex process to distribute a change in one of the endpoints to all of them. By keeping the catalog of services and endpoints in Keystone, a change is easily propagated to end users as new requests are made to connect to the components.

By default, Keystone uses username/password authentication to request a token and Public Key Infrastructure (PKI) tokens for subsequent requests. The token has the user's roles and tenants encoded into it. All the components in the cluster can use the information in the token to verify the user and the user's access. Keystone can also be integrated with other common authentication systems instead of relying on its own username and password authentication. In Chapter 3, Identity Management, each of these resources has been explored. There we have walked through creating a user and a tenant and looked at the service catalog.
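As a hedged sketch, the request below asks Keystone (Identity API v3) for a project-scoped token using username/password authentication; the endpoint, user, project, and password are placeholders. The token is returned in the X-Subject-Token response header, and the body carries the service catalog mentioned above.

# Sketch: obtain a project-scoped token from Keystone (Identity API v3).
# The endpoint, user, project, and password are placeholder values.
import requests

KEYSTONE_URL = "http://controller:5000/v3"

auth_request = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "demo",
                    "domain": {"name": "Default"},
                    "password": "demo-password",
                }
            },
        },
        "scope": {
            "project": {"name": "demo-project", "domain": {"name": "Default"}}
        },
    }
}

resp = requests.post(f"{KEYSTONE_URL}/auth/tokens", json=auth_request)
resp.raise_for_status()

token = resp.headers["X-Subject-Token"]             # used later as X-Auth-Token
catalog = resp.json()["token"]["catalog"]            # endpoints of other services
print("token obtained; services in catalog:",
      [entry["type"] for entry in catalog])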

Terminologies defined by Keystone:

Token is an alpha-numeric text string that enables access to OpenStack APIs and resources. A token may be revoked at any time and it is valid for a finite time. While OpenStack Identity supports token-based authentication in this release, it intends to support additional protocols in the future. OpenStack Identity is an integration service that does not aspire to be a full-fledged identity store and management solution.

Service is an OpenStack service, such as Compute (nova), Object Storage (swift), or Image service (glance), that provides one or more endpoints through which users can access resources and perform operations.

Project is a container that groups or isolates resources or identity objects. Depending on the service operator, a project might map to a customer, account, organization, or tenant.

User is a digital representation of a person, system, or service that uses OpenStack cloud services. The Identity service validates that incoming requests are made by the user who claims to be making the call. Users have a login and can access resources by using assigned tokens. Users can be directly assigned to a particular project and behave as if they are contained in that project.

Role is a personality with a defined set of user rights and privileges to perform a specific set of operations. The Identity service issues a token that includes a list of roles to a user. When a user calls a service, that service interprets the set of user roles and determines to which operations or resources each role grants access.

Domain is an Identity service API v3 entity. It represents a collection of projects and users that defines administrative boundaries for the management of Identity entities. A domain, which can represent an individual, company, or operator-owned space, exposes administrative activities directly to system users. Users can be granted the administrator role for a domain. A domain administrator can create projects, users, and groups in a domain and assign roles to users and groups in a domain.

Region is an Identity service API v3 entity. It represents a general division in an OpenStack deployment. You can associate zero or more subregions with a region to make a tree-like structured hierarchy. Although a region does not have a geographical connotation, a deployment can use a geographical name for a region, such as US-East.

Group is an Identity service API v3 entity. It represents a collection of users that are owned by a domain. A group role granted to a domain or project applies to all users in the group. Adding users to or removing them from a group respectively grants or revokes their role and authentication to the associated domain or project.

A.1.1.7 Image Service – Glance

Glance stores and retrieves virtual machine disk images. OpenStack Compute makes use of this during instance provisioning.

Glance is the image management component. Once we're authenticated, there are a few resources that need to be available for an instance to launch. The first resource we'll look at is the disk image to launch from. Before a server is useful, it needs to have an operating system installed on it. This is a boilerplate task that cloud computing has streamlined by creating a registry of preinstalled disk images to boot from. Glance serves as this registry within an OpenStack deployment. In preparation for an instance to launch, a copy of a selected Glance image is first cached to the compute node where the instance is being launched. Then, a copy is made to the ephemeral disk location of the new instance. Subsequent instances launched on the same compute node using the same disk image will use the cached copy of the Glance image.

The images stored in Glance are sometimes called sealed-disk images. These images are disk images that have had the operating system installed but have had things such as Secure Shell (SSH) host key and network device MAC addresses removed. This makes the disk images generic, so they can be reused and launched repeatedly without the running copies conflicting with each other. To do this, the host-specific information is provided or generated at boot. The provided information is passed in through a post-boot configuration facility called cloud-init.

The images can also be customized for special purposes beyond a base operating system install. If there was a specific purpose for which an instance would be launched many times, then some of the repetitive configuration tasks could be performed ahead of time and built into the disk image. For example, if a disk image was intended to be used to build a cluster of web servers, it would make sense to install a web server package on the disk image before it was used to launch an instance. It would save time and bandwidth to do it once before it is registered with Glance instead of doing this package installation and configuration over and over each time a web server instance is booted.

There are quite a few ways to build these disk images. The simplest way is to do a virtual machine install manually, make sure that the host-specific information is removed, and include cloud-init in the built image. Cloud-init is packaged in most major distributions; a user should be able to simply add it to a package list. There are also tools to make this happen in a more autonomous fashion. Some of the more popular tools are virt-install, Oz, and appliance-creator. The most important thing about building a cloud image for OpenStack is to make sure that cloud-init is installed. Cloud-init is a script that should run post boot to connect back to the metadata service.
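Registering such a built image with Glance is a two-step interaction with the Image API v2: create the image record, then upload the image data. The sketch below hedges on the endpoint, token, image name, and local file name, which are all placeholders.

# Sketch: register a built disk image with Glance (Image API v2).
# GLANCE_URL, the token, and the local file name are placeholders.
import requests

GLANCE_URL = "http://controller:9292/v2"
HEADERS = {"X-Auth-Token": "gAAAA..."}

# 1. Create the image record (metadata only at this point).
image = requests.post(f"{GLANCE_URL}/images",
                      json={"name": "ubuntu-web-server",
                            "disk_format": "qcow2",
                            "container_format": "bare",
                            "visibility": "private"},
                      headers=HEADERS).json()

# 2. Upload the actual image bits.
with open("ubuntu-web-server.qcow2", "rb") as f:
    upload = requests.put(f"{GLANCE_URL}/images/{image['id']}/file",
                          data=f,
                          headers={**HEADERS,
                                   "Content-Type": "application/octet-stream"})
upload.raise_for_status()
print("registered image", image["id"])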

A.1.2 CloudStack

CloudStack is open-source cloud computing software for creating, managing, and deploying infrastructure cloud services. It uses existing hypervisors such as KVM, VMware vSphere, and XenServer/XCP for virtualization. In addition to its own API, CloudStack also supports the Amazon Web Services (AWS) API and the Open Cloud Computing Interface from the Open Grid Forum.

Architecture

Generally speaking, most CloudStack deployments consist of the management server and the resources to be managed. During deployment, the management server is in charge of allocating resources such as IP address blocks, storage devices, hypervisors, and VLANs.

As shown in Fig. A.10, the minimum installation consists of one machine running the CloudStack Management Server and another machine to act as the cloud infrastructure (in this case, a very simple infrastructure consisting of one host running hypervisor software). In its smallest deployment, a single machine can act as both the Management Server and the hypervisor host (using the KVM hypervisor).

Figure A.10 CloudStack basic deployment.

A more full-featured installation consists of a highly-available multinode Management Server installation and up to tens of thousands of hosts using any of several networking technologies.

Management Server

The management server orchestrates and allocates the resources in the cloud deployment.

The management server typically runs on a dedicated machine or as a virtual machine. It controls allocation of virtual machines to hosts and assigns storage and IP addresses to the virtual machine instances. The Management Server runs in an Apache Tomcat container and requires a MySQL database for persistence.

The management server:

•  Provides the web interface for both the administrator and end user;

•  Provides the API interfaces for both the CloudStack API and the EC2 interface (see the request-signing sketch after this list);

•  Manages the assignment of guest VMs to a specific compute resource;

•  Manages the assignment of public and private IP addresses;

•  Allocates storage during the VM instantiation process;

•  Manages snapshots, disk images (templates), and ISO images;

•  Provides a single point of configuration for the cloud.
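Calls to the CloudStack API are signed with the caller's API key and secret key. The sketch below, with a placeholder endpoint and placeholder keys, follows the commonly documented signing scheme: sort the parameters, URL-encode the values, lowercase the resulting query string, and sign it with HMAC-SHA1.

# Sketch: sign and issue a CloudStack API request (listVirtualMachines).
# The management-server URL and the API/secret keys are placeholders.
import base64
import hashlib
import hmac
import urllib.parse

import requests

API_URL = "http://management-server:8080/client/api"
API_KEY = "YOUR_API_KEY"
SECRET_KEY = "YOUR_SECRET_KEY"

params = {"command": "listVirtualMachines", "response": "json", "apikey": API_KEY}

# Build the string to sign: parameters sorted by name, values URL-encoded,
# joined with '&', and the whole string lowercased.
to_sign = "&".join(
    f"{key}={urllib.parse.quote(str(value), safe='')}"
    for key, value in sorted(params.items())
).lower()

signature = base64.b64encode(
    hmac.new(SECRET_KEY.encode(), to_sign.encode(), hashlib.sha1).digest()
).decode()

resp = requests.get(API_URL, params={**params, "signature": signature})
print(resp.status_code, resp.text[:200])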

Cloud Infrastructure

Resources within the cloud are managed as follows:

•  Regions. A region is a collection of one or more geographically proximate zones managed by one or more management servers.

•  Zones. Typically, a zone is equivalent to a single data center. A zone consists of one or more pods and secondary storage.

•  Pods. A pod is usually a rack or row of racks that include a layer-2 switch and one or more clusters.

•  Clusters. A cluster consists of one or more homogeneous hosts and primary storage.

•  Host. A host is a single compute node within a cluster, typically running a hypervisor.

•  Primary Storage. A storage resource typically provided to a single cluster for the actual running of instance disk images. (Zone-wide primary storage is an option, though not typically used.)

•  Secondary Storage. A zone-wide resource that stores disk templates, ISO images, and snapshots.

Networking

A basic CloudStack networking setup is illustrated in Fig. A.11.

Figure A.11 CloudStack basic networking configuration.

A.1.3 Eucalyptus

Eucalyptus is free and open-source computer software for building Amazon Web Services (AWS)-compatible private and hybrid cloud computing environments, originally marketed by the company Eucalyptus Systems. Eucalyptus is an acronym for Elastic Utility Computing Architecture for Linking Your Programs To Useful Systems. Eucalyptus enables pooling compute, storage, and network resources that can be dynamically scaled up or down as application workloads change. Eucalyptus Systems announced a formal agreement with AWS in March 2012 to maintain compatibility. Mårten Mickos was the CEO of Eucalyptus. In September 2014, Eucalyptus was acquired by Hewlett-Packard.
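Because of this AWS compatibility, standard AWS tooling can usually be pointed at a Eucalyptus cloud. The hedged sketch below uses boto3 with a placeholder endpoint and credentials; the endpoint path in particular is an assumption and varies by Eucalyptus version and deployment.

# Sketch: talk to a Eucalyptus cloud through its EC2-compatible API with boto3.
# The endpoint URL, region name, and credentials are placeholders.
import boto3

ec2 = boto3.client(
    "ec2",
    endpoint_url="http://clc.example.internal:8773/",   # assumed CLC endpoint
    aws_access_key_id="PLACEHOLDER_ACCESS_KEY",
    aws_secret_access_key="PLACEHOLDER_SECRET_KEY",
    region_name="eucalyptus",
)

# List running instances exactly as one would against AWS EC2.
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])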

Eucalyptus has six components:

•  The Cloud Controller (CLC) is a Java program that offers EC2-compatible interfaces, as well as a web interface to the outside world. In addition to handling incoming requests, the CLC acts as the administrative interface for cloud management and performs high-level resource scheduling and system accounting. The CLC accepts user API requests from command-line interfaces like euca2ools or GUI-based tools like the Eucalyptus User Console and manages the underlying compute, storage, and network resources. Only one CLC can exist per cloud and it handles authentication, accounting, reporting, and quota management.

•  Walrus, also written in Java, is the Eucalyptus equivalent to AWS Simple Storage Service (S3). Walrus offers persistent storage to all of the virtual machines in the Eucalyptus cloud and can be used as a simple HTTP put/get storage-as-a-service solution. There are no data type restrictions for Walrus, and it can contain images (i.e., the building blocks used to launch virtual machines), volume snapshots (i.e., point-in-time copies), and application data. Only one Walrus can exist per cloud.

•  The Cluster Controller (CC) is written in C and acts as the frontend for a cluster within a Eucalyptus cloud and communicates with the Storage Controller and Node Controller. It manages instance (i.e., virtual machines) execution and Service Level Agreements (SLAs) per cluster.

•  The Storage Controller (SC) is written in Java and is the Eucalyptus equivalent to AWS EBS. It communicates with the Cluster Controller and Node Controller and manages Eucalyptus block volumes and snapshots for the instances within its specific cluster. If an instance requires writing persistent data to storage outside of the cluster, it needs to write to Walrus, which is available to any instance in any cluster.

•  The VMware Broker is an optional component that provides an AWS-compatible interface for VMware environments and physically runs on the Cluster Controller. The VMware Broker overlays existing ESX/ESXi hosts and transforms Eucalyptus Machine Images (EMIs) to VMware virtual disks. The VMware Broker mediates interactions between the Cluster Controller and VMware and can connect directly to either ESX/ESXi hosts or to vCenter Server.

•  The Node Controller (NC) is written in C and hosts the virtual machine instances and manages the virtual network endpoints. It downloads and caches images from Walrus as well as creates and caches instances. While there is no theoretical limit to the number of Node Controllers per cluster, performance limits do exist.

A.1.4 OpenNebula

OpenNebula is a cloud computing platform for managing heterogeneous distributed data center infrastructures. The OpenNebula platform manages a data center's virtual infrastructure to build private, public and hybrid implementations of infrastructure as a service. OpenNebula is free and open-source software, subject to the requirements of the Apache License version 2 [4].

OpenNebula is used by hosting providers, telecom operators, IT services providers, supercomputing centers, research labs, and international research projects. Some other cloud solutions use OpenNebula as the cloud engine or kernel service. OpenNebula orchestrates storage, network, virtualization, monitoring, and security technologies to deploy multitier services (e.g., compute clusters) as virtual machines on distributed infrastructures, combining both data center resources and remote cloud resources, according to allocation policies. According to the European Commission's 2010 report, “…only few cloud dedicated research projects in the widest sense have been initiated – most prominent amongst them probably OpenNebula …”

The toolkit includes features for integration, management, scalability, security and accounting. It also claims standardization, interoperability and portability, providing cloud users and administrators with a choice of several cloud interfaces (Amazon EC2 Query, OGF Open Cloud Computing Interface and vCloud) and hypervisors (Xen, KVM and VMware), and can accommodate multiple hardware and software combinations in a data center.

A.2 Mobile Cloud Resource Management

Mobile cloud resource management brings more resources into the resource pool than the cloud management systems described above cover. Mobile devices, such as smartphones and sensors, are typical resources in the mobile cloud resource pool. This section presents several mobile cloud resource management systems.

A.2.1 Cloudlet

A cloudlet is a trusted, resource-rich computer or cluster of computers that is well-connected to the Internet and available for use by nearby mobile devices [237]. A cloudlet represents the middle tier of a 3-tier hierarchy: mobile device–cloudlet–cloud. It can be viewed as a "data center in a box" whose goal is to "bring the cloud closer." A cloudlet has four key attributes [238]:

•  only soft state. It does not have any hard state, but may contain cached state from the cloud. It may also buffer data originating from a mobile device (such as video or photographs) en route to safety in the cloud. Avoiding hard state means that each cloudlet adds close to zero management burden after installation – it is entirely self-managing.

•  powerful, well-connected and safe. It possesses sufficient compute power (i.e., CPU, RAM, etc.) to offload resource-intensive computations from one or more mobile devices. It has excellent connectivity to the cloud (typically a wired Internet connection) and is not limited by finite battery life (i.e., it is plugged into a power outlet). Its integrity as a computing platform is assumed; in a production-quality implementation this will have to be enforced through some combination of tamper-resistance, surveillance, and run-time attestation.

•  close at hand. It is logically proximate to the associated mobile devices. "Logical proximity" is defined as low end-to-end latency and high bandwidth (e.g., one-hop WiFi). Often, logical proximity implies physical proximity. However, because of "last mile" effects, the inverse may not be true: physical proximity may not imply logical proximity.

•  builds on standard cloud technology. It encapsulates offload code from mobile devices in virtual machines (VMs), and thus resembles classic cloud infrastructure such as Amazon EC2 and OpenStack. In addition, each cloudlet has functionality that is specific to its cloudlet role.

OpenStack++

OpenStack++ provides extensions to OpenStack so that any individual or vendor who uses OpenStack for cloud computing can easily use cloudlets [123].

OpenStack provides an extension mechanism to add new features to support innovative approaches. This allows developers to experiment and develop new features without worrying about the implications to the standard APIs. Since the extension is queryable, a user can first send a query to a particular OpenStack cluster to check the availability of the cloudlet features. APIs for extensions are provided to the users by implementing an Extension class. An API request from the user will arrive at the extension class and a set of internal APIs will be called to accomplish desired functionality. Some of the internal API calls will be passed to a corresponding compute node via the messaging layer if necessary. Then, the API manager at the compute node will receive the message and handle it by sending commands to the hypervisor via a driver. Finally, the driver class will return the result and pass it to the user following the reverse call sequence.

The cloudlet extensions follow the same call hierarchy. Once a user sends a request via a RESTful interface, the message will be propagated to the matching compute node. Then the hypervisor driver performs the given task. Here's an example command flow for creating a VM overlay. The command is applied to a running virtual machine and generates a VM overlay which extracts the difference between the running VM and the base VM. To define a new action for creating a VM overlay, a cloudlet extension class is declared following the OpenStack extension rule. The user-issued API request first arrives at the extension class, and then is passed to a corresponding compute node via API and message layer. At the compute node, the message is then handled by a cloudlet hypervisor driver, which interacts with a target virtual machine. Finally, the cloudlet hypervisor driver will create a VM overlay using the VM snapshot.
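To make the overlay idea concrete, the toy sketch below (which is not OpenStack++ code) computes a naive chunk-level delta between a base image file and a snapshot of the running VM; the file paths and delta format are invented, and real cloudlet implementations use far more sophisticated, deduplicated and compressed deltas.

# Toy illustration of the VM-overlay concept: record which fixed-size chunks
# of a VM snapshot differ from the base VM image. This is not the OpenStack++
# implementation; file paths are placeholders and the delta format is made up.
def create_vm_overlay(base_image_path: str, snapshot_path: str,
                      chunk_size: int = 4096) -> dict:
    """Return a mapping {byte_offset: snapshot_chunk} for chunks that changed."""
    overlay = {}
    offset = 0
    with open(base_image_path, "rb") as base, open(snapshot_path, "rb") as snap:
        while True:
            base_chunk = base.read(chunk_size)
            snap_chunk = snap.read(chunk_size)
            if not base_chunk and not snap_chunk:
                break                      # both files exhausted
            if base_chunk != snap_chunk:
                overlay[offset] = snap_chunk
            offset += chunk_size
    return overlay

# Usage sketch: overlay = create_vm_overlay("base.qcow2", "running-snapshot.qcow2")
# Shipping only the overlay (plus the base image already cached at the cloudlet)
# is enough, in principle, to reconstruct the launch VM there.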

A.2.2 POEM

POEM is short for Personal On-demand execution Environment for Mobile cloud computing. It is a service-oriented system, or middleware, that connects mobile devices and virtual machines in the cloud to provide a uniform interface for application developers. Developers build mobile applications on top of the POEM system, which helps partition the applications and offload some computation tasks from mobile devices to virtual machines or to other mobile devices.

Based on the POEM system, partition and offloading strategies can be applied in various scenarios. POEM models a mobile application as a graph in which a vertex is a computation task or application component and an edge is the data transfer between components or the dependency between computation tasks. Besides the application graph, POEM models the mobile devices and virtual machines in the cloud as a site graph, in which the devices and virtual machines are the nodes and the network provides the links. The POEM system maps the application graph onto the site graph to achieve two objectives: (i) to reduce energy consumption on the mobile devices and (ii) to decrease the execution time of particular tasks.
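The per-component decision that such a mapping makes can be illustrated with a deliberately simplified sketch (not the actual POEM algorithm): each offloadable component is placed on the device or in the cloud depending on whether the assumed cost of shipping its data outweighs the assumed energy of running it locally. The component names, costs, and transfer model below are invented purely for illustration.

# Simplified illustration of an offloading decision over an application graph.
# This is not the POEM algorithm; all names and numbers are illustrative.
components = {
    # name: (energy_if_run_locally_mJ, data_to_transfer_KB)
    "ui":        (5.0,   0.0),
    "decode":    (80.0,  300.0),
    "inference": (400.0, 50.0),
}
PINNED_TO_DEVICE = {"ui"}          # e.g., components touching the screen/sensors
TRANSFER_COST_PER_KB = 0.4         # assumed radio energy per KB shipped

placement = {}
for name, (local_energy, transfer_kb) in components.items():
    if name in PINNED_TO_DEVICE:
        placement[name] = "device"
        continue
    offload_cost = transfer_kb * TRANSFER_COST_PER_KB
    placement[name] = "cloud" if offload_cost < local_energy else "device"

print(placement)   # e.g. {'ui': 'device', 'decode': 'device', 'inference': 'cloud'}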

A.2.3 Fog Computing

Fog computing or fog networking, also known as fogging, is an architecture that uses one or a collaborative multitude of end-user clients or near-user edge devices to carry out a substantial amount of storage (rather than storing data primarily in cloud data centers), communication (rather than routing it over the Internet backbone), and control, configuration, measurement, and management (rather than relying primarily on network gateways such as those in the LTE core).

Fog computing can be perceived in both large cloud systems and big data structures, where it refers to the growing difficulty of accessing information objectively, which results in a lower quality of the obtained content. The effects of fog computing on cloud computing and big data systems may vary; a common aspect, however, is a limitation in accurate content distribution, an issue that has been tackled by creating metrics that attempt to improve accuracy.

Fog networking consists of a control plane and a data plane. For example, on the data plane, fog computing enables computing services to reside at the edge of the network as opposed to servers in a data-center. Compared to cloud computing, fog computing emphasizes proximity to end-users and client objectives, dense geographical distribution and local resource pooling, latency reduction for quality of service (QoS) and edge analytics/stream mining, resulting in superior user-experience and redundancy in case of failure.

Fog networking supports the Internet of Everything (IoE), in which most of the devices that are used on a daily basis will be connected to each other. Examples include our phones, wearable health monitoring devices, connected vehicles, and augmented reality devices such as Google Glass.

A.2.4 Dew Computing

Dew Computing goes beyond the concept of a network/storage/service to a subplatform; it is based on a microservice concept in a vertically distributed computing hierarchy. Compared to Fog Computing, which supports emerging IoT applications that demand real-time, predictable latency and dynamic network reconfigurability, Dew Computing (DC) pushes computing applications, data, and low-level services away from centralized virtual nodes to the end users.

One of the main advantages of Dew Computing is its extreme scalability. The DC model is based on a large number of heterogeneous devices and different types of equipment, ranging from smart phones to intelligent sensors, joined in peer-to-peer ad hoc virtual processing environments consisting of numerous microservices. Although highly heterogeneous, the active equipment in DC is able to perform complex tasks and effectively run a large variety of applications and tools. In order to provide such functionality, the devices in DC are ad hoc programmable and self-adaptive, capable of running applications in a distributed manner without requiring a central communication point (e.g., a master or central device).

Bibliography

[4] Apache License Version 2.0, available at https://www.apache.org/licenses/LICENSE-2.0.html.

[57] J. Arnold, OpenStack Swift: Using, Administering, and Developing for Swift Object Storage. O'Reilly Media, Inc.; 2014.

[96] A. Davies, A. Orsaria, Scale out with GlusterFS, Linux Journal 2013;2013(235):1.

[123] K. Ha, M. Satyanarayanan, OpenStack++ for cloudlet deployment, School of Computer Science, Carnegie Mellon University, Pittsburgh, 2015.

[220] OpenStack, available at https://www.openstack.org/.

[237] M. Satyanarayanan, P. Bahl, R. Caceres, N. Davies, The case for VM-based cloudlets in mobile computing, IEEE Pervasive Computing 2009;8(4):14–23.

[238] M. Satyanarayanan, Elijah: Cloudlet-based Mobile Computing, http://elijah.cs.cmu.edu/ [Online].

[272] S.A. Weil, S.A. Brandt, E.L. Miller, D.D. Long, C. Maltzahn, Ceph: a scalable, high-performance distributed file system, Proceedings of the 7th Symposium on Operating Systems Design and Implementation, USENIX Association; 2006:307–320.


1  “LVM is a tool for logical volume management which includes allocating disks, striping, mirroring, and resizing logical volumes. With LVM, a hard drive or set of hard drives is allocated to one or more physical volumes. LVM physical volumes can be placed on other block devices which might span two or more disks.”
