Chapter 2. Introducing Akka

In this chapter, we introduce Akka, the open source library we use throughout the remainder of this book. If you’re reading this far, you probably already have a good idea about what Akka is, but you might not know the whole story.

Akka is a toolkit that you can use to implement all of the patterns we discuss in the remainder of the book, allowing you to put these techniques to immediate and practical use in your own applications.

What Is Akka?

Akka is described on its home page as “an open source toolkit and runtime simplifying the construction of concurrent and distributed applications on the Java Virtual Machine (JVM).”

It goes on to say that Akka supports multiple programming models but emphasizes actor-based concurrency, with inspiration drawn from Erlang.

This is a good high-level description, but Akka is much more. Let’s look a bit deeper.

Akka Is Open Source

Akka is an open source project released under the Apache 2 License, making it free to use and extend in other open source efforts as well as in commercial libraries and applications.

Although it is fully usable from Java, Akka is itself written in Scala and inherits the benefits of that language, including strong type safety, high performance, a low memory footprint, and compatibility with all other JVM libraries and languages. One key feature of Scala that Akka relies on heavily is immutable data structures, which figure prominently in its messaging protocols. In fact, it is not uncommon to see Java users write their messages in Scala in order to save time.

There are even adapter toolkits to use Akka from Clojure, an implementation of Lisp on the JVM.

Akka Is Under Active Development

Akka is continually being developed, and usually no more than a few months go by without at least a minor release. The ecosystem of libraries and contributed add-ons is even more active.

There are concerted efforts underway to create versions of Akka for the .NET platform and for JavaScript.

Akka moves fairly rapidly but continues to support stable versions for a long period. The upgrade path from one major version to the next has historically been quite smooth, allowing projects that use Akka to upgrade regularly with minimal risk.

Akka Is Distributed by Design

Akka, like many other implementations of the Actor Model, doesn’t only target multicore, single computer systems—you can also use it with clusters of systems. It is, therefore, designed to present the same programming model for both local and distributed development—virtually nothing changes in the way you develop your code when deploying across many systems, making it easy to develop and test locally and then deploy distributed.

With Akka, it is natural to model interactions as messages, as opposed to systems such as Remote Procedure Call (RPC), for which it is more natural to model interactions as procedure calls. This is an important distinction, as we will see in detail later.

Although it is possible to write code that is aware of the distributed nature of the cluster (e.g., by listening to events as nodes join and leave the cluster), the actual logic within the actors themselves does not change. Whether you are dealing with local or remote actors, the communication mechanisms, delivery guarantees, failure handling, and other concepts remain the same.

Akka provides a key element in the construction of so-called Reactive applications; that is, applications built to support the fundamental attributes discussed in the Reactive Manifesto (which you can see at http://www.reactivemanifesto.org/). The manifesto describes applications that are responsive, resilient, elastic, and message driven. The message-driven part is where Akka comes in, and through its capabilities, supports the rest of the attributes.

In an Akka system, an actor can be local or remote. It is local with respect to any other actor if the sending and receiving actors are in the same JVM.

If an actor is determined to be local, Akka can make certain optimizations in delivery (no serialization or network call is required), but the code is no different than if the receiving actor were remote.

That’s what we mean by “distributed by design”: no code changes between local operation and distributed operation.

Akka Components

Akka consists not only of the core library itself, but also of a rich set of optional add-on libraries that work with it, so you can choose just the portions you need for your project. Most of the Akka components are open source, but there are also commercial add-ons available.

Lightbend provides commercial support and indemnification for Akka as well as for the rest of its reactive platform, and many organizations provide development and consulting services around Akka. This commercial ecosystem further lessens the risk for organizations adopting it, while at the same time not limiting the hobbyist or developer who wants to use Akka on his own.

Akka Actor

The primary component of the Akka library is the implementation of actors themselves, the fundamental building blocks on which Akka is built. All of the attributes of the Actor Model are supplied here, but only on a single JVM; distribution and clustering are provided by optional components.

Akka actors implement the Actor Model with no shared state and asynchronous message passing. They also support a sophisticated error-handling hierarchy, which we’ll dig into later.

The Akka API intentionally limits access to an actor. Communication between actors can happen only through message passing; there is no way to call methods on the actor synchronously, so actors remain decoupled from one another both in API and in time. Akka achieves this through the use of an ActorRef. When an instance of an actor is created, only an ActorRef is returned. The ActorRef is a wrapper around the actual actor that isolates it from the rest of the code. All communication with the actor must go through the ActorRef, and the ActorRef does not provide any access to the actor that it wraps.

This means that unlike regular object-oriented method calls, whereby the caller blocks, passes control to the called object, and then resumes when the called object returns, the messages to an actor are sent via the ActorRef, and the caller continues immediately. The receiving actor then processes the messages that were sent to it one at a time, likely on a completely different thread from the caller. The fact that messages are processed one at a time is what enables the single-threaded illusion. While inside of an Akka actor, we can be confident that only a single thread will be modifying the state of that actor, as long as we don’t actively break the single-threaded illusion. Later we will discuss ways by which we can break the single-threaded illusion, and why that should be avoided.

The message-driven nature of the Actor Model is implemented in Akka through the ActorRef. An ActorRef provides a number of methods that allow us to send messages to the underlying actor. Those messages typically take the form of immutable case classes. Even though there is nothing preventing us from sending mutable messages to an actor, it is considered a bad practice because it is one way that we can break the single-threaded illusion.

Changes in the behavior of an actor in Akka are implemented through the use of become. Any actor can call context.become to provide a new behavior for the next message. You also can use this technique to change an actor’s state. However, actors can also maintain and change state by using mutable fields and class parameters. We will discuss techniques for changing state and behavior in more detail later.
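As a brief, hedged sketch (the LightSwitch actor and its messages are hypothetical), changing behavior with context.become looks something like this, with immutable case classes and case objects as the messages:

import akka.actor.Actor

// Hypothetical messages; immutable case classes/objects are the norm.
case object PowerOn
case object PowerOff
case class Dim(level: Int)

class LightSwitch extends Actor {
  // Initial behavior: the light is off.
  def off: Receive = {
    case PowerOn => context.become(on(brightness = 100))
  }

  // Alternate behavior: state (the brightness) is carried in the parameter.
  def on(brightness: Int): Receive = {
    case Dim(level) => context.become(on(level))
    case PowerOff   => context.become(off)
  }

  def receive = off
}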

Child Actors

Akka actors can also create child actors. There are various factory methods available within an actor that allow it to create those children. Any actor in the system has access to its parent and children and can freely send messages to them. This is also what enables the supervision mechanism in Akka, because parent actors automatically supervise their children.

The supervision mechanism in Akka means that this message-passing structure can focus on the “happy path,” because errors have an entirely separate path through which to propagate.

When an actor encounters an error, the caller is not aware of it; after all, the caller could be long gone or on another machine entirely, so sending it the error isn't necessarily helpful. Instead, Akka passes errors from an actor to its supervisor; that is, the actor that created it in the first place. The supervisor then applies its supervision strategy to the exception and decides what the failed actor should do next: ignore the error and resume, restart, or stop (with various options).

This is the origin of the “let it crash” saying that is associated with Akka; in other words, the idea is that it is acceptable for the actor that had the problem to simply “crash.” We don’t try to prevent the failure of the actor. Instead, we handle it appropriately when it happens. If an actor crashes, we can take certain actions to recover that actor. This can include stopping the actor, ignoring the failure and continuing with message processing, or it can even mean restarting the actor. When an actor is restarted, we transparently replace it with a new copy of that actor. Because communication is handled through the ActorRef, the clients of the actor need not be aware that the actor failed. From their perspective, nothing has changed. They continue to point to the same ActorRef, but the actor that supports it has been replaced. This keeps the overall system resilient because the error is isolated to the failing actor. It need not propagate any further.
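As a rough sketch of how this looks in code (Parent and Worker are hypothetical actors), a parent creates a child and declares a supervision strategy that decides how each kind of failure is handled:

import akka.actor.{Actor, OneForOneStrategy, Props, SupervisorStrategy}
import akka.actor.SupervisorStrategy.{Restart, Stop}
import scala.concurrent.duration._

class Worker extends Actor {
  def receive = {
    case work => // process the work; any exception thrown here goes to the parent
  }
}

class Parent extends Actor {
  // Decide what happens to a failing child: restart it for recoverable
  // errors, stop it for anything else.
  override val supervisorStrategy: SupervisorStrategy =
    OneForOneStrategy(maxNrOfRetries = 3, withinTimeRange = 1.minute) {
      case _: IllegalStateException => Restart
      case _: Exception             => Stop
    }

  private val child = context.actorOf(Props[Worker], "worker")

  def receive = {
    case msg => child forward msg
  }
}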

Figure 2-1 shows three actors in a single actor system within a single JVM. Any actor can send messages to any other, in both directions (although we only show two possible routes).

Actors in a single JVM
Figure 2-1. Actors in a single JVM

In addition to the usual message-passing mechanism of actors, which is point-to-point, Akka also provides a one-to-many messaging mechanism, called the event bus. The event bus allows a single actor to publish a message, and other actors to subscribe to that message by its type. The sending actor then remains completely decoupled from the receiver, which is very valuable in some situations.

Note that the event bus is optimized only for local messaging; when using Akka in a distributed environment, the publish/subscribe module provides equivalent functionality optimized for multiple nodes.
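A small sketch of the local event bus, using the actor system's eventStream (OrderPlaced and AuditLogger are hypothetical names):

import akka.actor.{Actor, ActorSystem, Props}

case class OrderPlaced(orderId: String)

class AuditLogger extends Actor {
  // Subscribe by message type; any publisher of OrderPlaced reaches this actor.
  override def preStart(): Unit =
    context.system.eventStream.subscribe(self, classOf[OrderPlaced])

  def receive = {
    case OrderPlaced(id) => println(s"Audited order $id")
  }
}

object EventBusDemo extends App {
  val system = ActorSystem("events")
  system.actorOf(Props[AuditLogger], "audit")

  // The publisher never needs a reference to the subscriber.
  system.eventStream.publish(OrderPlaced("42"))
}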

Figure 2-2 shows the three actors again, but this time, instead of communicating directly with one another, they communicate bidirectionally with the event bus, improving decoupling.

Actors and event bus
Figure 2-2. Actors and event bus

Remoting: Actors on Different JVMs

Akka provides location transparency for its actors; that is, when you have an actor reference to which you can send messages, you don’t actually need to know if that actor is on the local system or on another machine altogether.

Remoting provides the ability for the actor system to communicate over a network, and to serialize and deserialize messages so that actors on different JVMs can pass messages to one another.

This is enabled in part through the actor’s unique address. An actor’s address can include not only the unique path to the actor in the hierarchy, but also a network address for that actor. This network address means that we can uniquely identify any actor in any actor system, even when that actor resides in a different JVM running on a different machine. Because we have access to that address, we can use it to send messages to the remote actor as though it were a local actor. The Akka platform will take care of the delivery mechanics for us, including serializing and sending the message.

The serialization mechanism is pluggable, as is the communication protocol, so a number of choices are available. When starting out, the default mechanisms are often sufficient, but as your application grows, you might want to consider swapping in different serializers or communication protocols.

Remoting in Akka can be added by configuration only: no code changes are necessary, although it is also possible to write code to explicitly perform remoting.
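As a hedged illustration of configuration-only remoting (using the classic remoting settings of the Akka 2.4 era; keys differ in other versions), an application.conf might contain:

akka {
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
  }
  remote {
    enabled-transports = ["akka.remote.netty.tcp"]
    netty.tcp {
      hostname = "127.0.0.1"
      port = 2552
    }
  }
}

With that in place, a message can be sent from within an actor to an actor on another node by looking it up through its full address (the system, host, and actor names here are placeholders):

val selection =
  context.actorSelection("akka.tcp://OtherSystem@host2:2552/user/projects")
selection ! "hello"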

With remoting, each node must be aware of the other nodes and must have their network address because the messaging is point-to-point and explicit. Akka Remoting does not include any discovery mechanisms. Those come as part of Akka clustering.

Clustering: Automatic Management of Membership

For larger groups of servers, remoting becomes awkward: adding a node to a group means that every other node in the group must be informed, and message routing can grow pretty complex.

Akka provides the clustering module to help simplify this process.

Akka clustering offers additional capabilities on top of Remoting that make it easier to provide for location-independence. Clustering provides the ability for actor systems running in multiple JVMs to interact with one another and behave as though they are part of the same actor system. These systems form a single, cohesive clustered actor system.

Akka clustering makes the location-transparency of actors much more accessible. Before clustering, you needed at least some part of your code to be aware of the location of remote actors, and you needed to build addresses for those actors to send messages to them. Failover was more difficult to manage: if the node you were communicating with for your remote actors became unavailable or failed, you needed to handle starting up another node and creating connections to the new actors on that node yourself.

With Akka clustering, instead of one node needing to be aware of every other node on the network directly, it needs to be aware of only one or more seed nodes. These are specially designated nodes that new nodes contact when they want to join the cluster.

You can (but probably shouldn’t) have a single seed node in your cluster, or you can designate every node as a possible seed. The idea is that as long as one of the seed nodes is available, new nodes can join the cluster.

No special code is required: configuration informs your actor system as to where the seed nodes are, and it contacts one of them automatically on startup.
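A minimal configuration sketch for a clustered actor system named ClusterSystem with two seed nodes (the addresses are placeholders, and keys vary by Akka version):

akka {
  actor.provider = "akka.cluster.ClusterActorRefProvider"
  remote.netty.tcp {
    hostname = "127.0.0.1"
    port = 2551
  }
  cluster.seed-nodes = [
    "akka.tcp://ClusterSystem@127.0.0.1:2551",
    "akka.tcp://ClusterSystem@127.0.0.1:2552"
  ]
}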

The new node then goes through a series of lifecycle steps (as seen in Figure 2-3) until it is a full member of the cluster, at which point it is not only capable of sending messages to other actors in the cluster, it can also start new actors, either in its own actor system or on another node of the cluster. In essence, the cluster becomes a single virtual actor system.

Cluster States
Figure 2-3. Cluster states

Although the actors within the cluster can send messages to one another, the cluster management itself is handled separately. The actor system uses a variant of the gossip protocol to manage cluster membership: random peers exchange what they know about the membership of the cluster, each node passes on what it has learned to another, and so on. After a time, convergence occurs; that is, every node arrives at the same picture of the cluster's membership (and the same version of the cluster information structure), yet no central node holds this information. It is a robust and resilient approach, used in many other distributed applications such as Amazon's Dynamo and Basho's Riak. Vector clocks, which do not track wall-clock time but instead establish a partial ordering of events in a distributed system, are used to reconcile the messages received via gossip into an accurate picture of the cluster's state.

Special event messages are sent automatically as cluster lifecycle events occur—for instance, a new node joining, or an existing node leaving the cluster. You do not need to listen to these events to use the cluster, but if you want more fine-grained control or monitoring, you can.
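For example, an actor can subscribe to membership events like this (a sketch against the classic cluster API; the ClusterWatcher actor is hypothetical):

import akka.actor.Actor
import akka.cluster.Cluster
import akka.cluster.ClusterEvent.{InitialStateAsEvents, MemberEvent, MemberUp, UnreachableMember}

class ClusterWatcher extends Actor {
  val cluster = Cluster(context.system)

  // Register for membership and reachability events when the actor starts.
  override def preStart(): Unit =
    cluster.subscribe(self, initialStateMode = InitialStateAsEvents,
      classOf[MemberEvent], classOf[UnreachableMember])

  override def postStop(): Unit = cluster.unsubscribe(self)

  def receive = {
    case MemberUp(member)          => println(s"Member up: ${member.address}")
    case UnreachableMember(member) => println(s"Unreachable: ${member.address}")
    case _: MemberEvent            => // other lifecycle events ignored here
  }
}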

A Phi Accrual failure detector is used to decide when a node has become unreachable and, eventually, whether it should be considered down; the cluster uses a heartbeat to support this detection. Of course, a node can also leave the cluster intentionally, for instance during a controlled downscaling or shutdown.

For a description of the Phi Accrual failure detection method, check out the paper “The ϕ Accrual Failure Detector”.

Cluster leader

Each cluster has a leader. Every node can determine algorithmically which node should be the current leader based on the information supplied by gossip; it is not necessary that the leader be elected. The leader, however, is not a point of failure, because a new leader can be established by any node as necessity arises. It is also not especially relevant to developers, because the leader is used only internally by Akka to make certain cluster-related decisions.

The leader's duties are limited: it adds new members to the cluster when they become eligible, removes them when appropriate, and schedules the rebalancing of the cluster as necessary; that is, the movement of actors between members of the cluster. Because its duties are so limited, the cluster can continue to function even in the absence of a leader. In that case, all traffic among cluster nodes continues as before; only changes to the cluster itself are delayed until a new leader is established. You will not be able to add or remove nodes, but the cluster can still perform all of its other work in the meantime.

A potential issue with distributed systems is the issue of partitioning. In a cluster, if a node terminates unexpectedly, other nodes simply take over its function, and all is well. However, if a group of nodes is suddenly disconnected from the cluster—say, due to a network failure—but does not terminate, an issue can arise. The surviving nodes carry on, but who are the survivors? Each group of nodes, now isolated from the others, can potentially form a new cluster, leading to a situation called split-brain, in which there should only be one cluster but there are in fact two. In this situation, guarantees such as the uniqueness of a cluster singleton can be compromised.

There are several techniques for dealing with this situation. The simplest is to disallow autodowning. When autodowning is enabled, a node that becomes unreachable is automatically removed from the cluster after a period of time. This seems like a good thing on paper, but in practice it is exactly what leads to split-brain: one or more downed nodes can form their own cluster. With autodowning disabled, this is no longer possible. When nodes become unreachable, human intervention is required to remove them from the cluster, so a second cluster cannot form, because the partitioned nodes never reach the convergence required to do so. Generally, autodowning should not be enabled in production.

However, if autodowning is used, there are still other options. It is possible to restrict the minimum size of a cluster (this is something you can configure). In this case, if a single node is disconnected, it cannot form a cluster, because it doesn’t have enough members, so it terminates. Picking the right minimum value is situational, though: there’s no one right answer.
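A hedged sketch of the classic configuration knob involved here is the min-nr-of-members setting, which keeps a cluster from beginning normal operation until the specified number of nodes has joined:

# Require at least three nodes before members are moved to the Up state.
akka.cluster.min-nr-of-members = 3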

Lightbend also provides a smart split-brain resolver product as a commercial add-on that helps in this situation. It uses more sophisticated techniques to determine if a cluster partition has occurred and then takes appropriate action.

Cluster sharding

The idea of sharding was initially applied to databases, for which a single set of data could grow to be too large to fit in a single node. To spread the data more or less evenly across multiple nodes, the idea of a shard key was defined: some value that was part of the data that had a good, wide distribution. This shard key could be used to chop up the data set into a number of smaller parts.

Akka cluster sharding takes this further by applying the principles of sharding to live actors, allowing them to be distributed across a cluster.

An example of a shard key might be the first letter of the last name of a customer: you could then break up your data into a maximum of 26 shards, wherein shard 1 would have the data for all customers with last names beginning with “A,” for instance.

In practice, this is a terrible shard key, but it’s a simple illustration.

As you begin to add more nodes to your Akka cluster, the number of actors available can grow substantially, and you can apply sharding to those actors in much the same manner as you would apply it to a database: cluster sharding lets you distribute live actors across the cluster according to a shard key.

Each node can be assigned to a shard region, and there can be (and probably should be) more than one node assigned to each region.

Messages that are intended for use in a cluster-sharded environment are then wrapped in a special envelope that includes a value that can be used to determine the shard region to which that message should be delivered. All messages that resolve to that shard region are delivered only to that region, allowing for state contained in those actors to be stored only on those nodes.

In the previous, simple example, if you have a shard region for all customers whose last names begin with “A,” the value of the customer’s last name can be used to resolve to the shard region, and the message will be delivered to the correct node(s).
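Here is a hedged sketch of that example using the cluster-sharding module's API from the Akka 2.4 era (CustomerActor, CustomerCommand, and the deliberately naive shard key are all hypothetical):

import akka.actor.{Actor, ActorSystem, Props}
import akka.cluster.sharding.{ClusterSharding, ClusterShardingSettings, ShardRegion}

case class CustomerCommand(lastName: String, command: Any)

class CustomerActor extends Actor {
  def receive = {
    case CustomerCommand(_, command) => // handle the command for this customer
  }
}

object ShardingSketch extends App {
  val system = ActorSystem("ClusterSystem")

  // The entity id names the individual actor; the shard id groups entities.
  val extractEntityId: ShardRegion.ExtractEntityId = {
    case msg @ CustomerCommand(lastName, _) => (lastName, msg)
  }

  val extractShardId: ShardRegion.ExtractShardId = {
    case CustomerCommand(lastName, _) => lastName.take(1).toUpperCase
  }

  val customerRegion = ClusterSharding(system).start(
    typeName = "Customer",
    entityProps = Props[CustomerActor],
    settings = ClusterShardingSettings(system),
    extractEntityId = extractEntityId,
    extractShardId = extractShardId)

  // Messages go to the shard region, which routes them to the correct node.
  customerRegion ! CustomerCommand("Anderson", "get-balance")
}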

Distributed domains with cluster sharding

One of the significant advantages of cluster sharding is that persistence for the state of the actors within a region can be restricted to the shard region nodes—it need not be replicated or shared across the entire cluster. For instance, suppose that you’re keeping an account balance for customers. You can persist this balance by storing every message that changes the balance and then save this journal to disk. In this way, when the actor that encapsulates this state is restarted, it can just read the journal and be back at the correct balance.
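A minimal sketch of that journaling approach using the optional Akka Persistence module (the account messages are hypothetical, and a journal plugin must be configured separately):

import akka.persistence.PersistentActor

case class Deposit(amount: BigDecimal)    // command
case class Deposited(amount: BigDecimal)  // event written to the journal

class AccountActor(accountId: String) extends PersistentActor {
  override def persistenceId: String = s"account-$accountId"

  private var balance = BigDecimal(0)

  // Live messages: persist an event, then update state once it is journaled.
  override def receiveCommand: Receive = {
    case Deposit(amount) =>
      persist(Deposited(amount)) { event =>
        balance += event.amount
      }
  }

  // On restart, the journal is replayed to rebuild the balance.
  override def receiveRecover: Receive = {
    case Deposited(amount) => balance += amount
  }
}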

With the capabilities of Akka clustering and cluster sharding, it is possible to build a system that uses what has been called a distributed domain. This is the technique of having a single actor holding state for an instance of a domain (say, a customer) somewhere in a cluster-sharded actor system. We will explore this pattern in some depth later in this book.

If the actor in question is for a customer whose last name begins with “A,” its journal need be kept only on the nodes in that shard region—because we can guarantee that the actor for that customer starts up only on one of those nodes—as opposed to the usual cluster situation, in which an actor can be started anywhere in the cluster. You need to take care when applying this technique, however, because it restricts the flexibility of the cluster significantly.

Cluster singleton

Another refinement in clustering is the cluster singleton—that is, a specific actor that must always have one, and exactly one, instance in the cluster. It doesn’t matter where it is in the cluster, but it’s important that there is just the one.

In this case, Akka provides a special means to ensure this: each node is capable of starting the singleton, but only one node can do so at a time. If that actor for some reason fails, the same or another node will start it back up again, ensuring that it remains available as much as possible. Every node that needs to send messages to the singleton has a proxy, which routes messages to the singleton no matter where it is in the cluster.

Of course, as with any singleton, there are disadvantages, but the failover mechanism takes away at least one of them. You can still end up with a performance bottleneck, however, so you must use cluster singletons with care.
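Here is a hedged sketch of the classic singleton setup (CoordinatorActor is hypothetical): every node starts the manager, but only one instance of the singleton runs at a time, and messages reach it through a proxy:

import akka.actor.{Actor, ActorSystem, PoisonPill, Props}
import akka.cluster.singleton.{ClusterSingletonManager, ClusterSingletonManagerSettings,
  ClusterSingletonProxy, ClusterSingletonProxySettings}

class CoordinatorActor extends Actor {
  def receive = {
    case task => // coordinate the task; only one instance exists cluster-wide
  }
}

object SingletonSketch extends App {
  val system = ActorSystem("ClusterSystem")

  // Started on every node; only one node actually runs the singleton.
  system.actorOf(
    ClusterSingletonManager.props(
      singletonProps = Props[CoordinatorActor],
      terminationMessage = PoisonPill,
      settings = ClusterSingletonManagerSettings(system)),
    name = "coordinator")

  // The proxy routes messages to wherever the singleton currently lives.
  val coordinator = system.actorOf(
    ClusterSingletonProxy.props(
      singletonManagerPath = "/user/coordinator",
      settings = ClusterSingletonProxySettings(system)),
    name = "coordinatorProxy")

  coordinator ! "rebalance-accounts"
}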

Akka HTTP

Inspired by the Spray HTTP library, Akka HTTP replaces it while providing a deeper integration with Akka and Akka Streams. It provides a way to build HTTP APIs on top of Akka and is the recommended approach for interfacing Akka with other HTTP-based systems.
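As a brief, hedged illustration (the routing DSL shown here reflects the Akka 2.4-era API; details differ across Akka HTTP versions), a minimal endpoint looks something like this:

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._
import akka.stream.ActorMaterializer

object HttpSketch extends App {
  implicit val system = ActorSystem("http-demo")
  implicit val materializer = ActorMaterializer()

  // A single route: GET /status responds with a plain-text body.
  val route =
    path("status") {
      get {
        complete("OK")
      }
    }

  Http().bindAndHandle(route, "localhost", 8080)
}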

TestKit

Core to any good toolkit is the ability to test the code you produce with it, and Akka is no exception. Asynchronous and concurrent programs are notoriously difficult to test, but the facilities that TestKit provides allow you to verify functionality easily and completely, and even to write your code test-first if you choose to do so, with none of the normal difficulties of asynchronous development.

TestKit overcomes one of the most intractable problems of concurrent and distributed systems: how can you test such a system repeatedly and reliably?

The nature of concurrent execution brings with it the difficulty that the exact order of operations is in fact unknown. The same is true of distributed systems, which are, of course, also concurrent but with the added complexity that more than one physical system is involved, introducing elements such as networking and serialization/deserialization into the testing equation.

TestKit enables this in two ways. The simplest is a mode that, for the scope of a test, allows access to an actor in a fully synchronous and deterministic fashion. This makes an actor no more difficult to test than any other code. However, it has the potential to hide or even create problems: a fully synchronous and deterministic actor does not represent the real world, and testing this way can change the behavior of the actor so that a test passes (or fails) where the real, asynchronous actor would behave differently.

The second capability that TestKit brings to testing is a means to verify messages sent and received between actors in their normal nondeterministic and asynchronous modes: it does this by providing an easy way to stub or fake actors, along with methods to assert that certain messages have been received within specific timeout periods, without the need to specify in what order the actual operations occurred.
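A small sketch of that second style: a TestProbe stands in as the sender of a message to a hypothetical EchoActor, and the test asserts on the reply within TestKit's timeout:

import akka.actor.{Actor, ActorSystem, Props}
import akka.testkit.TestProbe

class EchoActor extends Actor {
  def receive = {
    case msg => sender() ! msg
  }
}

object EchoSpecSketch extends App {
  val system = ActorSystem("test")
  val probe = TestProbe()(system)
  val echo = system.actorOf(Props[EchoActor])

  // The probe stands in as the sender, so the reply comes back to it.
  echo.tell("ping", probe.ref)

  // Fails the test if no matching message arrives within the timeout.
  probe.expectMsg("ping")

  system.terminate()
}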

A final add-on, which is not specifically a part of the Akka TestKit, but works very well with it, is the multi-JVM test. This is a means for a single test to launch an entirely new instance of the JVM, simulating a network of nodes and allowing interaction between the isolated actor systems to be verified. This feature was developed by the Akka team to test actors in close-to-production setups, across multiple virtual machines (VMs), and even multiple physical nodes.

Contrib

A library called “contrib” is available for Akka, which contains many different contributed tools that have not made their way into their own top-level modules yet. This includes useful tools for throttling messages, aggregating them, and more. It is worth exploring, with many uncut gems inside.

Akka OSGi

The OSGi model has much to recommend it as a host environment for Akka, and this module provides specific support for it, allowing the actor systems of Akka-based modules to be initialized at the right point in the OSGi lifecycle.

Akka HTTP

Akka HTTP provides the recommended way to expose actors as REST endpoints, and build RESTful web services.

Akka Streams

Akka Streams provides a higher-level API for interacting with actors while at the same time providing automatic handling of “back pressure” (which we will discuss in detail later on).

Streams provides a way of building complex structures called streams and graphs that are based on actors. You can use these structures to consume and transform data via a convenient and familiar domain-specific language (DSL).

These streams follow the Reactive Streams initiative.
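A minimal sketch of a stream (using the ActorMaterializer of the Akka 2.4 era): a source of numbers flows through a transformation into a sink, with backpressure handled by the library:

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Flow, Sink, Source}

object StreamsSketch extends App {
  implicit val system = ActorSystem("streams-demo")
  implicit val materializer = ActorMaterializer()

  val numbers = Source(1 to 10)
  val doubled = Flow[Int].map(_ * 2)
  val printer = Sink.foreach[Int](println)

  // Materialize and run the stream: 2, 4, 6, ... 20 are printed.
  numbers.via(doubled).runWith(printer)
}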

Akka’s Implementation of the Actor Model

We have said that Akka provides the Actor Model (and a few other things) on the JVM. Let’s examine that a bit more closely, and see how Akka gives us the attributes of the Actor Model specifically.

Akka’s Actors in the Actor Model

The fundamental unit of computation in the Actor Model is the actor, and Akka provides this directly, with a number of traits that provide actors and specializations of actors. In an upcoming version of Akka, typed actors will remove the trait called “Actor” and replace it with traits specifically for defining the behavior of actors, which amounts to the same thing semantically.

Actors in Akka are where the computation occurs, as prescribed by Hewitt.

A simple actor looks like this:

import akka.actor.Actor

class DemoActor extends Actor {
  def receive = {
    case _ => println("Received a message")
  }
}

Message Passing

The Actor Model specifies that message passing should be the only way for actors to communicate, and that all processing should occur in response to this message passing.

In Akka, messages are the only means for actors to interact with one another and the outside world. Messages are passed to other actors via a nonblocking queue called the mailbox, which is normally first-in/first-out but can take other forms if desired. The object reference of the actor itself is never used directly; instead, all messages go through an intermediary called the ActorRef (for Actor Reference).

To be precise, there are three message-passing mechanisms in Akka. Let's take a look at each one.

Tell

The preferred mechanism for sending messages is the tell, sometimes called “bang,” based on the name of the shorthand for the method, “!”.

You specify a “tell” by using the “!” operator, like this:

projectsActor ! List

Which is equivalent to calling the tell method directly, supplying an explicit sender (here, none):

projectsActor.tell(List, ActorRef.noSender)

(Let’s assume that List is a case object here.)

The tell method is the classic “fire and forget” message in the Actor Model: it neither blocks nor waits for any response.

Ask

You need to use caution when using the ask method, because it is easy to write blocking code with ask that significantly reduces the advantages of an asynchronous system.

The shorthand for ask is “?”, as demonstrated here:

projectsActor ? List

An ask indicates that a response is expected. The response is captured in a future because the actor to which the ask is sent might not respond immediately.
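Continuing the hypothetical projectsActor example (and its List case object), a hedged sketch of working with the resulting future looks like this:

import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.Future
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// The ask must know how long to wait before the future fails with a timeout.
implicit val timeout: Timeout = Timeout(5.seconds)

val futureProjects: Future[Any] = projectsActor ? List

// Prefer non-blocking transformations of the future over Await.result.
futureProjects.foreach(projects => println(s"Projects: $projects"))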

Publish/subscribe

The final means of message sending in Akka is via the event bus, described previously. The sender of a message uses a reference to the event bus to publish a message.

A receiver of the message must independently subscribe to the type of the message, and will therefore receive every message of that type that another actor publishes. The two actors are not directly aware of each other, and no sender reference is available for a message received this way.

Actor Systems

A single actor is no actor at all, according to the Actor Model. Akka provides the concept of actor systems; that is, groups of actors that are able to exchange messages readily, whether they are running in a single JVM or across many.

Although it is possible for actors in different actor systems to communicate, it is not common.

An actor system also serves as the facility to create or locate actors, like so:

import akka.actor.{Actor, ActorSystem, Props}

val system = ActorSystem("demo")

val actor = system.actorOf(Props[DemoActor])

Creating new actors

Actors in Akka can be created both from outside the system, as well as by other actors, satisfying Hewitt’s requirement that new actors can create “child” actors.

Changing behavior

Actors in Akka have the ability to swap out the behavior that they use to respond to messages with another behavior, which can handle perhaps a completely different set of messages. Akka does this with the become and unbecome operations with untyped actors. Akka Typed, a future version of actors that we will explore in more detail in just a moment, returns the behavior for handling the next message after every message is handled.

Akka goes a step further and supplies a convenient DSL for creating finite-state machine actors specifically, a common use case for changing behavior.
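For instance, the classic FSM trait, sketched here with a hypothetical turnstile, expresses behavior changes as explicit state transitions:

import akka.actor.FSM

sealed trait TurnstileState
case object Locked extends TurnstileState
case object Unlocked extends TurnstileState

case object Coin
case object Push

// No state data is needed here, so Unit is used as the data type.
class Turnstile extends FSM[TurnstileState, Unit] {
  startWith(Locked, ())

  when(Locked) {
    case Event(Coin, _) => goto(Unlocked)
  }

  when(Unlocked) {
    case Event(Push, _) => goto(Locked)
  }

  initialize()
}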

The Akka Typed Project

In Akka 2.4, an experimental project has been added to the main Akka distribution: Akka Typed.

This project is actually the latest in a series of efforts to reconcile the strongly typed nature of Scala with the nature of actor receive methods. Currently, you can think of a receive method as, in essence, a partial function that accepts any type and returns Unit.

This is because any message class can be sent to any actor—nothing in the type system stops this, although many custom attempts have been made to introduce type constraints.

Akka Typed starts from a simple premise: an actor is all about its behavior, and that behavior can have a type. That type indicates the set of messages that are valid to be handled by that behavior, and no others can be handled. Thus, why not have the compiler prevent them from being sent in the first place?

Akka Typed also eliminates the Actor trait itself, instead supplying a few types to directly declare behaviors and the types of the messages they accept.

Instead of lifecycle methods, Akka Typed follows the Actor Model even more strictly: lifecycle events are represented by messages, instead.

In this new version of Akka, the entire actor system is typed, as well. Creating an actor system provides a constructor that takes a Props object for the top-level actor.

Akka Typed is still experimental, but shows great promise.
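For orientation, here is a hedged sketch written against the Akka Typed API as it later stabilized, which differs in detail from the experimental module described here; the compiler rejects any message to this system that is not a Greet:

import akka.actor.typed.{ActorSystem, Behavior}
import akka.actor.typed.scaladsl.Behaviors

// Only Greet messages can be sent to an ActorRef[Greet].
final case class Greet(name: String)

object Greeter {
  def apply(): Behavior[Greet] =
    Behaviors.receiveMessage { message =>
      println(s"Hello, ${message.name}")
      Behaviors.same // the behavior to use for the next message
    }
}

object TypedSketch extends App {
  val system: ActorSystem[Greet] = ActorSystem(Greeter(), "typed-demo")
  system ! Greet("World")
}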

Conclusion

Akka is a rich implementation of the Actor Model designed to work with your existing JVM languages, allowing you to take full advantage of actors from the languages you already use.

If you weren’t already using a JVM-based language, however, the opportunity to work with the huge and mature ecosystem of libraries available on the JVM still makes Akka an attractive option.

Now we have introduced both the model and the implementation of that model that we will use for the remainder of our discussions. In Chapter 3, we will start discussing the architecture and design approaches that can allow you to make full use of actors with Akka.
