In this section, the dockerized BrownField microservice developed in Chapter 8, Containerizing Microservices with Docker, will be deployed into the AWS cloud and managed with Mesos and Marathon.
For the purposes of demonstration, only three of the services (Search, Search API Gateway, and Website) are covered in the explanations:
The logical architecture of the target state implementation is shown in the preceding diagram. The implementation uses a single Mesos master with multiple Mesos slaves to execute the dockerized microservices, and the Marathon scheduler component to schedule them. The Docker images of the microservices are hosted on the Docker Hub registry, and the microservices themselves are implemented using Spring Boot and Spring Cloud.
The following diagram shows the physical deployment architecture:
As shown in the preceding diagram, in this example, we will use four EC2 instances:
For a real production setup, multiple Mesos masters as well as multiple instances of Marathon are required for fault tolerance.
Launch the four t2.micro EC2 instances that will be used for this deployment. All four instances have to be in the same security group so that the instances can see each other through their private IP addresses.
The following tables show the machine details and IP addresses for indicative purposes and to link subsequent instructions:
Instance ID | Private DNS/IP | Public DNS/IP
---|---|---
 | |
 | |
 | |
 | |
Replace the IP and DNS addresses based on your AWS EC2 configuration.
The following software versions will be used for the deployment. The deployment in this section follows the physical deployment architecture explained in the earlier section:
The detailed instructions to set up ZooKeeper, Mesos, and Marathon are available at https://open.mesosphere.com/getting-started/install/.
Perform the following steps for a minimal installation of ZooKeeper, Mesos, and Marathon to deploy the BrownField microservice:
sudo apt-get -y install oracle-java8-installer
sudo apt-get -y install docker.io
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv E56151BF
DISTRO=$(lsb_release -is | tr '[:upper:]' '[:lower:]')
CODENAME=$(lsb_release -cs)
# Add the repository
echo "deb http://repos.mesosphere.com/${DISTRO} ${CODENAME} main" | sudo tee /etc/apt/sources.list.d/mesosphere.list
sudo apt-get -y update
sudo apt-get -y install mesos marathon
Repeat the preceding steps on all three EC2 instances reserved for Mesos slave execution. As the next step, ZooKeeper and Mesos have to be configured on the machine identified as the Mesos master.
Connect to the machine reserved for the Mesos master and the Marathon scheduler. In this case, 172.31.54.69 will be used to set up ZooKeeper, the Mesos master, and Marathon.
There are two configuration changes required in ZooKeeper, as follows:

First, set /etc/zookeeper/conf/myid to a unique integer between 1 and 255. Open the file with vi /etc/zookeeper/conf/myid and set the value to 1.
Next, edit /etc/zookeeper/conf/zoo.cfg. Update the file to reflect the following changes:

# specify all zookeeper servers
# The first port is used by followers to connect to the leader
# The second one is used for leader election
server.1=172.31.54.69:2888:3888
#server.2=zookeeper2:2888:3888
#server.3=zookeeper3:2888:3888
Replace the IP addresses with the relevant private IP address. In this case, we will use only one ZooKeeper server, but in a production scenario, multiple servers are required for high availability.
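Both ZooKeeper edits can be scripted as follows. This is a minimal sketch: ZK_CONF defaults to a scratch directory so the commands can be dry-run without root; on the real master, set ZK_CONF=/etc/zookeeper/conf and run with sudo.

```shell
# Sketch of the two ZooKeeper configuration changes.
# ZK_CONF defaults to a scratch directory for a safe dry run.
ZK_CONF=${ZK_CONF:-/tmp/zk-conf-demo}
mkdir -p "$ZK_CONF"
# Unique server ID (1-255); this host is server.1
echo 1 > "$ZK_CONF/myid"
# Register this server in the ensemble definition
printf 'server.1=172.31.54.69:2888:3888\n' >> "$ZK_CONF/zoo.cfg"
```

Replace 172.31.54.69 with your master's private IP before running it on a real host.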
Make changes to the Mesos configuration to point to ZooKeeper, set up a quorum, and enable Docker support via the following steps:
Edit /etc/mesos/zk to set the following value. This points Mesos to the ZooKeeper instance used for quorum and leader election:

zk://172.31.54.69:2181/mesos
Next, edit the /etc/mesos-master/quorum file and set the value to 1. In a production scenario, a minimum quorum of three is needed:

vi /etc/mesos-master/quorum
Finally, enable the Docker containerizer in the mesos-slave configuration:

echo 'docker,mesos' > /etc/mesos-slave/containerizers
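The three Mesos configuration changes above can be applied in one pass. This is a sketch: CONF_ROOT defaults to a scratch directory for a dry run; on the real hosts, set CONF_ROOT=/etc and run with sudo, replacing the IP with your master's private IP.

```shell
# Sketch: the three Mesos configuration changes in one pass.
# CONF_ROOT defaults to a scratch directory for a safe dry run.
MESOS_MASTER_IP=172.31.54.69
CONF_ROOT=${CONF_ROOT:-/tmp/mesos-conf-demo}
mkdir -p "$CONF_ROOT/mesos" "$CONF_ROOT/mesos-master" "$CONF_ROOT/mesos-slave"
# Point Mesos at ZooKeeper for quorum and leader election
echo "zk://${MESOS_MASTER_IP}:2181/mesos" > "$CONF_ROOT/mesos/zk"
# Single-master quorum (use at least 3 in production)
echo 1 > "$CONF_ROOT/mesos-master/quorum"
# Enable the Docker containerizer alongside the default Mesos one
echo 'docker,mesos' > "$CONF_ROOT/mesos-slave/containerizers"
```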
All the required configuration changes are now in place. The easiest way to start Mesos, Marathon, and ZooKeeper is to run them as services, as follows:
sudo service zookeeper start
sudo service mesos-master start
sudo service mesos-slave start
sudo service marathon start
The services can be stopped with the following commands:

sudo service zookeeper stop
sudo service mesos-master stop
sudo service mesos-slave stop
sudo service marathon stop
In this example, instead of using the Mesos slave service, we will use a command-line version to invoke the Mesos slave to showcase additional input parameters. Stop the Mesos slave and use the command line as mentioned here to start the slave again:
$ sudo service mesos-slave stop
$ sudo /usr/sbin/mesos-slave --master=172.31.54.69:5050 --log_dir=/var/log/mesos --work_dir=/var/lib/mesos --containerizers=mesos,docker --resources="ports(*):[8000-9000, 31000-32000]"
The command-line parameters used are explained as follows:

--master=172.31.54.69:5050: This parameter tells the Mesos slave which Mesos master to connect to. In this case, there is only one master, running at 172.31.54.69:5050. All the slaves connect to the same Mesos master.

--containerizers=mesos,docker: This parameter enables support for Docker container execution as well as noncontainerized execution on the Mesos slave instances.

--resources="ports(*):[8000-9000, 31000-32000]": This parameter indicates that the slave can offer both ranges of ports when binding resources. The range 31000 to 32000 is the default. As we are using port numbers starting at 8000, it is important to tell the Mesos slave to allow exposing ports from 8000 as well.

Perform the following steps to verify the installation of Mesos and Marathon:
I0411 18:11:39.684809 16665 slave.cpp:1030] Forwarding total oversubscribed resources
The preceding message indicates that the Mesos slave started sending the current state of resource availability periodically to the Mesos master.
Open http://54.85.107.37:8080 to inspect the Marathon UI. Replace the IP address with the public IP address of the EC2 instance. As there are no applications deployed so far, the Applications section of the UI is empty.
Open the Mesos console, which runs on port 5050, by going to http://54.85.107.37:5050. The Slaves section of the console shows that there are three activated Mesos slaves available for execution. It also indicates that there are no active tasks.
In the previous section, we successfully set up Mesos and Marathon. In this section, we will take a look at how to deploy the BrownField PSS application previously developed using Mesos and Marathon.
Update the bootstrap.properties files to reflect the Config server IP address. The hostname can be set using the eureka.instance.hostname property. However, when running on AWS specifically, an alternate approach is to define a bean in the microservices to pick up AWS-specific information, as follows:
@Configuration
class EurekaConfig {
    @Bean
    public EurekaInstanceConfigBean eurekaInstanceConfigBean() {
        EurekaInstanceConfigBean config = new EurekaInstanceConfigBean(new InetUtils(new InetUtilsProperties()));
        AmazonInfo info = AmazonInfo.Builder.newBuilder().autoBuild("eureka");
        config.setDataCenterInfo(info);
        info.getMetadata().put(AmazonInfo.MetaDataKey.publicHostname.getName(),
                info.get(AmazonInfo.MetaDataKey.publicIpv4));
        config.setHostname(info.get(AmazonInfo.MetaDataKey.localHostname));
        config.setNonSecurePortEnabled(true);
        config.setNonSecurePort(PORT);
        config.getMetadataMap().put("instanceId", info.get(AmazonInfo.MetaDataKey.localHostname));
        return config;
    }
}
The preceding code provides a custom Eureka instance configuration that picks up the Amazon host information through the Netflix APIs. The code overrides the hostname and instance ID with the private DNS. The port is read from the Config server. This code also assumes one host per service so that the port number stays constant across multiple deployments. This can also be overridden by dynamically reading the port binding information at runtime.
The previous code has to be applied in all microservices.
docker build -t search-service:1.0 .
docker tag search-service:1.0 rajeshrv/search-service:1.0
docker push rajeshrv/search-service:1.0

docker build -t search-apigateway:1.0 .
docker tag search-apigateway:1.0 rajeshrv/search-apigateway:1.0
docker push rajeshrv/search-apigateway:1.0

docker build -t website:1.0 .
docker tag website:1.0 rajeshrv/website:1.0
docker push rajeshrv/website:1.0
The Docker images are now published to the Docker Hub registry. Perform the following steps to deploy and run BrownField PSS services:
{
  "id": "search-service-1.0",
  "cpus": 0.5,
  "mem": 256.0,
  "instances": 1,
  "container": {
    "docker": {
      "type": "DOCKER",
      "image": "rajeshrv/search-service:1.0",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 0, "hostPort": 8090 }
      ]
    }
  }
}
The preceding JSON code will be stored in the search.json file. Similarly, create a JSON file for each of the other services as well.
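The per-service JSON files can also be generated from a single template, as in the following sketch. The search-service host port (8090) and the website port (8001) come from this section; the 8095 port for search-apigateway and the <service>.json file names are illustrative placeholders.

```shell
# Sketch: generate one Marathon app definition per service from a template.
# Ports 8090 (search-service) and 8001 (website) are from the text;
# 8095 for search-apigateway is an illustrative placeholder.
make_app_json() {
  name=$1; port=$2
  cat > "${name}.json" <<EOF
{
  "id": "${name}-1.0",
  "cpus": 0.5,
  "mem": 256.0,
  "instances": 1,
  "container": {
    "docker": {
      "type": "DOCKER",
      "image": "rajeshrv/${name}:1.0",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 0, "hostPort": ${port} }
      ]
    }
  }
}
EOF
}
make_app_json search-service 8090
make_app_json search-apigateway 8095
make_app_json website 8001
```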
The JSON structure is explained as follows:

id: This is the unique ID of the application. This can be a logical name.

cpus and mem: These set the resource constraints for this application. If a resource offer does not satisfy these constraints, Marathon will reject the offer from the Mesos master.

instances: This decides how many instances of this application to start with. In the preceding configuration, it starts one instance as soon as it gets deployed. Marathon maintains the requested number of instances at any point.

container: This parameter tells the Marathon executor to use a Docker container for execution.

image: This tells the Marathon scheduler which Docker image has to be used for deployment. In this case, it will download the search-service:1.0 image from the rajeshrv repository on the Docker Hub.

network: This value advises the Docker runtime on the network mode to be used when starting the new Docker container. This can be BRIDGE or HOST. In this case, the BRIDGE mode will be used.

portMappings: The port mappings provide information on how to map internal and external ports. In the preceding configuration, the host port is set to 8090, which tells the Marathon executor to use 8090 when starting the service. As the container port is set to 0, the same host port will be assigned to the container. Marathon picks random ports if the host port value is 0.

A health check can also be added to the application definition, as follows:

"healthChecks": [
  {
    "protocol": "HTTP",
    "portIndex": 0,
    "path": "/admin/health",
    "gracePeriodSeconds": 100,
    "intervalSeconds": 30,
    "maxConsecutiveFailures": 5
  }
]
curl -X POST http://54.85.107.37:8080/v2/apps -d @search.json -H "Content-type: application/json"
Repeat this step for all the other services as well.
The preceding step will automatically deploy the Docker container to the Mesos cluster and start one instance of the service.
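The repeated curl calls can be staged in a small loop. This sketch assumes one JSON file per service named search-service.json, search-apigateway.json, and website.json (illustrative names); it only writes the commands to deploy.sh for review, so run sh deploy.sh afterwards to actually submit them, replacing the IP with your Marathon host's public IP.

```shell
# Sketch: stage one Marathon deployment call per service in deploy.sh.
MARATHON=http://54.85.107.37:8080
: > deploy.sh
for f in search-service.json search-apigateway.json website.json; do
  echo "curl -X POST $MARATHON/v2/apps -d @$f -H \"Content-type: application/json\"" >> deploy.sh
done
```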
The steps for this are as follows:
The Scale Application button allows administrators to specify how many instances of the service are required. This can be used to scale up as well as scale down instances.
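Scaling can also be driven through Marathon's REST API: a PUT to /v2/apps/{appId} with a new "instances" count. The following sketch writes the command to scale.sh for review rather than calling the cluster directly; the IP and app ID match the earlier examples.

```shell
# Sketch: stage a REST call that scales search-service-1.0 to 2 instances.
MARATHON=http://54.85.107.37:8080
cat > scale.sh <<EOF
curl -X PUT $MARATHON/v2/apps/search-service-1.0 \\
  -d '{"instances": 2}' -H "Content-type: application/json"
EOF
```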
Open http://52.205.251.150:8761 to check the Eureka console. Then, open http://54.172.213.51:8001 in a browser to verify the Website application.