This chapter explores the next level of detail regarding edge workloads and includes your first hands-on activity. You will learn how AWS IoT Greengrass meets the needs of designing and delivering modern edge ML solutions. You will learn how to prepare your edge device to work with AWS by deploying a tool that checks your device against the software's requirements. Additionally, you will install the IoT Greengrass core software and deploy your first IoT Greengrass core device. Finally, you will examine the structure of components, the fundamental unit of software in IoT Greengrass, and write your first edge workload component.
By the end of this chapter, you should start to feel comfortable with the basics of IoT Greengrass and its local development life cycle.
In this chapter, we are going to cover the following main topics:
The technical requirements for this chapter are the same as those described in the Hands-on prerequisites section in Chapter 1, Introduction to the Data-Driven Edge with Machine Learning. Please refer to the full requirements mentioned in that chapter. As a reminder, you will need the following:
You can access this chapter's technical resources from the GitHub repository, under the chapter2 folder, at https://github.com/PacktPublishing/Intelligent-Workloads-at-the-Edge/tree/main/chapter2.
The previous chapter introduced the concept of an edge solution along with the three key kinds of tools that define an edge solution with ML applications. This chapter provides more detail regarding the layers of an edge solution. The three layers addressed in this section are as follows:
Learning more about these layers is important because they will inform how you, as the IoT architect, make trade-offs when designing your edge ML solution. First, we'll start by defining the business logic layer.
The business logic layer is where all the code of your edge solution lives. This code can take many shapes, such as precompiled binaries (such as a C program), shell scripts, code evaluated by a runtime (such as a Java or Python program), and ML models. Additionally, code can be organized in a few different ways such as shipping everything into a monolithic application, splitting up code into services or libraries, or bundling code to run in a container. All of these options come with implications for architecting and shipping an edge ML solution, such as security, cost, consistency, productivity, and durability. Some of the challenges of delivering code for the business logic layer are as follows:
To address the challenges of writing your business logic layer, the best practice for shipping code to the edge is to use isolated services where practical.
On your Home Base Solutions hub device (the fictional product we are creating from the story in Chapter 1, Introduction to the Data-Driven Edge with Machine Learning), code will be deployed and run as isolated services. In this context, a service is a self-contained unit of business logic that is either invoked by another entity to perform a task or performs a task on its own. Isolation means that the service bundles the code, resources, and dependencies it needs for its operation. For example, a service you will create in Chapter 7, Machine Learning Workloads at the Edge, will run code to read from a data source or collection of images, periodically compute inferences using a bundled ML model, and then publish any inference results to a local stream or the cloud. This pattern of isolated services is selected for two reasons:
You can deploy updates to individual services without touching other running services, thereby reducing the risk of impact on them. A decoupled, service-oriented architecture is a best practice for designing well-architected cloud solutions, and it is also a good fit for edge ML solutions, where multiple services run simultaneously and reliability is paramount. For example, a service that interfaces with a sensor writes new measurements to a data structure and nothing more; it has a single job and doesn't need to be aware of how the data is consumed by a later capability.
Examples of isolation include Python virtual environments, which explicitly specify a Python runtime version and packages, and Docker Engine, which uses containers to bundle dependencies and resources and to achieve process isolation on the host. The following diagram illustrates the separation of concerns achieved with isolated services:
In tandem, the patterns of isolation and services offer compelling benefits for edge ML solutions. Of course, every decision in development comes with trade-offs. The solution would be simpler if deployed as a single monolith of code, and a minimum viable product would be faster to reach. We opt for more architectural complexity because it leads to better resiliency and easier scaling of the solution over time, and we lean on strong patterns and good tooling to balance that complexity.
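As a small illustration of the isolation technique just mentioned, the sketch below creates a per-service Python virtual environment. It assumes python3 with the venv module is available on the host, and /tmp/inference-svc is a hypothetical service directory, not a path from the Home Base Solutions project:

```shell
# Give the service its own isolated Python environment, pinned to the
# interpreter used to create it.
python3 -m venv /tmp/inference-svc/venv

# The service is launched with its own interpreter, not the system one;
# printing sys.prefix confirms which environment is active.
/tmp/inference-svc/venv/bin/python -c "import sys; print(sys.prefix)"
```

Anything the service installs with its own pip stays inside that directory, so two services can pin conflicting package versions without colliding.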
IoT Greengrass is designed with this pattern in mind. Later in this chapter, and throughout the book, you will learn how to use this pattern with IoT Greengrass to develop well-architected edge ML solutions.
A cyber-physical solution is defined by the use of physical interfaces to interact with the analog world. These interfaces come in two classifications: sensors for taking measurements from the analog world and actuators for exerting change back upon it. Some machines do both, such as a refrigerator that senses internal temperature and activates its compressor to cycle a refrigerant through coils. In these cases, the aggregation of sensing and actuating is logical, meaning the sensor and actuator have a relationship but are functionally independent and are coordinated via a mechanism such as a switch, circuit, or microcontroller.
Sensors that perform analog-to-digital conversion do so by sampling the voltage of an electrical signal and converting it into a digital value. These digital values are interpreted by code to derive data such as temperature, light, and pressure. Actuators convert digital signals into analog actions, typically by manipulating the voltage going to a switch or circuit. A command to engage a motor is interpreted as raising a voltage to the level that activates the circuit. Diving deeper into the electrical engineering of physical interfaces is beyond the scope of this book; please refer to the References section for recommendations for further reading on that topic. The following diagram shows a simple analog example of a refrigerator and the relationship between the thermostat (sensor), switch (controller), and compressor (actuator):
It's important to understand the patterns of input and output delivered by a cyber-physical solution and their relationship to the higher-level outcome delivered through an edge ML solution. Throughout the project delivered in this book, you will gain hands-on experience applying these patterns. Some of the services of the Home Base Solutions hub device will serve as interfaces to the physical layer, providing new measurements from the sensors and converting commands into changes to the state of local devices. If you are working with a physical edge device, such as the Raspberry Pi, you will get some experience using code to interact with the physical interfaces of that device.
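To make the sensor side of the conversion concrete, here is a small numeric sketch that scales a raw ADC count into a voltage and then a temperature. The 10-bit resolution, 3.3 V reference, and TMP36-style transfer function (500 mV offset, 10 mV per degree Celsius) are illustrative assumptions, not values from the Home Base Solutions design:

```shell
RAW=310    # hypothetical sample from a 10-bit ADC (0-1023)
VREF=3.3   # ADC reference voltage in volts

awk -v raw="$RAW" -v vref="$VREF" 'BEGIN {
  volts = raw / 1023 * vref        # counts -> volts
  degc  = (volts - 0.5) * 100      # TMP36-style: volts -> degrees Celsius
  printf "voltage=%.3fV temperature=%.1fC\n", volts, degc
}'
# prints: voltage=1.000V temperature=50.0C
```

The code that interprets these values is exactly the kind of business logic that lives in a sensor-interface service.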
The third layer to introduce for our edge solution anatomy is the network interface. A differentiator between our definitions of cyber-physical solutions and edge solutions is that an edge solution will, at some point, interact with another entity over a network. For example, the design of our new appliance monitoring kit for Home Base Solutions uses wireless communication between the monitoring kit and the hub device. There is no physical connection between the two for the purposes of analog-to-digital signal conversion from the monitor's sensors.
Similarly, the hub device will also exchange messages with a cloud service to store telemetry to use in the training of the ML model, to deploy new resources to the device, and to alert customers of recognized events. The following diagram illustrates the flow of messages and the relationships between a Sensor, Actuator, hub device (Gateway), and the cloud service:
Wireless communications are common in IoT solutions, and specific implementations enable connectivity at a wide range of distances. Each specification and implementation makes trade-offs for range, data transmission rate, hardware cost, and energy consumption. Short-range radio specifications such as Zigbee (IEEE 802.15.4), Bluetooth (IEEE 802.15.1), and Wi-Fi (IEEE 802.11) are suitable for bridging devices within personal and local area networks. Long-range radio specifications such as conventional cellular networks (for example, GSM, CDMA, and LTE) and low-power wide-area networks (LPWANs) such as LoRaWAN and NB-IoT deliver connectivity options for devices deployed (either static or roaming) across a particular campus, city, or region.
Wired communications are still common for connecting edge devices such as TVs, game consoles, and PCs to home network solutions such as switches and routers over Ethernet. Wired connectivity is less common in smart home products due to the limited number of Ethernet ports on home networking routers (typically, there are only 1–8 ports), the restrictions regarding where the device can be placed, and the burden of adding wires throughout the home.
For example, the Home Base Solutions appliance monitoring kit would likely use Zigbee, or an equivalent implementation, and a battery to balance energy consumption with the anticipated data rate. If the kit required a power supply from a nearby outlet, Wi-Fi would become more of an option; however, it would limit the overall product's utility, as the locations of the kinds of appliances to be monitored don't always have a spare outlet. Additionally, it wouldn't make sense to use Ethernet to connect the kits to the hub directly, since customers likely wouldn't find all of the extra wires running throughout the home appealing. The hub device that communicates with the kit could use Ethernet or Wi-Fi to bridge to the customer's local network to gain access to the public internet.
Now that you have a better understanding of the three layers of an edge solution, let's evaluate the selected edge runtime solution and how it implements each layer.
The most important question to answer in a book about using IoT Greengrass to deliver edge ML solutions is why IoT Greengrass? When evaluating the unique challenges of edge ML solutions and the key tools required to deliver them, you want to select tools that solve as many problems for you as possible while staying out of your way in terms of being productive. IoT Greengrass is a purpose-built tool with IoT and ML solutions at the forefront of the value proposition.
IoT Greengrass is prescriptive in how it solves the problem of the undifferentiated heavy lifting of common requirements while remaining non-prescriptive in how you implement your business logic. This means that the out-of-box experience yields many capabilities for rapid iteration without being obstructive about how you use them to reach your end goals. The following is a list of some of the capabilities that IoT Greengrass has to offer:
In your role as the Home Base Solutions architect with a solution to build, you could propose that the engineering team invest the time and resources to build out all of this functionality and test that it is production-ready. However, the IoT Greengrass baseline services and optional add-ons are ready to accelerate the development life cycle and come vetted by AWS, where security is the top priority.
IoT Greengrass does not do everything for you. In fact, a fresh installation of IoT Greengrass doesn't do anything but wait for further instruction in the form of a deployment. Think of it as a blank canvas, paint, and brush. It's everything you need to get started, but you have to develop the solution that it runs. Let's review the operating model for IoT Greengrass, both at the edge and within the cloud.
IoT Greengrass is both a managed service running on AWS and an edge runtime tool. The managed service is where your devices are defined individually and in groups. When you want to push a new deployment to the edge, you are, in fact, invoking an API in the managed service that is then responsible for communicating with the edge runtime to coordinate the delivery of that deployment. Here is a sequence diagram showing the process of you, as the developer, configuring a component and requesting that a device running the IoT Greengrass core software receive and run that component:
The component is the fundamental unit of functionality that is deployed to a device running IoT Greengrass. Components are defined by a manifest file, called a recipe, which tells IoT Greengrass the component's name, version, dependencies, and life cycle instructions. In addition, a component can define zero or more artifacts that are fetched during deployment. These artifacts can be binaries, source code, compiled code, archives, images, or data files; really, any kind of file or resource that is stored on disk. Component recipes can declare dependencies on other components, which the IoT Greengrass software resolves via a dependency graph.
During a deployment activity, one or more components are added or updated in that deployment. The component's artifacts are downloaded to the local device from the cloud; then, the component is started by evaluating its life cycle instructions. A life cycle instruction could be something that happens during startup, the main command to run (such as starting a Java application), or something to do after the component has finished running. Components might continue in the running state indefinitely or perform a task and exit. The following diagram provides an example of the component graph:
That's everything we need to cover before taking your first steps toward readying your edge device to run a solution with IoT Greengrass!
In the following sections, you will validate that your edge device is ready to run the IoT Greengrass software, install the software, and then author your first component. In addition to this, you will get a closer look at components and deployments through the hands-on activities to come.
IoT Device Tester (IDT) is a software client provided by AWS to assess device readiness for use in AWS IoT solutions. It assists developers by running a series of qualification tests to validate whether the destination system is ready to run the IoT Greengrass core software. Additionally, it runs a series of tests to prove that edge capabilities are already present, such as establishing MQTT connections to AWS and running ML models locally. IDT works for one device you are testing locally or scales up to run customized test suites against any number of device groups, so long as they are accessible over the network.
In the context of your role as the IoT architect at Home Base Solutions, you should use IDT to prove that your target edge device platform (in this case, platform refers to hardware and software) is capable of running the runtime orchestration tool of choice, IoT Greengrass. This pattern of using tools to prove compatibility is a best practice over manually evaluating the target platform and/or assuming that a certain combination of listed requirements is met. For example, a potential device platform might advertise that its hardware requirements and operating system meet your needs, but it could be missing a critical library dependency that doesn't surface until later in the development life cycle. It is best to prove early that everything you need is present and accounted for.
Note
IDT does more than qualify hardware for running the IoT Greengrass core software. The tool can also qualify hardware running FreeRTOS to validate that the device is capable of interoperating with AWS IoT Core. Developers can write their own custom tests and bundle them into suites to incorporate into their software development life cycle (SDLC).
The following steps will enable you to prepare your Raspberry Pi device for use as the edge system (that is, the Home Base Solutions hub device in our fictional project) and configure your AWS account before finally running the IDT software. Optionally, you can skip the Booting the Raspberry Pi and Configuring the AWS account and permissions sections if you already have a device and AWS account configured for use. If you are using a different platform for your edge device, you just need to ensure you can reach the device over SSH with a system user that has root permissions from your command-and-control device.
The following steps were run on a Raspberry Pi 3B with a clean installation of the May 2021 release of the Raspberry Pi OS. Please refer to https://www.raspberrypi.org/software/ for the Raspberry Pi Imager tool. Use your command-and-control system to run the imaging tool to flash a Micro SD card with a fresh image of the Raspberry Pi OS. For this book's project, we recommend that you use a blank slate to avoid any unforeseen consequences of preexisting software and configuration changes. Here is a screenshot of the Raspberry Pi Imager tool and the image to select:
The following is a list of steps to perform after flashing the Micro SD card with the Imager tool:
At this milestone, your Raspberry Pi device is configured to join the same local network as your command-and-control system and can be accessed via a remote shell session. If you are using a different device or a virtual machine as your edge device for the hands-on project, you should be able to access that device via SSH. A good test to check whether this is working properly is to try connecting to your edge device from your command-and-control system using a Terminal application (or PuTTY on Windows) with ssh pi@raspberrypi. Replace raspberrypi if you chose a different hostname in step 6. Next, you will configure your AWS account in order to run IDT on your edge device.
Complete all of the steps in this section on your command-and-control system. For readers who don't yet have an AWS account (skip to step 5 if you already have access to an account), perform the following:
Note
At this point, you should have access to an AWS account with an administrative user. Complete the following steps to set up the AWS CLI (please skip to step 7 if you already have the AWS CLI configured with your admin user).
Next, you will use your Admin user to create a few more resources in preparation for the following sections to use the IDT and install the IoT Greengrass core software. These are permissions resources, similar to your Admin user, which will be used by IDT and IoT Greengrass software to interact with AWS on your behalf.
That concludes all of the preparatory steps to configure your AWS account, permissions, and CLI. The next milestone is to install the IDT client and prepare it to test the Home Base Solutions prototype hub device.
You will run the IDT software from your command-and-control system, and the IDT will access the edge device system remotely through SSH to run the tests.
Note
The following steps reflect the configuration and use of IDT at the time of writing. If you get stuck, it might be that the latest version differs from the version that we used at the time of writing. You can check the AWS documentation for IDT for the latest guidance on installation, configuration, and use. Please refer to https://docs.aws.amazon.com/greengrass/v2/developerguide/device-tester-for-greengrass-ug.html.
Follow these steps to use IDT to verify the edge device system is ready to run IoT Greengrass. All of the following steps are completed on macOS using IoT Greengrass core software v2.4.0 and IDT v4.2.0 with test suite GGV2Q_2.0.1. For Windows, Linux, or later AWS software versions, please alter the commands and directories as needed:
"auth": {
  "method": "file",
  "credentials": {
    "profile": "idtgg"
  }
}
Running IDT will start a local application that connects to your edge device over SSH and completes the series of tests. It will stop upon encountering the first failed test case or run until all test cases have passed. If you are running IDT against a fresh installation of your Raspberry Pi, as defined by previous steps, you should observe an output similar to the following:
========== Test Summary ==========
Execution Time: 3s
Tests Completed: 2
Tests Passed: 1
Tests Failed: 1
Tests Skipped: 0
----------------------------------
Test Groups:
pretestvalidation: PASSED
coredependencies: FAILED
----------------------------------
Failed Tests:
Group Name: coredependencies
Test Name: javaversion
Reason: Encountered error while fetching java version on the device: Failed to run Java version command with error: Command '{java -version 2>&1 map[] 30s}' exited with code 127. Error output: .
The step to install Java on the Raspberry Pi was intentionally left out in order to demonstrate how IDT identifies missing dependencies; apologies for the deception! If you ran the IDT test suite and passed all of the test cases, then you are ahead of schedule and can skip to the Installing IoT Greengrass section.
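If your device failed the same javaversion test case, you can check for (and then install) a Java runtime before re-running the suite. The sketch below assumes a Debian-based system such as Raspberry Pi OS, where the default-jdk package provides a compatible OpenJDK:

```shell
# Check whether a Java runtime is already on the PATH; if not, print the
# install command rather than running it, so the check itself needs no root.
if command -v java >/dev/null 2>&1; then
  java -version
else
  echo "java not found; install it with: sudo apt update && sudo apt install -y default-jdk"
fi
```

After installing Java, re-run the IDT suite from your command-and-control system.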
Your test suite should now pass the Java requirement test. If you encounter other failures, you will need to use the test report and logs in the idt/devicetester_greengrass_v2_mac/results folder to triage and fix them. Some common missteps include missing AWS credentials, AWS credentials without sufficient permissions, and incorrect paths to resources defined in userdata.json. A fully passed suite of test cases looks like this:
========== Test Summary ==========
Execution Time: 22m59s
Tests Completed: 7
Tests Passed: 7
Tests Failed: 0
Tests Skipped: 0
----------------------------------
Test Groups:
pretestvalidation: PASSED
coredependencies: PASSED
version: PASSED
component: PASSED
lambdadeployment: PASSED
mqtt: PASSED
----------------------------------
This concludes the introductory use of IDT to analyze and assist in preparing your devices to use IoT Greengrass. Here, the best practice is to use software tests, not just for your own code but to assess whether the edge device itself is ready to work with your solution. Lean on tools such as IDT that do the heavy lifting of proving that the device is ready to use and validate this for each new type of device enrolled or major solution version released. You should be able to configure IDT for your next project and qualify a new device or group of devices to run IoT Greengrass. In the next section, you will learn how to install IoT Greengrass on your device in order to configure your first edge component.
Now that you have used IDT to validate that your edge device is compatible with IoT Greengrass, the next milestone in this chapter is to install IoT Greengrass.
From your edge device (that is, the prototype Home Base Solutions hub), open the Terminal app, or use your command-and-control device to remotely access it using SSH:
sudo -E java -Droot="/greengrass/v2" -Dlog.store=FILE \
  -jar ./greengrass/lib/Greengrass.jar \
  --aws-region us-west-2 \
  --thing-name hbshub001 \
  --thing-group-name hbshubprototypes \
  --tes-role-name GreengrassV2TokenExchangeRole \
  --tes-role-alias-name GreengrassCoreTokenExchangeRoleAlias \
  --component-default-user ggc_user:ggc_group \
  --provision true \
  --setup-system-service true \
  --deploy-dev-tools true
Created device configuration
Successfully configured Nucleus with provisioned resource details!
Creating a deployment for Greengrass first party components to the thing group
Configured Nucleus to deploy aws.greengrass.Cli component
Successfully set up Nucleus as a system service
The installation of the IoT Greengrass core software and the provisioning of the initial resources is much smoother after validating compatibility with the IDT suite. Now your edge device has the first fundamental tool installed: the runtime orchestrator. Let's review the resources that have been created at the edge and in AWS from this provisioning step.
On your edge device, the IoT Greengrass software was installed under the /greengrass/v2 file path. That directory holds the public and private keypair generated for connecting to AWS, service logs, a local repository of component recipes and artifacts, and a directory for past and present deployments pushed to this device. Feel free to explore /greengrass/v2 to get familiar with what is stored on the device, though you will need to escalate permissions using sudo to browse everything.
The installation added the first component to your IoT Greengrass environment, named aws.greengrass.Nucleus. The nucleus component is the foundation of IoT Greengrass; it is the only mandatory component, and it facilitates key functionality such as deployments, orchestration, and life cycle management for all other components. Without the nucleus component, there is no IoT Greengrass.
Additionally, the installation created the first deployment made to your device through the use of the --deploy-dev-tools true argument. That deployment installed a component named aws.greengrass.Cli. This second component includes a script, called greengrass-cli, that is used for local development tasks such as reviewing deployments, components, and logs. It can also be used to create new components and deployments. Remember, with IoT Greengrass, you can work locally on the device or push deployments remotely to it through AWS. Remote deployments are introduced in Chapter 4, Extending the Cloud to the Edge.
In AWS, a few different resources were created. First, a new thing was added to the thing registry in IoT Core. A thing is a logical representation of a physical device to which credentials, metadata, and other configuration are attached. The name of the created thing is hbshub001 from the IoT Greengrass provisioning argument, --thing-name. Similarly, a new thing group was also created in the registry, named hbshubprototypes, from the --thing-group-name provisioning argument. A thing group contains zero or more things and thing groups. The design of IoT Greengrass uses thing groups to identify sets of edge devices that should have the same deployments running on them. For example, if you provisioned another hub prototype device, you would add it to the same hbshubprototypes thing group such that the new prototype deployments would travel to all of your prototype devices.
Additionally, your hbshub001 thing has an entity attached to it called a certificate. The certificate is the record stored in IoT Core corresponding to the X.509 public and private keypair generated during the installation of IoT Greengrass. The keypair is stored on your device in the /greengrass/v2 directory and is used to establish mutually authenticated connections to AWS. Because the certificate is attached to the hbshub001 thing record, it is how AWS recognizes the device when it connects with its private key and how AWS looks up the device's permissions. Those permissions are defined in another resource called an IoT policy.
An IoT policy is similar to an AWS IAM policy in that it defines explicit permissions for what an actor is allowed to do when interacting with AWS. In the case of an IoT policy, the actor is the device and permissions are for actions such as opening a connection, publishing and receiving messages, and accessing static resources defined in deployments. Devices get their permissions through their certificate, meaning a thing is attached to the certificate, and the certificate is attached to one or more policies. Here's a sketch of how these basic resources are related in the edge and the cloud:
In the cloud service of IoT Greengrass, a few more resources were defined for the initial provisioning of your device and its first deployment. An IoT Greengrass core is a mapping of a device (which is also known as a thing), the components and deployments running on the device, and the associated thing group the device is in. Additionally, a core stores metadata such as the version of IoT Greengrass core software installed and the status of the last known health check. Here is an alternate view of the relationship graph with IoT Greengrass resources included:
Now that you have installed IoT Greengrass and understand the resources created during provisioning, let's review what a deployed component looks like to inform your implementation of the Hello, world component.
The most basic milestone of any developer education is the Hello, world example. For your first edge component deployed to IoT Greengrass, you will create a simple Hello, world application in order to reinforce concepts such as component definition, a dependency graph, and how to create a new deployment.
Before you get started with drafting a new component, take a moment to familiarize yourself with the existing components that have already been deployed by using the IoT Greengrass CLI. This CLI was installed by the --deploy-dev-tools true argument that was passed in during the installation. This tool is designed to help you with a local development loop; however, as a best practice, it is not installed in production solutions. It is installed at /greengrass/v2/bin/greengrass-cli. The following steps demonstrate how to use this tool:
pi@raspberrypi:~ $ /greengrass/v2/bin/greengrass-cli component list
java.lang.RuntimeException: Unable to create ipc client
at com.aws.greengrass.cli.adapter.impl.NucleusAdapterIpcClientImpl.getIpcClient(NucleusAdapterIpcClientImpl.java:260)
…
at com.aws.greengrass.cli.CLI.main(CLI.java:57)
Caused by: java.io.IOException: Not able to find auth information in directory: /greengrass/v2/cli_ipc_info. Please run CLI as authorized user or group.
Uh oh, what happened in this step? This command failed with a Java stack trace and errors such as java.lang.RuntimeException: Unable to create ipc client and Please run CLI as authorized user or group. This is an example of IoT Greengrass security principles at work. By default, the IoT Greengrass software is installed as the root system user. Only the root user, or a system user added to /etc/sudoers, can interact with the IoT Greengrass core software, even through the IoT Greengrass CLI. Components are run as the default system user identified in the configuration (please refer to the --component-default-user argument in the installation command), or each component can define an override system user to run as. Run the command again using sudo (superuser do):
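The re-run looks like the following (the guard clause is only there so the snippet is safe to execute on a machine where IoT Greengrass isn't installed; on your hub prototype, it simply runs the sudo command):

```shell
# Invoke the Greengrass CLI as root so it can authenticate with the nucleus.
GG_CLI=/greengrass/v2/bin/greengrass-cli
if [ -x "$GG_CLI" ]; then
  sudo "$GG_CLI" component list
else
  echo "greengrass-cli not found at $GG_CLI (is IoT Greengrass installed?)"
fi
```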
Note
There is a shortcut you can add to avoid typing the full path of /greengrass/v2/bin/greengrass-cli every time you want to use the CLI. You can add the /greengrass/v2/bin directory to your sudoers secure_path so that the greengrass-cli script can be used by typing in the name of the script. Use visudo to append the path to the Defaults secure_path list. This results in the use of a CLI such as sudo greengrass-cli component list as your system user in the sudo group.
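For example, a sudoers entry along these lines would work; the exact list of preexisting paths varies by distribution, and only the appended /greengrass/v2/bin entry is the point here:

```
# Excerpt from /etc/sudoers (always edit via 'sudo visudo'):
Defaults    secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/greengrass/v2/bin"
```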
Now you can view the list of components, including the aws.greengrass.Cli component that installed the very CLI you are using! The CLI script can be run by any system user; however, it will only successfully interact with the local IoT Greengrass installation when run as root (via sudo) or as a system user belonging to a system group defined in the component's AuthorizedPosixGroups configuration (which defaults to null). Note that even the nucleus component appears in the list:
Component Name: aws.greengrass.Cli
Version: 2.4.0
State: RUNNING
Configuration: {"AuthorizedPosixGroups":null}
The preceding output shows a brief summary of the component's status. You can view the component's current state; in this case, it is RUNNING, indicating that the component is active and available. That makes sense, because the CLI should always be available to us while the component is deployed. Components that run once, perform a task, and exit would show a state of FINISHED after completing their life cycle tasks.
That's just the component's status, so next, let's take a look at the component's recipe and artifacts. As defined earlier, a component is made up of two resources: a recipe file and a set of artifacts. So what does the recipe file for the CLI component look like? You can find this in the /greengrass/v2/packages/recipes directory.
Selection of recipe.yaml for aws.greengrass.Cli
RecipeFormatVersion: "2020-01-25"
ComponentName: "aws.greengrass.Cli"
ComponentVersion: "2.4.0"
ComponentType: "aws.greengrass.plugin"
ComponentDescription: "The Greengrass CLI component provides a local command-line interface that you can use on Greengrass core devices to develop and debug components locally. The Greengrass CLI lets you create local deployments and restart components on the Greengrass core device, for example."
ComponentPublisher: "AWS"
ComponentDependencies:
  aws.greengrass.Nucleus:
    VersionRequirement: ">=2.4.0 <2.5.0"
    DependencyType: "SOFT"
Manifests:
- Platform:
    os: "linux"
  Lifecycle: {}
  Artifacts:
  - Uri: "greengrass:UbhqXXSJj65QLVH5UqL6nBRterSKIhQu5FKeVAStZGc=/aws.greengrass.cli.client.zip"
    Digest: "uziZS73Z6dKgQgB0tna9WCJ1KhtyhsAb/DSv2Eaev8I="
    Algorithm: "SHA-256"
    Unarchive: "ZIP"
    Permission:
      Read: "ALL"
      Execute: "ALL"
  - Uri: "greengrass:2U_cb2X7-GFaXPMsXRutuT_zB6CdImClH0DSNVvzy1Y=/aws.greengrass.Cli.jar"
    Digest: "UpynbTgG+wYShtkcAr3X+l8/9QerGwaMw5U4IiicrMc="
    Algorithm: "SHA-256"
    Unarchive: "NONE"
    Permission:
      Read: "OWNER"
      Execute: "NONE"
Lifecycle: {}
There are a few important observations to review in this file:

- The ComponentType of aws.greengrass.plugin tells you this component runs as a plugin inside the nucleus itself, which is why its Lifecycle sections are empty.
- The ComponentDependencies section pins the CLI to compatible versions of the aws.greengrass.Nucleus component (>=2.4.0 <2.5.0).
- Each entry in Artifacts carries a Digest and Algorithm; IoT Greengrass uses these SHA-256 hashes to verify that artifacts have not been tampered with.
- The Permission keys define which system users may Read or Execute each artifact after it is unpacked.
With this review of the component structure, it's time to write your own component with an artifact and a recipe.
As mentioned earlier, the first component that we wish to create is a simple "Hello, world" application. In this component, you will create a shell script that prints Hello, world with the echo command. That shell script is an artifact for your component. Additionally, you will write a recipe file that tells IoT Greengrass how to use that shell script as a component. Finally, you will use the local IoT Greengrass CLI to deploy this component and check it works.
Local component development uses artifacts and recipe files available on the local disk, so you will need to create some folders for your working files. There is no folder in /greengrass/v2 that is designed to store your working files. Therefore, you will create a simple folder tree and add your component files there:
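For example, you could create the tree as follows (a sketch; the artifacts path follows the componentName/version layout that the local deployment command shown later in this chapter expects):

```shell
# Working folders for local component development
mkdir -p ~/hbshub/artifacts/com.hbs.hub.HelloWorld/1.0.0
mkdir -p ~/hbshub/recipes
```

Place hello.sh in the versioned artifacts folder and the recipe file in the recipes folder.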
hello.sh
#!/bin/bash
if [ -z "$1" ]; then
  target="world"
else
  target="$1"
fi
echo "Hello, $target"
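Before involving IoT Greengrass at all, you can sanity-check the script's behavior from a shell. This sketch recreates the hello.sh logic in /tmp purely for illustration; on your hub device you would run it from the artifacts folder instead:

```shell
# Recreate the hello.sh logic and exercise both code paths
cat > /tmp/hello.sh <<'EOF'
#!/bin/bash
if [ -z "$1" ]; then
  target="world"
else
  target="$1"
fi
echo "Hello, $target"
EOF
chmod +x /tmp/hello.sh
/tmp/hello.sh            # prints: Hello, world
/tmp/hello.sh Greengrass # prints: Hello, Greengrass
```

Note that the deployed component passes the configured Message value as $1, so the built-in "world" fallback only applies when the script runs with no argument.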
com.hbs.hub.HelloWorld-1.0.0.json
{
  "RecipeFormatVersion": "2020-01-25",
  "ComponentName": "com.hbs.hub.HelloWorld",
  "ComponentVersion": "1.0.0",
  "ComponentDescription": "My first AWS IoT Greengrass component.",
  "ComponentPublisher": "Home Base Solutions",
  "ComponentConfiguration": {
    "DefaultConfiguration": {
      "Message": "world!"
    }
  },
  "Manifests": [
    {
      "Platform": {
        "os": "linux"
      },
      "Lifecycle": {
        "Run": ". {artifacts:path}/hello.sh '{configuration:/Message}'"
      }
    }
  ]
}
This recipe is straightforward: it defines a life cycle step to run our hello.sh script that it will find in the deployed artifacts path. One new addition that has not yet been covered is the component configuration. The ComponentConfiguration object allows developers to define arbitrary key-value pairs that can be referenced in the rest of the recipe file. In this scenario, we define a default value to pass as an argument to the script. This value can be overridden when deploying a component to customize how each edge device uses the deployed component.
So, how do you test a component now that you've written the recipe and provided the artifacts? The next step is to create a new deployment that tells the local IoT Greengrass environment to load your new component and start evaluating life cycle events for it. This is where the IoT Greengrass CLI can help.
sudo /greengrass/v2/bin/greengrass-cli deployment create --recipeDir ~/hbshub/recipes --artifactDir ~/hbshub/artifacts --merge "com.hbs.hub.HelloWorld=1.0.0"
Local deployment submitted! Deployment Id: b0152914-869c-4fec-b24a-37baf50f3f69
Components currently running in Greengrass:
Component Name: com.hbs.hub.HelloWorld
Version: 1.0.0
State: FINISHED
Configuration: {"Message":"world!"}
2021-05-26T22:22:02.325Z [INFO] (pool-2-thread-32) com.hbs.hub.HelloWorld: shell-runner-start. {scriptName=services.com.hbs.hub.HelloWorld.lifecycle.Run, serviceName=com.hbs.hub.HelloWorld, currentState=STARTING, command=["/greengrass/v2/packages/artifacts/com.hbs.hub.HelloWorld/1.0.0/hello.sh 'world..."]}
2021-05-26T22:22:02.357Z [INFO] (Copier) com.hbs.hub.HelloWorld: stdout. Hello, world!. {scriptName=services.com.hbs.hub.HelloWorld.lifecycle.Run, serviceName=com.hbs.hub.HelloWorld, currentState=RUNNING}
2021-05-26T22:22:02.365Z [INFO] (Copier) com.hbs.hub.HelloWorld: Run script exited. {exitCode=0, serviceName=com.hbs.hub.HelloWorld, currentState=RUNNING}
Congratulations! You have written and deployed your first component to your Home Base Solutions prototype hub using IoT Greengrass. There are two noteworthy observations to make in the log output. First, you can see the component's life cycle state change from STARTING to RUNNING before it reports a successful exit code back to IoT Greengrass. The component concludes at that point, so we don't see an entry in this log showing it move to the FINISHED state, although that is visible in the greengrass.log file.
Second, you can see that the message written to STDOUT includes an exclamation point (world!). This means the script received your component's default configuration value rather than falling back on the default built into hello.sh (world). You could also override the default configuration value of "world!" in the recipe file with a custom value included in the deployment command. You'll learn how to use that technique to configure fleets in Chapter 4, Extending the Cloud to the Edge.
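For illustration, a local deployment that overrides the Message value might look like this (the --update-config argument and its MERGE JSON shape follow the local IoT Greengrass CLI's configuration update convention; the "Home Base" value is just an example):

```shell
sudo /greengrass/v2/bin/greengrass-cli deployment create \
  --merge "com.hbs.hub.HelloWorld=1.0.0" \
  --update-config '{"com.hbs.hub.HelloWorld":{"MERGE":{"Message":"Home Base"}}}'
```

After such a deployment, the component's Run step would receive "Home Base" as its argument instead of the default "world!".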
In this chapter, you learned the basics of a specific tool we will use throughout this book, one that satisfies a key need of any edge ML solution: the runtime orchestrator. IoT Greengrass provides out-of-the-box features that let developers focus on their business solutions instead of the undifferentiated work of architecting a flexible, resilient edge runtime and deployment mechanism. You learned that the fundamental unit of software in IoT Greengrass is the component, which is made up of a recipe and a set of artifacts, and that components make their way into the solution via deployments. You learned how to validate that a device is ready to work with IoT Greengrass using the IDT. You learned how to install IoT Greengrass, develop your first component, and get it running in the local environment.
In the next chapter, we will take a deeper dive into how IoT Greengrass works by exploring how it enables gateway functionality, reviewing common protocols used at the edge and security best practices, and building out new components used to sense and actuate in a cyber-physical solution.
Before moving on to the next chapter, test your knowledge by answering these questions. The answers can be found at the end of the book:
Please refer to the following resources for additional information on the concepts discussed in this chapter: