Chapter 2: Foundations of Edge Workloads

This chapter explores the next level of detail regarding edge workloads and includes your first hands-on activity. You will learn how AWS IoT Greengrass meets the needs of designing and delivering modern edge ML solutions. You will learn how to prepare your edge device to work with AWS by using a tool that checks whether your device meets the compatibility requirements. Additionally, you will learn how to install the IoT Greengrass core software and deploy your first IoT Greengrass core device. Finally, you will examine the structure of components, the fundamental unit of software in IoT Greengrass, and write your first edge workload component.

By the end of this chapter, you should start to feel comfortable with the basics of IoT Greengrass and its local development life cycle.

In this chapter, we are going to cover the following main topics:

  • The anatomy of an edge ML solution
  • IoT Greengrass for the win
  • Checking compatibility with IoT Device Tester
  • Installing IoT Greengrass
  • Your first edge component

Technical requirements

The technical requirements for this chapter are the same as those described in the Hands-on prerequisites section in Chapter 1, Introduction to the Data-Driven Edge with Machine Learning. Please refer to the full requirements mentioned in that chapter. As a reminder, you will need the following:

  • A Linux-based system to deploy the IoT Greengrass software. A Raspberry Pi 3B, or later, is recommended. The installation instructions are similar to other Linux-based systems. Please refer to the following GitHub repository for further guidance when the hands-on steps differ for systems other than a Raspberry Pi.
  • A system to install and use the AWS Command-Line Interface (CLI), enabling access to the AWS Management Console website (typically, your PC/laptop).

You can access this chapter's technical resources from the GitHub repository, under the chapter2 folder, at https://github.com/PacktPublishing/Intelligent-Workloads-at-the-Edge/tree/main/chapter2.

The anatomy of an edge ML solution

The previous chapter introduced the concept of an edge solution along with the three key kinds of tools that define an edge solution with ML applications. This chapter provides more detail regarding the layers of an edge solution. The three layers addressed in this section are as follows:

  • The business logic layer includes the customized code that dictates the solution's behavior.
  • The physical interface layer connects your solution to the analog world with sensors and actuators.
  • The network interface layer connects your solution to other digital entities in the wider network.

Learning more about these layers is important because they will inform how you, as the IoT architect, make trade-offs when designing your edge ML solution. First, we'll start by defining the business logic layer.

Designing code for business logic

The business logic layer is where all the code of your edge solution lives. This code can take many shapes, such as precompiled binaries (for example, a C program), shell scripts, code evaluated by a runtime (for example, a Java or Python program), and ML models. Additionally, code can be organized in a few different ways, such as shipping everything as a monolithic application, splitting code into services or libraries, or bundling code to run in a container. All of these options come with implications for architecting and shipping an edge ML solution, spanning security, cost, consistency, productivity, and durability. Some of the challenges of delivering code for the business logic layer are as follows:

  • Writing and testing code that will run on your edge hardware platforms. For example, writing code that will work on variations of hardware platforms as incremental new versions are rolled out. You will want to minimize the number of forks of code you maintain that meet the needs of all your hardware platforms.
  • Designing a robust edge solution that encompasses many features. For example, bundling features to process new sensor data, analyze data, and communicate with web services that do not create conflicts with common dependencies or local resources.
  • Writing code with a team of people all working on an aggregate edge solution. For example, a monolithic application with many contributors can require each author to fully know the solution to make an incremental change.

To address the challenges of writing your business layer logic, the best practice for shipping code to the edge is to use isolated services where practical.

Isolated services

On your Home Base Solutions hub device (the fictional product we are creating from the story in Chapter 1, Introduction to the Data-Driven Edge with Machine Learning), code will be deployed and run as isolated services. In this context, a service is a self-contained unit of business logic that is either invoked by another entity to perform a task or performs a task on its own. Isolation means that the service will bundle with it the code, resources, and dependencies it needs for its operation. For example, a service you will create in Chapter 7, Machine Learning Workloads at the Edge, will run code to read from a data source or collection of images, periodically compute inferences using a bundled ML model, then publish any inference results to a local stream or the cloud. This pattern of isolated services is selected for two reasons:

  • The first reason is that a service-oriented architecture enables architects to design capabilities that are decoupled from one another. Decoupling means we use data structures, such as buffers, queues, and streams, to add a layer of abstraction between our services, reducing dependencies to allow services to run independently.

    You can deploy updates to individual services without touching other running services and, therefore, reduce the risk of impact on them. A decoupled, service-oriented architecture is a best practice for designing well-architected cloud solutions, and it is also a good fit for edge ML solutions where multiple services run simultaneously and reliability is paramount. For example, a service that interfaces with a sensor writes new measurements to a data structure and nothing more; it has a single job and doesn't need to be aware of how the data is consumed by a later capability.

  • The second reason is that the isolation of code enables developers to focus on what that code does instead of where the code is going or how dependencies are managed at the destination. By using principles of isolation to bundle runtime dependencies and resources with code, we get a stronger guarantee that code will work deterministically wherever it is deployed. Developers are freed from the effort of dependency management and gain more confidence that the code will behave the same way on the edge platform, which likely differs from their development environment. That's not to say an edge solution developer won't need to test the behavior of their code against physical interfaces such as sensors and actuators. However, it does mean that development teams can deliver self-contained services that work independently regardless of the rest of the services deployed in the aggregate edge solution.

    Examples of isolation include Python virtual environments, which explicitly specify a Python runtime version and set of packages, and Docker Engine, which uses containers to bundle dependencies and resources and to achieve process isolation on the host. The following diagram illustrates the separation of concerns achieved with isolated services:

Figure 2.1 – An edge solution using decoupled, isolated services

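To make the idea more concrete, here is a minimal sketch of what isolation can look like in practice using a Python virtual environment; the service name, paths, and pinned package are illustrative assumptions rather than part of the Home Base Solutions project:

    # Create an isolated Python environment for a single edge service
    python3 -m venv /opt/hbs/sensor-service/venv
    # Install the service's pinned dependencies into that environment only
    /opt/hbs/sensor-service/venv/bin/pip install "paho-mqtt==1.6.1"
    # Run the service with the bundled interpreter and packages
    /opt/hbs/sensor-service/venv/bin/python sensor_service.py

Each service carries its own interpreter and package versions, so upgrading one service's dependencies cannot break another.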

In tandem, the patterns of isolation and services offer compelling benefits for edge ML solutions. Of course, every decision in development comes with trade-offs. The solution would be simpler, and a minimum viable product faster to reach, if it were deployed as a single monolith of code. We opt for more architectural complexity because it leads to better resiliency and makes it easier to scale the solution over time. We lean on strong patterns and good tooling to balance that complexity.

IoT Greengrass is designed with this pattern in mind. Later, in this chapter and throughout the book, you will learn how to use this pattern with IoT Greengrass to develop well-architected edge ML solutions.

Physical interfaces

A cyber-physical solution is defined by the use of physical interfaces to interact with the analog world. These interfaces come in two classifications: sensors for taking measurements from the analog world and actuators for exerting change back upon it. Some machines do both, such as a refrigerator that senses internal temperature and activates its compressor to cycle a refrigerant through coils. In these cases, the aggregation of sensing and actuating is logical, meaning the sensor and actuator have a relationship but are functionally independent and are coordinated via a mechanism such as a switch, circuit, or microcontroller.

Sensors that perform analog-to-digital conversion do so by sampling the voltage from an electrical signal and converting it into a digital value. These digital values are interpreted by code to derive data such as temperature, light, and pressure. Actuators convert digital signals into analog actions, typically by manipulating the voltage going to a switch or circuit. A command to engage a motor is interpreted as raising a voltage to the level that activates the circuit. Diving deeper into the electrical engineering of physical interfaces is beyond the scope of this book. Please refer to the References section for recommendations on deeper dives into that topic. The following diagram shows a simple analog example of a refrigerator and the relationship between the thermostat (sensor), switch (controller), and compressor (actuator):

Figure 2.2 – An analog controller with a sensor and an actuator


It's important to understand the patterns of input and output delivered by a cyber-physical solution and the relationship to a higher-level outcome delivered through an edge ML solution. Throughout the project delivered in this book, you will gain hands-on experience applying these patterns. Some of the services of the Home Base Solutions hub device will serve as interfaces to the physical layer, providing new measurements from the sensors and converting the commands to change the state of local devices. If you are working with a physical edge device, such as the Raspberry Pi, you will get some experience of using code to interact with the physical interfaces of that device.
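
To make the digital-to-analog relationship concrete, the following is a minimal sketch of driving a digital output on a Linux board such as a Raspberry Pi using the legacy sysfs GPIO interface (run as root); the pin number is an illustrative assumption, and newer kernels favor interfaces such as libgpiod:

    # Expose GPIO pin 17 to user space (pin number is illustrative)
    echo 17 > /sys/class/gpio/export
    # Configure the pin as an output
    echo out > /sys/class/gpio/gpio17/direction
    # Drive the pin high, for example, to close a relay that powers an actuator
    echo 1 > /sys/class/gpio/gpio17/value

Reading a sensor follows the same idea in reverse: a driver converts a sampled voltage into a digital value that your code reads through a file, bus, or library call.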

Network interfaces

The third layer to introduce for our edge solution anatomy is the network interface. A differentiator between our definitions of cyber-physical solutions and edge solutions is that an edge solution will, at some point, interact with another entity over a network. For example, the design of our new appliance monitoring kit for Home Base Solutions uses wireless communication between the monitoring kit and the hub device. There is no physical connection between the two for the purposes of analog-to-digital signal conversion from the monitor's sensors.

Similarly, the hub device will also exchange messages with a cloud service to store telemetry to use in the training of the ML model, to deploy new resources to the device, and to alert customers of recognized events. The following diagram illustrates the flow of messages and the relationships between a Sensor, Actuator, hub device (Gateway), and the cloud service:

Figure 2.3 – An edge device exchanging messages with local sensors, actuators, and the cloud


Wireless communications are common in IoT solutions, and specific implementations enable connectivity at a wide range of distances. Each specification and implementation makes trade-offs for range, data transmission rate, hardware cost, and energy consumption. Short-range radio specifications such as Zigbee (IEEE 802.15.4), Bluetooth (IEEE 802.15.1), and Wi-Fi (IEEE 802.11) are suitable for bridging devices within personal and local area networks. Long-range radio specifications such as conventional cellular networks (for example, GSM, CDMA, and LTE) and low-power wide-area networks (LPWANs) such as LoRaWAN and NB-IoT deliver connectivity options for devices deployed (either static or roaming) across a particular campus, city, or region.

Wired communications are still common for connecting edge devices such as TVs, game consoles, and PCs to home network solutions such as switches and routers over Ethernet. Wired connectivity is less common in smart home products due to the limited number of Ethernet ports on home networking routers (typically, there are only 1–8 ports), the restrictions regarding where the device can be placed, and the burden of adding wires throughout the home.

For example, the Home Base Solutions appliance monitoring kit would likely use Zigbee, or an equivalent implementation, and a battery to balance energy consumption with the anticipated data rate. If the kit required a power supply from a nearby outlet, Wi-Fi would become more of an option; however, it would limit the overall product utility, as the kinds of appliances to monitor aren't always placed near a spare outlet. Additionally, it wouldn't make sense to use Ethernet to connect the kits to the hub directly, since customers likely wouldn't find all of the extra wires running throughout the home appealing. The hub device that communicates with the kit could use Ethernet or Wi-Fi to bridge to the customer's local network to gain access to the public internet.

Now that you have a better understanding of the three layers of an edge solution, let's evaluate the selected edge runtime solution and how it implements each layer.

IoT Greengrass for the win

The most important question to answer in a book about using IoT Greengrass to deliver edge ML solutions is why IoT Greengrass? When evaluating the unique challenges of edge ML solutions and the key tools required to deliver them, you want to select tools that solve as many problems for you as possible while staying out of your way in terms of being productive. IoT Greengrass is a purpose-built tool with IoT and ML solutions at the forefront of the value proposition.

IoT Greengrass is prescriptive in how it solves the problem of the undifferentiated heavy lifting of common requirements while remaining non-prescriptive in how you implement your business logic. This means that the out-of-box experience yields many capabilities for rapid iteration without being obstructive about how you use them to reach your end goals. The following is a list of some of the capabilities that IoT Greengrass has to offer:

  • Security at the edge: IoT Greengrass is installed with root permissions and uses operating system user permissions to protect the code and resources deployed at the edge from tampering.
  • Security to the cloud: IoT Greengrass uses mutual transport layer security (TLS) with public key infrastructure to exchange messages between the edge and the cloud. Resources are fetched during deployments using HTTPS and AWS Signature Version 4 to verify the identity of the requester and protect data in transit.
  • Runtime orchestration: Developers can design applications however they prefer (with monoliths, services, or containers) and deploy them to the edge with ease. IoT Greengrass provides hooks for smartly integrating with component life cycle events, or developers can ignore them and simply bootstrap applications with a single command. Individual components can be added or updated without interrupting other running services. A dependency tree allows developers to abstract out the installation of libraries and configuration activities to decouple from code artifacts.
  • Logging and monitoring: By default, IoT Greengrass creates logs for each component and allows developers to indicate which log files should be synchronized to the cloud for operational purposes. Additionally, the cloud service keeps track of device health automatically, making it easier for team members to identify and respond to unhealthy devices.
  • Scaling up the fleet: Deploying updates to one device is not much different than deploying updates to a fleet of devices. It is easy to define groups, classify similar devices together, and then push updates to groups of devices using a managed deployment service.
  • Native integrations: AWS provides many components to deploy into IoT Greengrass solutions that augment the baseline functionality and also for integrating with other AWS services. A stream management component enables you to define, write to, and consume streams at the edge. A Docker application manager allows you to download Docker images from public repositories or private repositories in Amazon Elastic Container Registry. Pretrained and optimized ML models are available for tasks such as object detection and image classification with Deep Learning Runtime and TensorFlow Lite.

In playing your role as the Home Base Solutions architect with a solution to build, you could propose that the engineering team invest the time and resources to build out all this functionality and test that it is production-ready. However, the IoT Greengrass baseline services and optional add-ons are ready to accelerate the development life cycle and come vetted by AWS, where security is the top priority.

IoT Greengrass does not do everything for you. In fact, a fresh installation of IoT Greengrass doesn't do anything but wait for further instruction in the form of a deployment. Think of it as a blank canvas, paint, and brush. It's everything you need to get started, but you have to develop the solution that it runs. Let's review the operating model for IoT Greengrass, both at the edge and within the cloud.

Reviewing IoT Greengrass architecture

IoT Greengrass is both a managed service running on AWS and an edge runtime tool. The managed service is where your devices are defined individually and in groups. When you want to push a new deployment to the edge, you are, in fact, invoking an API in the managed service that is then responsible for communicating with the edge runtime to coordinate the delivery of that deployment. Here is a sequence diagram showing the process of you, as the developer, configuring a component and requesting that a device running the IoT Greengrass core software receive and run that component:

Figure 2.4 – Pushing a deployment through IoT Greengrass


The component is the fundamental unit of functionality that is deployed to a device running IoT Greengrass. Components are defined by a manifest file, called a recipe, which tells IoT Greengrass what the name, version, dependencies, and instructions are for that component. In addition to this, a component can define zero or more artifacts that are fetched during deployment. These artifacts can be binaries, source code, compiled code, archives, images, or data files; really any kind of file or resource that is stored on disk. Component recipes can define dependencies on other components that get resolved via a graph by the IoT Greengrass software.

During a deployment activity, one or more components are added or updated in that deployment. The component's artifacts are downloaded to the local device from the cloud; then, the component is started by way of evaluating life cycle instructions. A life cycle instruction could be something that happens during startup, the main command to run (such as starting a Java application), or something to do after the component has concluded running. Components might continue in the running state indefinitely or perform a task and exit. The following diagram provides an example of the component graph:

Figure 2.5 – An example graph of components showing the life cycle and dependencies

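To make life cycle instructions concrete, here is a hedged sketch of what the Lifecycle section of a recipe can look like; the commands and file names are illustrative assumptions, and you will examine a real recipe later in this chapter:

    Lifecycle:
      Install: pip3 install --user -r {artifacts:path}/requirements.txt
      Run: python3 -u {artifacts:path}/main.py

In this sketch, the Install step runs when the component is installed, and the Run step is the main command whose process IoT Greengrass monitors to track the component's state.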

That's everything we need to cover before taking your first steps toward readying your edge device to run a solution with IoT Greengrass!

In the following sections, you will validate that your edge device is ready to run the IoT Greengrass software, install the software, and then author your first component. In addition to this, you will get a closer look at components and deployments through the hands-on activities to come.

Checking compatibility with IoT Device Tester

IoT Device Tester (IDT) is a software client provided by AWS to assess device readiness for use in AWS IoT solutions. It assists developers by running a series of qualification tests to validate whether the destination system is ready to run the IoT Greengrass core software. Additionally, it runs a series of tests to prove that edge capabilities are already present, such as establishing MQTT connections to AWS and running ML models locally. IDT works for one device you are testing locally or scales up to run customized test suites against any number of device groups, so long as they are accessible over the network.

In the context of your role as the IoT architect at Home Base Solutions, you should use IDT to prove that your target edge device platform (in this case, platform refers to hardware and software) is capable of running the runtime orchestration tool of choice, IoT Greengrass. This pattern of using tools to prove compatibility is a best practice over manually evaluating the target platform and/or assuming that a certain combination of listed requirements is met. For example, a potential device platform might advertise that its hardware requirements and operating system meet your needs, but it could be missing a critical library dependency that doesn't surface until later in the development life cycle. It is best to prove early that everything you need is present and accounted for.

Note

IDT does more than qualify hardware for running IoT Greengrass core software. The tool can also qualify hardware running FreeRTOS to validate that the device is capable of interoperating with AWS IoT Core. Developers can write their own custom tests and bundle them into suites to incorporate into their software development life cycle (SDLC).

The following steps will enable you to prepare your Raspberry Pi device for use as the edge system (that is, the Home Base Solutions hub device in our fictional project) and configure your AWS account before finally running the IDT software. Optionally, you can skip the Booting the Raspberry Pi and Configuring the AWS account and permissions sections if you already have a device and AWS account configured for use. If you are using a different platform for your edge device, you just need to ensure you can reach the device over SSH with a system user that has root permissions from your command-and-control device.

Booting the Raspberry Pi

The following steps were run on a Raspberry Pi 3B with a clean installation of the May 2021 release of the Raspberry Pi OS. Please refer to https://www.raspberrypi.org/software/ for the Raspberry Pi Imager tool. Use your command-and-control system to run the imaging tool to flash a Micro SD card with a fresh image of the Raspberry Pi OS. For this book's project, we recommend that you use a blank slate to avoid any unforeseen consequences of preexisting software and configuration changes. Here is a screenshot of the Raspberry Pi Imager tool and the image to select:

Figure 2.6 – The Raspberry Pi Imager tool and the image to select


The following is a list of steps to perform after flashing the Micro SD card with the Imager tool:

  1. Insert the Micro SD card into the Raspberry Pi.
  2. Turn on the Raspberry Pi by plugging in the power source.
  3. Complete the first-time boot wizard. Update the default password, set your locale preferences, and connect to Wi-Fi (this is optional if you are using Ethernet).
  4. Open the Terminal app and run sudo apt-get update and sudo apt-get upgrade.
  5. Reboot the Pi.
  6. Open the Terminal app and run the hostname command. Copy the value and make a note of it; for example, write it in a scratch file on your command-and-control system. On Raspberry Pi devices, the default is raspberrypi.
  7. Enable the SSH interface so that IDT can access the device: open Preferences, choose Raspberry Pi Configuration, choose Interfaces, and enable SSH.

At this milestone, your Raspberry Pi device is configured to join the same local network of your command-and-control system and can be accessed via a remote shell session. If you are using a different device or a virtual machine as your edge device for the hands-on project, you should be able to access that device via SSH. A good test to check whether this is working properly is to try connecting to your edge device from your command-and-control system using a Terminal application (or PuTTY on Windows), with ssh pi@raspberrypi. Replace raspberrypi if you have a different hostname from step 6. Next, you will configure your AWS account in order to run IDT on your edge device.

Configuring the AWS account and permissions

Complete all of the steps in this section on your command-and-control system. For readers who don't yet have an AWS account (skip to step 5 if you already have access to an account), perform the following:

  1. Create your AWS account. Navigate to https://portal.aws.amazon.com/billing/signup in your web browser and complete the prompts. You will require an email address, phone number, and credit card.
  2. Sign in to the AWS Management Console using your root login and navigate to the Identity and Access Management (IAM) service. You can find this at https://console.aws.amazon.com/iam/.
  3. Use the IAM service console to set up your administrative group and user account. It is a best practice to create a new user for yourself instead of continuing to use the root user login. You will use that new user to complete any later AWS steps using the AWS management console or AWS CLI:
    1. Create a new user called Admin. Select both the Programmatic access and AWS Management Console access types. Skip the sections for permissions and tags. Choose Create user in the Review section. In the confirmation section, make a note in your scratch file of the dedicated sign-in link for your account. Ensure that you download the CSV file with your AWS credentials, referred to as Access Key and Secret Key, which you will use to interact with the AWS CLI. Note that the CSV file also includes the user's password that is later used to log in to the AWS management console (instead of the current root user).
    2. Create a new user group called Administrators, add the Admin user to it, and attach to it the policy named AdministratorAccess (it is easier to use the filter field and type that in). This policy is managed by AWS to grant administrator-level permissions to users. The best practice is to relate permissions to groups and then assign users to groups to inherit permissions. This makes it easier to audit permissions and understand the kinds of access that users have from well-named groups.
  4. Log out of the AWS management console.

    Note

    At this point, you should have access to an AWS account with an administrative user. Complete the following steps to set up the AWS CLI (please skip to step 7 if you already have the AWS CLI configured with your admin user).

  5. Install the AWS CLI. Platform-specific instructions can be found at https://aws.amazon.com/cli/. In this book, the AWS CLI steps will use AWS CLI v2.
  6. Once installed, configure the AWS CLI and use the credentials you downloaded for the Admin user. This user will be stored as the default profile in your local AWS settings:
    1. In your Terminal or PowerShell application, run aws configure.
    2. When prompted for AWS Access Key ID and AWS Secret Access Key, use the values from the downloaded credentials file in step 3A. Do not use the password value here.
    3. When prompted for a default region, you can specify the AWS region you want to use by default for any commands. Note that some of the AWS CLI steps mentioned in this book require you to provide an explicit region in the command; at other times, the region is implicitly taken from this configuration step. In this book, the steps will use region us-west-2 by default and remind readers to substitute when applicable.
    4. When prompted for a default output format, you can choose any of [json, yaml, text, table]. The author's preference is json and will be reflected in any examples of AWS CLI output that appear in the book.
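
     A typical aws configure session looks similar to the following sketch; the key values shown here are placeholders from the AWS documentation, not real credentials:

     $ aws configure
     AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
     AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
     Default region name [None]: us-west-2
     Default output format [None]: json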

    Next, you will use your Admin user to create a few more resources in preparation for the following sections to use the IDT and install the IoT Greengrass core software. These are permissions resources, similar to your Admin user, which will be used by IDT and IoT Greengrass software to interact with AWS on your behalf.

  7. Log in to the AWS management console using your custom sign-in link from step 3A. Use the Admin username and password provided in the CSV file containing credentials.
  8. Return to the IAM service console, which can be found at https://console.aws.amazon.com/iam/.
  9. Create a new user named idtgg (short for IDT and Greengrass) and select the Programmatic access type. This user will not require a password for management console access. Skip the permissions and tags sections. Make sure that you download the CSV file containing the credentials for this user as well.
  10. Create a new policy. The first step is to define the permissions for the policy. Choose the JSON tab and paste in the contents of the accompanying file from the book's resources repository: chapter2/policies/idtgg-policy.json. Skip the tags section. In the review section, enter idt-gg-permissions as the name, enter permissions for IoT Device Tester and IoT Greengrass as the description, and choose Create policy.
  11. Create a new user group with the name of Provision-IDT-Greengrass, select the idt-gg-permissions policy, and select the idtgg user. Choose Create group. You have now set up a new group, attached permissions, and assigned the programmatic access user that will serve as authentication and authorization for the IDT client and IoT Greengrass provisioning tool.
  12. In your Terminal or PowerShell application, configure a new AWS CLI profile for this new user:
    1. Run aws configure --profile idtgg.
    2. When prompted for access keys and secret keys, use the new values from the credentials CSV file downloaded in step 9.
    3. When prompted for the default region, use the book's default of us-west-2 or the AWS region you are using for all of the projects in this book.
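
Before continuing, you can optionally confirm that the new profile works by asking AWS to identify the caller; the following command should return the account ID and ARN of the idtgg user:

    aws sts get-caller-identity --profile idtgg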

That concludes all of the preparatory steps to configure your AWS account, permissions, and CLI. The next milestone is to install the IDT client and prepare it to test the Home Base Solutions prototype hub device.

Configuring IDT

You will run the IDT software from your command-and-control system, and the IDT will access the edge device system remotely through SSH to run the tests.

Note

The following steps reflect the configuration and use of IDT at the time of writing. If you get stuck, it might be that the latest version differs from the version that we used at the time of writing. You can check the AWS documentation for IDT for the latest guidance on installation, configuration, and use. Please refer to https://docs.aws.amazon.com/greengrass/v2/developerguide/device-tester-for-greengrass-ug.html.

Follow these steps to use IDT to verify the edge device system is ready to run IoT Greengrass. All of the following steps are completed on macOS using IoT Greengrass core software v2.4.0 and IDT v4.2.0 with test suite GGV2Q_2.0.1. For Windows, Linux, or later AWS software versions, please alter the commands and directories as needed:

  1. On your command-and-control system, open a web browser and navigate to https://docs.aws.amazon.com/greengrass/v2/developerguide/dev-test-versions.html.
  2. Underneath Latest IDT version for AWS IoT Greengrass and IDT software downloads, click on the link for the platform that matches your system. This will open a file download prompt. Save the archive to your local file system; for example, C:\projects\idt on Windows or ~/projects/idt on macOS and Linux:
    Figure 2.7 – The AWS documentation website for downloading IDT; exact text and versions might differ


  3. Unzip the archive contents in place in the directory. In a file explorer, double-click on the archive to extract them. If using Terminal, use a command such as unzip devicetester_greengrass_v2_4.0.2_testsuite_1.1.0_mac.zip. This is what the directory looks like on macOS:
    Figure 2.8 – macOS Finder showing the directory contents after unzipping the IDT archive


  4. Open a new tab in your browser and paste the following link to prompt a download of the latest IoT Greengrass core software: https://d2s8p88vqu9w66.cloudfront.net/releases/greengrass-2.4.0.zip (if this link doesn't work, you can find the latest guidance at https://docs.aws.amazon.com/greengrass/v2/developerguide/quick-installation.html#download-greengrass-core-v2).
  5. Rename the downloaded file as aws.greengrass.nucleus.zip and move it to an IDT directory such as ~/projects/idt/devicetester_greengrass_v2_mac/products/aws.greengrass.nucleus.zip:
    Figure 2.9 – The IoT Greengrass software in place to be used by IDT


  6. Open a text file such as ~/projects/idt/devicetester_greengrass_v2_mac/configs/config.json and update the following values:
    1. (Optionally) update awsRegion if you are not using the book's default of us-west-2.
    2. To use the idtgg profile that you configured earlier, set the value of auth as follows:

     "auth": {

       "method": "file",

       "credentials": {

         "profile": "idtgg"

       }

     }

  7. Open a text file such as ~/projects/idt/devicetester_greengrass_v2_mac/configs/device.json and update the following values:
    1. "id": "pool1".
    2. "sku": "hbshub" (hbshub stands for Home Base Solutions hub).
    3. Underneath "features", for the name-value pair named "arch", set "value": "armv7l" (this is for Raspberry Pi devices; alternatively, you can choose the appropriate architecture for your device).
    4. Underneath "features", for the remaining name-value pairs such as "ml", "docker", and "streamManagement", set "value": "no". For now, we will disable these tests because we have no immediate plans to use the tested features. Feel free to enable them if you'd like to evaluate your device's compatibility, although expect the tests to fail on a freshly imaged Raspberry Pi.
    5. Underneath "devices", set "id": "raspberrypi" (or any device ID you prefer).
    6. Underneath "connectivity", set the value of "ip" to the IP address of your edge device (for Raspberry Pi users, the value is the output of step 6 from the Booting the Raspberry Pi section).
    7. Underneath "auth", set "method": "password".
    8. Underneath "credentials", set the value of "user" to the username used to SSH to the edge device (typically, this will be "pi" for Raspberry Pi users).
    9. Underneath "credentials", set the value of "password" to the password used to SSH to the edge device.
    10. Underneath "credentials", delete the line for "privKeyPath".
    11. Save the changes to this file. You can view a sample version of this file in the book's GitHub repository at chapter2/policies/idt-device-sample.json.
  8. Open a text file such as ~/projects/idt/devicetester_greengrass_v2_mac/configs/userdata.json and update the following values. Ensure that you specify absolute paths instead of relative paths:
    1. "TempResourcesDirOnDevice": "/tmp/idt".
    2. "InstallationDirRootOnDevice": "/greengrass".
    3. "GreengrassNucleusZip": "Users/ryan/projects/idt/devicetester_greengrass_v2_mac/products/aws.greengrass.nucleus.zip" (update this based on where you stored the aws.greengrass.nucleus.zip file in step 5 of this section).
    4. Save the changes to this file. You can view a sample version of this file in the book's GitHub repository at chapter2/policies/idt-userdata-sample.json.
  9. Open an application such as Terminal on macOS/Linux or PowerShell on Windows.
  10. Change your present working directory to where the IDT launcher is located:
    1. ~/projects/idt/devicetester_greengrass_v2_mac/bin on macOS
    2. ~/projects/idt/devicetester_greengrass_v2_linux/bin on Linux
    3. C:\projects\idt\devicetester_greengrass_v2_win\bin on Windows
  11. Run the command to start IDT:
    1. ./devicetester_mac_x86-64 run-suite --userdata userdata.json on macOS
    2. ./devicetester_linux_x86-64 run-suite --userdata userdata.json on Linux
    3. devicetester_win_x86-64.exe run-suite --userdata userdata.json on Windows

    Running IDT will start a local application that connects to your edge device over SSH and completes the series of tests. It will stop upon encountering the first failed test case or run until all test cases have passed. If you are running IDT against a fresh installation of your Raspberry Pi, as defined by previous steps, you should observe an output similar to the following:

    ========== Test Summary ==========
    Execution Time:     3s
    Tests Completed:    2
    Tests Passed:       1
    Tests Failed:       1
    Tests Skipped:      0
    ----------------------------------
    Test Groups:
        pretestvalidation:    PASSED
        coredependencies:     FAILED
    ----------------------------------
    Failed Tests:
        Group Name: coredependencies
            Test Name: javaversion
                Reason: Encountered error while fetching java version on the device: Failed to run Java version command with error: Command '{java -version 2>&1 map[] 30s}' exited with code 127. Error output: .

    The step to install Java on the Raspberry Pi was intentionally left out in order to demonstrate how IDT identifies missing dependencies; apologies for the deception! If you ran the IDT test suite and passed all of the test cases, then you are ahead of schedule and can skip to the Installing IoT Greengrass section.

  12. To fix this missing dependency, return to your Raspberry Pi interface and open the Terminal app.
  13. Install Java on the Pi using sudo apt-get install default-jdk.
  14. Return to your command-and-control system and run IDT again (repeat the command in step 11).

Your test suite should now pass the Java requirement test. If you encounter other failures, you will need to use the test report and logs in the idt/devicetester_greengrass_v2_mac/results folder to triage and fix them. Some common missteps include missing AWS credentials, AWS credentials without sufficient permissions, and incorrect paths to resources defined in userdata.json. A fully passed suite of test cases looks like this:

========== Test Summary ==========
Execution Time:     22m59s
Tests Completed:    7
Tests Passed:       7
Tests Failed:       0
Tests Skipped:      0
----------------------------------
Test Groups:
    pretestvalidation:    PASSED
    coredependencies:     PASSED
    version:              PASSED
    component:            PASSED
    lambdadeployment:     PASSED
    mqtt:                 PASSED
----------------------------------

This concludes the introductory use of IDT to analyze and assist in preparing your devices to use IoT Greengrass. Here, the best practice is to use software tests, not just for your own code but to assess whether the edge device itself is ready to work with your solution. Lean on tools such as IDT that do the heavy lifting of proving that the device is ready to use and validate this for each new type of device enrolled or major solution version released. You should be able to configure IDT for your next project and qualify a new device or group of devices to run IoT Greengrass. In the next section, you will learn how to install IoT Greengrass on your device in order to configure your first edge component.

Installing IoT Greengrass

Now that you have used IDT to validate that your edge device is compatible with IoT Greengrass, the next milestone in this chapter is to install IoT Greengrass.

From your edge device (that is, the prototype Home Base Solutions hub), open the Terminal app, or use your command-and-control device to remotely access it using SSH:

  1. Change the directory to your user's home directory: cd ~/.
  2. Download the IoT Greengrass core software: curl -s https://d2s8p88vqu9w66.cloudfront.net/releases/greengrass-nucleus-latest.zip > greengrass-nucleus-latest.zip.
  3. Unzip the archive: unzip greengrass-nucleus-latest.zip -d greengrass && rm greengrass-nucleus-latest.zip.
  4. Your edge device requires AWS credentials in order to provision cloud resources on your behalf. You can use the same credentials that you created for the idtgg user in the previous Configuring the AWS account and permissions section:
    1. export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
    2. export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
  5. Install the IoT Greengrass core software using the following command. If you are using an AWS region other than us-west-2, update the value of the --aws-region argument. You can copy and paste this command from chapter2/commands/provision-greengrass.sh:

    sudo -E java -Droot="/greengrass/v2" -Dlog.store=FILE \
      -jar ./greengrass/lib/Greengrass.jar \
      --aws-region us-west-2 \
      --thing-name hbshub001 \
      --thing-group-name hbshubprototypes \
      --tes-role-name GreengrassV2TokenExchangeRole \
      --tes-role-alias-name GreengrassCoreTokenExchangeRoleAlias \
      --component-default-user ggc_user:ggc_group \
      --provision true \
      --setup-system-service true \
      --deploy-dev-tools true

  6. That's it! The last few lines of output from this provisioning command should look like this:

    Created device configuration
    Successfully configured Nucleus with provisioned resource details!
    Creating a deployment for Greengrass first party components to the thing group
    Configured Nucleus to deploy aws.greengrass.Cli component
    Successfully set up Nucleus as a system service
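
    Assuming your device uses systemd (the default on Raspberry Pi OS), you can optionally confirm that the core software is running as a system service and watch it start up with the following sketch:

    sudo systemctl status greengrass.service
    # Tail the nucleus log to watch the core software start up
    sudo tail -f /greengrass/v2/logs/greengrass.log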

The installation of the IoT Greengrass core software and the provisioning of the initial resources go much more smoothly after validating compatibility with the IDT suite. Now your edge device has the first fundamental tool installed: the runtime orchestrator. Let's review the resources that have been created at the edge and in AWS from this provisioning step.

Reviewing what has been created so far

On your edge device, the IoT Greengrass software was installed in the /greengrass/v2 file path. That directory contains the public and private keypair generated for connecting to AWS, service logs, a local repository of recipe and artifact packages, and a directory for past and present deployments pushed to this device. Feel free to explore the directory at /greengrass/v2 to get familiar with what is stored on the device; though, you will need to escalate permissions using sudo to browse everything.
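
For example, listing the installation root gives a sense of its layout; the exact directory names can vary between IoT Greengrass versions:

    sudo ls /greengrass/v2
    # Typical entries include alts, bin, config, deployments, logs, packages, and work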

The installation added the first component to your IoT Greengrass environment, named aws.greengrass.Nucleus. The nucleus component is the foundation of IoT Greengrass; it is the only mandatory component, and it facilitates key functionality such as deployments, orchestration, and life cycle management for all other components. Without the nucleus component, there is no IoT Greengrass.

Additionally, the installation created the first deployment made to your device through the use of the --deploy-dev-tools true argument. That deployment installed a component named aws.greengrass.Cli. This second component includes a script, called greengrass-cli, that is used for local development tasks such as reviewing deployments, components, and logs. It can also be used to create new components and deployments. Remember, with IoT Greengrass, you can work locally on the device or push deployments remotely to it through AWS. Remote deployments are introduced in Chapter 4, Extending the Cloud to the Edge.

In AWS, a few different resources were created. First, a new thing was added to the thing registry in IoT Core. A thing is a logical representation of a physical device to which credentials, metadata, and other configuration are attached. The name of the created thing is hbshub001 from the IoT Greengrass provisioning argument, --thing-name. Similarly, a new thing group was also created in the registry, named hbshubprototypes, from the --thing-group-name provisioning argument. A thing group contains zero or more things and thing groups. The design of IoT Greengrass uses thing groups to identify sets of edge devices that should have the same deployments running on them. For example, if you provisioned another hub prototype device, you would add it to the same hbshubprototypes thing group such that the new prototype deployments would travel to all of your prototype devices.

Additionally, your hbshub001 thing has an entity attached to it called a certificate. The certificate is the record stored in IoT Core for the X.509 public and private keypair generated by the installation of IoT Greengrass. The keypair is stored on your device in the /greengrass/v2 directory and is used to establish mutually authenticated connections to AWS. The certificate is how AWS recognizes the device when it connects with its private key (the certificate is attached to the hbshub001 thing record) and knows how to look up permissions for the device. Those permissions are defined in another resource called the IoT policy.

An IoT policy is similar to an AWS IAM policy in that it defines explicit permissions for what an actor is allowed to do when interacting with AWS. In the case of an IoT policy, the actor is the device and permissions are for actions such as opening a connection, publishing and receiving messages, and accessing static resources defined in deployments. Devices get their permissions through their certificate, meaning a thing is attached to the certificate, and the certificate is attached to one or more policies. Here's a sketch of how these basic resources are related in the edge and the cloud:

Figure 2.10 – Illustrating the relationships between the IoT Core thing registry and edge resources


In the cloud service of IoT Greengrass, a few more resources were defined for the initial provisioning of your device and its first deployment. An IoT Greengrass core is a mapping of a device (which is also known as a thing), the components and deployments running on the device, and the associated thing group the device is in. Additionally, a core stores metadata such as the version of IoT Greengrass core software installed and the status of the last known health check. Here is an alternate view of the relationship graph with IoT Greengrass resources included:

Figure 2.11 – Illustrating the relationships between IoT Core, IoT Greengrass, and the edge device

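If you want to inspect these cloud-side resources yourself, the following AWS CLI sketch queries the registry and the IoT Greengrass service from your command-and-control system; add --profile and --region arguments as appropriate for your setup:

    # Show the thing record created during provisioning
    aws iot describe-thing --thing-name hbshub001
    # Show the certificate attached to the thing
    aws iot list-thing-principals --thing-name hbshub001
    # Show the IoT Greengrass core devices registered in your account
    aws greengrassv2 list-core-devices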

Now that you have installed IoT Greengrass and have an understanding of the resources created on provisioning, let's review what a component looks like when deployed to inform your implementation of the Hello, world component.

Creating your first edge component

The most basic milestone of any developer education is the Hello, world example. For your first edge component deployed to IoT Greengrass, you will create a simple Hello, world application in order to reinforce concepts such as component definition, a dependency graph, and how to create a new deployment.

Reviewing an existing component

Before you get started with drafting a new component, take a moment to familiarize yourself with the existing components that have already been deployed by using the IoT Greengrass CLI. This CLI was installed by the --deploy-dev-tools true argument that was passed in during the installation. This tool is designed to help you with a local development loop; however, as a best practice, it is not installed in production solutions. It is installed at /greengrass/v2/bin/greengrass-cli. The following steps demonstrate how to use this tool:

  1. Try invoking the help command. In the Terminal app of your edge device, run /greengrass/v2/bin/greengrass-cli help.
  2. You should view the output of the help command, including references to the component, deployment, and logs commands. Try invoking the help command on the component command: /greengrass/v2/bin/greengrass-cli help component.
  3. You should view instructions regarding how to use the component command. Next, try invoking the component list command to show all of the locally installed components by running /greengrass/v2/bin/greengrass-cli component list:

    pi@raspberrypi:~ $ /greengrass/v2/bin/greengrass-cli component list
    java.lang.RuntimeException: Unable to create ipc client
      at com.aws.greengrass.cli.adapter.impl.NucleusAdapterIpcClientImpl.getIpcClient(NucleusAdapterIpcClientImpl.java:260)
      at com.aws.greengrass.cli.CLI.main(CLI.java:57)
    Caused by: java.io.IOException: Not able to find auth information in directory: /greengrass/v2/cli_ipc_info. Please run CLI as authorized user or group.

    Uh oh, what happened in this step? This command failed with a Java stack trace and errors such as java.lang.RuntimeException: Unable to create ipc client and Please run CLI as authorized user or group. This is an example of IoT Greengrass security principles at work. By default, the IoT Greengrass software is installed as the root system user. Only the root user, or a system user added to /etc/sudoers, can interact with the IoT Greengrass core software, even through the IoT Greengrass CLI. Components are run as the default system user identified in the configuration (please refer to the --component-default-user argument in the installation command), or each component can define an override system user to run as. Run the command again using sudo (superuser do):

  4. sudo /greengrass/v2/bin/greengrass-cli component list

    Note

    There is a shortcut you can add to avoid typing the full path of /greengrass/v2/bin/greengrass-cli every time you want to use the CLI. You can add the /greengrass/v2/bin directory to your sudoers secure_path so that the greengrass-cli script can be used by typing in the name of the script. Use visudo to append the path to the Defaults secure_path list. This lets you run the CLI with a command such as sudo greengrass-cli component list as your system user in the sudo group.

    Now you can view the list of components, including the aws.greengrass.Cli component that installed the very CLI you are using! The CLI script can be run by any system user; however, it will only successfully interact with the local IoT Greengrass installation when run as root (via sudo) or as a system user belonging to a system group defined in the component's AuthorizedPosixGroups configuration (which defaults to null). Note that even the nucleus component appears in the list:

    Component Name: aws.greengrass.Cli
        Version: 2.4.0
        State: RUNNING
        Configuration: {"AuthorizedPosixGroups":null}

    The preceding output shows you the status of the component as a brief summary. You can view the current state of the component; in this case, it is RUNNING, indicating the component's life cycle is active and the component is available. That makes sense because the CLI should always be available to us while the component is deployed. Components that run once, perform a task, and close would show a state of FINISHED after they have completed their life cycle tasks.

    That's just the component's status, so next, let's take a look at the component's recipe and artifacts. As defined earlier, a component is made up of two resources: a recipe file and a set of artifacts. So what does the recipe file for the CLI component look like? You can find this in the /greengrass/v2/packages/recipes directory.

  5. You don't need to run the following A and B commands. They are included here to show you how to find the file contents later on:
    1. To find the recipe file, use sudo ls /greengrass/v2/packages/recipes/.
    2. To inspect the file, use sudo less /greengrass/v2/packages/recipes/<recipe-file>.yaml, substituting the aws.greengrass.Cli recipe filename found with the previous command (note that your filename will be different):

    A selection of recipe.yaml for aws.greengrass.Cli

    RecipeFormatVersion: "2020-01-25"
    ComponentName: "aws.greengrass.Cli"
    ComponentVersion: "2.4.0"
    ComponentType: "aws.greengrass.plugin"
    ComponentDescription: "The Greengrass CLI component provides a local command-line interface that you can use on Greengrass core devices to develop and debug components locally. The Greengrass CLI lets you create local deployments and restart components on the Greengrass core device, for example."
    ComponentPublisher: "AWS"
    ComponentDependencies:
      aws.greengrass.Nucleus:
        VersionRequirement: ">=2.4.0 <2.5.0"
        DependencyType: "SOFT"
    Manifests:
    - Platform:
        os: "linux"
      Lifecycle: {}
      Artifacts:
      - Uri: "greengrass:UbhqXXSJj65QLVH5UqL6nBRterSKIhQu5FKeVAStZGc=/aws.greengrass.cli.client.zip"
        Digest: "uziZS73Z6dKgQgB0tna9WCJ1KhtyhsAb/DSv2Eaev8I="
        Algorithm: "SHA-256"
        Unarchive: "ZIP"
        Permission:
          Read: "ALL"
          Execute: "ALL"
      - Uri: "greengrass:2U_cb2X7-GFaXPMsXRutuT_zB6CdImClH0DSNVvzy1Y=/aws.greengrass.Cli.jar"
        Digest: "UpynbTgG+wYShtkcAr3X+l8/9QerGwaMw5U4IiicrMc="
        Algorithm: "SHA-256"
        Unarchive: "NONE"
        Permission:
          Read: "OWNER"
          Execute: "NONE"
    Lifecycle: {}

There are a few important observations to review in this file:

  • Component names use a reverse domain scheme that is similar to Java package namespacing. Your custom components in this book's project will start with com.hbs.hub, signifying components written for the Home Base Solutions hub product.
  • This component is tied to specific versions of the IoT Greengrass nucleus, which is why the version is 2.4.0. Your components can specify any version here, and the best practice is to follow the semantic versioning specification.
  • The ComponentType property is only used by AWS plugins such as this CLI. Your custom components will not define this property.
  • This component only works with a specific version of the nucleus, so it defines a soft dependency on the aws.greengrass.Nucleus component. Your custom components do not need to specify a nucleus dependency by default. This is where you will define dependencies on other components, for example, a component that ensures Python3 is installed before loading a component with a Python application.
  • This component defines no specific life cycle activities, either at the global level or specific to the linux platform version of the manifest.
  • The artifacts defined are for specific IoT Greengrass service files. You can view these files on disk in the /greengrass/v2/packages/artifacts directory. Your artifact URIs will use the s3://path/to/my/file pattern when deploying them from the cloud. During local development, your manifest does not need to define artifacts, as they are expected to already be on disk.
  • Note the permissions on the two artifacts. The ZIP file can be read by any system user. In comparison, the JAR file can only be read by the OWNER, which in this scenario, means the default system user that was defined at installation, for example, the ggc_user user.

With this review of the component structure, it's time to write your own component with an artifact and a recipe.

Writing your first component

As mentioned earlier, the first component that we wish to create is a simple "Hello, world" application. In this component, you will create a shell script that prints Hello, world with the echo command. That shell script is an artifact for your component. Additionally, you will write a recipe file that tells IoT Greengrass how to use that shell script as a component. Finally, you will use the local IoT Greengrass CLI to deploy this component and check it works.

Local component development uses artifacts and recipe files available on the local disk, so you will need to create some folders for your working files. There is no folder in /greengrass/v2 that is designed to store your working files. Therefore, you will create a simple folder tree and add your component files there:

  1. From the Terminal app of your edge device, change the directory to your user's home directory: cd ~/.
  2. Create a new folder to hold your local component resources: mkdir -p hbshub/{artifacts,recipes}.
  3. Next, create the path for a new artifact and add a shell script to its folder. Let's choose the component name of com.hbs.hub.HelloWorld and start the version at 1.0.0. Change the directory to the artifacts folder: cd hbshub/artifacts.
  4. Make a new directory for your component's artifacts: mkdir -p com.hbs.hub.HelloWorld/1.0.0.
  5. Create a new file for the shell script: touch com.hbs.hub.HelloWorld/1.0.0/hello.sh.
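
    At this point, your working tree should look similar to the following (the recipes folder is still empty; you will add a recipe file to it in a later step):

    ~/hbshub
    ├── artifacts
    │   └── com.hbs.hub.HelloWorld
    │       └── 1.0.0
    │           └── hello.sh
    └── recipes
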
  6. Give this file execute permissions: chmod +x com.hbs.hub.HelloWorld/1.0.0/hello.sh.
  7. Open the file in an editor: nano com.hbs.hub.HelloWorld/1.0.0/hello.sh.
  8. Inside this editor, add the following content (this is also available in this chapter's GitHub repository):

    hello.sh

    #!/bin/bash
    # Print a greeting; use "world" unless an argument is provided.
    if [ -z "$1" ]; then
        target="world"
    else
        target="$1"
    fi
    echo "Hello, $target"

  9. Test your script with and without an argument. The script will print Hello, world unless provided with an argument to substitute for world:
    1. ./com.hbs.hub.HelloWorld/1.0.0/hello.sh
    2. ./com.hbs.hub.HelloWorld/1.0.0/hello.sh friend
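
    Assuming the script has execute permissions, these two commands should print output similar to the following:

    Hello, world
    Hello, friend
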
  10. That's all you need for your component's artifact. Next, you will learn how to take advantage of passing in arguments from inside the recipe file. Change the directory to the recipes directory: cd ~/hbshub/recipes.
  11. Open the editor to create the recipe file: nano com.hbs.hub.HelloWorld-1.0.0.json.
  12. Add the following content to the file. You can also copy this file from the book's GitHub repository:

    com.hbs.hub.HelloWorld-1.0.0.json

    {
      "RecipeFormatVersion": "2020-01-25",
      "ComponentName": "com.hbs.hub.HelloWorld",
      "ComponentVersion": "1.0.0",
      "ComponentDescription": "My first AWS IoT Greengrass component.",
      "ComponentPublisher": "Home Base Solutions",
      "ComponentConfiguration": {
        "DefaultConfiguration": {
          "Message": "world!"
        }
      },
      "Manifests": [
        {
          "Platform": {
            "os": "linux"
          },
          "Lifecycle": {
            "Run": ". {artifacts:path}/hello.sh '{configuration:/Message}'"
          }
        }
      ]
    }

    This recipe is straightforward: it defines a life cycle step to run our hello.sh script that it will find in the deployed artifacts path. One new addition that has not yet been covered is the component configuration. The ComponentConfiguration object allows developers to define arbitrary key-value pairs that can be referenced in the rest of the recipe file. In this scenario, we define a default value to pass as an argument to the script. This value can be overridden when deploying a component to customize how each edge device uses the deployed component.

    So, how do you test a component now that you've written the recipe and provided the artifacts? The next step is to create a new deployment that tells the local IoT Greengrass environment to load your new component and start evaluating life cycle events for it. This is where the IoT Greengrass CLI can help.

  13. Use the following command to create a new deployment that includes your new component:

    sudo /greengrass/v2/bin/greengrass-cli deployment create --recipeDir ~/hbshub/recipes --artifactDir ~/hbshub/artifacts --merge "com.hbs.hub.HelloWorld=1.0.0"

  14. You should see a response similar to the following:

    Local deployment submitted! Deployment Id: b0152914-869c-4fec-b24a-37baf50f3f69
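
    Optionally, you can check on the progress of a local deployment before listing components. The following is a hedged sketch, assuming your version of the local Greengrass CLI supports the deployment status subcommand and its -i (deployment ID) option; substitute the deployment ID returned by your own deployment:

    sudo /greengrass/v2/bin/greengrass-cli deployment status -i b0152914-869c-4fec-b24a-37baf50f3f69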

  15. You can verify that the component was successfully deployed (and has already finished running) with sudo /greengrass/v2/bin/greengrass-cli component list:

    Components currently running in Greengrass:
    Component Name: com.hbs.hub.HelloWorld
        Version: 1.0.0
        State: FINISHED
        Configuration: {"Message":"world!"}

  16. You can view the output of this component in its log file: sudo less /greengrass/v2/logs/com.hbs.hub.HelloWorld.log (remember, the /greengrass/v2 directory is owned by root, so the log files must also be accessed with sudo):

    2021-05-26T22:22:02.325Z [INFO] (pool-2-thread-32) com.hbs.hub.HelloWorld: shell-runner-start. {scriptName=services.com.hbs.hub.HelloWorld.lifecycle.Run, serviceName=com.hbs.hub.HelloWorld, currentState=STARTING, command=["/greengrass/v2/packages/artifacts/com.hbs.hub.HelloWorld/1.0.0/hello.sh 'world..."]}
    2021-05-26T22:22:02.357Z [INFO] (Copier) com.hbs.hub.HelloWorld: stdout. Hello, world!. {scriptName=services.com.hbs.hub.HelloWorld.lifecycle.Run, serviceName=com.hbs.hub.HelloWorld, currentState=RUNNING}
    2021-05-26T22:22:02.365Z [INFO] (Copier) com.hbs.hub.HelloWorld: Run script exited. {exitCode=0, serviceName=com.hbs.hub.HelloWorld, currentState=RUNNING}

Congratulations! You have written and deployed your first component to your Home Base Solutions prototype hub using IoT Greengrass. There are two noteworthy observations in the log output. First, you can see the component's life cycle state change from STARTING to RUNNING before it reports a successful exit code back to IoT Greengrass. The component concludes at that point, so this log has no entry showing it move to the FINISHED state, although that transition is visible in the greengrass.log file.

Second, you can see the message written to STDOUT with an exclamation point included (world!). This means that the script received your component's default configuration instead of falling back on the default built into hello.sh (world). You could also override the default configuration value of "world!" from the recipe file with a custom value included in the deployment command, as sketched below. You'll learn how to use that technique to configure fleets in Chapter 4, Extending the Cloud to the Edge.
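
As a preview only, here is a minimal sketch of such an override, assuming your version of the local Greengrass CLI supports the --update-config option and the MERGE configuration-update document; verify the exact syntax against the Greengrass CLI documentation for your nucleus version:

    sudo /greengrass/v2/bin/greengrass-cli deployment create \
      --recipeDir ~/hbshub/recipes --artifactDir ~/hbshub/artifacts \
      --merge "com.hbs.hub.HelloWorld=1.0.0" \
      --update-config '{"com.hbs.hub.HelloWorld":{"MERGE":{"Message":"Home Base Solutions customer!"}}}'

If the merge succeeds, the component should restart with the new configuration and its log should show Hello, Home Base Solutions customer! instead of Hello, world!.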

Summary

In this chapter, you learned the basics regarding a specific tool we will use throughout this book that satisfies one of the key needs of any edge ML solution, that is, the runtime orchestrator. IoT Greengrass provides out-of-the-box features to focus developers on their business solutions instead of the undifferentiated work to architect a flexible, resilient edge runtime and deployment mechanism. You learned that the fundamental unit of software in IoT Greengrass is the component, which is made up of a recipe and a set of artifacts, and components make their way to the solution via deployments. You learned how to validate that a device is ready to work with IoT Greengrass using the IDT. You learned how to install IoT Greengrass, develop your first component, and get it running in the local environment.

In the next chapter, we will take a deeper dive into how IoT Greengrass works by exploring how it enables gateway functionality, reviewing common protocols used at the edge and security best practices, and building out new components used to sense and actuate in a cyber-physical solution.

Knowledge check

Before moving on to the next chapter, test your knowledge by answering these questions. The answers can be found at the end of the book:

  1. Which is the better practice for organizing code in edge ML solutions: a monolithic application or isolated services?
  2. What is the benefit of decoupling services in your edge architecture?
  3. What is the benefit of isolating your code and dependencies from other services?
  4. What is one trade-off to consider when choosing between wired and wireless networking implementations in IoT solutions?
  5. What is an example of a smart home device that uses both a sensor and an actuator?
  6. What are the two kinds of resources that define an IoT Greengrass component?
  7. True or false: A component must define at least one artifact in its recipe.
  8. Why is it a good design principle that, by default, only the root system user can interact with files in the IoT Greengrass directory?
  9. True or false: Components can be deployed to IoT Greengrass devices either locally or remotely.
  10. Can you think of three different methods that you could use to update the behavior of your Hello, world component to print Hello, Home Base Solutions customer!?

References

Please refer to the following resources for additional information on the concepts discussed in this chapter:

  • The semantic versioning specification at https://semver.org.
  • Service-Oriented Architecture: Analysis and Design for Services and Microservices, by Thomas Erl, Pearson, 2016.
  • Foundations of Analog and Digital Electronic Circuits, by Anant Agarwal and Jeffrey H. Lang, Morgan Kaufmann, 2005.