Chapter 12: Webhooks and Microservices

In the previous chapter, we dove into the world of APIs and SDKs to see how they can be used to implement custom solutions and to weigh the costs and benefits associated with each type. We learned about the request types that are common in both Marketing Cloud and web development generally. Then, we homed in on each request type and the overall structure that allows us to communicate with services across the web.

In this chapter, we're going to expand on our API knowledge to understand event-driven frameworks, which let us act on data in real time rather than through user-driven requests. This is a powerful approach for a variety of reasons, particularly because it allows us to automate functionality, both within Marketing Cloud and external to the platform, in ways that aren't efficient with the traditional APIs defined in our last chapter.

Also, we'll return to the conversation on application design and structure and introduce the concept of microservices. You will see how this concept differs from the traditional application development and architecture we might be more familiar with. This will help inform how to best structure an overall application in order to provide the most benefit to us as developers working on a system, and ensure we build services in an efficient, easy-to-maintain, and sustainable fashion. As an overview, in this chapter we will be covering the following:

  • Webhooks versus APIs
  • An event-based example
  • Microservices, assemble!

While examining the aforementioned topics, you may find it difficult to form a connection between the theory and the work we do within Marketing Cloud. We'll show an example for webhooks that should help clarify how some of these ideas are relevant to our work as Marketing Cloud developers. We also hope that you consider each item carefully and look for similarities to the work you do on a daily basis. While it's true that the last few chapters of this book contain quite a bit of theory, each item has a direct correlation to the work that we do within Marketing Cloud, even outside the scope of custom application development. With some careful consideration, we think you will see the connections and feel more empowered to view your work both in Marketing Cloud and external to it in a new, and hopefully more informed, light. Now, without further ado, let's take a look at the first topic of our chapter, webhooks.

Technical requirements

The full code for the chapter can be found in the GitHub repository located here: https://github.com/PacktPublishing/Automating-Salesforce-Marketing-Cloud/tree/main/Chapter12.

Webhooks versus APIs

So, before we get started comparing these two, let's define what webhooks are and how they can be utilized within an application. Webhooks, also referred to as web callbacks or reverse APIs, are a method that allows an app or service to send real-time data to another application or service when some given event has occurred. Whenever an event is triggered, the webhook registers the event and aggregates all of the data for the request. The request is then sent to a URL, specified in a configuration within the service registering the event, in the form of an HTTP request.

Webhooks allow us to process logic efficiently when an event occurs within the service providing the webhook. The structure of the information passed from the webhook is decided by the service provider emitting the event. Webhooks can also be utilized to connect events and functionality on two disparate services so that some event on one platform triggers another event on a separate platform without any user intervention or input. An example of this might be where we configure webhooks within both GitHub and Slack and utilize them together such that a message is posted on a Slack channel whenever a new commit has been made and merged into our master branch. By using webhooks, we can allow services to talk to each other in an automated way, which lets us construct functionality that may not be possible using the standard API approaches discussed in the previous chapter.
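Under the hood, all the "sender" does is make an HTTP POST to the receiver's configured URL. As a minimal illustration, the following SSJS sketch posts a message to a hypothetical Slack incoming webhook URL, much as GitHub's webhook would; the URL and message text are placeholders you would replace with your own:

<script runat=server>
Platform.Load("core", "1.1.1");

// Hypothetical Slack incoming webhook URL, replace with your own
var slackWebhookURL = "https://hooks.slack.com/services/T000/B000/XXXX";

// Slack incoming webhooks accept a simple JSON body with a "text" property
var req = new Script.Util.HttpRequest(slackWebhookURL);
req.method = "POST";
req.contentType = "application/json";
req.postData = Stringify({ text: "A new commit was merged into master!" });

var resp = req.send();
Write("Slack responded with status " + resp.statusCode);
</script>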

It's quite easy to confuse webhooks with APIs, especially given that they both communicate utilizing the same methods. Also, the responses for webhooks can feature a very similar structure as that of a traditional API request. We can even see how they could be utilized separately to construct similar functionality in a given scenario. In the previous chapter, we examined APIs in depth and saw that we could utilize requests to return the state of a given resource. If the primary use case for webhooks is event-driven requests, couldn't we also just use an API call to determine the status of an event? Sure, we could do that. Utilizing this method would be an implementation of a concept known as continuous polling. The following diagram illustrates the continuous polling concept:

Figure 12.1 – Continuous polling visualization

Using continuous polling, we configure our client to make regular requests to the server to determine whether anything has changed with a given resource. We can execute these calls in either an automated or manual fashion, but the primary thing to note is that we ask the server, at some interval, to return the status of a resource, and we run this process continuously. We make an arbitrary number of requests that tell us whether an event has occurred, and that return data about the event when it has, even though most of our requests to the server will return a status indicating that no change has occurred. As you can see in Figure 12.1, our client makes a total of three requests to the server for the resource status but is only returned meaningful information, that a change has happened, on the third request. You could visualize this approach as a parent and child on a long car ride, with the child continuously asking, Are we there yet? Of course, at some point, the answer will be yes, but it will only come after many nos and noticeable frustration from the parent.
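To ground this in Marketing Cloud terms, a polling check might be an SSJS Script activity running on a recurring Automation Studio schedule. The following is a minimal sketch only; the status endpoint and the changed property are hypothetical:

<script runat=server>
Platform.Load("core", "1.1.1");

// Hypothetical status endpoint that reports whether a resource has changed
var statusURL = "https://example.com/api/resource/status";

var req = new Script.Util.HttpRequest(statusURL);
req.method = "GET";
req.contentType = "application/json";
var resp = req.send();

// Most polls will come back with no change; this is the wasted work
var status = Platform.Function.ParseJSON(String(resp.content));
if (status.changed) {
    // ... process the event data here ...
    Write("Change detected");
} else {
    Write("No change... are we there yet?");
}
</script>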

Obviously, this is not an ideal approach. We're essentially wasting resources on both ends, making requests to the server when the result will probably not provide any meaningful information about the resource in question. While we're still able to detect changes, this comes with several disadvantages, such as the following:

  • Resources on both the client and server sides are expended for every request. This leads to inherent inefficiencies in how our application will function.
  • We could overwhelm the server if our data is needed in real time. Thousands of requests per minute could lead to overages in our API limit allotment, causing calls to fail even when an event has occurred.
  • The data returned when polling does detect an event is inherently stale. If your polling frequency is a few minutes, or hours, then the data returned reflects the state of a resource at that specific instant and may become invalid at any point between your polling cycles.

That's not to say that there is no use for polling generally, but it is by its very nature an inefficient process. If the event data is not time-sensitive, and our schedule is on the order of a day or more, then a simple automated poll to check status is unlikely to pose significant risk or impact to resources. Even then, though, polling offers little gain over an event-driven approach; the risks are merely smaller. So, how do webhooks compare to continuous polling in terms of functionality and efficiency? The following is a visualization of a webhook:

Figure 12.2 – Webhook visualization

As you can see in the preceding diagram, webhooks have a simple overall structure. Rather than initiating requests from the client to the server to determine the current state of a resource, we instead subscribe to a webhook that will notify us when the event has occurred by posting data to an endpoint that we define within the service where the event occurs.

It's not hard to see the benefits of utilizing webhooks over continuous polling to retrieve data when a given event occurs. Unlike polling, we can ensure that our data is updated in real time, since our application will receive data corresponding to an event as soon as that event has occurred. This ensures that we don't suffer from the stale data problem present with polling. In addition, we don't overwhelm resources, since we're not making continuous requests to return resource states. A single request suffices, and it is executed by the server, to your configured endpoint, only when a specified event has occurred. There's no need to worry about governor limits or scaling your polling procedures.

In review, the primary difference between the two is that, with an API, a user makes a request to retrieve data from an endpoint and then receives a response. Webhooks are simply HTTP messages that are sent as the result of some event on a third-party service. Webhooks and APIs are closely related, with webhooks even being referred to as reverse APIs by developers. Now that we know the nomenclature and definitions, let's dig a little deeper with a practical example for demonstration.

An event-based example

We now have a solid grasp on what a webhook is, how it differs from an API, and the types of scenarios where it can be a more effective solution than traditional API requests or polling. What might not be so obvious is how we can utilize them to extend our capabilities within Marketing Cloud to improve our current processes. To that end, it might be helpful for us to look at a simple webhook implementation that works with Marketing Cloud to automate an internal process common to many organizations.

For our example, let's say that we are currently using Content Builder to create and manage our email campaigns within Marketing Cloud. Furthermore, we are also utilizing content blocks within Content Builder to modularize our development processes and keep things compartmentalized and easy to maintain. Unfortunately, we've identified issues with version control when working on content as a team and want to minimize impact when multiple people are working on an individual campaign. To allow for more efficient backups, we're also utilizing GitHub to version-control our code and to keep backup repositories of our production content to prevent an untimely deletion.

This process works well, but it's still cumbersome to sync the data between our repository and Marketing Cloud, and the updates are still manual and prone to user error. It sure would be great if we could just automatically add content to Content Builder whenever we've pushed new content to our repository. That's where webhooks come in!

By utilizing a webhook with GitHub, we can automatically post commit data whenever a push event is triggered for our repository. Even better, we can configure an endpoint for our webhook to post to. This endpoint can contain logic to process the new files and create them within Marketing Cloud automatically.

First, let's outline exactly what we are trying to implement. When a file is created and pushed to our GitHub repository, we want to trigger an event in Marketing Cloud that will grab the raw data from the file and create a code snippet content block containing that data within Content Builder. While we could also selectively update or remove content based on the event in GitHub, we'll stick to just creating new content for now, for ease of demonstration.

We'll also assume that we will structure our repository such that our code snippet content blocks are nested in individual folders whose names correspond to a category ID in Content Builder. So, an example of our repository might look like this:

Figure 12.3 – Webhook example project structure
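In plain text, using the file paths we'll see in the webhook payload later in this example, that structure might look like the following, with each folder name corresponding to a Content Builder category ID:

content-repository/
├── 12345/
│   └── contentBlock1.html
└── 67890/
    └── contentBlock2.html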

Finally, let's also call out that our example won't handle pulling larger files (>1 MB) from the GitHub REST API. Although this is possible by utilizing additional API routes, it is beyond the scope of this example. Before we can implement our new webhook functionality, we're going to need a few things so we can get started setting everything up:

  • An account within GitHub
  • A repository to manage content
  • A JSON Code Resource page in Marketing Cloud to serve as our webhook endpoint

That's all that we need to get started building our solution. With these items set up, let's take a look at configuring the webhook for our GitHub events.

Configuring the GitHub webhook

There are a few steps required to configure the GitHub webhook. Follow these steps to begin the configuration:

  1. First, we'll need to configure the webhook and application token on the GitHub website in order to set up our integration.
  2. To create a webhook for a specific repository, navigate to that repository's landing page and select Settings from the page's menu.
  3. Then, we'll navigate to the Webhooks section in the Settings menu and select Add Webhook.
  4. This will bring up a Configuration menu that we can use to define where the webhook should post data, the structure of the data to be sent, and what event type should trigger a request.

In this menu, we'll define the following items:

  • Payload URL: This will be the URL that we want to post the event to whenever it is triggered within GitHub. For our purposes, we will configure a JSON Code Resource CloudPage to accept requests from GitHub and to perform our content creation functionality when an event is fired.
  • Content Type: The type of data that the webhook will post to our endpoint. Available options are application/json and application/x-www-form-urlencoded. The appropriate choice will depend on your system and preference, but for ease of use, let's go with application/json for this example.
  • Secret: This is an optional input that allows you to specify a secret token that can then be utilized to secure your webhook and ensure that only events from GitHub are processed by the endpoint's logic as valid events. We certainly wouldn't want to execute functionality when the request is made from a random source, so implementing security protocols such as the secret token is important for production implementations. While we could implement other protocols, such as checking the referrer or other request information, this provides us with a simple mechanism for securely authenticating that a given request originated from GitHub. Since our example is purely for demonstration, we'll skip this step and just focus on the functionality for now, though a minimal validation sketch follows this list.
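For reference, GitHub signs each delivery with an HMAC of the payload (sent in the X-Hub-Signature-256 header), and validating that signature is the preferred approach. Since SSJS doesn't offer a native HMAC function, a simpler (and weaker) alternative for a Code Resource endpoint is to append a shared token to the payload URL as a query parameter and reject requests that don't carry it. Here is a minimal sketch, assuming a hypothetical token value:

<script runat=server>
Platform.Load("core", "1.1.1");

// Hypothetical shared token, appended to the payload URL as ?token=...
var expectedToken = "REPLACE_WITH_A_LONG_RANDOM_VALUE";
var providedToken = Platform.Request.GetQueryStringParameter("token");

if (providedToken != expectedToken) {
    // Not from our configured webhook, so stop processing immediately
    Write(Stringify({ error: "unauthorized" }));
} else {
    // ... proceed with the content creation logic covered below ...
}
</script>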

After we've configured the primary information about our webhook, listed previously, we'll then want to configure what types of events will trigger a request. While there are many different event types that we could target, from the forking of the repository to the creation of new issues, we will select Just the push event. Since we only want to create content once some new content has been pushed to our repository, only firing the webhook for push events should be sufficient. Then, we simply set our webhook as active and save. Now our webhook is live and will automatically send data when a push event has occurred for our repository!

Our webhook has been configured and we are all set to post events to our new endpoint, but we've still got one more configuration to take care of within GitHub. While we are ready to receive event data from our webhook, we need to set up a new access token in order to retrieve the raw content data from our repository to generate the content block in Marketing Cloud.

To do this, navigate to the User Settings menu and select Developer Settings. From there, we'll select Personal Access Tokens and create a new access token. In this configuration, we'll want to provide it with a helpful name that identifies the webhook service we have constructed. We'll also need to set an expiration date for the token or configure it to never expire. Finally, we'll select the repo scope so the token can read content that has been pushed to our repository. After saving your configuration, you will be presented with your access token. This token will not be visible again, so ensure you store it for your application before navigating away from the page.

That's all we need for our example service to receive events from GitHub, authenticate into our repository, and retrieve the raw content from our push event. Now, we'll head over to Marketing Cloud to configure the endpoint we specified for our webhook.

Setting up the endpoint

With our webhook set up on the GitHub side, which will send data to our endpoint when an event occurs, it's now time to configure our endpoint to execute our content creation logic when data has been posted to it. As stated previously, we can utilize a JSON Code Resource page to serve as our endpoint, but we could just as easily construct this to be hosted on any server and utilize an array of technologies. Since our purpose here is to demonstrate an example scenario, and given that most of you will be familiar with Code Resources, we'll select this option to process our logic. So, how do we begin? Well, first, we'll need to highlight the pieces of data in the GitHub request that we can key off of to retrieve the file contents and create our newly pushed content within Marketing Cloud:

{
   "repository":{
      "contents_url":"https://api.github.com/repos/{username}/{repositoryname}/contents/{+path}"
   },
   "commits":[
      {
         "added":[
            "12345/contentBlock1.html",
            "67890/contentBlock2.html"
         ]
      }
   ]
}

While there is much more data returned in the event request from our webhook, this is the overall structure that contains the relevant pieces that we will utilize for our example solution. First, note that there is a property called contents_url within our JSON payload. This value provides us with the base URL of our repository that can be utilized to make API calls to find files with a specific path within our repository. In addition, we have the added array under the commits property, which will house any files that have been newly added as a result of our push event within GitHub.

With our generalized payload structure in hand, let's define the individual pieces of functionality that we'll want our webhook to execute in order to create our code snippet content block within Content Builder:

  1. Retrieve the payload from GitHub and parse the JSON for further processing.
  2. Parse the added array within the payload to determine the files that have been created in our latest push event.
  3. Retrieve the category ID, filename, and raw file contents of each item in the added array.
  4. Create new code snippet content blocks within the specified folders in Content Builder.

Step 1 – retrieving the payload

First, we'll need to set up the script to retrieve the payload data and allow us to further process the JSON data being posted from the webhook. To do this, we'll utilize both the GetPostData() and ParseJSON() SSJS functions, which will retrieve the data and parse the JSON object:

<script runat=server>
Platform.Load("core", "1.1.1");

// Retrieve the raw POST body sent by the GitHub webhook
var postData = Platform.Request.GetPostData();

// Parse the body into a JSON object for further processing
var json = Platform.Function.ParseJSON(postData);
</script>

Now that we've pulled in the JSON data and have it ready for processing, we need to assign the relevant data points we highlighted in the payload to variables that we can utilize for further processing.

Steps 2 and 3 – parsing the added array and retrieving the contents

Now, we'll grab the contents_url parameter from the payload. Notice that, in our example, the payload value ends with the {+path} placeholder. We'll want to remove this portion from our variable, as it's not relevant for building the final path to the files that we wish to retrieve. Finally, we'll also grab the added array from the commits property so that we can iterate through each added file and retrieve its contents:

// Base URL for the repository's contents API
var baseContentsURL = json.repository.contents_url;

// Strip the trailing "{+path}" placeholder, keeping everything up to the last "/"
baseContentsURL = baseContentsURL.slice(0, baseContentsURL.lastIndexOf('/') + 1);

// Array of file paths added in this push event
var addedFilesInCommit = json.commits[0].added;

That's all we need in order to accomplish the aforementioned items, and we now have our variables assigned for the base content path URL as well as our added array. With that in hand, we need to write our function to call the GitHub REST API to return the raw contents of our newly pushed files. Let's take a look at what that script looks like and then break down its components a little further:

function getRawGithubData(assetPath, contentURL) {
    // Personal access token created in the GitHub configuration step
    var accessToken = "YOUR GITHUB ACCESS TOKEN";
    var auth = 'token ' + accessToken;
    var url = contentURL + assetPath;

    var req = new Script.Util.HttpRequest(url);
    req.emptyContentHandling = 0;
    req.retries = 2;
    req.continueOnError = true;
    req.contentType = "application/json";
    req.setHeader("Authorization", auth);
    // GitHub requires a user-agent header on all REST API calls
    req.setHeader("user-agent", "marketing-cloud");
    // Request the raw file contents rather than the JSON metadata
    req.setHeader("Accept", "application/vnd.github.VERSION.raw");
    req.method = "GET";

    var resp = req.send();
    var resultString = String(resp.content);
    return resultString;
}

As you can see here, we are using Script.Util to make a GET request to the GitHub API to retrieve our file content. To make this request, our function accepts two parameters: contentURL, which we assigned and formatted in the previous step, and assetPath, the file path we'll pull from the added array assigned previously. Before we can complete our API call, we'll need to further define the following items in our request:

  • Authorization header: This allows us to authenticate our call into the GitHub API to confirm that only we can retrieve the data relevant to an individual file. For this header, we'll simply need to concatenate token followed by the GitHub personal access token that we created and saved in the GitHub configuration portion of this example.
  • User-agent header: A user-agent header is a requirement on GitHub REST API calls, so we'll have to pass a value for this header in our API call for it to function. The exact value doesn't matter, but it should be reflective of the platform/purpose of the call with which we are planning to execute. For our purposes here, we'll set this value to marketing-cloud.
  • Accept header: We specify this header to let GitHub know that we want the raw data of the file returned in the response. This allows us to utilize the exact contents of the file without any further processing or decoding on our end.

That's all that we need to define to make our request to GitHub to retrieve the file contents of whatever asset path we pass into this function. We'll make our request and return the content of that request as the output of the function so that we are able to retrieve the file contents and upload the asset to Marketing Cloud. With our function set up to retrieve the contents of the files added during the commit, we'll now need to write the function that writes this content to Marketing Cloud.

Step 4 – creating new content

While we could utilize several methods to create this content, such as the Content Builder REST API, for ease of use (and to save us from setting up packages and authenticating into Marketing Cloud), we'll use a platform function approach. Before we dive in, it's important to note that the documentation outlining the available routes and functionality within Content Builder can be found here: https://developer.salesforce.com/docs/marketing/marketing-cloud/guide/content-api.html.

Let's take a look at what that function looks like before outlining what's going on:

function createAsset(assetName, assetContent, assetId, assetCategoryId) {
    var asset = Platform.Function.CreateObject("Asset");

    // Reference the asset type (for example, 220 for code snippet blocks)
    var nameIdReference = Platform.Function.CreateObject("nameIdReference");
    Platform.Function.SetObjectProperty(nameIdReference, "Id", assetId);
    Platform.Function.SetObjectProperty(asset, "AssetType", nameIdReference);

    // Reference the Content Builder category (folder) to create the asset in
    var categoryNameIdReference = Platform.Function.CreateObject("categoryNameIdReference");
    Platform.Function.SetObjectProperty(categoryNameIdReference, "Id", assetCategoryId);
    Platform.Function.SetObjectProperty(asset, "Category", categoryNameIdReference);

    // Assign the asset's name and raw content
    Platform.Function.SetObjectProperty(asset, "Name", assetName);
    Platform.Function.SetObjectProperty(asset, "Content", assetContent);
    Platform.Function.SetObjectProperty(asset, "ContentType", "application/json");

    // Create the asset and return the API response
    var statusAndRequest = [0, 0];
    var response = Platform.Function.InvokeCreate(asset, statusAndRequest, null);
    return response;
}

Here, we are outlining a function called createAsset that will take some parameters and utilize them to actually create our code snippet content block within Marketing Cloud. Our function should accept parameters for the following properties of our Content Builder asset:

  • Asset type ID
  • Category/folder ID
  • Name
  • Content

First, we'll need to define the asset type that our content belongs to. While we have written our function to keep this process generic, we could also hardcode the type directly if we are only utilizing this webhook to process data for a single type. Here, we'll let the function take it as a parameter and assign the type ID accordingly. Next, we'll need to retrieve the categoryId parameter and assign that value to the Category ID property of our asset initialization. This ID specifies exactly which folder we wish to insert this asset into. Finally, we'll grab both the asset name and content parameters and assign them accordingly to our asset object. Then, our function will create the asset with the values defined previously and insert this content into the specified folder within Content Builder. Now, all that we need to do is iterate through the added items in the GitHub JSON payload and invoke the preceding two functions to retrieve the content and create it in Marketing Cloud:

for (var i in addedFilesInCommit) {
    var assetPath = addedFilesInCommit[i];

    // The folder name preceding the "/" is the Content Builder category ID
    var categoryId = assetPath.substring(0, assetPath.indexOf("/"));

    // The filename (minus its extension) becomes the content block name
    var contentName = assetPath.split("/").pop().replace(".html", "");

    // Retrieve the raw file contents from GitHub
    var contentData = getRawGithubData(assetPath, baseContentsURL);

    // 220 is the asset type ID for code snippet content blocks
    createAsset(contentName, contentData, 220, categoryId);
}

Notice that here we are iterating through each item in the array and assigning an assetPath variable equal to the path of the file that has been pushed to our GitHub repository. Because this path contains both the name of the file and the category ID, as defined in the naming convention we discussed at the start of this example, we parse each of those values separately from the added array item within each iteration. Then, we invoke our GitHub REST API call function and assign its result to a variable that will contain the raw content of the file we've retrieved. After that, it's as simple as calling our createAsset function, noting that we are passing in a value of 220 for our asset type ID, as this corresponds to code snippet content blocks within the Content Builder API asset model. For a complete list of asset type IDs, please refer to the Content Builder asset type documentation, located at https://developer.salesforce.com/docs/marketing/marketing-cloud/guide/base-asset-types.html.

That's it! With the preceding code saved and published to the endpoint that we defined within the GitHub webhook configuration, we are now all set to start syncing our push event file data with Content Builder. Whenever we add, commit, and push content to our repository, this endpoint's logic will automatically process the event and the new content will be created within Marketing Cloud from the data we've pushed.

This was a somewhat simplistic example, but we hope that it helps highlight the different ways that we can utilize webhooks to create event-driven functionality within Marketing Cloud that automates our processes or provides some new degree of efficiency. We could easily scale this solution to handle our assets more comprehensively, or even define our own schema for mass-creating any set of Marketing Cloud objects that are accessible either through platform functions or through API routes defined within the documentation.

In addition to using external services in order to generate event-driven functionality, there are webhook services within the Marketing Cloud ecosystem that allow the user to subscribe to certain events and then receive automated requests posted to a defined endpoint whenever an activity has occurred. One such method is Event Notification Service, which provides a webhook functionality that allows developers to receive relevant deliverability and engagement metrics on an email or text deployment automatically. This allows us to further automate processes in order to provide an immediate response or insight following some user engagement with our content. So, say we have an order confirmation email that contains a link for support or further information. We could set up a webhook that receives a request when the link is clicked and then takes some further immediate action (such as emailing a more detailed order receipt to the user).
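As a hedged sketch only, an Event Notification Service callback endpoint in a Code Resource might look something like the following. ENS posts a batch of events as an array, but the field names and category value shown here are illustrative, so consult the ENS documentation for the current payload schema:

<script runat=server>
Platform.Load("core", "1.1.1");

// Parse the batch of events posted by Event Notification Service
var postData = Platform.Request.GetPostData();
var events = Platform.Function.ParseJSON(postData);

for (var i = 0; i < events.length; i++) {
    // Illustrative category value; verify against the ENS documentation
    if (events[i].eventCategoryType == "EngagementEvents.EmailClick") {
        // ... take the follow-up action here, such as firing a triggered
        // send containing a detailed order receipt ...
    }
}
</script>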

The core concepts for utilizing the GitHub integration and Event Notification Service remain largely the same. Though the steps to accomplish both will differ in their configuration or endpoint, the basic premise is that we utilize the following steps to create our integration:

  1. Configure an endpoint to receive requests.
  2. Register and verify that our endpoint is able to securely authenticate and process calls from the webhook.
  3. Create the webhook subscription event such that an event will be fired to our provided endpoint whenever a defined action has occurred.

With Event Notification Service, these steps are largely accomplished through API requests to defined routes within the Marketing Cloud REST API. In our GitHub example, they are done through simple User Interface (UI) configurations made within the repository settings, but the overall flow necessary for constructing these solutions is essentially the same.
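As a rough sketch of what those API requests might look like, the following outlines the flow; the route names reflect the ENS documentation at the time of writing, and the subdomain, access token, payload shapes, and IDs are all placeholders you must supply:

<script runat=server>
Platform.Load("core", "1.1.1");

// Placeholders: supply your own tenant subdomain and OAuth access token
var restBase = "https://YOUR_SUBDOMAIN.rest.marketingcloudapis.com";
var accessToken = "YOUR OAUTH ACCESS TOKEN";

// Helper to POST a JSON payload to a Marketing Cloud REST route
function restPost(route, payload) {
    var req = new Script.Util.HttpRequest(restBase + route);
    req.method = "POST";
    req.contentType = "application/json";
    req.setHeader("Authorization", "Bearer " + accessToken);
    req.postData = Stringify(payload);
    var resp = req.send();
    return Platform.Function.ParseJSON(String(resp.content));
}

// 1. Register the callback endpoint that will receive events
var callbacks = restPost("/platform/v1/ens-callbacks",
    [{ callbackName: "order-clicks", url: "https://example.com/ens-endpoint" }]);

// 2. Verify the callback with the verification key that ENS posts to it
// restPost("/platform/v1/ens-verify",
//     { callbackId: "...", verificationKey: "..." });

// 3. Subscribe the callback to the event types we care about
// restPost("/platform/v1/ens-subscriptions",
//     [{ callbackId: "...", eventCategoryTypes: ["EngagementEvents.EmailClick"] }]);
</script>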

Understanding the importance of event-driven requests, and how they can be utilized both within Marketing Cloud and externally to generate real-time functionality, is key. Familiarity with the distinction between webhooks and APIs allows us to select the appropriate tool for a given task and ensures that we keep efficiency and maintainability at the forefront of our application development. Now that we have introduced the concept of webhooks, let's move on to another concept that can aid us in building solutions that are both efficient and scalable.

Microservices, assemble!

It's no secret to any of you that business requirements or existing flows can change, sometimes on a daily or weekly basis. As such, development teams are compelled to adapt to changing circumstances by extending new functionality into a given service or by altering its capabilities to meet both existing and new challenges. Unfortunately, it's not always so simple to extend functionality or revise existing solutions within our applications. We may have portions of our code base that are generic but intertwined and dependent on the execution of some other component.

When starting a project, while the focus is narrow, the code base can be very manageable and somewhat self-contained, since it should encapsulate all of the base functionality outlined in the discovery process. Over time, additional functionality and components are added such that the code base and the build, integration, and test processes become cumbersome to manage or decouple. With more and more code being housed in a central location, best practices emerged for modularizing the functionality of the application to make the code more maintainable and generalized (that is, usable by other parts of your application). Unfortunately, all of these individual modules must still be compiled together into a single code base in order to deploy the application. So, regardless of how the improved modularity of the application has helped the developers working on it, at the end of the day, it still needs to come together in a single deployment for the entire code base to go to production. Enter microservices.

Microservices are an architectural pattern that differs from a monolithic approach in both the structure of the development process and that of deployments. In a microservice architecture, we break down individual pieces of functionality into discrete, loosely coupled entities that have their own code base and can be deployed and managed independently from the rest of the application. So, when we have a simple update or a new service addition, we can both develop and deploy the individual piece of functionality as a separate code base rather than worry about app-wide testing, deployments, or integrations. Before we clarify this topic any further, let's take a look at the monolithic approach for building applications and then compare that with a microservices architecture so that we can see the pros and cons of each:

Figure 12.4 – Monolithic architecture diagram

A monolithic architecture is used for traditional server-side applications where the entire functionality or service is based on a single application. The entire functionality of the site is coded and deployed as a single entity and all dependencies are intertwined together. As you can see in Figure 12.4, a monolithic architecture comprises a UI, a server-side application (Business Logic and Data Access Layer in the preceding figure), and a database that contains relevant information that we can read and write with our application. Now that we have a base definition of what a monolithic architecture comprises, let's take a look at some advantages and disadvantages of this architecture.

Advantages of monolithic architecture

This is the architecture that most developers in Marketing Cloud will be familiar with concerning application development. A single suite of tools or technologies is selected to solve a given range of use cases, and the development process will more or less flow through a common build and deployment process that will be global in its management of the code base. Not only is this process intuitive, particularly when coming from a hobbyist or more isolated developer experience, but it can also allow developers to get started on a project quickly.

First, let's look at some of its advantages:

  • Simple to develop: Because this is the traditional method of developing applications, it's quite likely that your development team feels comfortable utilizing this architectural pattern for your application. In addition, when fleshing out your workflow and desired functionality for the application in the planning stages, it is much simpler to structure and build your application from the ground up in a monolithic architecture that allows for code reuse and shared datasets. Separating your code logically into components that are still related within a single application can introduce the concept of modularity without having to build individually separate services.
  • Simple to deploy: This might be the most obvious benefit of utilizing a monolithic architecture. Simply put, it's much easier to stage and deploy a single code base than to manage multiple directories or files. Rather than worrying about varying build processes or service quirks, it's all rolled into one package that you can put into production at once.
  • End-to-end testing: It should come as no surprise that end-to-end testing of your application or service is much easier when the entire suite of functionality is hosted within a single code base. There are many tools out there that can automate our testing procedures much more easily when our application is unified within a single code base.

Disadvantages of monolithic architecture

As you can see, some of the most key advantages of utilizing this architecture are related to its simplicity to develop, test, and deploy. Most developers will be familiar with this workflow and easily understand the efficiencies that it can provide, particularly during the initial build and deployment process. That being said, this is not without a few disadvantages as well. Let's take a look at a few key costs when implementing this approach:

  • Changes can be complex: Utilizing a single code base to provide all of the features of your application or service can become very difficult to manage when the overall size and complexity of your code are significant. If we have an individual feature or component to develop or extend, we are unable to isolate that code individually from the other components in our code base and we must test and deploy the entire application as a single entity just to accommodate this change.
  • Scalability: Let's say we have developed a suite of services that will automate business tasks in our organizations in addition to providing some functionality related to customer experience (an API gateway for email data, for example). Some of the functionality is used quite rarely, while others are much more in demand and receive lots of traffic each day. We could implement elastic scaling of our services so that the servers can process spikes in traffic and allocate more resources when many requests are being made simultaneously. Unfortunately, with a monolithic architecture, we can't selectively scale the individual portions that may receive the most traffic since the entire code base is effectively a single, coupled entity. This means we have to scale the entire application, even though only a small handful of components might require it. This can lead to poor user experiences or costly resource use that could be avoided with a more decoupled architecture.
  • New technology barrier: When we use a monolithic architecture, decisions about the technologies to utilize need to be made as part of the overall discovery process. As requirements change, and new languages, tools, or services are created to more efficiently handle common development issues, we may want to implement or utilize these new technologies to more efficiently deliver our features or to provide some capability that isn't supported in our current implementation. Utilizing a monolithic approach, we may have to rewrite large portions of our application to support this new technology, which might not be feasible from a time management or financial cost perspective.

As you can see, there are some obvious advantages and costs associated with utilizing a monolithic architecture when building applications or services. While it may be more intuitive to use this approach, and even desired when the level of complexity in the application is known to remain small, these advantages come at the cost of maintainability and scalability, which may be substantial barriers when considering your implementation.

Let's now take a look at an alternative approach that was created to address some of these concerns:

Figure 12.5 – Microservice architecture diagram

As you can see from the preceding figure, the structure of this architecture differs substantially from a monolithic approach. Instead of the more linear flow that we outlined with a monolithic approach, here we've decoupled our services and routed them through an API gateway. This allows us to route the appropriate request to individual microservices, providing a level of service decoupling that is not possible in the other architecture. For clarity's sake, let's define what each of these pieces does at a high level:

  • Client: The client can be any type, including a mobile application, single-page app, or integration services. It's essentially how a user or service interacts with your application.
  • API gateway: An API gateway is a sort of reverse proxy that sits between the client and your application microservices. It routes requests from the client to the appropriate microservice needed to perform some action. This is not a required entity for a microservices architecture as your client could call the necessary microservices directly, but it can sometimes be more efficient to utilize one (or multiple) API gateways that can route requests more efficiently or offer additional features, such as authentication and caching, that might not be easily implemented in a direct communication pattern.
  • Microservice: As the name implies, microservices are small, independent pieces of functionality that are maintained in their own separate code base and are deployed individually from other microservices and features of an application. They will generally be grouped by the domain that they fall within (such as order management, shipping information, and cart functionality) and are accessed utilizing simple requests.
  • Database: This is the microservice's own datastore, housing the information processed by that service.

As you can see, microservices differ from a monolithic approach in some distinct and important ways. First, this approach decouples functionality by domain or purpose into entirely separate code bases, languages, deployments, and build processes. From an end user perspective, the functionality of a monolithic and a microservice application is essentially the same, but the method by which that functionality is built is quite different. In the monolithic approach, we take all our disparate features and services and roll them up into a single application that a user interacts with. With microservices, however, we separate our application into a set of smaller applications that interact in such a way that the overall suite of services mirrors what our single application could do, but in a much more efficient and manageable structure for developers. To illustrate this difference a bit more, let's list some characteristics of a microservice that define its purpose and how it can be managed:

  • Microservices should be small, loosely coupled, and managed independently of other services in the application. Each service should use a separate code base.
  • Each service can be created, managed, and deployed independently of any other service within the application.
  • Services should communicate with other services utilizing well-defined APIs, though the internal implementation details of a given service are hidden from other services.
  • For the most part, each microservice will have its own private database that it maintains separately from other services.

The key takeaway from these points is that each service is its own self-contained entity that can be managed wholly separately from the other services comprising an application. This lends itself well to extending the functionality of your application, as you can have disparate teams contributing to the same application while still retaining the overall architecture of your implementation. Whether you are using completely different technologies or languages, hosting platforms, or any other differing item in the development process, as long as you have a common set of instructions for accessing and processing data from the service, they can be implemented within the same architecture to drive the functionality of an application.

Now that we have examined some of the advantages and disadvantages of a monolithic architecture, let's take a look at the microservices model in order to determine the pros and cons of its implementation.

Advantages of microservices architecture

Here are the advantages:

  • Loose coupling: Since each microservice is essentially its own mini-application, the risk that a change in one portion of the application will cause unanticipated changes in another is minimized greatly. Not only does this allow us to more easily maintain, test, and troubleshoot individual pieces of functionality of our application, but it also prevents a single errant piece of code in one part of our application from causing a widespread outage. This also allows us to provide more granular monitoring for individual components of our application rather than a more global service.
  • Technology flexibility: We've highlighted this previously, but it's an important advantage when using this architecture. Because each of our services is maintained and deployed individually, no individual component is tied to the overall technology stack of any other service. This allows us to more easily upgrade or implement functionality utilizing the most up-to-date tools and technologies, which may provide a substantial benefit when compared to the initial implementation decisions during project discovery. Additionally, it widens the scope of the teams that can work on functionality for an application since it allows developers to work on their own piece of functionality in isolation from the application in whatever technology stack they feel most comfortable with.
  • Encourages refactoring: When you've got an application that has become highly complex, and where functionality for different items can have interdependent relationships with the same sets of code, it can be discouraging to rewrite or refactor your code base. The adage if it ain't broke, don't fix it is commonly used in this context, as the benefits that we might derive from improving our code can sometimes conflict with the costs of testing, maintenance, and downtime if the code in question is shared across the application. Microservices allow us to more granularly improve specific, self-contained sets of functionality without as much worry that our refactoring will have unintended consequences. This encourages developers to continuously refine their implementation to make it more understandable, efficient, and maintainable.

Disadvantages of microservices architecture

These are the disadvantages:

  • Lack of governance: We shared that technology flexibility and loose coupling are key advantages of utilizing this architecture, but they also can become an issue as well. While having the freedom to implement new technologies or languages for each individual service in our application allows us to expand the scope of people who can contribute and ensures that we can apply more efficient technologies more easily, it can come at a cost if done too frequently. Since there is no centralized framework with which each service is developed (though there can be a business requirement), you may end up with so many different languages and frameworks being used that some services become unmaintainable. Implementing the flavor-of-the-month framework for a given service might seem great at the time, but could be a niche item or unmaintained tool before you know it.
  • Complexity: While it's true that we have offloaded a lot of the complexity of each individual service into its self-contained code base, we've also introduced quite a bit more complexity into the system as a whole. The more involved integration and routing between services introduces complications that would be implicitly handled by the simpler routing of a traditional monolithic application.
  • Data consistency: With each microservice using a private datastore and responsible for its own data persistence, it can be difficult to derive data consistency across your application. While there are services that can help manage this, and even different application patterns specific to this issue, it's a common concern when utilizing microservices in data-heavy applications.
  • Latency and congestion: Because our services need to communicate directly with the application or other services, we can introduce congestion or latency in our network if the methods within our services are highly dependent or poorly structured. For instance, if we have a service A, which calls service B, which then calls service C, and so on, we can incur significant latency that will affect the overall user experience or even the general functionality of our application.

Each implementation comes with its own set of benefits and challenges, and the type that you choose to implement will be based on a multitude of factors, such as the complexity of your application, the expected roadmap for functionality, and the scope of development resources available. Though the benefits of microservices are clear, it is often recommended that, unless constructing complicated enterprise applications, you utilize a monolithic approach to begin with. This is so that you can more quickly get an idea into production and determine whether the future needs of your application merit the implementation of a microservices architecture. Segmenting your code into more modular components, simplifying build and deployment processes, and keeping the data model more self-contained are all ways that you can build within a monolithic structure while still keeping your application flexible enough for an eventual pivot to microservices if the move seems warranted. Finally, microservices are not necessarily better. If making simple changes to your application requires you to update and deploy 5-10 different services, then it defeats the purpose of using this architecture. It's simply a method for managing complex functionality within an application when the logic can be easily decoupled and managed by multiple teams using their preferred technologies.

Outside of understanding these two approaches to application development, there is a benefit in considering these architectures and their costs and benefits even when constructing simple functionality within Marketing Cloud. For instance, let's say we have an automation that contains several scripts that will create data extensions, queries, or journeys. We could write a single script that reads some input, from a data extension perhaps, and then uses that data to determine which function in the script to execute (create a data extension or create a query, for example), but that doesn't feel like an efficient method for implementing this solution. For one, we now have multiple, unrelated pieces of functionality housed within the same code base, which makes it more difficult to maintain and could lead to a small error in one piece effectively derailing the entire script. A more efficient solution would be to have each script handle only the domain that is relevant to its functionality. In this instance, we might have a script for creating data extensions, one for creating queries, and another for creating journeys.
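For instance, the data extension script might be as small as the following sketch, which uses WSProxy with a hypothetical data extension definition; the query and journey scripts would live in their own separate Script activities:

<script runat=server>
Platform.Load("core", "1.1.1");

// Single-purpose Script activity: it only creates data extensions
var prox = new Script.Util.WSProxy();

// Hypothetical definition; in practice, this could be read from a
// configuration data extension
var dataExtension = {
    Name: "Example_DE",
    CustomerKey: "Example_DE",
    Fields: [
        { Name: "SubscriberKey", FieldType: "Text", MaxLength: 254,
          IsPrimaryKey: true, IsRequired: true },
        { Name: "EmailAddress", FieldType: "EmailAddress", IsRequired: true }
    ]
};

var result = prox.createItem("DataExtension", dataExtension);
Write(Stringify(result));
</script>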

By compartmentalizing the individual pieces into their own distinct Script activities, we've created a system where a single error in one script has little to no impact on our other scripts, and we can make updates more selectively to individual pieces of functionality rather than constantly tweaking a single Script activity to manage everything. Now, you might be hard-pressed to consider this implementation a true example of a microservices architecture as it is traditionally understood within web application development, but many of the same benefits can be derived from this system as from microservices. Obviously, understanding these patterns in the web application space is hugely beneficial for us as Marketing Cloud developers, especially when we are building complex applications that interact across services to automate functionality within Marketing Cloud. That being said, we hope the takeaway from this chapter is that you can utilize these generic concepts, with regard to both microservices and the other topics we've discussed so far in the book, to start thinking differently about how your work in the platform itself is done. While you may not always find a quick correlation with the work you're doing on a daily basis, understanding these architecture patterns will inform how you operate within Marketing Cloud and can allow you to approach problems from a more knowledgeable perspective that will drive efficiency and maintainability in your solutions.

Summary

We've covered several key ideas within this chapter that we hope you found both informative and enlightening for how you consider the work that you do as a Marketing Cloud developer. The differences between webhooks and APIs, and how we can utilize webhooks to create an event-driven, real-time solution, are important for taking your integrations with Marketing Cloud to the next level. With the rise of platforms and services that implement webhooks, such as GitHub, Discord, and Slack, numerous opportunities have arisen for automation across disparate systems, enabling functionality that would otherwise be either impossible or wildly inefficient.

In addition to discussing webhooks, we also went through an example that creates content whenever a push event has occurred within GitHub. Obviously, our example was somewhat simplistic with many assumptions made for ease of demonstration, but it should provide a strong springboard for you to take this functionality to the next level. As Git has become an indispensable tool for teams across all development spaces, integrating this technology with Marketing Cloud through automated services can be a powerful multiplier that will increase the efficiency and happiness of developers or marketers working within the platform.

Finally, we reviewed what microservices are and how this architectural pattern differs from the traditional monolithic approach to application development. We highlighted some of the advantages and disadvantages of each approach and carefully considered how different factors, such as application complexity, team capabilities, or modularity, can affect our decision in regard to the optimal solution for our given use case. We also took a step back to consider how these ideas could be envisioned in the context of Marketing Cloud automation, and how automation itself can be thought of as a microservice architecture.

After reading this chapter, you should feel inspired to create event-driven functionality that can create value for your organization or developer experience. You should also be able to more clearly see how to apply the concepts in this book to the work that you do in Marketing Cloud, even outside of the context of custom application development.

In our next chapter, we're going to tackle custom Journey Builder activities, specifically an activity that can greatly expand both the utility and capabilities of Journey Builder within Salesforce Marketing Cloud.
