In the previous chapter, we dove into the wonderful world of APIs and SDKs to determine how they could be utilized to implement custom solutions and what costs and benefits are associated with each type. We learned about the different types of requests that are common both in Marketing Cloud and in web development generally. Then, we homed in on each request type and the overall structure that allows us to communicate with services across the web.
In this chapter, we're going to expand on our API knowledge to understand the concept of using an event-driven framework that lets us act on data in real time rather than with user-driven requests. This is a powerful tool for a variety of reasons, particularly how it allows us to automate functionality, both within Marketing Cloud and external to the platform, which isn't really possible to do efficiently with traditional APIs as defined in our last chapter.
Also, we'll go back to a conversation on application design and structure and introduce the concept of microservices. You will see how this concept differs from traditional application development and architecture we might be more familiar with. This will help inform us on how to best structure the overall application in order to provide the most benefit to us as developers working on a system and ensuring we build services in an efficient, easy-to-maintain, and sustainable fashion. As an overview, in this chapter we will be covering the following:
While examining the aforementioned topics, you may find it difficult to form a connection between the theory and the work we do within Marketing Cloud. We'll show an example for webhooks that should help clarify how some of these ideas are relevant to our work as Marketing Cloud developers. We also hope that you consider each item carefully and look for possible similarities to the work you do on a daily basis. While it's true that the last few chapters of this book contain quite a bit of theory, each item has a direct correlation to the work that we do within Marketing Cloud, even outside the scope of custom application development. With some careful consideration, we think you will see the connections and feel more empowered to view your work both in Marketing Cloud and external to it in a new, and hopefully more informed, light. Now, without further ado, let's take a look at the first topic of our chapter, webhooks.
The full code for the chapter can be found in the GitHub repository located here: https://github.com/PacktPublishing/Automating-Salesforce-Marketing-Cloud/tree/main/Chapter12.
So, before we get started comparing these two, let's define what webhooks are and how they can be utilized within an application. Webhooks, also referred to as web callbacks or reverse APIs, are a method that allows an app or service to send real-time data to another application or service when some given event has occurred. Whenever an event is triggered, the webhook registers the event and aggregates all of the data for the request. The request is then sent to a URL, specified in a configuration within the service registering the event, in the form of an HTTP request.
Webhooks allow us to process logic efficiently when an event occurs within the service providing the webhook. The information structure passed from the webhook is decided by the service provider passing the event. Webhooks can also be utilized to connect events and functionality on two disparate services so that some event on one platform triggers another event on a separate platform without any user delegation or input. An example of this might be where we configure webhooks within both GitHub and Slack and utilize them together such that a message is posted on a Slack channel whenever a new commit has been made and merged into our master branch. By using webhooks, we can allow services to talk to each other in an automated way, which allows us to construct functionality that may not be possible using the standard API approaches discussed in the previous chapter.
It's quite easy to confuse webhooks with APIs, especially given that they both communicate utilizing the same methods. Also, the responses for webhooks can feature a very similar structure as that of a traditional API request. We can even see how they could be utilized separately to construct similar functionality in a given scenario. In the previous chapter, we examined APIs in depth and saw that we could utilize requests to return the state of a given resource. If the primary use case for webhooks is event-driven requests, couldn't we also just use an API call to determine the status of an event? Sure, we could do that. Utilizing this method would be an implementation of a concept known as continuous polling. The following diagram illustrates the continuous polling concept:
Using continuous polling, we configure our client to make regular requests to the server to determine whether anything has changed with a given resource. We can execute these calls in either an automated or manual fashion, but the primary thing to note is that we query the server at some interval for the status of a resource and run this process continuously. We make an arbitrary number of requests that tell us whether an event has occurred, and that return data about the event when it has, even though most of our requests to the server will return a status indicating that no change has occurred. As you can see in Figure 12.1, our client makes a total of three requests to the server for the resource status but is only returned meaningful information, that a change has happened, on the third request. You could visualize this approach as a parent and child on a long car ride, with the child continuously asking, "Are we there yet?" Of course, at some point, the answer will be yes, but it will only come after many nos and noticeable frustration from the parent.
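The polling loop described above can be sketched as follows. This is a minimal illustration, not production code: the `checkResource` function is a stand-in for a real HTTP status request, simulated here so the example is self-contained.

```javascript
// Simulated server-side resource that only reports a change on the Nth check.
function makeResourceChecker(changesOnCall) {
  var calls = 0;
  return function checkResource() {
    calls++;
    return { changed: calls >= changesOnCall, calls: calls };
  };
}

// Poll until the resource reports a change, or until maxAttempts is hit.
function pollUntilChanged(checkResource, maxAttempts) {
  for (var attempt = 1; attempt <= maxAttempts; attempt++) {
    var status = checkResource();
    if (status.changed) {
      return { attempts: attempt, changed: true };
    }
    // In a real client we would sleep for the polling interval here.
  }
  return { attempts: maxAttempts, changed: false };
}

var result = pollUntilChanged(makeResourceChecker(3), 10);
console.log(result.attempts); // 3 — three requests made, only the last was useful
```

Note how the client pays for every request, useful or not; only the final one carries meaningful information.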
Obviously, this is not an ideal approach. We're essentially wasting resources on both ends making requests to the server when the result of the request will probably not provide any meaningful information on the resource we are seeking information about. While we're still able to detect changes, this comes with several disadvantages, such as the following:
That's not to say that there is no use for polling generally, but it is by its very nature an inefficient process. If the event data is not time-sensitive, and our schedule is on the order of a day or more, then a simple automated poll to check status is not likely to pose significant risk or impact to resources. Even in those cases, however, polling offers little gain over an event-driven approach. So, how do webhooks compare to continuous polling in terms of functionality and efficiency? The following is a visualization of a webhook:
As you can see in the preceding diagram, webhooks have a simple overall structure. Rather than initiating requests from the client to the server to determine the current state of a resource, we instead subscribe to a webhook that will notify us when the event has occurred by posting data to an endpoint that we define within the service where the event occurs.
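The receiving side of this subscription can be sketched as a simple handler that the service invokes exactly once, when the event actually occurs. The payload shape below (`type` and `resourceId` fields) is a hypothetical illustration, not any particular service's schema.

```javascript
// Minimal sketch of a webhook receiver: the service POSTs event data to our
// endpoint, and we parse and react to it. No polling loop is needed.
function handleWebhookEvent(rawBody) {
  var event = JSON.parse(rawBody);

  // React only to the event types we subscribed to.
  if (event.type !== "resource.changed") {
    return { handled: false };
  }

  // Run whatever downstream logic the event should trigger.
  return { handled: true, resourceId: event.resourceId };
}

var result = handleWebhookEvent('{"type":"resource.changed","resourceId":42}');
console.log(result.handled); // true — a single request, sent only when the event fired
```

Contrast this with the polling diagram: the client makes zero requests, and the single message that does arrive is guaranteed to be meaningful.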
It's not hard to see the benefits of utilizing webhooks over continuous polling in order to retrieve data when a given event occurs. Unlike polling, we can ensure that our data is updated in real time since our application will receive data corresponding to an event as soon as that event has occurred. This ensures that we don't suffer from the stale data problem that is present with polling. In addition to this, we don't overwhelm resources, since we're not making continuous requests to return the resource states. A single request, executed by the server to your configured endpoint, is made only when a specified event has occurred. There's no need to worry about governor limits or scaling your polling procedures.
In review, the primary difference between the two is that, with an API, a user makes a request to retrieve data from an endpoint and then receives a response. Webhooks are simply HTTP messages that are sent as the result of some event on a third-party service. Webhooks and APIs are closely related, with webhooks even being referred to as reverse APIs by developers. Now that we know the nomenclature and definitions, let's dig a little deeper with a practical example for demonstration.
We now have a solid grasp on what a webhook is, how it differs from an API, and the types of scenarios where it can be a more effective solution than traditional API requests or polling. What might not be so obvious is how we can utilize them to extend our capabilities within Marketing Cloud to improve our current processes. To that end, it might be helpful for us to look at a simple webhook implementation that works with Marketing Cloud to automate an internal process common to many organizations.
For our example, let's say that we are currently using Content Builder to create and manage our email campaigns within Marketing Cloud. Furthermore, we are also utilizing content blocks within Content Builder to modularize our development processes and keep things compartmentalized and easy to maintain. Unfortunately, we've identified issues with version control when working on content as a team and want to minimize impact when multiple people are working on an individual campaign. To allow for more efficient backups, we're also utilizing GitHub to version-control our code and to keep backup repositories of our production content to prevent an untimely deletion.
This process works well, but it's still cumbersome to sync the data from our repository and Marketing Cloud and the updates are still manual and prone to user error. It sure would be great if we could just automatically add content to Content Builder whenever we've pushed new content to our repository. That's where webhooks come in!
By utilizing a webhook with GitHub, we can automatically post commit data whenever a push event is triggered for our repository. Even better, we can configure the endpoint that our webhook posts to, and this endpoint can contain logic to process the new files and create them within Marketing Cloud automatically.
First, let's outline exactly what we are trying to implement. When a file is created and pushed to our GitHub repository, we want to trigger an event in Marketing Cloud that will grab the raw data from the file and create a code snippet content block containing that data within Content Builder. While we could also selectively update or remove content based on the event in GitHub, we'll stick to just creating new content for now, for ease of demonstration.
We'll also assume that we will structure our repository such that our code snippet content blocks are nested in individual folders whose names correspond to a category ID in Content Builder. So, an example of our repository might look like this:
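For instance, using the folder-name-as-category-ID convention just described (with the same placeholder IDs that appear in the webhook payload later in this section), the repository might be laid out like this:

```
/
├── 12345/
│   ├── contentBlock1.html
│   └── contentBlock2.html
└── 67890/
    └── contentBlock3.html
```

Here, `12345` and `67890` are hypothetical Content Builder category IDs, and each `.html` file holds the markup for a single code snippet content block.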
Finally, let's also call out that our example won't be applicable for pulling larger files (>1 MB) from the GitHub REST API. Although it is possible by utilizing additional API routes, it is beyond the scope of this example. Before we can implement our new webhook functionality, we're going to need a few things so we can get started setting everything up:
That's all that we need to get started building our solution. With these items set up, let's take a look at configuring the webhook for our GitHub events.
There are certain steps required to begin the configuration of the GitHub webhook. Please follow these listed steps to begin the configuration:
In this menu, we'll define the following items:
After we've configured the primary information about our webhook, listed previously, we'll then want to configure what types of events will trigger a request. While there are many different event types that we could target, from the forking of the repository to the creation of new issues, we will select Just the push event. Since we only want to create content once some new content has been pushed to our repository, only firing the webhook for push events should be sufficient. Then, we simply set our webhook as active and save. Now our webhook is live and will automatically send data when a push event has occurred for our repository!
Our webhook has been configured and we are all set to post events to our new endpoint, but we've still got one more configuration to take care of within GitHub. While we are ready to receive event data from our webhook, we need to set up a new access token in order to retrieve the raw content data from our repository to generate the content block in Marketing Cloud.
To do this, navigate to the User Settings menu and select Developer Settings. From there, we'll select Personal Access Tokens and create a new access token. In this configuration, we'll want to provide it with a helpful name that identifies the webhook service that we have constructed. We'll also need to set an expiration date for the token or configure it to never expire. Finally, we'll grant the token the repository scope so that it can read content that has been pushed to our repository. After saving your configuration, you will be presented with your access token. This token will not be visible again, so ensure you store the token for your application before navigating away from the page.
That's all we need for our example service in order to receive events from GitHub, authenticate into our repository, and retrieve the raw content from our push event. Now, we'll head over to Marketing Cloud to configure the endpoint we specified for our webhook.
With our webhook set up on the GitHub side, which will send data to our endpoint when an event occurs, now it's time to configure our endpoint to execute our content creation logic when data has been posted to it. As stated previously, we can utilize a JSON Code Resource page to serve as our endpoint, but we could just as easily construct this to be hosted on any server and utilize an array of technologies. Since our purpose here is to demonstrate an example scenario, and given that most of you will be familiar with Code Resource, we'll select this option to process our logic. So, how do we begin? Well, first, we'll need to highlight the relevant pieces of data from the GitHub request that we can key off of to retrieve the relevant data and create our newly pushed content within Marketing Cloud:
{
    "repository": {
        "contents_url": "https://api.github.com/repos/{username}/{repositoryname}/contents/{+path}"
    },
    "commits": [
        {
            "added": [
                "12345/contentBlock1.html",
                "67890/contentBlock2.html"
            ]
        }
    ]
}
While there is much more data returned in the event request from our webhook, this is the overall structure that contains the relevant pieces that we will utilize for our example solution. First, note that there is a property called contents_url within our JSON payload. This value provides us with the base URL of our repository that can be utilized to make API calls to find files with a specific path within our repository. In addition, we have the added array under the commits property, which will house any files that have been newly added as a result of our push event within GitHub.
With our generalized payload structure in hand, let's define the individual pieces of functionality that we'll want our webhook to execute in order to create our code snippet content block within Content Builder:
First, we'll need to set up the script to retrieve the payload data and allow us to further process the JSON data being posted from the webhook. To do this, we'll utilize both the GetPostData() and ParseJSON() SSJS functions, which will retrieve the data and parse the JSON object:
<script runat="server">
    Platform.Load("core", "1.1.1");
    var postData = Platform.Request.GetPostData();
    var json = Platform.Function.ParseJSON(postData);
</script>
Now that we've pulled in the JSON data and have it ready for processing, we need to assign the relevant data points we highlighted in the payload to variables that we can utilize for further processing.
Now, we'll grab the contents_url parameter from the payload. Notice, in our example, the value in the payload is appended with the {+path} substring. We'll want to remove this portion from our variable as it's not relevant for pulling the final path to the files that we wish to retrieve. Finally, we'll also grab the added array from the commits property so that we can iterate through each added file and retrieve its contents:
var baseContentsURL = json.repository.contents_url;
baseContentsURL = baseContentsURL.slice(0, baseContentsURL.lastIndexOf('/') + 1);
var addedFilesInCommit = json.commits[0].added;
That's all we need in order to accomplish the aforementioned items, and we now have our variables assigned for the base content path URL as well as our added array. With that in hand, we need to write our function to call the GitHub REST API to return the raw contents of our newly pushed files. Let's take a look at what that script looks like and then break down its components a little further:
function getRawGithubData(assetPath, contentURL) {
    var accessToken = "YOUR GITHUB ACCESS TOKEN";
    var auth = 'token ' + accessToken;
    var url = contentURL + assetPath;

    var req = new Script.Util.HttpRequest(url);
    req.emptyContentHandling = 0;
    req.retries = 2;
    req.continueOnError = true;
    req.contentType = "application/json";
    req.setHeader("Authorization", auth);
    req.setHeader("user-agent", "marketing-cloud");
    req.setHeader("Accept", "application/vnd.github.VERSION.raw");
    req.method = "GET";

    var resp = req.send();
    var resultString = String(resp.content);
    return resultString;
}
As you can see here, we are using Script.Util in order to make a GET API request to GitHub to retrieve our file content. To make this request, we'll need our function to accept parameters for contentURL, which we assigned to a variable and formatted in the previous step, and the path of the file that we'll pull from our added array assigned previously as well. Before we can complete our API call, we'll need to further define the following items in our request:
That's all that we need to define to make our request in GitHub in order to retrieve the file contents of whatever asset path we pass into this function. We'll make our request and return the content of that request as an output of the function so that we are able to retrieve the file contents and upload the asset to Marketing Cloud. With our function set up to retrieve the contents of the files added during the commit, we'll now need to write our function that writes this content to Marketing Cloud.
While we could utilize several methods in order to create this content, such as the Content Builder REST API, for ease of use (and to save us from setting up packages and authenticating into Marketing Cloud), we'll use a platform function approach to creating this content. Before we dive in, it's important to note that the documentation outlining the possible routes and functionality within Content Builder can be found in the official documentation located here: https://developer.salesforce.com/docs/marketing/marketing-cloud/guide/content-api.html.
Let's take a look at what that function looks like before outlining what's going on:
function createAsset(assetName, assetContent, assetId, assetCategoryId) {
    var asset = Platform.Function.CreateObject("Asset");

    // Asset type (for example, 220 for a code snippet content block)
    var nameIdReference = Platform.Function.CreateObject("nameIdReference");
    Platform.Function.SetObjectProperty(nameIdReference, "Id", assetId);
    Platform.Function.SetObjectProperty(asset, "AssetType", nameIdReference);

    // Target Content Builder folder (category)
    var categoryNameIdReference = Platform.Function.CreateObject("categoryNameIdReference");
    Platform.Function.SetObjectProperty(categoryNameIdReference, "Id", assetCategoryId);
    Platform.Function.SetObjectProperty(asset, "Category", categoryNameIdReference);

    Platform.Function.SetObjectProperty(asset, "Name", assetName);
    Platform.Function.SetObjectProperty(asset, "Content", assetContent);
    Platform.Function.SetObjectProperty(asset, "ContentType", "application/json");

    var statusAndRequest = [0, 0];
    var response = Platform.Function.InvokeCreate(asset, statusAndRequest, null);
    return response;
}
Here, we are outlining a function called createAsset that will take some parameters and utilize them to actually create our code snippet content block within Marketing Cloud. Our function should accept parameters for the following properties of our Content Builder asset:
First, we'll need to define the asset type that our content belongs to. While we have written our function to make this process generic, we could also hardcode it directly if we are only utilizing this webhook to process data for a given type. Here, we'll let the function take it as a parameter and assign the type ID according to that. Next, we'll need to retrieve the categoryId parameter and define that value for the Category ID property of our asset initialization. This ID will specify exactly what folder we wish to insert this asset into. Finally, we'll grab both the asset name and content parameters and then assign them accordingly to our asset object. Then, our function will create the asset with the values defined previously and insert this content into the specified folder within Content Builder. Now, all that we need to do is iterate through the added items in the GitHub JSON payload and invoke the preceding two functions to retrieve the content and create it in Marketing Cloud:
for (var i in addedFilesInCommit) {
    var assetPath = addedFilesInCommit[i];
    var categoryId = assetPath.substring(0, assetPath.indexOf("/"));
    var contentName = assetPath.split("/").pop().replace(".html", "");
    var contentData = getRawGithubData(assetPath, baseContentsURL);
    createAsset(contentName, contentData, 220, categoryId);
}
Notice here, we are iterating through each item in the array and assigning an assetPath variable equal to the path of the file that has been pushed to our GitHub repository. Because this path contains both the name of the file and the category ID, per the naming convention we discussed at the start of this example, we parse each of those values out of the added array item within each iteration. Then, we invoke our GitHub REST API call function and assign its result to a variable that will contain the raw content of the file we've retrieved. After that, it's as simple as calling our createAsset function, noting that we pass in a value of 220 for our asset type ID, as this corresponds to code snippet content blocks within the Content Builder API asset model. For a complete list of asset type IDs, please refer to the Content Builder Asset Type documentation, located at https://developer.salesforce.com/docs/marketing/marketing-cloud/guide/base-asset-types.html.
That's it! With the preceding code saved and published to the endpoint that we defined within the GitHub webhook configuration, we are all set to start syncing our push event file data with Content Builder. Whenever we've added, committed, and pushed content to our repository, this endpoint will be invoked, our logic will be processed automatically, and the new content will be created within Marketing Cloud from the data we've pushed to the repository.
This was a somewhat simplistic example, but we hope that it helps highlight the different ways that we can utilize webhooks to create an event-driven set of functionality within Marketing Cloud that automates our processes or provides some new degree of efficiency. Utilizing the aforementioned solution, we could easily scale it to more comprehensively handle our assets or even define our own schema for mass creating any set of Marketing Cloud objects that are accessible either through platform functions or API routes defined within the documentation.
In addition to using external services in order to generate event-driven functionality, there are webhook services within the Marketing Cloud ecosystem that allow the user to subscribe to certain events and then receive automated requests posted to a defined endpoint whenever an activity has occurred. One such method is Event Notification Service, which provides a webhook functionality that allows developers to receive relevant deliverability and engagement metrics on an email or text deployment automatically. This allows us to further automate processes in order to provide an immediate response or insight following some user engagement with our content. So, say we have an order confirmation email that contains a link for support or further information. We could set up a webhook that receives a request when the link is clicked and then takes some further immediate action (such as emailing a more detailed order receipt to the user).
The core concepts for utilizing the GitHub integration and Event Notification Service remain largely the same. Though the steps to accomplish both will differ in their configuration or endpoint, the basic premise is that we utilize the following steps to create our integration:
With Event Notification Service, these steps are largely generated through API requests to defined routes within the Marketing Cloud REST API. In our GitHub example, these are done through simple User Interface (UI) configurations made within the repository settings, but the overall flow necessary for constructing these solutions is essentially the same.
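That shared flow can be sketched generically as three setup requests: register a callback endpoint, verify that we control it, and subscribe the endpoint to the events we care about. The paths and field names below are hypothetical placeholders for illustration, not actual GitHub or Event Notification Service routes.

```javascript
// Build the generic sequence of requests needed to wire up a webhook
// integration: register the callback, verify it, then subscribe to events.
function buildIntegrationRequests(callbackUrl, eventTypes) {
  return [
    // Step 1: tell the service where to send event data.
    { method: "POST", path: "/callbacks", body: { url: callbackUrl } },
    // Step 2: prove we control the endpoint (often by echoing a token).
    { method: "POST", path: "/callbacks/verify", body: { url: callbackUrl } },
    // Step 3: choose which events should fire a request to the endpoint.
    { method: "POST", path: "/subscriptions",
      body: { url: callbackUrl, events: eventTypes } }
  ];
}

var requests = buildIntegrationRequests(
  "https://example.com/hooks/email-events",
  ["EmailSent", "EmailBounced"]
);
console.log(requests.length); // 3 — register, verify, subscribe
```

Whether these steps are performed through a settings UI (as in GitHub) or through REST calls (as with Event Notification Service), the sequence itself stays the same.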
Understanding the importance of event-driven requests, and how they can be utilized both within Marketing Cloud and externally in order to generate real-time functionality, is key. Familiarity with the distinction between webhooks and APIs allows us to select the appropriate tool for a given use case and ensures that we're keeping efficiency and maintainability at the forefront of our application development. Now that we have introduced the concept of webhooks, let's move on to another concept that can aid us in building solutions that are both efficient and scalable.
It's no secret to any of you that business requirements or existing flows can change, sometimes on a daily or weekly basis. As such, development teams are compelled to adapt to changing circumstances by extending new functionality into a given service or by altering its capabilities to meet both existing and new challenges. Unfortunately, it's not always so simple to extend functionality or revise existing solutions within our applications. We may have portions of our code base that are generic but intertwined and dependent on the execution of some other component.
When starting a project, when the focus is narrow, the code base can be very manageable and somewhat self-contained since it should encapsulate all of the base functionality outlined in the discovery process. Over time, additional functionality and components are added such that the code base and build, integration, and test processes can become cumbersome to manage or decouple. With more and more code being utilized in a central location, best practices were developed for ways to modularize the functionality of the application to make the code more maintainable and generalized (that is, they can be used by other parts of your application). Unfortunately, all of these individual modules must still be compiled together into a single code base in order to deploy the application. So, regardless of how the improved modularity of the application has impacted the developers working on it, at the end of the day, it still needs to come together in a single deployment for the entire code base to go to production. Enter microservices.
Microservices are an architectural pattern that differs from the monolithic approach in both the structure of the development process and the structure of deployments. In a microservice architecture, we break down individual pieces of functionality into discrete, loosely coupled entities that have their own code base and can be deployed and managed independently of the rest of the application. So, when we have a simple update or new service addition, we can both develop and deploy the individual piece of functionality as a separate code base rather than worry about app-wide testing, deployments, or integrations. Before we clarify this topic any further, let's take a look at the monolithic approach for building applications and then compare it with a microservices architecture so that we can see the pros and cons of each:
A monolithic architecture is used for traditional server-side applications where the entire functionality or service is based on a single application. The entire functionality of the site is coded and deployed as a single entity and all dependencies are intertwined together. As you can see in Figure 12.4, a monolithic architecture comprises a UI, a server-side application (Business Logic and Data Access Layer in the preceding figure), and a database that contains relevant information that we can read and write with our application. Now that we have a base definition of what a monolithic architecture comprises, let's take a look at some advantages and disadvantages of this architecture.
This is the architecture that most developers in Marketing Cloud will be familiar with concerning application development. A single suite of tools or technologies is selected to solve a given range of use cases, and the development process will more or less flow through a common build and deployment process that will be global in its management of the code base. Not only is this process intuitive, particularly when coming from a hobbyist or more isolated developer experience, but it can also allow developers to get started on a project quickly.
First, let's look at some of its advantages:
As you can see, some of the most key advantages of utilizing this architecture are related to its simplicity to develop, test, and deploy. Most developers will be familiar with this workflow and easily understand the efficiencies that it can provide, particularly during the initial build and deployment process. That being said, this is not without a few disadvantages as well. Let's take a look at a few key costs when implementing this approach:
As you can see, there are some obvious advantages and costs associated with utilizing a monolithic architecture when building applications or services. While it may be more intuitive to use this approach, and even desired when the level of complexity in the application is known to remain small, these advantages come at the cost of maintainability and scalability, which may be substantial barriers when considering your implementation.
Let's now take a look at an alternative approach that was created to address some of these concerns:
As you can see from the preceding figure, the structure of this architecture differs substantially from a monolithic approach. Instead of the more linear flow that we outlined with a monolithic approach, here we've decoupled our services and routed them through an API gateway. This allows us to route the appropriate request to individual microservices, providing a level of service decoupling that is not possible in the other architecture. For clarity's sake, let's define what each of these pieces does at a high level:
As you can see, microservices differ from a monolithic approach in some distinct and important ways. First, this architecture decouples functionality by domain or purpose into entirely separate code bases, languages, deployments, and build processes. From an end user perspective, the functionality of a monolithic and a microservice application is essentially the same, but the method by which that functionality is built is quite different. In the monolithic approach, we take all our disparate features and services and roll them up into a single application that a user interacts with. With microservices, however, we separate our application into a set of smaller applications that interact with one another in such a way that the overall suite of services mirrors what our single application could do, but in a much more efficient and manageable structure for developers. To illustrate this difference a bit more, let's list some characteristics of a microservice that define its purpose and how it can be managed:
The key takeaway from these points is that each service is its own self-contained entity that can be managed wholly separately from the other services comprising an application. This lends itself well to extending the functionality of your application, as you can have disparate teams contributing to the same overall functionality while still retaining the overall architecture of your implementation. Whether the services use completely different technologies or languages, hosting platforms, or any other part of the development process, as long as there is a common set of instructions for accessing and processing data from each service, they can be implemented within the same architecture to drive the functionality of an application.
Now that we have examined some of the advantages and disadvantages of a monolithic architecture, let's take a look at the microservices model in order to determine the pros and cons of its implementation.
Here are the advantages:
Each implementation comes with its own set of benefits and challenges, and the type that you choose to implement will depend on a multitude of factors, such as the complexity of your application, the expected roadmap for functionality, and the scope of development resources available. Though the benefits of microservices are clear, it is often recommended that, unless you are constructing a complicated enterprise application, you begin with a monolithic approach. This lets you get an idea into production more quickly and then determine whether the future needs of your application merit a microservices architecture. Segmenting your code into more modular components, simplifying build and deployment processes, and keeping the data model more self-contained are all ways to build within a monolithic structure while keeping your application flexible enough for an eventual pivot to microservices if the move seems warranted. Finally, microservices are not necessarily better. If making a simple change to your application requires you to update and deploy 5-10 different services, then it defeats the purpose of using this architecture. It is simply a method for managing complex functionality within an application when the logic can be easily decoupled and managed by multiple teams using their preferred technologies.
Beyond understanding these two approaches to application development, there is a benefit in weighing these architectures and their costs and benefits even when constructing simple functionality within Marketing Cloud. For instance, suppose we have an automation containing a few scripts that create data extensions, queries, or journeys. We could write a single script that reads some input, perhaps from a data extension, and then uses that data to determine which function in the script to execute (create a data extension or create a query, for example), but that isn't an efficient way to implement this solution. For one, we now have multiple unrelated pieces of functionality housed within the same code base, which makes it harder to maintain and could allow a small error in one piece to derail the entire script. A more efficient solution would be to separate each script so that it handles only the domain relevant to its functionality. In this instance, we might have one script for creating data extensions, one for creating queries, and another for creating journeys.
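To illustrate the contrast, here is a hedged JavaScript sketch of both styles. In Marketing Cloud these would be SSJS Script activities calling platform functions; the row shape, action names, and return values below are purely illustrative assumptions, not actual platform APIs:

```javascript
// Monolithic style: one script dispatches on an "action" field read
// from a data extension row, so every code path lives in a single
// activity and a bug in one branch risks the whole script run.
function runMonolith(row) {
  switch (row.action) {
    case 'createDataExtension':
      return 'created data extension: ' + row.name;
    case 'createQuery':
      return 'created query: ' + row.name;
    case 'createJourney':
      return 'created journey: ' + row.name;
    default:
      throw new Error('unknown action: ' + row.action);
  }
}

// Decoupled style: each concern is its own self-contained handler
// (its own Script activity in Marketing Cloud), so a failure or a
// change in one does not touch the others.
const createDataExtension = (row) => 'created data extension: ' + row.name;
const createQuery = (row) => 'created query: ' + row.name;
const createJourney = (row) => 'created journey: ' + row.name;
```

The decoupled handlers do the same work, but each can be deployed, tested, and fixed in isolation, which is precisely the maintainability benefit described above.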
By compartmentalizing the individual pieces into their own distinct Script activities, we've created a system where an error in one script has little to no impact on the others, and we can update individual pieces of functionality selectively rather than constantly tweaking a single Script activity that manages everything. Now, you might be hard-pressed to call this implementation a true example of a microservices architecture as it is traditionally understood within web application development, but many of the same benefits can be derived from this system. Obviously, understanding these patterns in the web application space is hugely beneficial for us as Marketing Cloud developers, particularly when we are building complex applications that interact across services to automate functionality within Marketing Cloud. That being said, we hope your takeaway from this chapter has been that you can apply these general concepts, with regard to both microservices and the other topics we've discussed so far in the book, to start thinking differently about how your work in the platform itself is done. While you may not always find a quick correlation with the work you're doing on a daily basis, understanding these architecture patterns will inform how you operate within Marketing Cloud and allow you to approach problems from a more knowledgeable perspective that drives efficiency and maintainability in your solutions.
We've covered several key ideas within this chapter that we hope you found both informative and enlightening for how you consider your work as a Marketing Cloud developer. The differences between webhooks and APIs, and how we can use webhooks to create event-driven, real-time solutions, are essential to taking your integrations with Marketing Cloud to the next level. With the rise of platforms and services that implement webhooks, such as GitHub, Discord, and Slack, numerous opportunities have emerged for automation across disparate systems, enabling functionality that would otherwise be either impossible or wildly inefficient.
In addition to discussing webhooks, we also went through an example that creates content whenever a push event has occurred within GitHub. Obviously, our example was somewhat simplistic with many assumptions made for ease of demonstration, but it should provide a strong springboard for you to take this functionality to the next level. As Git has become an indispensable tool for teams across all development spaces, integrating this technology with Marketing Cloud through automated services can be a powerful multiplier that will increase the efficiency and happiness of developers or marketers working within the platform.
Finally, we reviewed what microservices are and how this architectural pattern differs from the traditional monolithic approach to application development. We highlighted some of the advantages and disadvantages of each approach and carefully considered how different factors, such as application complexity, team capabilities, or modularity, can affect our decision in regard to the optimal solution for our given use case. We also took a step back to consider how these ideas could be envisioned in the context of Marketing Cloud automation, and how automation itself can be thought of as a microservice architecture.
After reading this chapter, you should feel inspired to create event-driven functionality that can create value for your organization or developer experience. You should also be able to more clearly see how to apply the concepts in this book to the work that you do in Marketing Cloud, even outside of the context of custom application development.
In our next chapter, we're going to tackle custom Journey Builder activities, specifically an activity that can greatly expand both the utility and capabilities of Journey Builder within Salesforce Marketing Cloud.