
5. The Admin Application Problem

Paul Michaels, Derbyshire, UK

One of the biggest problems that software engineers and architects face in their day-to-day work is deciding when something is done. In the past, I’ve worked on both large and small-scale software systems that we would produce and then sell on to customers, but the questions we’d get from customers, be they internal or external, were always in the same vein: “Can you just add x?” or “Can we change y?” It is often cost-prohibitive for a software vendor to make minor changes for individual customers; imagine a situation where you could contact Microsoft and ask them to change the icon of Microsoft Word!

Microsoft and many other companies have established a system around this, which is to make the software configurable and extensible. You can, for example, create add-ins for the Office suite.

In this chapter, we’ll explore ways that we can extend a piece of software without changing the core components of that software.

Background

The software company that you are working for produces CRM (Customer Relationship Management) software. Several of your customers that use the software have requested an administration program that allows them to update their customer records. However, each of the customers needs the software to operate in a slightly different manner.

Note

Clearly, the CRM system that your company produces is in its infancy, as it is yet to produce something that maintains the customer records.

Let’s define the scope of this task.

Requirements

The MVP, or Minimum Viable Product, for this application is that the system will:
  • Read or create a JSON data file, which will define a customer, including name, address, email, and credit limit.

  • Allow the creation of new customer records.

  • Allow the user to change existing records.

This constitutes the functionality that all of the customers have requested; however, three have requested additional, specific functionality:
  • Ability to email the customer when the record has changed.

  • Ability to display an alert on the screen where the credit limit is set to more than £300.

  • Ability to list any customers over a given credit limit.

Let’s think about how we can achieve this and produce the software that best meets the needs of all the customers.

Options

In previous chapters, we’ve discussed how we might achieve our goal using a manual process; this chapter will be no different, but in this instance, the requirement is to maintain a record that is stored on a computer.

Manual Process

The reason it’s useful to consider the manual process is that it starkly reveals exactly what the requirements are. In this instance, the requirement is to maintain a file that’s stored on a server (the JSON data file).

If we imagine a manual process for this, we might think about an IT administrator, whose job is to manually alter this file. To give us a feel for the type of work involved, Listing 5-1 shows an example of the file itself.
{
  "Customers": [
    {
      "Name": "Fred Bains",
      "Address": "1 Hilltop View, Swansea",
      "Email": "[email protected]",
      "CreditLimit": 100
    },
    {
      "Name": "Wilma Green",
      "Address": "84 Elm Close, Luton",
      "Email": "[email protected]",
      "CreditLimit": 400
    }
  ]
}
Listing 5-1

Customer Data File

Each of our customers has their own data file, so there is no risk that one customer could access a different customer’s data.

Note

Unfortunately, the term “customer” is overloaded here. The one type of customer is the customer that has purchased your software, and the other type is the customer that they have for their business.

We can imagine a manual process whereby somebody may go and change this file; in fact, JSON is a very human-readable format, so it wouldn’t take a huge amount of training. For the additional requirements, we can certainly see how a user might be able to email an alert; however, producing reports based on a JSON file is a little different and may be difficult to do manually. It seems obvious to state, but given a manual process, the varying requirements become irrelevant, as each customer would simply implement their own additional requirements.

Each of these operators would be given a runbook of sorts, for example:

Amend Customer
  1. Open the JSON data file in Notepad.

  2. Find the relevant customer by searching for their name.

  3. Change the record.

  4. Email the customer using the email address on the record explaining that their record has been updated.

Now, let’s consider what this might look like for a different operator:

Amend Customer
  1. Open the JSON data file in Notepad.

  2. Find the relevant customer by searching for their name.

  3. Change the record.

  4. Check the credit limit – if it is greater than £300, then notify the person that requested the change.

As we can see, the two processes are identical except for the last step.

Essentially, what we are looking for is a method that we can encapsulate this manual change in a software system. We can imagine that one such way might be to change the source code for each different implementation; a pseudocode version of this idea might look like Listing 5-2.
UpdateRecord()
    If Customer = "Customer1" Then
        If CreditLimit > 300 Then
            SendAlert()
        End If
    Else If Customer = "Customer2" Then
        SendNotification()
    End If
Listing 5-2

Pseudocode Implementation of Differing Requirements by Customer

In fact, I’ve personally seen variations of Listing 5-2 being used to satisfy the requirements of multiple clients. Quite apart from other concerns, it becomes increasingly difficult to maintain code like this: the complexity increases almost exponentially as new customers are added (imagine if Customer 4 wanted both alerting and notifications).

Situations like this led to the Open-Closed Principle, which forms part of the SOLID design principles. It seems strange that we’ve reached Chapter 5 of a book on software architecture and have yet to speak about the SOLID design principles, so we’ll correct that now!

SOLID

SOLID is, in fact, an acronym; most of these principles can be attributed to Robert Martin, Bertrand Meyer, and Barbara Liskov; in brief, they are:
  1. Single Responsibility.

  2. Open-Closed.

  3. Liskov Substitution.

  4. Interface Segregation.

  5. Dependency Inversion.

Although we are currently interested in the Open-Closed Principle, let’s briefly describe all of these principles – while I don’t necessarily think that software that follows all of these principles becomes good software, they are doubtless good principles to be aware of.

Single Responsibility

A class should have only one reason to change.

The Single Responsibility Principle dictates that there should only be a single reason to change a piece of code. As with all these principles, the best description of what this means is to state what it is trying to guard against.

Let’s consider a class that might exist in our software: the Customer class. Now, let’s imagine that the Customer class looks like Listing 5-3.
public class Customer
{
    public string Name { get; set; }
    public string UpdateNameInDatabase(string newName)
    {
        ...
    }
    public bool SendEmailNotification(string emailBody)
    {
        ...
    }
}
Listing 5-3

A Possible Customer Class

The class in Listing 5-3 would violate the Single Responsibility Principle (SRP) because it would have more than one reason to change; that is, it does more than one thing.

Let’s imagine that we want to change our class such that the formatting of the name is always uppercase when we read it – a change to the in-memory representation of the data is one reason for the class to change. What if we updated the structure of the database, such that the customer name now has two fields? We would need to change the in-memory representation and the UpdateNameInDatabase method. Now, what if we wanted to add attachments to our email notification? Again, our class would need to change, for yet another reason.

As we can see, our class does not follow the Single Responsibility Principle very well at all!

However, the principles in the SOLID acronym are not some kind of litany that we should simply remember and follow without question; in fact, all software engineering principles should be continually questioned. What is wrong with having to change the class for multiple reasons?

Well, there are a few issues here; let’s focus on the following three: testability, code churn, and the resilience of the software in general.

Testability

Ultimately, all software is testable and tested – unless you simply write it and then instantly delete it, it will be both tested and testable. How can we ensure that our class correctly writes the customer name to the database? Well, one option is that we have faith that the functionality works, and we deploy this to our customer base. Does this mean that the software is neither tested nor testable? On the contrary: the software will be tested by our customer base, and it will be tested by using the functionality that we state is available.

I won’t say here that such a method is flawed (your customers may do so if you adopt this approach), but I will offer an alternative: we create a unit test against our method. The issue here is that our method writes to the database, and so we would need to abstract that part of the functionality; however, we can’t do that, because our class has both the business logic and the database access built in.
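To make that concrete, here is a minimal, hypothetical sketch of the kind of abstraction that would make the method testable; none of these types (ICustomerStore, FakeCustomerStore, CustomerUpdater) appear in the chapter’s code, and xUnit is assumed purely as an example test framework.

using Xunit;

// Hypothetical abstraction over the database access.
public interface ICustomerStore
{
    void UpdateName(string customerName);
}

// A fake that records what was written, so no database is needed in the test.
public class FakeCustomerStore : ICustomerStore
{
    public string LastWrittenName { get; private set; }
    public void UpdateName(string customerName) => LastWrittenName = customerName;
}

// The business logic now depends on the abstraction rather than on the database itself.
public class CustomerUpdater
{
    private readonly ICustomerStore _store;
    public CustomerUpdater(ICustomerStore store) => _store = store;

    public void UpdateName(string newName) => _store.UpdateName(newName.ToUpperInvariant());
}

public class CustomerUpdaterTests
{
    [Fact]
    public void UpdateName_WritesUppercasedNameToStore()
    {
        var store = new FakeCustomerStore();
        var updater = new CustomerUpdater(store);

        updater.UpdateName("Fred Bains");

        Assert.Equal("FRED BAINS", store.LastWrittenName);
    }
}

With the database access hidden behind an interface, the test exercises the business logic in isolation; that separation is exactly what the class in Listing 5-3 prevents.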

Code Churn

Changing code is dangerous. To clarify exactly what I mean by that statement: there is a non-zero risk that for every line of code you write or change, you will introduce an unexpected bug or change the behavior in an unexpected manner. Computer programs are extremely complex, and computer programmers are fallible, so each time code changes, there’s a chance that something will go wrong. In my time as a programmer, I have seen the most innocuous changes result in hugely significant bugs.

Obviously, changing the code of a program that is currently broken inherently carries less risk than changing the code of a program that is working well. However, if we accept that code churn (i.e., the act of changing code) is dangerous (relative to not changing the code), then anything we can do to isolate, insulate, and reduce change must, therefore, be a good thing.

If a class does more than one thing, there is more than one reason to change it, which means that it is more likely to need to change and therefore is more likely to break.

Software Resilience

Addressing this separately from the other two reasons, let’s look at the resilience of the software at rest; by that, I mean the chances that a piece of code has a bug that has yet to be detected. As we’ve already said, software is complex; bugs can be present in software for years and years before coming to light. Let’s imagine a situation where the tool that we’re using to send emails fails for some reason; well, in our example here, that failure could affect our entire class, meaning that we are unable to update or retrieve customer information, simply because we are unable to send emails.

A Better Way

As I’ve said multiple times throughout this book, there are no right or wrong decisions in software, and these rules are no exception; adhering to these rules has a cost. However, let’s imagine our class slightly differently and think about what the differences may cost.
public class Customer
{
    public string Name { get; set; }
}
public class CustomerRepository
{
    public void UpdateName(Customer customer)
    {
        ...
    }
}
public class EmailService
{
    public bool Send(string emailBody, string to)
    {
        ...
    }
}
Listing 5-4

A Better Way?

In Listing 5-4, we have separated our single class into three. We have a class that is responsible for updating the database, we have a class that is responsible for sending emails (which, incidentally, now has no dependency on the Customer at all), and the Customer class itself only represents the in-memory state of the customer.

As we’ve said, there’s no free lunch here: we’ve solved the problems described, but we’ve introduced an element of complexity. In the example that we have here, the complexity that we’ve introduced actually simplifies the implementation; however, this is not always the case. As with everything in software design, you should declare the benefits that you intend to get from a change and then weigh them against its cost. If this system had worked as Listing 5-3 for years and had never changed or had an issue, then you should consider what benefits you’ll get from changing it; if the system is constantly breaking and needing to be changed, then it may warrant the time and cost of making the change.

The next in the list is the Open-Closed Principle; since this is the particular principle that we’re interested in, we’ll delve a little deeper into this one.

Open-Closed

A module will be said to be open if it is still available for extension. For example, it should be possible to add fields to the data structures it contains, or new elements to the set of functions it performs.

A module will be said to be closed if it is available for use by other modules. This assumes that the module has been given a well-defined, stable description (the interface in the sense of information hiding).

The Open-Closed Principle is, perhaps, one of the more esoteric of the principles, but at its heart, it’s actually quite simple. The idea is that it should be possible to modify the behavior of the software without changing the code for that software.

Talking specifically about object-oriented programming (for which the principle was initially stated), we essentially have two approaches here: inheritance and polymorphism. In this section, we’ll discuss the particular meaning of the Open-Closed Principle, and then we’ll also step back and look at what that might mean in a modern software development scenario.

Note

Arguably, in certain cases, using inheritance may be considered polymorphic; however, I think the two approaches here are distinct.

Inheritance

Let’s start with inheritance and consider a class as defined in Listing 5-5.
public class EmailService
{
    public virtual bool Send(string emailBody, string to)
    {
        Console.WriteLine("Email Sent");
        return false;
    }
}
Listing 5-5

EmailService

What we’re trying to do is to add functionality to the Send method without changing the code. In fact, using inheritance makes this an easy task, as we can see in Listing 5-6.
    public class LoggerEmailService : EmailService
    {
        public override bool Send(string emailBody, string to)
        {
            var result = base.Send(emailBody, to);
            Console.WriteLine($"Logger: {emailBody}");
            return result;
        }
    }
Listing 5-6

LoggerEmailService

As we can see, the functionality of the method has been extended, and yet the original code remains intact. There are downsides to using this approach; for example, the code that calls this method may initially refer to the EmailService and would now need to refer to the new LoggerEmailService. We can simply replace any instance of EmailService with its subclass, but we are now changing code (albeit fewer lines).

We’ll come back to inheritance when we get to the Liskov Substitution Principle, as using inheritance can lead to some potential issues.

Perhaps, then, a better way is polymorphism.

Polymorphism

Typically, the polymorphic approach makes use of an interface. Our class from Listing 5-5 may be changed to implement an interface, as shown in Listing 5-7.
    public class EmailService : IEmailService
    {
        public virtual bool Send(string emailBody, string to)
        {
            Console.WriteLine("Email Sent");
            return false;
        }
    }
Listing 5-7

EmailService with Interface

The idea behind the approach taken in Listing 5-7 is that any code that references EmailService would actually reference IEmailService as an abstraction. This means that we can replace the instance of IEmailService with any implementation that we choose to use at a given time.

The interface is a contract: it provides no functionality of its own, but anything that implements an interface commits to providing functionality for each of its methods, or at least commits to implementing each of them; you are free to implement an interface method that does nothing. We’ll come back to this shortly.
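The interface itself isn’t shown in Listing 5-7; a minimal version, matching the Send signature used throughout this chapter, might look like this (a sketch rather than the repository’s exact code):

public interface IEmailService
{
    // The contract: any implementer must provide a Send method with this signature.
    bool Send(string emailBody, string to);
}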

Note

In C#, since version 8, there has been a concept of Default Implementations; this allows the provision of some functionality within an interface. This also blurs the line between the concepts of inheritance and polymorphism further.
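As a brief illustration of that feature (this is not code from the chapter’s repository, and it requires a runtime that supports C# 8 default interface members), the same interface could carry a default member that implementers inherit unless they override it:

using System.Collections.Generic;

public interface IEmailService
{
    bool Send(string emailBody, string to);

    // Default implementation (C# 8+): classes implementing IEmailService
    // get this member without having to write it themselves.
    bool SendToAll(string emailBody, IEnumerable<string> recipients)
    {
        foreach (var recipient in recipients)
        {
            if (!Send(emailBody, recipient)) return false;
        }
        return true;
    }
}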

Given that we’ve already established that we can replace a class with its subclass, what advantages does the use of an interface abstraction give us?

Well, the advantage to this approach is that the interface can be replaced with any functionality you choose; for example, and very commonly, you may wish to swap out the interface for an implementation that does nothing at all – for the purpose of testing. Many languages and frameworks (C# and .Net included) have a concept of mocking libraries; these do just that, and provide you with an implementation of an interface that does nothing.
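A hand-rolled equivalent of what such a library generates might look like the following null implementation (again, a sketch rather than the chapter’s code); it satisfies the IEmailService contract but does nothing, so it can be passed anywhere an IEmailService is expected during a test.

// A "null object" implementation: fulfills the contract, sends nothing.
public class NullEmailService : IEmailService
{
    public bool Send(string emailBody, string to) => true; // pretend success
}

Libraries such as Moq simply generate this kind of stand-in for you at test time, often with the ability to record and verify the calls that were made.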

We’ve mentioned the Liskov Substitution Principle already in this section; let’s now explore exactly what that is.

Liskov Substitution

Let Φ(x) be a property provable about objects x of type T. Then Φ(y) should be true for objects y of type S where S is a subtype of T.

Despite this mouthful, this is a straightforward idea. The Liskov Substitution Principle (LSP) essentially says that for any class that uses inheritance, the parent class must be replaceable by the subclass. For some reason that I can’t fathom, every single example ever used for this is with squares and rectangles.

For our example, let’s return to Listing 5-5. In this, we have an EmailService class; we don’t really have an implementation, but we can imagine that it sends an email. However, what if we wanted to send an SMS? We could do something like Listing 5-8.
    public class SMSService : EmailService
    {
        public override bool Send(string emailBody, string to)
        {
            //return base.Send(emailBody, to);
            Console.WriteLine("Sent SMS");
            return false;
        }
    }
Listing 5-8

SMSService

The example given in Listing 5-8 is extreme; you probably don’t need to understand the LSP to see that this is a poor design (and a little pointless), but what about something slightly more nuanced? Let’s consider Listing 5-9.
    public class EmailServiceExtended : EmailService
    {
        public override bool Send(string emailBody, string to)
        {
            //return base.Send(emailBody, to);
            throw new Exception("This method is deprecated");
        }
        public bool Send(string emailBody, string[] to)
        {
            foreach (var destination in to)
            {
                if (!base.Send(emailBody, destination)) return false;
            }
            return true;
        }
    }
Listing 5-9

EmailServiceExtended

In Listing 5-9, we are extending the functionality of the class. It’s easy to imagine a situation where you only want to replace this one method in a huge class with hundreds of lines of code; however, this clearly violates the LSP. It does so because we cannot exchange the base class for the subclass – the code would start to crash.
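To see the violation from the caller’s side, consider a hedged sketch of some consuming code (not from the repository): it runs happily against the base class, but substituting the subclass makes the very same code throw.

public static class NotificationSender
{
    public static void NotifyAll(EmailService emailService, string[] recipients)
    {
        foreach (var recipient in recipients)
        {
            // Fine with EmailService; throws if an EmailServiceExtended is passed in,
            // because its override of Send now raises an exception.
            emailService.Send("Your record has been updated", recipient);
        }
    }
}

// NotificationSender.NotifyAll(new EmailService(), recipients);         // works
// NotificationSender.NotifyAll(new EmailServiceExtended(), recipients); // crashes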

From this, we can see that there are situations, then, where an interface may make more sense. The next principle, Interface Segregation Principle, deals with how interfaces should be partitioned.

Interface Segregation Principle

No client should be forced to depend on methods it does not use.

The idea behind the Interface Segregation Principle is a very simple one: don’t have a client implement an interface with methods that it doesn’t use. For an example, let’s look back at Listing 5-7; here, we implement an interface with a single method, which we are using. We could, at this point, imagine an interface that included both email and SMS methods, with each implementing class having to stub out or throw exceptions for the methods it doesn’t support; however, we won’t, because such an interface is very obviously too broad (and would breach other principles that we’ve mentioned).

Let’s instead imagine a very slightly larger interface, such as that in Listing 5-10.
    public interface IEmailService
    {
        bool Send(string emailBody, string to);
        string Receive();
    }
Listing 5-10

IEmailService with Receive

Listing 5-10 looks innocuous; after all, you might imagine that in addition to sending an email, we would want to receive one too. That may, indeed, be the case and, if you can guarantee that it is always the case, you will not fall foul of the ISP; however, Listing 5-11 shows how this kind of interface can quickly lead to problems.
    public class EmailSendService : IEmailService
    {
        public string Receive()
        {
            throw new NotImplementedException();
        }
        public virtual bool Send(string emailBody, string to)
        {
            Console.WriteLine("Email Sent");
            return false;
        }
    }
Listing 5-11

EmailSendService with Receive

Here, we have tried to create a granular class (i.e., to adhere to the SRP), and yet we have fallen foul of the ISP because our interface is too broad. Most languages, including C#, allow the implementation of multiple interfaces, so keeping interfaces small makes more sense.
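One way to repair Listing 5-11, sketched here with illustrative names that are not from the repository, is to split the interface so that each client depends only on what it uses; the send-only class then implements only IEmailSender.

using System;

public interface IEmailSender
{
    bool Send(string emailBody, string to);
}

public interface IEmailReceiver
{
    string Receive();
}

// A send-only service no longer has to stub out Receive.
public class EmailSendService : IEmailSender
{
    public bool Send(string emailBody, string to)
    {
        Console.WriteLine("Email Sent");
        return true;
    }
}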

Finally, let’s look at the Dependency Inversion Principle.

Dependency Inversion Principle

High-level modules should not depend on low-level modules. Both should depend on abstractions (e.g., interfaces).

Abstractions should not depend on details. Details (concrete implementations) should depend on abstractions.

The Dependency Inversion Principle (DIP) rounds off the list – the wording of all of these principles is, sadly, overly complex for what they are trying to convey. In fact, this principle is why IoC containers have become so popular, certainly in the .Net community and in the wider software development community.

In order to understand this principle, let’s take Listing 5-12, which shows a piece of code that uses our EmailService from Listing 5-7.
    class Program
    {
        static void Test()
        {
            var emailService = new EmailService();
            emailService.Send("This is a test email", "[email protected]");
        }
    }
Listing 5-12

Program Using EmailService

The DIP states that high-level modules (i.e., our program) should not depend on low-level modules (i.e., our service), but that both should depend on abstractions. The code in Listing 5-12 breaks this rule because we have a direct dependency. If you were to delete the EmailService, the program would not only stop running, it wouldn’t even compile.

Surely, though, we need to use the EmailService within our program; if we can’t use it, then we might as well never create it. The solution is to be found in the second part of the statement: Both should depend on abstractions. Listing 5-13 offers a slightly different version of the code in Listing 5-12 but replaces the dependency with an abstraction.
    class Program
    {
        static void Test(IEmailService emailService)
        {
            emailService.Send("This is a test email", "[email protected]");
        }
    }
Listing 5-13

Program Using IEmailService

The idea posited in Listing 5-13 is called Dependency Injection, and it comes directly from this principle: we are now passing in an interface rather than the class itself, which means that we can delete EmailService and the code will still compile. Of course, we are required to provide something that fulfills the contract of IEmailService, but we no longer depend directly on that particular service. This doesn’t completely invert the dependency; rather than the program being dependent on the service, we are passing the dependency in.

Inversion of Control

One technique that has become very popular in recent years for enforcing the DIP is the use of an Inversion of Control (IoC) container. The idea here is that you register your dependencies with the container, which is then responsible for the lifetime and resolution of those dependencies. Several frameworks, including ASP.NET Core and ASP.NET 5+, have this feature built in. In this scenario, you delegate the dependency injection that we just saw to a separate component; this solves a number of problems, although you still cannot get away from the fact that you must provide an implementation for the abstraction.
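As a brief sketch of what that registration might look like with the container built into modern .NET (Microsoft.Extensions.DependencyInjection; the code below is illustrative and not part of the chapter’s repository):

using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();

// Register the abstraction against a concrete implementation; swapping the
// implementation later means changing only this registration.
services.AddTransient<IEmailService, EmailService>();

using var provider = services.BuildServiceProvider();

// The container resolves the dependency (and manages its lifetime).
var emailService = provider.GetRequiredService<IEmailService>();
emailService.Send("This is a test email", "recipient@example.com"); // placeholder address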

Methods of Extending Software

In our manual process, we had a core set of functionality (a set of instructions that everyone followed), but then we had an extension point – that is, parts where the functionality would diverge dependent on the user or situation. This is exactly what we can do with our architecture; essentially, the main ways that we can achieve this are through hooks, messages, and injection.

Hooks

A hook is a point in the program where you call a function or method, but that function or method is replaceable. Let’s try to envision this in a tangible, real-world, example.

You have a friend that you send a mail to every year and ask which present their mother would like for Christmas; your friend receives the mail and then replies with their choice, presumably after undertaking some form of research or procedure to discover the answer.

Figure 5-1 illustrates the process described. The hook is a hook because your friend is able to do absolutely anything they wish at this point: They may ask their mother, they may conduct a machine learning experiment, they may place an advert in the local paper asking what the reader’s opinions are, or they may do nothing and simply reply with “A box of chocolates.”
Figure 5-1

Illustration of sending the message

In practical terms, a hook can be an event, allowing the user to hook into the code flow.
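As a minimal, illustrative sketch (these types are not in the chapter’s repository), an event-based hook in C# looks like this: the core code raises the event, and whatever the consumer has attached runs at that point, without the core code changing.

using System;

public class CustomerUpdatedEventArgs : EventArgs
{
    public string CustomerName { get; set; }
}

public class CustomerWriter
{
    // The hook: consumers attach handlers; the core code neither knows
    // nor cares what those handlers do.
    public event EventHandler<CustomerUpdatedEventArgs> CustomerUpdated;

    public void Update(string customerName)
    {
        // ... core update logic would go here ...
        CustomerUpdated?.Invoke(this, new CustomerUpdatedEventArgs { CustomerName = customerName });
    }
}

// Consumer side - extend the behavior without touching CustomerWriter:
// var writer = new CustomerWriter();
// writer.CustomerUpdated += (_, e) => Console.WriteLine($"Email sent to {e.CustomerName}");
// writer.Update("Fred Bains");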

Messages

We have visited this idea in previous chapters. As we saw, we can use a message broker to call out and accept information back into the system. Further, we can achieve a similar effect inside the process by the use of the Mediator Pattern.

Mediator

The Mediator Pattern was featured in the famous Gang of Four book Design Patterns: Elements of Reusable Object-Oriented Software and works very well to provide a type of internal message bus.

At its core, the Mediator Pattern can be thought of in the following manner: imagine that we have an EmailService class and an SMSService class, and then imagine that we wish to communicate between the two (as per Listing 5-14) – we would need to pick a direction of communication (i.e., one of these must be the instigator of the other, or else we would need to pass a reference of each into the other).
    class EmailService
    {
        public bool Send(string emailBody, string to)
        {
            Console.WriteLine("Email Sent");
            // Notify SMS
            return true;
        }
    }
    class SMSService
    {
        public bool Send(string msgBody, string to)
        {
            Console.WriteLine("SMS Sent");
            // Notify Email
            return true;
        }
    }
Listing 5-14

EmailService and SMSService

At its very simplest, the Mediator Pattern allows an abstraction over the communication – essentially a linking class – as shown in Listing 5-15. In principle, all we’re actually doing here is passing a single instance of a class into both of the instantiated classes (EmailService and SMSService) and then allowing that class to reference and communicate back with them.
    interface IMessageReceiver
    {
        void ReceiveMessage(string message);
    }
    class CommsMediator
    {
        public List<IMessageReceiver> MessageReceivers = new List<IMessageReceiver>();
        public void SendMessage(string message)
        {
            foreach (var receiver in MessageReceivers)
            {
                receiver.ReceiveMessage(message);
            }
        }
    }
    class EmailService : IMessageReceiver
    {
        private readonly CommsMediator _commsMediator;
        public EmailService(CommsMediator commsMediator)
        {
            _commsMediator = commsMediator;
        }
        public void ReceiveMessage(string message)
        {
            Console.WriteLine(message);
        }
        public bool Send(string emailBody, string to)
        {
            Console.WriteLine("Email Sent");
            // Notify SMS
            _commsMediator.SendMessage("email sent");
            return true;
        }
    }
    class SMSService : IMessageReceiver
    {
        private readonly CommsMediator _commsMediator;
        public SMSService(CommsMediator commsMediator)
        {
            _commsMediator = commsMediator;
        }
        public void ReceiveMessage(string message)
        {
            Console.WriteLine(message);
        }
        public bool Send(string msgBody, string to)
        {
            Console.WriteLine("SMS Sent");
            // Notify Email
            _commsMediator.SendMessage("sms sent");
            return true;
        }
    }
Listing 5-15

Using CommsMediator

Listing 5-15 is far from a comprehensive implementation of a Mediator Pattern; however, we’re simply illustrating how this may be used to fulfil the OCP within an application.
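For completeness, a hedged sketch of how the pieces in Listing 5-15 might be wired together (this composition code is not part of the listing): the services are registered with the mediator, and each then sees the messages that the other sends.

var mediator = new CommsMediator();

var emailService = new EmailService(mediator);
var smsService = new SMSService(mediator);

// Registration is left to the caller in Listing 5-15, so it happens here;
// note that, as written, a sender will also receive its own message.
mediator.MessageReceivers.Add(emailService);
mediator.MessageReceivers.Add(smsService);

// Sending an email now notifies every registered receiver via the mediator.
emailService.Send("This is a test email", "recipient@example.com"); // placeholder address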

Note

As with many of these techniques, what we’re actually doing is reducing, rather than eliminating completely, the amount of code that would need to be changed in the original class. Ultimately, you will need information or events from the class, and something may need to change – the target is to make that change as localized and trivial as possible.

The final technique that we’ll look at is injection.

Injection

In fact, injection, while being a way to extend functionality, is often seen as not only something to avoid but something to actively prevent. The principle here is that you ask your code to execute something and then pass that thing in, in the form of code.

Security

Unless you’ve been living under an IT rock for the last 25 years, you will likely have come across the concept of SQL injection, joined now in the OWASP top 10 by its close cousin JavaScript injection. Let’s quickly review what these are, and then we can talk about how we might leverage the concept of injection without exposing ourselves to risk.

We’ll start with SQL Injection, as it’s perhaps the best-known attack vector, and this is simply because it’s so easy to exploit. Let’s take the code in Listing 5-16 as an example.
void RunQuery(string value)
{
    string sql = "SELECT * FROM MY_TABLE WHERE SOME_VALUE = '" + value + "'";
}
RunQuery("test");
Listing 5-16

SQL Injection Vulnerability

As you can see from Listing 5-16, we’re executing a fairly innocuous SQL query, and to make our function reusable, we’re allowing the parameter to be passed in. The call to RunQuery is calling the query with a single parameter. However, let’s imagine that RunQuery was being called with a value that came from outside the running program. SQL, like JavaScript, is dynamic, and so it will run anything you ask it. Imagine that the following string was passed in instead:
  • RunQuery("'; SELECT * FROM USERS; --");

This would form a perfectly valid SQL statement, and the database engine would simply execute the first statement, followed by the second. The same is true of JavaScript injection; if an attacker can find a point in your website where you execute JavaScript (even unintentionally), then they can force the site to perform in a way that you hadn’t anticipated.

Note

One other factor about the code in Listing 5-16 that’s worth noting as a reason to avoid this type of syntax is that most RDBMSs will try to cache your queries for performance. If the code in Listing 5-16 gets called with three separate values, it will be interpreted as three separate queries and cached three times. This is known as flooding the cache – as these three queries are cached, the query cache fills up with slightly different versions of the same query and eventually becomes useless.
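The usual remedy for both problems, the injection risk and the cache flooding, is to parameterize the query so that the value never becomes part of the SQL text. The following is a hedged sketch using ADO.NET (Microsoft.Data.SqlClient, with a placeholder connection string), rather than the chapter’s own code.

using Microsoft.Data.SqlClient;

void RunQuery(string value)
{
    // Placeholder connection string for illustration only.
    using var connection = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true");
    using var command = new SqlCommand(
        "SELECT * FROM MY_TABLE WHERE SOME_VALUE = @someValue", connection);

    // The value travels as a parameter, never as SQL text, so it cannot change
    // the shape of the statement; the query plan is also cached just once.
    command.Parameters.AddWithValue("@someValue", value);

    connection.Open();
    using var reader = command.ExecuteReader();
    // ... read results ...
}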

Now we’ve seen that injection can be bad, but as with all things, it’s not entirely bad. The key point about the attack vectors that we have seen here is that they allow external input and so are vulnerable to an injection attack. However, if we structure our software in such a way that the code being executed is known and trusted code, we can pass code around, which gives us a lot of power (after all, any dynamic language is, essentially, executing injected code). Listing 5-17 shows an example of that being possible in C#.
        static void Main(string[] args)
        {
            DoTheThing(() => Console.WriteLine("Test"));
        }
        public static void DoTheThing(Action theThing)
        {
            theThing.Invoke();
        }
Listing 5-17

Injecting a Method

In fact, Listing 5-17 is not the only way to inject code: it is possible to simply pass in a string, compile it using the Roslyn compiler, and execute that code, or to switch to dynamic mode; however, this structure, allowing you to simply pass a method into the code, strikes a halfway house between flexibility and security. You could also inject a class that adheres to a given interface.

Now that we’ve established what the possibilities are, let’s have a look at our target architecture.

Target Architecture

Figure 5-2 illustrates the target architecture for the system. As you can see, each section of functionality is broken into modules, and then each raises an event based on what has happened within that module.
Figure 5-2

Target architecture

The principle here is that we will allow the events to be handled externally.

Before we delve into the method that we’ve chosen for extensibility, let’s imagine what may be possible.

One way that we could achieve this is to allow the user to store, in our system, the code they’d like to handle the event – this would constitute the code injection that we mentioned earlier.

Another possibility would be to raise a message on a message bus. This doesn’t really fit in this situation, as reacting to messages needs to be quite a rigid affair. We could, however, use the Mediator Pattern that we’ve discussed; this would work, although it may add an amount of complexity, in the form of the mediator itself, to our software.

The option that is left then is to provide a hook. Let’s explore a little what that will look like; remember, our target is that changes need not result in changes to our software; that means (in an ideal world) that we shouldn’t even have to change our software to load the extended functionality.

Note

Remember that, as with everything we’ve discussed in this book, this is a trade-off again. Allowing users of the software to change the functionality without, even slightly, changing the base software provides a lot of flexibility and resilience; but you’re adding complexity, and bugs in the external software may be difficult to find.

For our purposes, we’ll provide a directory where the users can place libraries that contain the extended functionality.

Examples

The basis of this application, as with all the others, is going to be a .Net Console Application. However, the technique works equally well for desktop (e.g., WinForms, WPF, or MAUI) or web applications. Depending on the type of application that you’re dealing with, the specific approach may be more, or less, applicable; and you should always consider the comments that we’ve made around security.

As with other chapters, all the code can be found here:

https://github.com/Apress/software-architecture-by-example

Basic Functionality

The basic functionality of this application is a simple CRUD function (or at least the create and update part of that). As you can see from the architecture, the project is partitioned into the functional areas, as shown in Figure 5-3.
Figure 5-3

Project structure

For the basic functionality, we’ll concentrate on the App, Common, Update, and Read modules. Let’s start with the App. Listing 5-18 shows a simple menu system in the Main method.
        static List<CustomerModel> _customers = new List<CustomerModel>();
        static Random _rnd = new Random();
        static Hook _hook = new Hook();
        static void Main(string[] args)
        {
            while (true)
            {
                Console.WriteLine("1 - Read Customer Data");
                Console.WriteLine("2 - Write Customer Data");
                Console.WriteLine("3 - Add Customer");
                Console.WriteLine("0 - Exit");
                var choice = Console.ReadKey();
                switch (choice.Key)
                {
                    case ConsoleKey.D0:
                        return;
                    case ConsoleKey.D1:
                        ReadCustomerData();
                        foreach (var customer in _customers)
                        {
                            Console.WriteLine($"Customer: {customer.Name}");
                        }
                        break;
                    case ConsoleKey.D2:
                        CommitCustomerData();
                        break;
                    case ConsoleKey.D3:
                        ReadCustomerData();
                        _customers.Add(new CustomerModel()
                        {
                            Name = $"Customer {Guid.NewGuid()}",
                            Address = "Customer Address",
                            CreditLimit = _rnd.Next(1000),
                            Email = $"customer{_rnd.Next(10000)}@domain.com"
                        });
                        CommitCustomerData();
                        break;
                }
            }
        }
Listing 5-18

Menu

We won’t dwell too deeply on the code in Listing 5-18, as it’s relatively straightforward; the key things to note are that we have a read, write, and add function.

Let’s now look at Listing 5-19, which shows the functionality behind the Read and Write/Commit methods.
        private static void ReadCustomerData()
        {
            var read = new ReadService();
            _customers = read.ReadAll(@"c:\tmp\test.txt").ToList();
        }
        private static void CommitCustomerData()
        {
            var write = new WriteService();
            write.Write(_customers, @"c:\tmp\test.txt");
            // Provide hook
            string jsonParams = JsonSerializer.Serialize(_customers);
            _hook.CreateHook(
                methodName: "After",
                className: "CommitCustomerData",
                parameters: new[] { jsonParams });
        }
Listing 5-19

Read and Commit

The main thing to note about Listing 5-19 is the call to _hook.CreateHook. We’ll come back to this in the next section, but for now, we just need to make a mental note that this is the extensibility hook.

We won’t look at Admin.Common, as it simply contains a shared model.

Note

In fact, there is a compelling argument against this method of passing data between modules; it tends to work well in a system such as this; however, as you’ll see in the extended module, there are potentially preferable alternatives when dealing with external or distributed systems.

In Listing 5-19, we could see that we were making use of a ReadService and a WriteService. These are in the CustomerRead and CustomerUpdate modules, respectively.

Listing 5-20 shows the CustomerRead module.
    public class ReadService : IReadService
    {
        public CustomerModel Read(string dataFile, string customerName)
        {
            var customers = ReadAllRecords(dataFile);
            return customers.FirstOrDefault(a => a.Name == customerName);
        }
        public IEnumerable<CustomerModel> ReadAll(string dataFile) =>
            ReadAllRecords(dataFile);
        private IEnumerable<CustomerModel> ReadAllRecords(string dataFile)
        {
            string customerData = File.ReadAllText(dataFile);
            var customers = JsonSerializer.Deserialize<IEnumerable<CustomerModel>>(customerData);
            return customers;
        }
    }
Listing 5-20

Admin.CustomerRead.ReadService

In Listing 5-20, we see a very basic service that reads from a serialized data stream. Listing 5-21 shows the WriteService: essentially the counter function to the read; again, it’s a very simple implementation.
    public class WriteService : IWriteService
    {
        public void Write(IEnumerable<CustomerModel> customers, string file)
        {
            string serialisedCustomers = JsonSerializer.Serialize(customers);
            File.WriteAllText(file, serialisedCustomers);
        }
    }
Listing 5-21

Admin.CustomerUpdate.WriteService

We’ve now seen the basic functionality in these three modules. In Listing 5-19, we placed the hook into the code, so all we need to do now is to attach to that hook.

Extensibility

In Admin.Extensibility, we have what is essentially a helper method. Listing 5-19 shows this being called where data is committed, and Listing 5-22 shows the CreateHook method that we used.
        public void CreateHook([CallerMemberName]string methodName = null, string className = null, object[] parameters = null)
        {
            // If className is not supplied then attempt to infer it
            if (string.IsNullOrWhiteSpace(className))
            {
                var stackTrace = new StackTrace();
                className = stackTrace.GetFrame(1).GetMethod().DeclaringType?.Name;
            }
            // Check that we have the basic arguments
            if (string.IsNullOrWhiteSpace(methodName) || string.IsNullOrWhiteSpace(className))
            {
                throw new ArgumentException("className and methodName cannot be null or empty");
            }
            string executingPath = Assembly.GetExecutingAssembly().Location;
            string libraryFullPath = Path.Combine(Path.GetDirectoryName(executingPath), $"{className}Extended.dll");
            var library = Assembly.LoadFile(libraryFullPath);
            foreach (Type type in library.GetExportedTypes())
            {
                var c = Activator.CreateInstance(type);
                type.InvokeMember(methodName, BindingFlags.InvokeMethod, null, c, parameters);
            }
        }
Listing 5-22

CreateHook

Since this is the crux of the functionality, let’s delve into this a little. The method itself accepts three parameters: the method name that should be called (this defaults to the method name that made the call), the class name (which forms part of the expected assembly name), and any parameters that need to be passed. Should we call the method as follows:
  • CreateHook("Method1", "Class1", null)

then the method would attempt to find an assembly in the current executing path, called Class1Extended.dll.

If the assembly was found, it would then attempt to execute a method called Method1 on each class in that assembly; we’ve set the parameters to null, and so no parameters would be passed.

Note

Clearly, there may be issues with executing the method on every class. If this did prove to be an issue, it could be dealt with by a convention-based approach; perhaps you would only execute the method on the class whose name matches className, or something similar.

In our case, we pass the method name as “After”, and the class name is set to “CommitCustomerData”; we also pass a serialized version of the data store through.

Note

Whether or not you manage such extensions yourself, you should probably treat the hook as an external system: don’t trust anything that it returns (i.e., encode) and don’t send it more information than is absolutely necessary. In this example, we are passing much more information than is necessary for the purpose of illustration and convenience.

There are many different ways to approach this problem; .Net allows several approaches, and depending on how strictly you wish to adhere to the OCP, you may use differing approaches.
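For instance, a slightly stricter alternative to the convention in Listing 5-22 (sketched here with illustrative names, not the repository’s code) is to define an extension interface in a small contracts assembly and only invoke types that implement it:

using System;
using System.IO;
using System.Reflection;

// Lives in a contracts assembly referenced by both the host and the extensions.
public interface ICommitExtension
{
    void After(string serializedCustomers);
}

public static class ExtensionLoader
{
    public static void RunCommitExtensions(string serializedCustomers)
    {
        string directory = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location);

        foreach (string dll in Directory.GetFiles(directory, "*Extended.dll"))
        {
            var assembly = Assembly.LoadFile(dll);

            // Only types that opt in via the interface are instantiated and called.
            foreach (Type type in assembly.GetExportedTypes())
            {
                if (typeof(ICommitExtension).IsAssignableFrom(type) && !type.IsAbstract)
                {
                    var extension = (ICommitExtension)Activator.CreateInstance(type);
                    extension.After(serializedCustomers);
                }
            }
        }
    }
}

The trade-off is that every extension now needs a reference to the contracts assembly, which is precisely the dependency that the convention-based approach in Listing 5-22 avoids.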

Now that we’ve seen the basic functionality, along with the code for the hook, let’s look into the extended functions.

Custom Functionality

Inside the main code directory of the repo (assuming that you’ve pulled it down) are three directories; src and test are the usual standard ones, but there is an additional one called extended. This contains a completely separate .Net solution, as shown in Figure 5-4.
Figure 5-4

Directory structure

The code inside this solution is shown in Listing 5-23.
    public class CommitCustomerData
    {
        public void After(string parameter)
        {
            Console.WriteLine(parameter);
            using var doc = JsonDocument.Parse(parameter);
            var element = doc.RootElement;
            foreach (var eachElement in element.EnumerateArray())
            {
                string name = eachElement.GetProperty("Name").GetString();
                decimal creditLimit = eachElement.GetProperty("CreditLimit").GetDecimal();
                if (creditLimit > 300)
                {
                    Console.WriteLine($"{name} has a credit limit in excess of £300!");
                }
            }
        }
    }
Listing 5-23

CommitCustomerData Extension Class

Again, let’s break this down. The first thing to note is that we are not trying to deserialize the data into a shared model. We can’t have any dependency on the core code base whatsoever, and so we cannot share the CustomerModel class or rely on a strongly typed contract; as a result, we just treat the payload as a string and parse it manually. As stated earlier, this can be considered a better practice, depending on your use case.

Once we’ve parsed the data, we simply iterate and display a warning where the credit limit is greater than a given amount. This is compiled to a .Net Assembly and then simply copied into the output directory of the main project. We can then change this functionality without ever touching the core code base.

Summary

We have now finally looked at the SOLID principles, specifically focusing on the Open-Closed Principle. I like to think that once you’ve spent time thinking about such principles, you find that your code simply looks better and becomes more extensible. Much like the practice of test-driven development, after some time, even if you stop writing the tests first, the code is still written as though you were.

Writing extensible software is something that every software engineer is expected to do; however, what this means can vary hugely. The example that we’ve given in this chapter, of allowing the user of the software to extend it, is an extreme case; however, the principles that we’ve discussed can work across the extensible spectrum.

We should also consider that making something extensible often opens up a security vulnerability. We can often mitigate such vulnerabilities, but we need to realize that they are there and have a feel for the potential damage they can do to our system.
