Chapter 9. Systemic Promises

A system is commonly understood to mean a number of parts that collectively perform a function by working together. Agents within systems have intentional behaviour, so they can make promises collectively, by cooperation, or individually. Systems can be built from humans, animals, machinery, businesses, or public sector organizations.

What Is a System?

A system is an emergent phenomenon. The virtual or systemic properties promised by a system come about from the collaboration of actual promises made by its internal agencies. No individual component within the system typically promises the qualities that a system embodies.

For example, the firefighting emergency service promises to cover half a city, respond to several simultaneous emergency calls, and put out fires within a maximum amount of time. There is no single component that leads to this promise being kept.1

Systems can promise things that individuals can’t. If a military officer pointed to a soldier and shouted, “You, soldier, surround the enemy!” few individual soldiers would be able to respond. However, if the soldier was a superagent, composed of many collaborating parts (like a swarm of bees), it would be a different story. Systems may thus be strongly or weakly coupled collections of agents. A simple aggregation of residents in a building is not really a system, but, if all agents promise to work together, they could turn cohabitation into cooperation, and make new irreducible promises such as acting as a company.

The Myth of System and User

Systems are thought to be mighty, and users lowly (see Figure 9-1). More often than not we build the myth of system versus user, as a kind of David and Goliath story, because there is a boundary of control, or a region we call the system, in which we have a high degree of confidence in promises. Beyond this limit exist the users, like the savages beyond the city walls. Because we are not so close to them, they are treated differently. We are not sure about what promises they keep.

Figure 9-1. The separation of user and system is not inscribed in any Kafkaesque constitution.

The benefits of a system usually come about from the interaction between system and user, so why do we persist in this separation? One reason is that we deliver services by proxy: we spawn off a surrogate of the system, in the form of some kind of product, that is received by the user. At this point, the relationship between user and system becomes somewhat remote from the organization that built it. The surrogate does not typically learn or adapt as an active part of a cooperative relationship (though modern systems often allow upgrades, which are based on the learning of the parent system). Service industries have an easier time with this issue because they do not spawn off a surrogate, but rather stay in direct touch with their users.

The challenge in designing a system is to include users as a part of the system, and treat the promises that involve them just like any others within the machinery. The user is part of the machine.

The agency that is the final consumer is often not what we call the user at all. It is the agent that receives the outcome of what user plus system promise together. For example, a system might be a recording studio; the user, a band; the outcome, a musical recording; and the final recipient, the listener.

Systemic Promises

Let’s think about what kinds of promises we expect from systems. At a low level, we might expect:

  • Service levels and times (dynamics)

  • Configurations or compositional chemistry (statics)

Two kinds of promises that belong to the system as a whole are:

  • Engineered collaborative properties (aiming for direct causation)

  • Emergent collaborative properties (emerging by indirect causation)

Some systems include users or clients, for whom desired outcomes are intended. We usually can’t draw a clear boundary around a service part and a client/user part of a system; after all, they are connected by promises. Thus, the user must be viewed as a key part of the system, both through collaboration and outcome.

Properties that cannot easily be attributed to a specific individual include the following:

Continuity

The observed constancy of promise-keeping, so that any agent using a promise would assess it to be kept at any time.

Stability

The property that any small perturbations to the system (from dependencies or usage) will not cause its promises to break down catastrophically.

Resilience (opposite of fragility)

Like stability, the property that usage will not significantly affect the promises made by an agent.

Redundancy

The duplication of resources in such a way that there are no common dependencies between duplicates (i.e., so that the failure of one does not imply the failure of another).

Learning (sometimes called anti-fragility)

The property of promising to alter any promise (in detail or in number) based on information observed about an agent’s environment.

Adaptability

The property of being reusable in different scenarios.

Plasticity

The property of being able to change in response to outside pressure without breaking the system’s promises.

Elasticity

The ability to change in response to outside pressure and then return to the original condition without breaking the system’s promises.

Scalability

The property that the outcomes promised to any agent do not depend on the total number of agents in the system.

Integrity (rigidity)

The property of being unaffected by external pressures.

Security

The promise that all risks to the system have been analyzed and approved as a matter of policy.

Ease of use

The promise that a user will not have to expend much effort or cost to use the service provided.

These are just a few of the properties that concern us. You can think of more desirable properties, or different names for these.
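
To make one of these definitions concrete, consider redundancy. A minimal sketch in Python (the servers and dependency names are invented for illustration) captures the requirement that duplicates share no common dependency:

    # Duplicates are only redundant if their dependency sets are disjoint,
    # so that the failure of one cannot imply the failure of the other.
    def redundant(deps_a: set, deps_b: set) -> bool:
        return deps_a.isdisjoint(deps_b)

    # Hypothetical example: two web servers that share a power feed.
    server_a = {"rack-1-power", "switch-1", "disk-a"}
    server_b = {"rack-1-power", "switch-2", "disk-b"}

    print(redundant(server_a, server_b))  # False: one power failure breaks both

The duplication looks redundant on paper, but the shared power feed is a common dependency, so the systemic promise is not kept.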

Although no particular agent could be blamed for enabling a property, it is plausible that a single agent could prevent the keeping of a promise. However, in systems, the chains of causation can be quite Byzantine. In fact, in any system of cooperative promises, there are loops that make direct causation a nontrivial issue. This is why it makes sense to think of systems as emergent phenomena.

A negative property like fragility seems like an odd thing to promise, but such a promise could be very important. We label boxes fragile for transport precisely to try to influence delivery agents to take extra care with them, or to reduce our own liability in case of an accident.

Who Intends Systemic Promises?

If no particular agent promises systemic behaviour, then who is responsible for the result? All agents and none are responsible. If that sounds like too much French philosophy, this is possibly because the whole idea of the system is abstract.

As always, Promise Theory’s simple prediction is that semantics do not originate in any single part of a system. It is always the observer or recipient of promises that assesses whether or not promises are kept as intended. The assessment is in the eye of the beholder.

Thus any observer gets to make the judgement that a system is secure relative to its own knowledge. If we ask a consultant to tell us whether our system is secure, we’d better hope that he or she knows enough to predict every possible failure mode.
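
A minimal sketch of this observer relativity (the consultants and failure modes here are invented): the same system is assessed as secure by one observer and insecure by another, purely because they know about different failure modes.

    # Each observer judges "secure" only against the failure modes they know.
    known_failure_modes = {
        "consultant_a": {"sql-injection", "weak-passwords"},
        "consultant_b": {"sql-injection"},
    }
    mitigated = {"sql-injection"}  # what the system actually defends against

    for observer, known in known_failure_modes.items():
        verdict = "secure" if known <= mitigated else "insecure"
        print(f"{observer} assesses the system as {verdict}")

Neither assessment is wrong; each is made relative to the observer’s own knowledge.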

Suppose you want to promise ease of use. There is no ease-of-use plug-in or battery pack to provide this quality, so you have to look at the agencies you have and ask how their promises could result in the emergent effect of ease of use. Now you have to perform some semantic sleight of hand. Can you use already learned common knowledge or cultural norms to shortcut a lengthy learning process? Can you minimize the work required to learn an interaction pattern? At this point you might think: well, if I’m going to impose something on these agents, why not just give commands? Why frame it as promises?

By now the answer should be clear. You could use impositions, but the agents might be neither willing nor able to act on them. They would not be as motivated to cooperate as if they were making voluntary promises that they saw an individual benefit in keeping. So, even if you propose promises for those agencies to keep, you have to put yourself in their situation, understand what they know and what skills they have, and then propose a promise. Would you make that promise in that situation? This is how to maximize the likelihood of a successful outcome.

Breaking Down the Systemic Promises for Real Agencies

Figuring out how systemic promises emerge from the real promises of individual agents is the task of the modern engineer. We should not make light of the challenge. System designers think more like town planners than bricklayers, and need to know how to think around corners, not merely in straight lines.

Directing birds of a feather to flock together as a systemic promise could involve arranging for individual agents to stay close to their neighbours. Another example might be that Godliness will result from promising to wash your hands often.
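
The flocking case can even be sketched in a few lines of Python (all names and parameters are invented for illustration). No agent promises to form a flock; each keeps only the local promise of drifting toward the centre of its visible neighbours, and the flock is what an outside observer assesses:

    import random

    N, STEPS, RADIUS, PULL = 30, 100, 5.0, 0.1
    birds = [[random.uniform(0, 20), random.uniform(0, 20)] for _ in range(N)]

    for _ in range(STEPS):
        for b in birds:
            # Each bird sees only nearby neighbours (purely local knowledge).
            near = [o for o in birds if o is not b
                    and (o[0] - b[0]) ** 2 + (o[1] - b[1]) ** 2 < RADIUS ** 2]
            if near:
                cx = sum(o[0] for o in near) / len(near)
                cy = sum(o[1] for o in near) / len(near)
                # Keep the local promise: drift toward the neighbourhood centre.
                b[0] += PULL * (cx - b[0])
                b[1] += PULL * (cy - b[1])

    # After enough steps the positions cluster, even though no individual
    # bird ever promised the collective outcome.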

If you work entirely from the bottom up, you never have to confront the idea that some concrete agencies in your system actually have to promise real measurable outcomes, because it will happen automatically. It will be obvious. On the other hand, you might never confront the way users experience your system either. That means you won’t understand how to build a lasting relationship with users.

The challenge of designing a system from the top down is that you never confront whether there are any agencies that can deliver the appropriate promises. Your first thought will be to divide and conquer, and you will start duplicating work.

Why Do Systems Succeed or Fail in Keeping Promises?

Systems of many parts promise a variety of attributes. A painting may promise form but not colour, or colour but not form. Does the painting collectively promise to represent its subject or not? Again, only the recipient of the promise can judge whether or not the promise has been kept.

Which part of a plane leads to flight? Which parts can fail without breaking the collective promise? There might be workarounds, like a skilled pilot gliding the plane.

A school might succeed in the promise of education:

  • Because a specific teacher was helpful

  • Because exams motivated students to study

  • Because other students passed on information

  • Because it had a good library

Only the individual student can say which of the promises, perhaps all, were sufficient in their own context.

Promises and intentionality come from humans, but humans keep promises through a variety of intermediaries, any of which might fail. When systems fail to keep promises, it might be unintended, or it might be deliberate deception. Much attention is given to the idea that failures are unintended and should not lead to the blame of individual humans or technological agents in systems. Very little attention is given to the notion of deception and deliberate sabotage of systems, yet when it occurs, as with computer malware, forged artwork, or pirated music, it is common to point to individual component failures instead.

Dunbar’s limits can lead to systemic promise failures by overloading an agent’s capacity to maintain relationships; packet loss and process starvation are the computing analogues. And since it is impossible to completely specify a human-machine system, and machines work on behalf of humans, it ends up being humans, not machines, who must make the system work.

Complexity, Separation, and Modularity

The matter of complexity in systems comes out a bit muddled in the literature. It has been suggested that it is the entwining or muddling of information that leads to complexity. This is actually quite wrong. Indeed, in a somewhat counter-intuitive sense, the opposite is true.

Complexity is measured by the explosion of possible outcomes that develop as a system evolves. This is a growth of information (also called entropy). When parts of a system interact strongly, it means that a change in one place leads to a change in another. This leads to brittleness and fragility rather than complexity. If you could replace 1,000 pennies with a single £10 note, the money would be less complex to deal with.

Sometimes fragility causes system states to split into several outcomes. Logical if-then-else reasoning is one such cause. It is these so-called bifurcations, or branchings of outcomes, that lead to the explosion of states and parts. Branching is provoked by strong coupling, but not caused by it. Indeed, muddling things together reduces the total number of outcomes, so ironically it makes things less complex.
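
The arithmetic is easy to sketch (a toy calculation, not from the text): every unpruned binary branch point doubles the number of reachable outcomes, so the state count grows exponentially while the information needed to pin down one state grows linearly.

    from math import log2

    for branches in (1, 5, 10, 20):
        outcomes = 2 ** branches  # each if-then-else doubles the possibilities
        print(f"{branches} branch points -> {outcomes} outcomes "
              f"({log2(outcomes):.0f} bits of entropy)")

Merging outcomes that never need to be distinguished, like swapping 1,000 pennies for a single note, prunes the tree and removes entropy.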

Separation of concerns (i.e., the tidying instinct) is in fact a strategy that brings about bifurcations, and hence proliferation. It leads to complexity because it increases the number of things that need to interact, creating new entropy. Branching without pruning brings complexity. One of the reasons we fear complexity is that we fail to understand it.

The Collapse of Complex Systems

The separation of concerns is a very interesting belief system invented by humans. It has become almost ubiquitous in information technology and management. It is referred to as siloing in management. Promise Theory casts doubt on its validity.

One reason why the belief in divide and conquer came about might be an artifact of our limited thinking capacities. Just as the limited size of an army leads you to deal with one invader at a time, so the Dunbar numbers imply limits on our brain power for handling multiple relationships. Similarly, our grammar skills do not support too many levels of parenthetic remarks, and we cannot do more than about five things at a time. All of these cases point to a strategy of pushing away the parts of a problem we can avoid dealing with, like leaving behind travel items so that you can actually carry your luggage.

The mistake we potentially make is in believing that our inability to carry the weight implies that the weight is not actually important, and should not be carried.

As a counterpoint to that idea, anthropologist Joseph Tainter made an interesting study of societies going back through time, looking for the reasons they collapsed. His conclusion was roughly this: as societies grow, there is a cost benefit to specializing into different roles to scale up. This is because individuals are not smart enough to be experts in everything; they have limited brain and muscle power. But that separation comes at a cost. If we want a service from a specialist, we now have to reconnect with them.

As agencies separate, they often form their own private languages that are not equilibrated with the general population, so there is a language barrier cost too. This leads to reduced trust. Agents then make clients jump through hoops to validate themselves. Bureaucracy and organizational silos are born!

Eventually, the cost of reconnecting through barriers (because of the loss of a trusting relationship) exceeds what agents can afford, and the system breaks apart as users begin to work around the barriers. Separation of concerns leads to collapse.

The autonomous agent model in Promise Theory makes this very plain. We centralize services in order to avoid the cost of talking to many other agents, but that puts all of the stress in one place. The bottleneck is born. The bottleneck then has to either limit its clients, or double up. As the service doubles up, a new service is required to coordinate the bottlenecks. The hierarchy is born. The hierarchy leads to further branching.

As the distance between agents increases, trust is lost, and restoring certainty demands adding promises back to make up for the loss of the direct ones. The cost of all this infrastructure grows until agents no longer see a cost benefit to centralizing, and the structures break apart.
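
The trade-off can be illustrated with simple counting (a sketch, not a model from the text). A full mesh of n agents needs n(n-1)/2 relationships but spreads the load evenly; a hub needs only n-1 relationships but concentrates all of them on one agent. The saving is what seduces us to centralize; the concentration is what creates the bottleneck.

    def mesh_links(n: int) -> int:
        # Every agent maintains a relationship with every other agent.
        return n * (n - 1) // 2

    def hub_links(n: int) -> int:
        # Every agent maintains a relationship only with a central hub.
        return n - 1

    for n in (10, 100, 1000):
        print(f"n={n:4}: mesh={mesh_links(n):6} links, "
              f"hub={hub_links(n):4} links, hub load={n - 1}")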

A goal for system design is surely to avoid this process if possible. The difficulty lies in evaluating which promises are cheap and which are expensive at any given time. It is the desire to save on costs that seduces us to reformulate these relationships and fall into the trap.

Through the Lens of Promises

What general lessons can we extract from thinking in promises? Promises are about:

  • Formulating outcomes by destination rather than journey

  • Which agents are responsible for the outcome

  • How the constraints on those agents affect the ability to predict outcomes

  • How access to different information affects each agent’s view on whether an outcome was achieved

Instead of thinking in terms of force or command, promises help us to see the world as a set of constraints on the field of free possibility. Some observations that cropped up along the way:

  • Client role: it is clients, not servers, that are responsible for obtaining service.

  • Coupling strength is best kept weak to avoid fragility and dependency.

  • Interchangeability and compatibility are about keeping intentions constant.

  • In the end, it’s all about knowledge, or what you know.

I’ve applied the idea of autonomous agents to both technology and to humans. Is that fair? Are humans really just agents? Of course, anything can be an agency for some observable effect. That is just how we label causation and understand the world. Humans, however, distinguish themselves by a more complex psychology than machines. (The psychology of machines is human psychology projected onto some kind of state machine.) As a human of some middling years, I have seen how aspects of human psychology intrude upon individual assessments and distort them in different ways.

Promise Theory pretends that communication and observation are relatively unambiguous, which is optimistic. If we intend to apply it to people, we’d better not get too excited about rational prediction. Human intentions and interpretations seem to be very noisy. Perhaps a warning would be in order on the package: “Caution: Promise Theory may cause headaches and tantrums if used together with humans.” But this is true of all formal models.

Many themes could be tied up in a promise view of the world. More importantly, Promise Theory reveals, through simple principles, why individual subjectivity is important even in the relatively impartial world of engineering. It offers a few tools for thinking about the issues, and it even has some rules of thumb to avoid making the worst mistakes.

If you think in promises a little every day, I cannot promise that it will change the way you see the world (because that would violate your autonomy), but it might allow you to see things in sharper focus, and question the way we often impose assumptions on a situation.

You alone can promise to ask the right questions and take fewer promises for granted.

1 The electricity company promises power continuously, without interruption. The police force promises public safety, but there is no public safety component.
