Chapter 3. Assessing Promises

The value of promising depends entirely on how it is assessed. Assessment of promises is an inherently subjective matter. Each agent may arrive at a different assessment, based on its observations. For instance, “I am/am not satisfied with the meal at the restaurant.” Agent 1 might have the gene allele that makes cilantro/coriander taste like soap, while agent 2 is a devoted Thai food enthusiast extraordinaire. Assessments must therefore respect the individuality or relativity of different agents’ perspectives.

What We Mean by Assessment

When we make a promise, we often have in mind a number of things:

  • The promise itself: what outcome we intend

  • The algorithm used to keep it: how a promise will be kept

  • The motive behind it: why a promise is made

  • The context in which it’s kept: where and when a promise is made

Someone assessing a promise might care about exactly how a promise is kept, but the promiser may or may not promise to use a particular method. If the method is promised, it becomes a part of the intended outcome too, and there might be unfulfilled expectations, or promises broken, by failing to follow a very specific sequence of actions. Similarly, there are corresponding elements in assessing whether a promise has been kept:

  • Outcome: What was measured during the assessment?

  • Context: Where was the sample taken?

  • Algorithm: How was the sample taken?

  • Motive: Why was the assessment made?

  • Expectation: What were we expecting to find?

Every promise is assessable in some way. Indeed, from the time a promise is made, a process of assessment begins in all agents that are in scope. This assessment may or may not be rational, but it is part of an ongoing estimation.

We humans often make assessments based on no evidence at all. Trust and even prejudice can form the basis for expectation. One might say: “When I release the hammer, it falls. This has always happened before; I see no reason why it would not happen now.” In a familiar context, this assessment might be reliable; in outer space, it would be false. Whether such an inference is useful or not depends on a separate assessment of its relevance. Hearsay and authority are crutches we use to bootstrap trust.

Kinds of Promise Assessment

We often need to define what we mean by acceptable terms for a promise being kept. The reason might be grounded in legal process, or simply in systematic rigour; either way, some kind of question needs to be answered.

An assessment of a promise may be based on any method, impression, or observation that maps to the promise being “kept” or “not kept” at a certain time.

The reason this description is so loose is that we humans are not very mechanistic in our evaluation of trust, and the assessment of a promise has a lot to do with how we perceive the agent supposedly keeping it.

We might have technical methods of assessing the promises made by a machine, such as compliance with road safety standards, or software testing probes. In that case, assessment behaves like a function to be evaluated. On the other hand, we might take an entirely heuristic approach to deciding whether a party was good or bad.

The assessment itself makes a promise, namely one to supply its determination of the outcome; thus, it is not strictly a new kind of object in the theory of promises. The “impressions” could be points of data that are invented, received, or specifically measured about the promise. These might include evidence from measurements or even hearsay from other agents.

It is useful to define two particular assessments of outcome. These correspond roughly to the two interpretations of statistical data: namely a belief (Bayesian) interpretation, and a frequentist or evidential interpretation.

  • A belief assessment made without direct observation

  • An evidential assessment made with partial information

There are three outcomes of an assessment of whether or not a promise is kept:

  • True

  • False

  • Indeterminate

Assessments are made by observing or sampling at a localized moment in time. In other words, an assessment is an event in time and space that samples an outcome that may or may not persist. Even when conditions have been assessed as making a promise kept or not kept, the promise made by the assessment itself has limited trustworthiness, as the system might have changed immediately after the assessment was made.1
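
These definitions are compact enough to sketch in code. The following Python fragment is only an illustration of the ideas above, not a standard implementation; the names Assessment, Outcome, Kind, and assess are invented for the example.

    from dataclasses import dataclass
    from enum import Enum
    import time

    class Outcome(Enum):
        KEPT = "kept"
        NOT_KEPT = "not kept"
        INDETERMINATE = "indeterminate"

    class Kind(Enum):
        BELIEF = "belief"          # formed without direct observation
        EVIDENTIAL = "evidential"  # formed from (partial) observation

    @dataclass(frozen=True)
    class Assessment:
        observer: str     # assessments are private to the assessing agent
        promise: str      # which promise is being assessed
        outcome: Outcome  # valid only at the sampling instant
        kind: Kind
        timestamp: float  # an assessment is an event in time

    def assess(observer, promise, evidence=None, trust=None):
        """Map whatever the observer has to one of three outcomes, now."""
        now = time.time()
        if evidence is not None:  # evidential: partial information
            outcome = Outcome.KEPT if evidence else Outcome.NOT_KEPT
            return Assessment(observer, promise, outcome, Kind.EVIDENTIAL, now)
        if trust is not None:     # belief: trust or prejudice stands in
            outcome = Outcome.KEPT if trust else Outcome.NOT_KEPT
            return Assessment(observer, promise, outcome, Kind.BELIEF, now)
        return Assessment(observer, promise, Outcome.INDETERMINATE, Kind.BELIEF, now)

    print(assess("agent1", "meal was good", evidence=True).outcome)  # Outcome.KEPT
    print(assess("agent2", "meal was good").outcome)                 # Outcome.INDETERMINATE

Because each agent calls assess from its own private world view, two agents may return different outcomes for the same promise at the same moment, and nothing in the model obliges them to agree.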

Relativity: Many Worlds, Branches, and Their Observers

These differences in agents’ perspectives are what we refer to as agent relativity. Each agent has its own private world view. Agents privately assess whether they consider promises to have been kept in their own world. They also assess the value associated with a promise, in their own view.

One of the things that makes a promise viewpoint advantageous is that we cannot easily brush these differences of assessment under the rug, as so often happens in other modelling frameworks. By forcing ourselves to confront agent individualities (and if necessary, make agents promise conformity), we find important lessons about the fragilities in our assumptions.

This does not only apply to the obvious assessments: kept or not kept. A promise to give something is often seen as having a positive value to the recipient and a negative value or cost to the agent keeping the promise. An imposition, on the other hand, often carries a cost to the recipient, and brings value to the imposing agent.

This idea of promise valuation further implies that there can be competition between agents. If different suppliers in a market make different promises, then an assessment of their relative value can lead to a competition, or even a conflict of interest.
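
As a toy illustration of this relativity (the names and figures below are invented, not part of the theory), value can be modelled as a function of both the promise and the agent doing the assessing, so the same promise may carry opposite signs for the two parties:

    # Each agent assigns its own value to the same promise (made-up figures).
    valuations = {
        ("recipient", "deliver goods"): +10,  # benefit of receiving
        ("supplier",  "deliver goods"): -4,   # cost of keeping the promise
    }

    def value(agent: str, promise: str) -> int:
        """Value is relative: it exists only in the eye of a beholder."""
        return valuations[(agent, promise)]

    print(value("recipient", "deliver goods"))  # 10
    print(value("supplier", "deliver goods"))   # -4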

Relativity and Levels of Perception

Sometimes a relative perspective depends more on what level you interact with something than where and when. Imagine evaluating whether a book, a radio, or a hotel keeps its promises.

These agencies can be decomposed into parts on many levels. Should we assess whether each component keeps its promise (for example, in the radio, do the components move electrical currents around properly), or should we rather ask whether the boxed arrangement of components keeps its promise to blast out music?

First, we have to decide at which level we intend to consume the promises, and who the observer making the assessment is.

Inferred Promises: Emergent Behaviour

The radio or music player is a nice example of a collective agency with emergent behaviour. It is an agency formed from multiple components that collectively promise something apparently new. The radio functionality does not come from any single component in the radio; it only appears when all of the components are working together as intended. However, not all systems that exhibit identifiable behaviours were designed explicitly with any promises in mind.

Some systems only appear to an observer to act as though they keep certain promises, when in fact no such promises have been made. The observer might be outside the scope of a promise yet still able to observe its repercussions, or no such promise might have been made at all.

  • This car veers to the left.

  • The traffic gets congested around rush hour.

  • The computer tends to slow down if you run these programs together.

We call such effects emergent. In some cases, there might be promises of which we are not aware. Agents have information only about the existence of promises for which they are in scope. Someone might have promised one person without telling another. A design specification for a tool might not be in the public domain: “Take it or leave it.”

Does this matter? From the perspective of an observer, naturally it makes no difference whether a promise has actually been made or not, as long as the agent appears to be behaving as though one has been made. It is entirely within the observer’s rights to postulate a model for the behaviour in terms of hypothetical promises.

This is how science talks about the laws of nature. We can take scientific law to mean that the world appears to make certain behavioural promises, which we can codify into laws, because they are invariably kept. In truth, of course, no such laws have been passed by any legal entity. Nature seems to keep these promises, but no such promises are evident or published by nature in a form that allows us to say that they have been made. Nevertheless, we trust these promises to be kept.

So, based on its own world of incomplete information, any agent is free to postulate promised behaviour in other agents as a model of their behaviour. This hypothesis can also be assessed in the manner of a promise to self.

From any assessment, over an interval of time, an agent may infer the existence of one or more promises that seem to fit its assessment of the behaviour, regardless of whether such a promise has been made outside of its scope.

The observer cannot know whether its hypothesis is correct, even if an explanation is promised by the observed agent; it can only accumulate evidence to support the hypothesis.

For example, suppose a vending machine is observed to give out a chocolate bar if it receives a coin of a certain weight. Since most coins have standard weights and sizes, a thief who is not in possession of this knowledge might hypothesize that the machine actually promises to release a chocolate bar on receiving an object of a certain size. Without further evidence or information, the thief is not able to distinguish between a promise to accept a certain weight and a promise to accept a certain size, and so he might then attempt to feed objects of the right size into the machine to obtain the chocolate bar. Based on new evidence, he might alter his hypothesis to consider weight. In truth, both hypotheses might be wrong. The machine might in fact internally promise to analyze the metal composition of the coin, along with its size and other features.
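
The thief’s reasoning amounts to hypothesis elimination, which we can sketch as follows (the sizes, weights, and rules are invented for illustration). Each candidate “promise” predicts an outcome, and observations discard the candidates they contradict; notice that the machine’s true promise, to analyze metal composition, is not even in the candidate set, so no amount of elimination can prove the survivor correct.

    # Candidate hypotheses about what the machine promises to accept.
    hypotheses = {
        "accepts the right size":   lambda coin: coin["size"] == 24,
        "accepts the right weight": lambda coin: coin["weight"] == 7,
    }

    # Observations: (object inserted, did a chocolate bar come out?)
    observations = [
        ({"size": 24, "weight": 7}, True),   # a genuine coin works
        ({"size": 24, "weight": 3}, False),  # a plastic disc of the same size fails
    ]

    # Keep only the hypotheses that predict every observed outcome.
    for inserted, released in observations:
        hypotheses = {name: rule for name, rule in hypotheses.items()
                      if rule(inserted) == released}

    print(sorted(hypotheses))  # ['accepts the right weight'] survives, for now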

How Promises Define Agent-Perceived Roles

The simplest kind of emergent behaviour is falling into a role. A role is just a pattern consisting of an unspecified agent or group of agents making certain promises. An agent that is aware of the promises, and assesses them, would be able to infer the pattern and name the role.

Roles are just names for patterns of promised behaviour, without necessarily being attached to a specific person or thing. For example, the role of doorstop can be promised by tables, chairs, hooks, or wedges of paper. In business, the role of developer or manager might be assumed by the same person in different contexts, based on what promises they keep. There are three ways that roles can be defined based on promises:

Role by appointment

An agent is pointed to by the same kind of promise from several other agents (Figure 3-1). For example, if 20 agents promise to send data to a single agent, that agent clearly has the role of “the agent to whom data is sent,” which we might call a database, or a storage array, and so on. Similarly, we might identify the same agent by virtue of its promising to receive data from the 20 agents. In either case, the agent is a concentration of arrows of the same kind.

Figure 3-1. Role by appointment is when promises point in the same way to a particular kind of agent.
Role by association

When an agent just happens to make a particular kind of promise, as with a web server or a policeman. Suppose three different shops promise to sell smartphones. Then, by virtue of making the same promise, we see that this is a “thing” (i.e., a repeated pattern; see Figure 3-2). Thus, regardless of what they think of themselves, every observer who can see the pattern can assign to the shops the role of smartphone vendors.

Figure 3-2. Role by association is when all agents make the same promise. In this case, it could be an emergent promise like a property of the agent (its gender).
Role by cooperation

When agents promise to work together as a unit (e.g., agents that promise to behave in concert, such as a number of soldiers working as a team; see Figure 3-3). The agents all promise to be alike, or to behave in the same way, so that they become interchangeable. Alternatively, imagine that each team member has its own specialization, and promises to play its part in keeping the collective promise of the whole team. This is analogous to the inanimate components in a radio making different promises that collectively keep the design’s promise to the listener. Cooperative promises are the glue that allows a number of components to come together as a collective superagent, with its own effective promise. The cooperative role is identified by promises that effectively say, “I belong to team X.”

Figure 3-3. Role by cooperation is when agents play a part within a group.
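
These patterns can be inferred mechanically from a list of promises. Here is a minimal sketch, assuming each promise is a (promiser, body, promisee) triple; the agents, promise bodies, and thresholds are arbitrary illustrative choices.

    from collections import Counter, defaultdict

    # Invented example data: 20 clients promise to send data to one
    # server, and three shops make the same promise to the public.
    promises = [(f"client{i}", "send data", "server") for i in range(20)]
    promises += [("shop1", "sell smartphones", "public"),
                 ("shop2", "sell smartphones", "public"),
                 ("shop3", "sell smartphones", "public")]

    # Role by appointment: an agent at which many promises of the same
    # kind converge is a concentration of arrows, and earns a name.
    incoming = Counter((promisee, body) for _, body, promisee in promises)
    appointed = {agent: f"recipient of '{body}'"
                 for (agent, body), n in incoming.items() if n >= 5}

    # Role by association: agents that happen to make the same promise
    # form a recognisable pattern, whatever they think of themselves.
    makers = defaultdict(set)
    for promiser, body, _ in promises:
        makers[body].add(promiser)
    associated = {body: sorted(agents) for body, agents in makers.items()
                  if len(agents) >= 3}

    print(appointed)           # {'server': "recipient of 'send data'"}
    print(sorted(associated))  # ['sell smartphones', 'send data']

Role by cooperation would require one further ingredient: promises among the agents themselves, of the form “I belong to team X,” binding them into a superagent whose collective promise can then be assessed in the same way.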

The Economics of Promise Value: Beneficial Outcomes

Promises are valuable in a number of ways. They offer predictability, which is a valuable commodity because our expectations make it cheaper to operate. If we don’t know what to expect, we have to be much more careful, and we might miss opportunities. Indeed, we would be experimenting all the time. Promises also offer delegation of responsibility, at least partially. If another agent is willing to do something we rely on, that is valuable. Promises align our thinking in the direction of causation, instead of against it.

The reason we make promises at all is because they predict beneficial outcomes in the eye of some beholder. They offer information that might allow those who know about the promise to prepare themselves and win advantage. Physical law enables engineering, consistent bodily responses allow medicine, software promises data integrity, and so on.

Perhaps this makes you think of promising something to a friend, but don’t forget to think about the more mundane promises we take for granted: a coffee shop serves coffee (not bleach or poison). The post office will deliver your packages. The floor beneath you will support you, even on the twenty-third level. If you could not rely on these things, life would be very hard.

A more difficult question is: what is this knowledge worth? Now, convention drives us to think that value means money, but that is not true at all. Money itself is simply a promise, represented by surrogate hardware like coins and paper, but value is a many-headed beast.2 We offer one another value in many ways:

  • Money: a placeholder for value, to be redeemed later

  • Credit: fictitious money

  • Trade: equivalent value

  • Goodwill: the opportunity to interact again

  • In kind: returned favours, promises, gifts, etc.

The success of money lies in its promise to be a lingua franca for value exchange. It is an alchemist’s dream: a form of value that everyone wants, and endlessly convertible.

As usual, the view of the autonomous agent is the key to understanding the relativity of value in Promise Theory. Every agent is free to decide the value of a promise, in whatever currency it deems valuable. We see this in the global economics of monetary currencies. The value of the dollar relative to, say, the yen is simply what others are willing to pay or exchange for dollars at any given moment. This is an assessment based on the promise of the currency’s value in the eye of the beholder.

In Chapter 6, we’ll see how repeated cooperation between agents builds trust and value, and plays an important role in motivating collaborative behaviour.

Human Reliability

Humans treat promises with the sort of casual abandon that sometimes shocks. We say “I promise” when, in fact, we have no intention to make any effort. This is compounded by the fact that humans are incalculable: our world views are all so different and individual that it is hard to reason about human behaviour as we try to do for simple-minded abstract “agents.”

We are bombarded with information that makes us change our minds, on a timescale that makes us appear unreliable or fickle. Humans make promises into deceptions or lies on purpose, and often pursue self-interest in spite of making promises to cooperate as a team. This leads to a semantic chaos of fluctuating outcomes, not just a dynamical chaos of whether or not certain well-known promises are kept.

How can we even be sure what is being promised? Documentation of intent used to be considered a must, but then we discovered that promises could be made by appeal to intuition instead. Intuition taps into the cultural training we’ve all received, so we can rely on the promise of cultural norms to short-circuit expectation.

If one could set aside the unreliable aspects of humanity to act as part of a reliable mechanism, there would be no impediment to success in designing human-technology systems, with dependable promises. But we are often unwilling to give up the freedoms we consider to be an essential part of humanity. During the industrial revolution, humans really did sacrifice their humanity to become part of the machine, and many were happy doing so in sweatshops. Today, we consider this to be dehumanizing.

How can we deal with these issues? Ultimately, we have to appeal to psychology to understand the likelihood of humans keeping promises, in the manner of a chemistry of intent. This is true even for promises made by proxy through technology. The promise patterns for consistency reveal the ways to find consensus. Instead of single point agents, we might talk about a strong leader. Instead of a complete graph of pairwise agreements, we might talk about weak consensus.
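
To see why these patterns differ in cost, here is a back-of-envelope sketch (an illustrative calculation, not part of the formal theory): full pairwise consensus among n agents requires an agreement along every edge of a complete graph, while a strong leader needs only one agreement per follower.

    # Bilateral agreements needed to coordinate n agents. Each agreement
    # is itself a pair of promises, one in each direction.
    def complete_graph_agreements(n: int) -> int:
        return n * (n - 1) // 2   # every pair must agree directly

    def leader_agreements(n: int) -> int:
        return n - 1              # each follower agrees only with the leader

    for n in (5, 50):
        print(n, complete_graph_agreements(n), leader_agreements(n))
    # 5 10 4
    # 50 1225 49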

The Eye of the Beholder

I once had a disagreement with someone about the nature of beauty. From a Promise Theory perspective, beauty is in the eye of the beholder. It is that simple. My opponent made an imposition argument against this, saying that we cannot ignore cultural norms when judging the value of something.

From a Promise Theory perspective, this is simple: each autonomous agent can indeed reject the weight of opinion and peer pressure because it is autonomous, just as we are not obliged, in principle, to accept the will of a mob. In practice, we might find it practical to go along, or we might feel weak and intimidated, fearing the consequences. However, this is an autonomous decision to comply. A stubborn person has the ability to resist.

Even if you believe that it is impossible to disregard peer pressure, mob rule, or other coercion,3 there is still a plain engineering utility to adopting the autonomous agent model. Now you can model someone who is affected by mob rule as a person who always promises to follow the mob, while a free spirit is someone who doesn’t. Thus the promise methodology allows you to model these differences and account for them. As in all science and engineering, we shouldn’t muddle belief with utility.

Some Exercises

  1. If someone leaves a package on your doorstep, do you consider the promise to deliver it kept?

  2. There are many kinds of smartphones on the market that promise the Android operating system or Apple’s iOS operating system. These are sufficient promises for many to think of the promised role of the smartphone, but not all of these devices can make calls. What promises distinguish the role of the smartphone from a tablet?

  3. Branding is a form of culturally primed promise. The idea is to build familiarity with a simple image, such as the label on a bottle of wine. But how do we assess this promise over time? Lindeman’s from Australia make a popular brand of wine. Does Lindeman’s Bin 45 from 2012 promise the same as Lindeman’s Bin 45 from 2010? Should we consider the brand a role by association?

  4. Many websites promise secure payment facilities. How do you assess whether a payment system is secure? Do you look for a URL starting with https:, or perhaps a certificate signed by a known authority? What promises do these symbols represent? Are they sufficient?

  5. In music, different voices promise different roles within a performance, such as melody, timekeeping rhythm, and ornamentation. Which instruments or voices play these roles in the following?

    1. Symphonic music (e.g., Richard Strauss, Also sprach Zarathustra, well known as the theme from 2001: A Space Odyssey)

    2. Disco music

    3. Folk music

1 Even the physical sciences struggle with this kind of issue. In quantum mechanics, the state of a microscopic region of space can be known only at the moment it is sampled, and the likelihood of knowing its state thereafter depends on precise promises encoded into its mathematical description.

2 Pound notes, in the UK, still bear the text: “I promise to pay the bearer on demand the sum of X pounds.” This goes back to the time when there was a trade-for-gold standard in money, and one could take the paper note to the Bank of England and be remunerated in gold. The gold standard no longer exists, but the principle remains.

3 One way to define an attack is to impose an agent, without a promise to accept, and with the intent to cause harm.
