On Complexity

Introduction

In a previous chapter I suggested that there are problems associated with having models that are too complex for people to understand. Most importantly, I argued, if a model is too complex a human being would not actually wish to use it to help them. There are some aspects of this argument that are perhaps a little problematic. Firstly, in many places in the book I have suggested that complexity actually necessitates trust: if something is too complex to understand or predict (and thus control), what else is there to do but think in terms of trust? I think this is a fair argument, and it’s one that Cofta’s work supports. So why wouldn’t a complex trust model be worthwhile, since it puts us, ironically, into the ‘need to trust’ zone? It’s a fair question, but the irony is a problem. Trust is (if we follow Luhmann’s arguments) a means of reducing the complexity of the society we find ourselves in. How then is it a good thing to make things more complex by giving us tools that are indecipherable?

A counter-argument is, of course, that the model need not be decipherable; it just needs to be followed — suggestions need to be seen as, as it were, commands. This is the very antithesis of trust empowerment (indeed, by definition it is trust enforcement). One of the things we should be trying to do is to design systems, including those that help us with trust decisions, that are more understandable and by extension more empowering. It should go without saying that helping people understand better is a good thing.

Clearly, there are situations where the autonomous systems we create will use their own models of trust. In these cases, the opaqueness of the model itself — either because it is extremely complex or because it is obfuscated in some way — is of some use, at least as a means of providing more security and strength against trust attacks (which we will talk about later in the book). However, even in these circumstances there remains a need to explain.

Steve’s First Law of Computing is that, sooner or later, every computational system impacts humans[1]. This is important to remember because at some point a human being will either be affected or want to know what is happening, and often both.

It is at this point that the whole explainability bit becomes extremely important. There is a lot of literature around things like explainable AI, explainable systems and so forth (much of it is in the further reading chapter at the end of the book) but what it comes down to is this: the complex tools that we create for people to use have to be able to explain why they have chosen, or recommended, or acted the way they have. They may need to do this after the fact (let’s call those justification systems) or before (in which case explainable systems is probably more apropos). It’s probably fair to say that we as humans don’t always know why we did something, or why we trusted someone when we did, for example. It’s a source of great amusement to me to sometimes ask, and a source of some consternation for someone who is asked, why things were done the way they were (“why did you trust Bob to deliver that letter?”).

A long time ago[2], when expert systems were all the rage, explainability was recognized as important too. If an expert system came up with some diagnosis (there are lots of medical ones) or prediction, it was possible to ask it why, or more correctly how, it came to the conclusions it did. Through a process of backtracking, the system was able to show which rules were triggered by which data, all the way back to the first piece of evidence or question it got. It’s pretty neat actually, and as a form of justification it is impeccable.
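
To make this concrete, here is a minimal sketch of the idea (mine, in Python, and not taken from any particular expert system; the rules, facts and names are invented purely for illustration). The system records which rule derived each conclusion, so it can later walk backwards from a conclusion to the raw evidence:

    # A tiny rule base: (rule name, antecedent facts, consequent fact).
    # These rules and facts are hypothetical, for illustration only.
    RULES = [
        ("R1", {"fever", "cough"}, "flu_suspected"),
        ("R2", {"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
    ]

    def infer(facts):
        # Forward-chain over RULES, remembering which rule derived each new fact.
        known = set(facts)
        derived = {}  # fact -> (rule name, antecedent facts)
        changed = True
        while changed:
            changed = False
            for name, antecedents, consequent in RULES:
                if antecedents <= known and consequent not in known:
                    known.add(consequent)
                    derived[consequent] = (name, antecedents)
                    changed = True
        return known, derived

    def explain(fact, derived, depth=0):
        # Walk the trace backwards: how was `fact` concluded, down to the raw evidence?
        indent = "  " * depth
        if fact not in derived:
            print(indent + fact + ": given as evidence")
            return
        name, antecedents = derived[fact]
        print(indent + fact + ": concluded by rule " + name
              + " from " + ", ".join(sorted(antecedents)))
        for antecedent in antecedents:
            explain(antecedent, derived, depth + 1)

    known, derived = infer({"fever", "cough", "short_of_breath"})
    explain("refer_to_doctor", derived)

Asked why it recommended "refer_to_doctor", the sketch answers with the chain of rules and the evidence that triggered them, which is essentially what the classic expert systems did.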

The thing is, complex systems like neural networks or genetic algorithms or black-box artificial intelligences, or even complex mathematical trust models in an eCommerce setting, can’t backtrack like that. They can, however, perhaps pretend to, or lead the human through a path that makes sense to them. This may actually be enough — remember, humans are notoriously bad at explaining themselves too. Holding other systems to a higher standard, especially in these uncertain circumstances, does rather seem a little demanding.

But this much is clear: the systems that we use have to be able to explain or justify themselves in some reasonable way. The more complex they are, the more imperative this becomes.

Ten Commandments

This brings us to a paper I wrote with colleagues in 2012, which saw the potential pitfalls of complexity in trust models and tried to address them, or at least to make a start. It’s called Rendering unto Cæsar the Things That Are Cæsar’s: Complex Trust Models and Human Understanding, which is a little bit of a pretentious title[3], but the contents are important. The paper was written about trust models, like the one (my own) I presented in a previous chapter. I could see at the time that these models were becoming increasingly complex and far too specialized to be reasonably called trust models at all. In the past ten or so years I have looked back at the paper, and forward to what is happening in AI and human-centred systems, and felt that the commandments are rather important there too.

The argument I made at the start of this chapter was made in that paper and, in the almost ten years since, not much has changed for the better. This is a little bit of a shame[4]. As we will see in the Trustworthy AI chapter of this book, there is a huge number of important questions around the systems we are deploying in the world. Many of them are urgent, and not all of them are being addressed all that well. Leaving aside the obvious racism, sexism and other biases in the tools we are creating, complexity and understanding are, to be honest, amongst the most urgent.

To put it another way, and bearing in mind that I am a computer scientist and not entirely a Luddite, AI and its cousins pose an existential threat to humanity. This is quite a claim to make. Indeed, a reviewer of a paper I co-wrote, in which we postulated exactly this, was instantly dismissive. But it is a needful claim. That these systems present an existential threat to humans is self-evident when one considers the antics of self-driving cars. And, more and more, we are seeing the deployment of unmanned aerial vehicles (UAVs), drones capable of delivering death and mayhem, tragically, thousands of miles from the humans who ostensibly control them, and even mobile agents to patrol contested borders. This is not a particularly healthy direction to be going in, especially if we combine the problems of autonomous vehicles with the capabilities of deadly weapons systems. What is more, if the systems we are creating are all but indecipherable even to smart people, it is to our shame not to accept that there is a problem here. This elephant in the room is something I address in more detail in the Trustworthy AI chapter of the book.

Meanwhile, back to that ostentatiously titled paper. The paper presented ten commandments[5]. I repeat them here, along with the extra two commandments we published in a later paper. Where possible I explain some of the reasoning I had behind these particular commandments.

The first eight are from pages 197-198:

  • The model is for people.
    • At this point, it’s probably reasonable to quote Einstein: “Concern for man and his fate must always form the chief interest of all technical endeavors. Never forget this in the midst of your diagrams and equations.” This probably says all that needs to be said but, just to belabour the point a little, we forget why we are doing this fascinating science and engineering at the peril of other human beings (at the very least).
  • The model should be understandable, not just by mathematics professors, but by the people who are expected to use and make decisions with or from it.
    • I have nothing against mathematics professors; indeed, some of them are very good friends. However, if we design complex models then they need to explain themselves. Consider: it makes no difference if a system is better than a human at driving, or decoding, or diagnosing a particular disease or condition if the person using it doesn’t trust it. Indeed, it simply adds another layer of doubt and obfuscation to a system that includes people but is designed by scientists and engineers who are often little trusted in the first place. Even if the doctor assures the patient that the system has it right (after checking themselves of course, and if not why not?) there is little gained if the patient doesn’t believe them.
  • Allow for monitoring and intervention.
    • It is necessary to “understand that a human’s conception of trust and risk is difficult to conceptualise. Many mathematical and economic models of trust assume (or hope for) a ‘rational man’ who makes judgments based on self-interest. However, in reality, humans weigh trust and risk in ways that cannot be fully predicted. A human needs to be able to make the judgment.” (page 197).
    • This isn’t actually something new. Kahneman and Tversky’s Prospect Theory is a perfect example of how humans don’t always make rational decisions. We (humans) are really rubbish at this stuff. But that in many ways is exactly the point. If a system making decisions for us makes them in ways we wouldn’t, does this make it right or wrong from our point of view? And if it can’t explain the decisions made in ways we can understand, what does this mean for how we might feel about it?
  • The model should not fail silently[6], but should prompt for and expect input on ‘failure’ or uncertainty.
    • Again, humans aren’t really very good at things like risk estimation. But what human beings do well is be liminal. Human beings live at the edges of things, and make decisions around those edges. To put it another way, it’s exactly when there is not enough information to make a decision that humans shine. If this seems to be a problem (because we often make wrong decisions!), it is worth pointing out that, in the absence of data, the artificial system may well do no better, and referring the reader to the first of the commandments; which is to say that at least the human’s decision is one that they made for themselves.
  • The model should allow for a deep level of configuration. Trust models should not assume what is ‘best’ for the user. Often design tends to guide users towards what the owner or developer of a site or application thinks that people should be doing. However, only the user can make that call in context.
    • Many of us ‘know what’s best’ for the people for whom we design systems. This of course is dangerous. Why? It takes very little imagination to look at the ways in which ‘persuasion’ or ‘nudging’ or the often downright deceitful use of our understanding of how brains work are applied, and to see them as problematic. If we stretch this kind of behaviour to allow complex systems that make decisions for us, we do ourselves no favours (although we could be doing a large favour to some tech billionaires). In other words, the systems that we deploy should not only be able to explain themselves, they should be tailorable whenever possible to better match what we as individuals see as (normatively) good behaviours.
    • What does this mean for a trust model? A look back at the chapter describing a simple model will show you that there are numerous variables that can change, or be changed, to better align the way in which autonomous trust-reasoning agents behave and reason with and about trust: for example, how risks are determined, how utility might be calculated (and what it means to an individual), and what an individual considers important, all the way through to the different aspects of ‘what it meant’ and ‘how it feels’ in forgiveness calculations. As I noted in that chapter, a great deal of the power in trust models lies in their heterogeneity. Much of this comes from a person’s ability to personalize the models. (A small illustrative sketch of this kind of configurability appears after this list.)
  • The model should allow for querying: a user may want to know more about a system or a context. A trust interface working in the interest of the user should gather and present data the user regards as relevant. Some of the questions will be difficult for a system to predict and a developer to pre-prepare, so a level of dynamic information exchange is necessary.
    • We already know that trust is highly contextual. This means that one situation may produce different trust decisions and calculations, even if much of it is similar to, or the same as, a situation that occurred before. At the very least, the experience (memory) of the truster has changed, but it may be that the location (place) is different, or that the people involved are different in some subtle ways. Regardless of the aspect that resulted in the different decision, and in any given situation, having an autonomous system explain its reasoning costs little (if it is designed properly) and grants much (in terms of confidence and, ultimately, trust – see for example Google’s People and AI book).
    • Are explanations always a good idea? Well, yes, but not always in the moment. I leave you with a thought experiment: do you want your autonomous vehicle to tell you why it is about to put the brakes on to avoid the collision (and the reasoning process that got it there) or just to put the brakes on and maybe explain later? Let’s see what the next point has to say.
  • The model should cater for different time priorities. In some cases, a trust decision does need to be made quickly. But in other cases, a speedy response is not necessary, and it is possible to take advantage of new information as it comes to hand. A trust model working for humans needs to be able to respond to different timelines and not always seek a short-cut.
    • This is probably self-explanatory. There are times when speed is essential – after all, it would be good for an autonomous car to not hit a child who ran into the middle of the road, for instance  – but there are also times when decisions should be made at more ‘human’ speeds, primarily so that the human can take part in them and understand them better. If this seems to be channeling Kahneman (again) it’s not by accident.
  • The model should allow for incompleteness. Many models aim to provide a definitive answer. Human life is rarely like that. A more appropriate approach is to keep the case open; allowing for new developments, users to change their minds, and for situations to be re-visited.
    • I’ve noted above that humans are liminal creatures. We also live in a constant ‘now’ which doesn’t really ever become a ‘then’. We are constantly able to learn from what has happened, to take what has been learned and apply it to different (or even the same) circumstances. Our trust models and the ways they consider and explain themselves should do no less.
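
Before moving on to the extra two commandments, here is the small sketch promised above, giving a flavour of what a ‘deep level of configuration’ combined with querying might look like. It is loosely in the spirit of the simple model from the earlier chapter rather than a faithful reproduction of it; the parameter names, weights and decision rule are illustrative inventions of this sketch, not the published model:

    from dataclasses import dataclass

    @dataclass
    class TrustConfig:
        # Weights and thresholds a user (or their agent) could tune to suit themselves.
        experience_weight: float = 0.2      # how much past experience counts
        importance_weight: float = 0.4      # how much 'what it means to me' counts
        risk_weight: float = 0.4            # how much perceived risk counts against
        cooperation_threshold: float = 0.5  # minimum situational trust needed to proceed

    def situational_trust(experience, importance, risk, cfg):
        # Combine the inputs (each taken to be in [0, 1]) into one trust value.
        return (cfg.experience_weight * experience
                + cfg.importance_weight * importance
                - cfg.risk_weight * risk)

    def decide_and_explain(experience, importance, risk, cfg):
        # Make the decision and return a plain-language justification alongside it.
        trust = situational_trust(experience, importance, risk, cfg)
        decision = trust >= cfg.cooperation_threshold
        explanation = (
            "trust {:.2f} vs threshold {:.2f}: experience {:.2f} (weight {}), "
            "importance {:.2f} (weight {}), risk {:.2f} (weight {})".format(
                trust, cfg.cooperation_threshold,
                experience, cfg.experience_weight,
                importance, cfg.importance_weight,
                risk, cfg.risk_weight))
        return decision, explanation

    # A cautious user can weight risk more heavily and demand a higher threshold.
    cautious = TrustConfig(risk_weight=0.6, cooperation_threshold=0.7)
    print(decide_and_explain(experience=0.8, importance=0.9, risk=0.3, cfg=cautious))

The point is not the particular arithmetic (which is deliberately simplistic) but that the knobs belong to the user and that every decision comes with an explanation that can be queried.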

And the extra two? They are from that later paper, and go like this:

  • Trust (and security) is an ongoing relationship that changes over time. Do not assume that the context in which the user is situated today will be identical tomorrow.
    • Already, above, we’ve talked about different but similar situations, the ability to learn and change behaviours accordingly, living in the ‘now’ and more. That the relationship between agents (including humans, or humans and autonomous systems) is ever-changing should be no surprise. Context is also ever-changing and so, when we combine these things, every new moment is a new context to be considered.
  • It is important to acknowledge risk up front (which is what all trust, including foreground trust, does).
    • There’s not much else to do here except perhaps allow you to ask a question…

Foreground Trust

What is foreground trust, you ask? I’m glad you did. I’ll present it much more deeply in the Research chapter, but it goes like this: give people the information they need (their own needs, which they can articulate) to make trusting decisions, and they will do so. In other words, don’t tell them what is important, let them tell you.

So why was it called “Rendering unto Cæsar…”? Because there are times when the complex models serve important purposes — the world is a complex place after all. But there are also times when humans (Cæsar, if you will) need to be acknowledged. The commandments aim to satisfy that need. The most important thing to bear in mind: put people first (remember Steve’s First Law!).

Let’s put it all another way, one that has to do with system trust, mentions of which you’ll find scattered around this book (a fuller explanation appears elsewhere in the book). When we make things too complex, or at least so complex that people don’t get it, what we are doing is shifting trust away from where it was (the other person, perhaps?) to something else – the model, or the mathematics, or whatever. At some point, I’d suggest, this ceases to be reasonable or even worthwhile.

In the next chapter I’ll explore reputation and recommendation systems. These are obvious cases in point where complexity does not serve us well: they are specifically trying to help humans make decisions. But are we doing it right? Are we empowering or are we enforcing? And when we look at how they are deployed in things like social networks, are they helping or harming us? Let’s take a look.

Technology is nothing. What’s important is that you have a faith in people, that they’re basically good and smart, and if you give them tools, they’ll do wonderful things with them.

Steve Jobs, Rolling Stone Interview, 1994.


  1. Actually, if I was more honest, it is that "computing systems are for humans." Regardless of the place, time, space, purpose, how far removed from humans the computer is, computers do impact us, every day. To make a first law that demands that this impact be positive would seem to be a reasonable thing.
  2. In computing terms, that could mean anything from 70 years to the last few minutes, but in this case it means the 70s, 80s and probably 90s, to be honest.
  3. I can say that because I co-wrote it! My long-suffering colleagues Natasha Dwyer and Anirban Basu probably rolled their eyes when I suggested the title...
  4. As ever, the master of understatement.
  5. Actually, it presented eight, but in a later paper we introduced a couple more because one should always have ten commandments.
  6. I did in an original draft put "fail silent" but was told by a reviewer to make it silent'ly'. I get why, but I also think 'fail silent' conveys more about what I was trying to think at the time.
