Research and Ongoing Concerns

Introduction

Not being one to miss out on a chance for self-promotion, like most scientists, I’m going to use another chapter to talk about some of the work that I and my colleagues have done. In a career of around thirty years (as I write this), I have been lucky enough to do much of this with people whom I respect and admire, and their names appear in this chapter.

Some of what we’ve done has explored how trust systems might be used to defend people and systems from attacks like the ones I talked about earlier in the book. Some has explored Social Adeptness in technology — so that the technology can exist alongside us in a functioning society. In other work we’ve explored how information can flow around networks when trust is used as a tool. There’s more, too.

Helping People

As I already noted, attacks on people often rely on their expectations for social behaviour (phishing and social engineering, in particular). I’ve already mentioned that you can put policies in place in organizations to help manage this, but sometimes we’re not in an organization — we might just be on the street, or at home, commuting, looking after a sick child, or whatever the case may be.

In one sense, there is a need to armour people against the worry of saying, “no”. In another sense, though, there is a need to armour people against situations where they just aren’t able to focus. Which is most of the time, but in some circumstances the focus is much less than in others. This is natural – that sick child takes precedence. As does a car accident you were in, or witnessed. You can probably think of many more examples.

In any case, we surmised a while ago that the technical (process) part of the trust system may well be a powerful defence in both cases. Let’s focus on the first. As it stands, the technology that is deployed has no concept of what is proper social behaviour. The supposition, then, is that it doesn’t really get embarrassed when having to say, “no”, for instance. This is the premise behind what we call foreground trust and its associated technology, device comfort. Let’s have a closer look at how this all comes together.

Foreground Trust and Device Comfort

I will start with a reminder of what you already know (if you’ve been reading till now!). We care about empowering people. Part of the concept of trust empowerment is giving people the tools and information they need to make a trust-based decision without actually making it for them. This is a subtle problem, and it is a political position: giving people information that matters to them and is not disinformation is taking a side. There is little doubt that there are people or organizations that benefit from an uninformed, or worse, misinformed, public. Unfortunately, as we have seen, it doesn’t take a great deal to make misinformation (or indeed disinformation) happen in the social networks that exist at the start of the third decade of the 21st Century. This is a major problem, and taking a stand against it is inherently making a statement about what is important to you. This is political, and that’s good.

In order to give people information, it pays to know what context the information is needed in. If this sounds a little like “trust is contextual”, you are indeed correct. Information is contextual, in the sense that some information is naturally more or less important than other information at specific times and in specific contexts. For example, the fact that Steve has no hair on his head is of no importance if you are asking him for help with a mathematics problem. But if you were looking for him in a crowd of hirsute men, it might make a difference. Then again, you might think it matters for the mathematics problem too.

The point? It’s up to you.

I mean, sure, it makes little sense, but perhaps there is something about being bald that means you’re less likely to ask him for advice. You have your own reasons. Foreground trust is a way of acknowledging both the contextuality of information and individual information needs (and reasons). Its basic premise is that the person (or agent) making the trust decision has specific requirements that matter to them in that context. It becomes much more important when you consider that online there is a dearth of the signals that we normally use to make trusting decisions. Zoom can only do so much to help you see the body language we have used basically forever to assist us. It is worse when there is no Zoom. This brings us to trust enablement and foreground trust.

Trust enablement is a theory discussed in Natasha Dwyer’s PhD thesis, which examined how trust works in digital environments amongst strangers. The point is to allow users of digital environments to, as Dwyer would say, “proceed on their own terms”. Which should sound an awful lot like what the past few paragraphs tried to say, just in rather fewer words. The idea? Create a tool to help people gain the information they need from the systems they are using, in order to make trust-based decisions about other people in an online environment (where normal social cues are often unavailable).

Trust enablement leads us quite nicely to allowing systems to figure out what that information might be, or at least to ask (and learn). This is the premise of foreground trust. It’s not really so different from trust enablement (and both are empowerment tools). The idea is to help people with technological prompts about the trustworthiness of the information or systems (or indeed people) that they are dealing with. If you remember the chapter on complexity, it touched on foreground trust a little when the ten commandments were discussed. A similar concept, anshin, is presented in this paper and this paper. Anshin (a sense of security) is a Japanese concept that the authors examine in the context of security, for instance in online tools.

Let’s bring this back to trust enablement for a moment. The point is that the systems we create can use social contexts and cues to help people do what people do, especially where the person has trouble getting the information they need.

Let’s take this a little further. What would it be like if we could take these things and allow systems to reason with them in context? To that end, our own work, similar to anshin but applied in a different context, is called device comfort. Device comfort extends foreground trust by allowing personal (and ultrapersonal, like wearable) devices to determine context, task preferences, and requirements for their owners, and to use this as a tool to represent their individual level of “comfort” to the user, based on what is happening to them or what the user is doing.
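
To make this a little more concrete, here is a minimal sketch of the kind of reasoning a comfort-enabled device might do, assuming it folds a few context signals (how familiar the location is, how focused the owner seems, how risky the task looks) into a single comfort level. The signals, weights, and thresholds are invented for illustration; this is not the actual device comfort model.

```python
# Illustrative sketch only: a device folds a few context signals into a single
# "comfort" level and turns it into a gentle prompt rather than a decision.
# The signals, weights, and thresholds are invented for this example.

from dataclasses import dataclass

@dataclass
class Context:
    location_familiarity: float  # 0.0 (never seen before) .. 1.0 (home)
    owner_focus: float           # 0.0 (distracted, stressed) .. 1.0 (attentive)
    task_risk: float             # 0.0 (reading the news) .. 1.0 (clicking an unknown link)

def comfort(ctx: Context) -> float:
    """Comfort in [0, 1]: unfamiliar places, low focus and risky tasks reduce it."""
    base = 0.5 * ctx.location_familiarity + 0.5 * ctx.owner_focus
    return max(0.0, base * (1.0 - ctx.task_risk))

def advise(ctx: Context) -> str:
    """Turn the comfort score into a prompt; the device advises, the owner decides."""
    c = comfort(ctx)
    if c < 0.2:
        return "I'm uncomfortable with this. Are you sure you want to continue?"
    if c < 0.5:
        return "This seems a little unusual. Maybe take a second look."
    return "This looks like something you normally do."

# A distracted owner, away from home, about to click an unfamiliar link.
print(advise(Context(location_familiarity=0.2, owner_focus=0.3, task_risk=0.9)))
```

Note that the device only advises; the decision stays with the person, which is the empowerment point made above.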

Why does this matter? Remember the thing about focus? Right. A computational system can be designed to be focused specifically on the important security (or other) and behavioural cues of the owner as well as others in the situation. Imagine: a device that specifically flags when something happens to affect your focus — like a 911 (or 999) call following that accident, or a different route taken to work — and lets you know when you are about to do something that has the potential to be dangerous (or uncomfortable). Like clicking on a phishing link — indeed any link unless you have been paying attention to what you are doing. Or, say, tweeting when it’s late at night. Or unlocking your device when crossing a border, opening yourself up to different kinds of espionage. As an aside, we did develop the idea for a tool that took the data off your device(s) and stored them securely in the cloud with a password you didn’t know. Cross the border (whichever, I’m not fussy or prejudiced) and restore them to the device on arrival wherever you were going (say, the hotel you are staying in). Sort of a digitally locked plausible deniability tool. After all, if the stuff is locked and you don’t know the password, you can’t be made to hand it over. Imagine the potential!

As a tool to combat social trust-based attacks, comfort and foreground trust are formally untested, but they show great promise in informal settings, and we leave it as a thought exercise to consider in which kinds of settings a little comfort-enabled device might work.

Forgetting and Forgiving (or Not)

At the risk of repeating some of what has come before in the book, let’s return to forgetfulness. We’ll get to forgiveness again shortly. As we’ve learned so far, trust is a dynamic and contextual measurement based on things like observed behaviours, risks, importance and so on. It’s reasonable to say that good behaviour can increase trust, whilst bad behaviour can decrease it. How that increase and decrease works is a different matter, but in general that’s a fair assumption. As you already know, trust is often seen as ‘fragile’, too — hard to gain and easy to lose.
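
As a tiny illustration of that fragility, here is one way such asymmetric updates are sometimes modelled. This is a sketch only; the step sizes and bounds are invented for illustration and are not a model from this book.

```python
# Illustrative only: trust rises slowly on good behaviour and falls quickly on
# bad behaviour ("hard to gain, easy to lose"). The step sizes are invented.

def update_trust(trust: float, outcome_good: bool,
                 gain: float = 0.05, loss: float = 0.25) -> float:
    """Nudge trust up a little for good behaviour, down a lot for bad."""
    trust = trust + gain if outcome_good else trust - loss
    return min(1.0, max(-1.0, trust))  # keep trust in [-1, 1]

t = 0.0
for outcome in [True, True, True, True, False]:  # four good turns, then one bad
    t = update_trust(t, outcome)
print(round(t, 2))  # -0.05: the single transgression outweighs the four good turns
```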

But in order to make decisions, things need to be remembered. Trust systems need to remember what has happened in order to be able to calculate trust(worthiness). However, if they remember everything, what is gained? (And what is lost?) More to the point, if events that happened a long time ago have the same weight in our models as those that happened recently, we hit a problem: people change. And so can other actors in the system. What was “bad” could become “good”, or at least not bad, and vice versa.

In order to manage this problem, a “forgetting factor” can be introduced which either weights temporally distinct behaviours or observations differently or simply forgets what happened after a time (there are also adaptive forgetting mechanisms). Jøsang’s Beta Reputation System has such a factor built in. In my thesis, I call it a “memory span”. It’s possible, then, to allow trust systems to forget, or to behave as if they forget, things from the past. Does this represent how people do things? Let’s think about that for a moment. For one thing, it’s probably fair to say that different events have different memorability. Some events or behaviours are so egregious or so incredible that forgetting them is out of the question, and the effect that they have on the trust value is constant (particularly the bad things, which is why, perhaps, the fragility of trust is so well entrenched).
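
To make the mechanism concrete, here is a small sketch in the spirit of a forgetting factor or memory span, with a flag for events too egregious to forget. The decay rate and the weighting scheme are illustrative assumptions, not Jøsang’s formulation (or mine).

```python
# A simple sketch of evidence weighting with a forgetting factor. Each
# observation is (value, age, unforgettable): value in [-1, 1], age measured in
# steps since it happened. Recent evidence counts more; anything flagged as
# unforgettable keeps full weight forever. The decay rate is illustrative.

def weighted_evidence(observations, decay: float = 0.9) -> float:
    """Return a trust estimate in [-1, 1] from time-discounted observations."""
    num, den = 0.0, 0.0
    for value, age, unforgettable in observations:
        weight = 1.0 if unforgettable else decay ** age
        num += weight * value
        den += weight
    return num / den if den else 0.0

history = [
    (+0.8, 1, False),   # recent good behaviour
    (+0.6, 3, False),   # slightly older good behaviour
    (-1.0, 50, True),   # an old but egregious betrayal: never forgotten
]
print(round(weighted_evidence(history), 2))  # the betrayal still drags the estimate down
```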

Here’s a thing to think about: What would a forgetting function that took all this into account look like?

Unforgiveness and Regret Management

In previous chapters I talked about forgiveness and how it might work for an artificial trust system. I did, at the time, mention that there are problems with some of the rationale there. In particular, forgiving some(one/thing) just because there don’t appear to be any other viable things out there to work with works sometimes, but is pathologically problematic at many other times. Abusive relationships are a case in point. However, in defence of forgiveness, think about this: forgiveness is a powerful expression of self-control and acceptance that is recognised as being positive for both forgiver and forgivee. More importantly, in human relationships, forgiving does not necessarily mean remaining in uncomfortable situations, and we wouldn’t recommend that for online situations either, where there is a potential for harm.

However, we do argue that trust systems being in a position to forgive transgressions and stay in interactions can bring benefits. As well, knowing that an interaction can be challenging is information the trust model can use to strengthen itself (which is where the “Trust but Verify” thing can come in). Yes, of course there are situations where this needs to be managed differently. For those, we considered regret management. More specifically, we considered making someone regret something they did.

Regret is a powerful tool in the mechanisms of trust systems: a priori as a means of determining whether to act or not (anticipatory regret) and as a means of determining the difference between potential actions, and a posteriori as a means of determining the cost of what happened when things went wrong (which applies both to the trustee and the truster). In the latter instance, regret can be used as a tool to determine the amount by which trust is adapted following a transgression (for instance, a high regret on the part of the truster may reduce trust by a larger amount, while regret expressed by the trustee for their actions may mitigate this reduction somehow).
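
To picture the a posteriori case, here is a minimal sketch in which the truster’s regret scales the size of the trust reduction and regret expressed by the trustee softens it. The coefficients are invented for illustration; this is not the model from our papers.

```python
# Illustrative only: after a transgression, the drop in trust is scaled by how
# much the truster regrets the outcome, and softened by how much regret the
# trustee expresses. All coefficients are invented for this sketch.

def adjust_after_transgression(trust: float,
                               truster_regret: float,  # 0..1: cost felt by the truster
                               trustee_regret: float,  # 0..1: remorse shown by the trustee
                               max_penalty: float = 0.5) -> float:
    penalty = max_penalty * truster_regret * (1.0 - 0.5 * trustee_regret)
    return max(-1.0, trust - penalty)

# A costly transgression (high truster regret) met with a fairly sincere apology.
new_trust = adjust_after_transgression(trust=0.7, truster_regret=0.9, trustee_regret=0.8)
print(round(new_trust, 2))  # 0.43: a big hit, softened a little by the expressed regret
```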

Taking this further, in this paper we proposed a system that centralizes the management of regret in a distributed system. From the paper:

“there exists a potential to use trust management systems as a means of enforcing the accountability of others in a transaction or relationship. In this sense, both parties in a transaction need to know that there is a recourse to a system that will lead to the punishment of a transgressor.”

In a way, this is a resort to the idea of system trust, which we’ve already talked about (in the TrustLess Systems chapter). Basically, if both truster and trustee know that the system will punish transgression (enforcing, in other words, the regret of the transgressor), the likelihood of transgression may well be lowered. If, that is, the system is trusted to do the job properly.
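
To make the system trust point a little more tangible, here is a toy sketch of a central “regret manager” that both parties know will record and punish transgressions. The structure, thresholds, and penalties are invented for illustration; this is not the design from the paper.

```python
# A toy sketch of the system trust idea: both parties register with a central
# regret manager, which records transgressions and imposes a penalty the
# transgressor cannot avoid. Everything here is invented for illustration.

class RegretManager:
    def __init__(self):
        self.standing = {}  # party -> accumulated standing

    def register(self, party: str) -> None:
        self.standing.setdefault(party, 0.0)

    def report_transgression(self, offender: str, severity: float) -> None:
        """Record a transgression and enforce the offender's 'regret' as a penalty."""
        self.standing[offender] -= severity

    def in_good_standing(self, party: str, threshold: float = -1.0) -> bool:
        return self.standing.get(party, 0.0) > threshold

manager = RegretManager()
for party in ("alice", "bob"):
    manager.register(party)

# Both parties know the manager will punish a transgressor; that shared
# knowledge is what (we argue) lowers the likelihood of transgression at all.
manager.report_transgression("bob", severity=0.6)
print(manager.in_good_standing("bob"))  # True: one moderate transgression
manager.report_transgression("bob", severity=0.6)
print(manager.in_good_standing("bob"))  # False: repeated transgressions cross the line
```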

I leave it as an exercise to think about how you would represent the transfer of trust to the system, how this affects the two foundational questions, and, more to the point, whether, if the regret management system works, trust is even being considered at all.

Endings

Like all of the chapters in this book, this one is an evolving piece of work. In fact, considering that research moves forward and ideas and questions keep happening, it is probably a little more changeable than the others. The beauty of a digital book like this (unless you printed it!) is that it can grow. That last word is important. We’ll discuss it more in the ending of the book.

License

Trust Systems Copyright © 2021 by Stephen Marsh is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
