Attacks and Defences

Or, There's Always a Handsome Prince...

Introduction

So there’s this story. A man and his pregnant wife, who lived next to a large, walled garden, were looking for something that would sate the woman’s cravings. They found it in the form of a leafy thing called Rapunzel[1] growing in the garden, which of course belonged to a not-too-nice witch. Naturally, the witch is a woman[2]. Anyway, the wife will eat nothing else, and so one night the husband pops over the wall to nick[3] some. Of course his wife likes it a lot and wants more, so he goes back to get some, and this time is caught by the witch.

As you might expect, his behaviour doesn’t go down too well with our witch[4]. She allows the man all the Rapunzel he wants, but insists that the child, when born, should be given to her, and the man agrees[5]. The child is born, the witch calls her Rapunzel, and when the girl grows old enough (with long golden hair to boot) our witch locks her in a tall tower with no door and only a window high up.

You’ve likely heard the rest (“Let down your hair,” climb hair, visit Rapunzel) and even if you haven’t you can almost certainly predict that a handsome prince pops along, hears Rapunzel singing, naturally falls in love, and asks Rapunzel to let down her hair (which she does, and which he climbs). They fall in love, and eventually he asks her to marry him. Now, it’s almost certain she’s pregnant herself by this time, but anyway, she carelessly mentions the prince to the witch and the witch cuts off her hair in a rage. The prince arrives, climbs up, finds a witch, is thrown (or jumps) off, and falls into thorny bushes which make him blind.

Sure, there’s a happy ending. You can find it for yourself. Why does this matter in a book about trust?

Because there’s always a handsome prince. Read on, it’ll become more clear.

It should come as no surprise that trust systems are subject to attack. After all, trust itself is subject to attack: how many times have people been fooled by con artists, or social engineers, to their detriment? Indeed, anything that has at its root an acceptance of risk by definition is subject to an attack, even if the attack is unintentional. I mean, that’s the point, right?

Artificial trust systems like reputation and recommendation systems, blockchains (okay, trustless systems too), and autonomous systems have powerful capabilities. They also have cracks in their armour that allow nefarious others to take advantage. Sometimes this can be as simple as suggesting or displaying ads on a web page that are offensive, or that lead to content that isn’t what it claims to be, but sometimes it can be quite serious.

One of the interesting things about trust, as we have learned, is that it allows things to work when there is no central control possible – decentralized systems, in other words. The use of trust allows elements (nodes) in such systems to reason about each other in order to better make decisions about which node to send information to, or which node to accept it from, which path to route sensitive data through, and so on. Other systems may monitor the environment and send information back to a central reasoning system that allows it to provide a situation report to scientists or, in many cases, military forces. If such information were compromised – for example, by taking over one node and having it falsify data, or refuse to route data to the control server – the results could be quite serious. Likewise, as infeasible as you might find it, if someone or something were to take over more than 50% of the miners in a blockchain, the “truth” could be managed, if not simply, then at least possibly[6].

Remember: there’s always a handsome prince.

Attacks

In this chapter I’ll have a look at some of the potential attacks that exist for trust systems, and I’ll go over some of the defences that can be put in place to protect them. Like the chapter on reputation and recommendation, it’s a living piece of text. There are always more potential attacks and, to be honest, there will always be people willing to use them, whilst other people try to defend against them. Nobody ever said that it was a good world filled with nice people, but it is the one we have. As it happens, trust is a pretty good tool most of the time to help us get through it. Anyway, we’ll start with attacks that are pretty straightforward and go from there as time progresses. There is an excellent report from ENISA that looks at attacks on Reputation Systems that you might like[7].

Trust systems can be attacked in such a way that renders them much less useful to the people who want to use them. For instance, it’s perfectly possible in online auction sites that rely on pseudonyms to enter as a new person with no penalty. “Of course,” you might say, “because we want new people to join.” However, this also means it’s possible to build a (bad) reputation by reneging on promises, not delivering and so on and then leaving before rejoining as someone new.

Reputation models such as those used by eBay, Amazon and other online sites seek to remove some of the mystery about with whom you are dealing by taking a societal viewpoint. Reputation, ultimately, is a societal measure of the potential for trustworthiness of some person (or autonomous agent) in a very specific context.

To put that more succinctly, your reputation as a seller on eBay is a conglomeration of the ratings from everyone with whom you have dealt as a seller of things. If you have dealt with lots of people fairly and got good ratings, your reputation will be high. Do good things, people notice, you get to be known as a person who does good things. At this point it might help to revisit the trust models chapter to clear up what the difference between trustworthiness and reputation is.

The attacks on reputation systems, and indeed trust models that rely on such niceties, are aimed at subverting these scores in various ways.

In a whitewashing attack, attackers repair their reputation in some way after behaving badly to others in the system. How does this work? It’s really a cost of having cheap pseudonyms (as Friedman and Resnick would have it).

In a previous chapter, I talked about how a cheap or free pseudonym-based system could allow that to happen. It’s worth repeating here. The attacker can build a lovely reputation by lots of small transactions where they live up to (and exceed) expectations. Of course they’re going to get a good reputation score… Then comes the trick: make one huge transaction, grab the money, as it were, and run (leave the system). Too bad for the sucker who you messed about with. Now, because the system you are using has a pseudonymous name system (you can call yourself “HappyOtter” or “AquaticLynx” or whatever) the next time you join you can be someone new, with no links to the previous person you were. And you can start all over again.

As the cheap pseudonyms theory tells us, whitewashing attacks work mostly in reputation systems where the cost of entering is very low. It helps to have a system where, when you enter, your reputation is effectively the same as, or close to, that of someone who has been in the system a long time and been good (especially, then, in systems that focus on negative feedback).
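Put crudely, the attack is a matter of arithmetic. Here’s a back-of-the-envelope sketch – my own illustration, not Friedman and Resnick’s actual model – of when whitewashing pays off.

```python
# A sketch of the whitewashing calculation: the attack is attractive whenever
# the gain from the final scam outweighs the cost of re-entering as a 'new'
# user plus the value of the reputation being thrown away. Numbers are made up.

def whitewashing_pays(scam_gain: float,
                      rejoin_cost: float,
                      reputation_value: float) -> bool:
    # With cheap pseudonyms and newcomers treated much like established users,
    # both rejoin_cost and reputation_value are close to zero.
    return scam_gain > rejoin_cost + reputation_value

print(whitewashing_pays(scam_gain=500, rejoin_cost=0,  reputation_value=0))    # True
print(whitewashing_pays(scam_gain=500, rejoin_cost=50, reputation_value=600))  # False
```

Which is exactly why the defences that follow all work by raising one or both of those costs.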

It’s possible to defend against such attacks by removing these traits – for example by requiring some form of registration, a named user, emails or other semi-unique IDs attached to each user, and so on.

This is not the only way to enhance your reputation. In fact, it is possible to do this with another kind of attack (which has many, many uses, sadly) to get what you want. It works like this: imagine that you are quite powerful, or a good coder, and have managed to secure for yourself a bunch of ‘drones’ that you can make behave any way you like. In particular, you can make them all give you a 5-star rating (remember, anything less than 5 is a fail). The result? You can pretty much do what you like because you’re always going to have a preponderance of great ratings. This is a form of what is called ballot-stuffing.
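To see how little effort this takes, here’s a minimal sketch – my own illustration with made-up numbers, not any particular site’s algorithm – of what a handful of drone votes do to a plain average-based rating.

```python
# A minimal sketch, with made-up numbers, of how ballot-stuffing skews a
# reputation score computed as a plain average of 1-5 star ratings.

def average_reputation(ratings):
    """Reputation as the simple mean of all ratings received."""
    return sum(ratings) / len(ratings)

# Ten honest ratings reflecting distinctly mediocre behaviour...
honest = [2, 3, 2, 1, 3, 2, 2, 3, 1, 2]

# ...plus thirty 5-star ratings from drone (Sybil) accounts.
stuffed = honest + [5] * 30

print(average_reputation(honest))   # ~2.1 -- what the seller deserves
print(average_reputation(stuffed))  # ~4.3 -- what prospective buyers see
```

Weighting ratings by the rater’s own reputation helps, but only if the drones can’t cheaply build reputation themselves – which brings us back to the cost of joining.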

Now, imagine if you took advantage of someone in the system. The person you make into a sucker might not take this lying down, and this could get messy. So what could you do? Well, you could get your drones to vote this person down (give them bad ratings, whatever) so that their word counts, basically, for nothing. You can also use drones simply to make someone you don’t like look bad; it’s all possible. This form of attack is known as bad-mouthing.

Those drones I was talking about? They are a special kind of thing called a Sybil, and using them is called a Sybil attack. The Sybil attack occurs when multiple actors under the control of a single actor flood the system in such a way as to influence the reputation of the controller or someone else. In one setting, it is a form of self-promotion (ballot-stuffing). However, as we have seen, it can just as easily be used to decrease another person’s reputation (where it is known as bad-mouthing).

This attack has potential in systems that are distributed, where there is little to no cost to join, and where interactions are not properly or adequately registered – after all, if nobody can tell that we never interacted, then you could say anything about me and the system would believe it.

To defend against such an attack, the system could set up some kind of cost for creating new users. For instance, you could charge a small fee for each new user registered – although this might have an impact on getting anyone to register, of course. You might perhaps have some form of time delay between creating a new user and being able to do anything. Or you might just give new users a zero reputation. A combination is of course possible – imagine, for example, charging a fee that is held in escrow and returned once the user has shown that they behave well, with users who pay no fee starting at zero reputation. You might also want to put in place some way of verifying that each transaction definitely took place – with some form of certification, for instance.
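Just to make that combination concrete, here’s a hedged sketch of how those defences might fit together. The fee, the probation period and the number of verified transactions needed to release the escrow are all illustrative assumptions of mine, not numbers from any real system.

```python
# A sketch of combined Sybil defences: an entry fee held in escrow, a zero
# starting reputation, a probation delay, and only certified (verified)
# transactions counting towards reputation. All values are illustrative.

from dataclasses import dataclass, field
import time

ENTRY_FEE = 5.00            # small cost per account, to deter mass creation
PROBATION_SECONDS = 86_400  # one day before a new account can transact
ESCROW_RELEASE_AT = 10      # verified good transactions needed to refund fee

@dataclass
class NewAccount:
    name: str
    fee_in_escrow: float = ENTRY_FEE
    reputation: float = 0.0                       # newcomers start from zero
    created_at: float = field(default_factory=time.time)
    verified_transactions: int = 0

    def may_transact(self) -> bool:
        # No activity at all until the probation period has passed.
        return time.time() - self.created_at >= PROBATION_SECONDS

    def record_verified_transaction(self, rating: float) -> None:
        # Only transactions the system itself can certify count here.
        self.verified_transactions += 1
        self.reputation += rating
        if self.verified_transactions >= ESCROW_RELEASE_AT:
            self.fee_in_escrow = 0.0              # fee returned: behaviour shown
```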

As you might have spotted by now, there are problems with defending systems like this.

In the eCommerce world, a sort of Sybil attack happens when the manufacturer or marketer of some product pays a fee to have fake reviews posted for it. Amazon has suffered from this in the past (still does) but it isn’t the only one. Pretty much any site that allows things like ratings and reviews has this problem.

It’s a constant battle, and it’s not getting any easier to win. The past year (as I write this) has seen a huge increase in online sales and thus, naturally, a similar increase in things like fake reviews. So can you still trust the reviews? It depends.

The on-off attack manipulates how trust systems remember: by alternating bad behaviour with good, an attacker keeps the benefits of being bad while never dropping below a certain trust score. How does this work? Simply by ensuring that, as an attacker, you never behave so badly that the system remembers. Cast your mind back, if you will, to our discussion of Robert Axelrod and the Prisoner’s Dilemma. You will recall that tit-for-tat did quite well in Axelrod’s experiments for various reasons (it was provokable, forgiving, and so on). That forgiving thing, which we’ve talked about too, is akin to forgetting if designed that way. This is to say that if you forget what happened to you, you are likely doomed to have the same thing happen again – which is why it’s sometimes good to forgive whilst not forgetting. Many trust systems have a forgetting factor built in. If an attacker can learn what this length of time, or mechanism, might be, they can behave badly for a time, then well, and then wait long enough for the system to forget before starting again.
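To make the mechanics a little more concrete, here’s a minimal sketch of a trust score with a forgetting factor and an attacker who has figured out its rhythm. The exponentially-weighted update, the 0.9 factor and the 0.5 threshold are my own illustrative assumptions, not a model from the literature.

```python
# A sketch of an on-off attack against a trust score with a forgetting factor.
# Update rule (illustrative): new_trust = FORGET * old_trust + (1 - FORGET) * outcome

FORGET = 0.9       # how strongly the system weights the past
THRESHOLD = 0.5    # below this, the node would be treated as untrustworthy

def update(trust: float, outcome: float) -> float:
    """outcome is 1.0 for good behaviour, 0.0 for bad."""
    return FORGET * trust + (1 - FORGET) * outcome

trust = 0.9
# One bad act, then enough good ones for the system to 'forget', then repeat.
for step, outcome in enumerate([0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1]):
    trust = update(trust, outcome)
    print(step, round(trust, 3), "OK" if trust >= THRESHOLD else "FLAGGED")
# The score dips after each bad act but never crosses the threshold, so the
# attacker never loses the standing that makes the bad acts profitable.
```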

As Sun et al. note, one way to mitigate this attack is to mimic real life and the way society is seen to evaluate trust. As we have learned, it is commonly believed that trust is fragile: hard to build, easy to lose. Thus it takes some time, and a lot of good behaviour, to be seen as trustworthy, but just a few instances of bad behaviour to be seen as untrustworthy.

Having a trust system behave in this way could certainly have the effect of mitigating an on-off attack, but it also has the effect of making newcomers, if they join with a low rating (since they are as yet untrusted), seem no different from those who have just been ‘demoted’. That is, of course, unless you can count the number of interactions someone has had.
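Here’s a sketch of what that asymmetry might look like – the step sizes are my own illustrative choices, not Sun et al.’s actual model – along with the interaction count that lets us tell a genuine newcomer from someone who has just earned a demotion.

```python
# A hedged sketch of 'hard to build, easy to lose': trust climbs slowly on
# good behaviour and drops sharply on bad. Rates and thresholds are illustrative.

BUILD_RATE = 0.05   # slow climb towards 1.0 on each good interaction
DROP_RATE = 0.50    # sharp fall towards 0.0 on each bad interaction

def asymmetric_update(trust: float, good: bool) -> float:
    if good:
        return trust + BUILD_RATE * (1.0 - trust)
    return trust * (1.0 - DROP_RATE)

def describe(trust: float, interactions: int) -> str:
    # Low trust plus few interactions looks like a newcomer; low trust plus
    # many interactions looks like someone who earned their demotion.
    if trust < 0.3:
        return "newcomer" if interactions < 5 else "demoted"
    return "established"
```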

As with all defences, there are nuances here. The problem is figuring out what they might be and not allowing an attacker to do the same thing!

Another attack, the conflicting behaviour attack, mostly affects systems that use a secondary, indirect trust (recommendation trust), since it decreases the recommendation trust you have in the other party. In this attack, the attacker behaves differently to different entities, or groups of entities. Imagine how that might work: if I behave nicely to you, and badly to someone else, there will be conflicting information in the system. If you should go to those others looking for recommendations and they speak badly of me, it is quite possible that you will cease to believe what they have to say about anyone else, or at least discount it. Nefarious, eh?
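Here’s a minimal sketch, with made-up numbers and a naive update rule of my own, of why this works: if a system lowers its trust in a recommender whose reports disagree with its own direct experience, an attacker who treats different parties differently ends up punishing the honest recommenders.

```python
# A sketch of how conflicting behaviour erodes recommendation trust. The
# update rule and the numbers are illustrative assumptions, not a real model.

def update_recommender_trust(rec_trust: float,
                             my_experience: float,
                             their_report: float,
                             rate: float = 0.3) -> float:
    # The further a recommender's report lies from my own experience,
    # the less I trust their future recommendations.
    disagreement = abs(my_experience - their_report)
    return rec_trust * (1.0 - rate * disagreement)

rec_trust = 0.8
# The attacker treats me well (I observe 0.9) but treated the recommender
# badly, so the recommender honestly reports 0.1. I then, wrongly, start to
# discount the honest recommender.
print(update_recommender_trust(rec_trust, my_experience=0.9, their_report=0.1))  # ~0.61
```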

This is a direct attack on the system itself. It doesn’t necessarily benefit me (although it might, somehow – there’s very likely a Machiavellian thing here somewhere) but it does mean that trust in the system is eroded, and this is a very bad thing – as we have already noted, sometimes system trust is the thing that we turn to in order to manage a difficult situation.

How do we deal with it? Well, one way is to not use recommendation trust at all, if you suspect that such an attack might be happening. As you might have just thought to yourself, this does rather negate the potential good of recommender systems, whilst also assuming we could spot such an attack in the first place. I cannot but agree.

People are, in the vernacular of the security professional, the weakest link in the security chain. Regardless of the truth or fairness of this belief, it is true that people, the human part of the trust system, are targets of attacks just as the technical parts are. Let’s have a look at some of them.

Don’t forget that darned handsome prince, by the way, we’re getting to him.

We’ve all heard of phishing attacks, and spear-phishing as a means of precisely targeting them. Quite apart from being more or less sophisticated interface attacks, they, and their relations in social engineering, are also attacks based on trust: you click on links because you trust the systems that you are using to be telling you the right things. See, I told you that system trust was a bit of a challenge! I’ve already mentioned the problem with the way we design security systems in the Trustless Systems chapter so I won’t harp on about it here, but we really ought to do better.

Anyway, the attacks work because either you are not paying attention or because they are extremely sophisticated (possibly both). This brings us to the observation that no-one can pay attention all the time. Moreover, since we are able to spot some attacks quite readily with security tools, the really difficult ones to spot are left to the fallible people to manage for themselves. Oh dear, I did harp on after all.

Defences

So, how can we defend against things like phishing attacks? It’s difficult – again there’s an arms race between the good and the bad folks, and when the good folks get better at spotting things automatically, the bad folks jump over the new walls that have been created. In other words, we probably won’t get rid of them – there will always be something that slips through and isn’t noticed because we are focused on our sick child, or the car crash we saw on the way to work, or whatever. People are people, to paraphrase Dickens: that they are what they are, do not blame them.

Whilst we’re talking about people, let’s look at the most difficult thing to fix, because people just don’t like to be mean. Yes, we’re talking about social engineering. Social engineering, a pithy term for lying, is a set of techniques used through millennia by con-artists to convince people of their authenticity, and often relies on the inability (or discomfort) of people to say ‘no’ in social situations.

As a tool to infiltrate physical locations (and hijack digital ones) it is usually extremely successful, and is often enabled by the over-sharing of its targets online. Examples abound of the successful use of the various techniques in the social engineering toolkit but, at the heart of its success is abuse of trust and social expectation.

How do you defend against things like this? Well, one way that organizations do it is by putting policies in place to ensure that people can say no. After all, if a policy exists that says you can’t leave an unauthorized person alone in the building, then it’s (at least in theory) easier to say “I’m sorry, I have to stay with you.” Other policies that might help are to insist that everyone wears an ID badge which is visible at all times and colour-coded, with certain colours indicating that the bearer has to be accompanied. Easy peasy.

The point? If we give people a good reason to say no, perhaps this makes it easier.

I’ve already noted in the chapter on reputation that we find ourselves in a situation where reputation is an important currency. Reputation can be used to better promote things, sell things, get things done online, and so on. It is also, therefore, a target. If someone’s reputation can be hijacked, the results could be very problematic. It could be used to vouch for someone in order to insert them into a social network to be malicious. It could be used to promote malware, and so forth. It could, then, be “spent” on bad things.

Worse, it could be used to blackmail the person to whom it belongs, since it is such an important online tool in the 21st century, much as it was an important societal tool in previous centuries. Locking someone out of their reputation account, or social network account, could indeed be a highly lucrative business. How might this work? There are many ways. Consider for example stealing the account and asking for money to get it back. Or blackmailing the owner in order to get them to do something for you.

In case you didn’t think it mattered too much, there are companies who specialize in rebuilding the reputation of people who have done something that in retrospect seems a little foolish. The Internet is a terribly harsh place – so many people are quite happy to judge others by some measure which makes the judge look good and the judged look bad, and they are quite happy to be vocal about it, often with disastrous results for the judged whilst the judge feels righteous (I say feels – rarely are they ever such a thing). Consider the case of the person who took a prank picture of themselves at a cemetery. I’m not going to mention the name or the place; you can find it for yourself. The result of this person posting the photo was a Fire X Facebook page, death threats, rape threats, the works. And yes, they lost their job. Still today, almost ten years later, you can find the person’s name and what happened, along with the opprobrium attached to the action.

Frankly, I don’t really care much about whether what was done was right or wrong: the reaction was disproportionate, and quite probably based on self-righteous self-importance and hatred of others who are not us, regardless of what they might do. It’s been said that this is possibly part of the ‘culture war’ that has been stoked by the likes of Facebook – sorry, Meta – and others. Regardless, there are plenty of these kinds of people around and sadly the Internet provides several venues for them to be hateful in. It undoubtedly makes them feel better about their own rather shallow lives.

Don’t be those people.

On the other hand, don’t be the person who has to get their picture out there right away either. A little forethought might be a good thing in our rather judgmental world. Remember: there is a problem in using power tools when you are inattentive.

It’s said the internet doesn’t forget (regardless of right-to-be-forgotten laws), but there are reputation management companies such as ReputationDefender which try to work the algorithms used by search engines to make the ‘bad’ pages appear lower in the results list when a person’s name is searched for. It’s a good thing such companies exist: they are the handsome princes that we are lucky to have. If you’re interested, you can find a good article in the Guardian newspaper about the whole process.

The consequences of lies and anger on sites which encourage ‘friending’ and instant reaction have become ever more sorrowful, for instance in the killing of Samuel Paty. And before you think this is not about trust, it most certainly is – you just have to look at it properly. The father of the teen who lied and began this sorry chain of events reportedly said, “It’s hard to imagine how we got here.”

It really isn’t. That’s the problem.

To be clear, things like attacking and defending reputation are sometimes deliberate and sometimes accidents. The way in which the system itself reacts is disproportionate (usually negatively) because of what it is: a collection of anonymous fools who should know better. The problem of course is that much of what I just wrote is normative. Your version of the truth or what matters will always be different from mine or anyone else’s. The trick with systems like this is to be your own person and not follow the crowd, even if it is noisy. I’ve talked about crowds before in this book but it is good to remind you: the crowd is pretty much as smart as the stupidest person in it.

I’ve said before in various settings that one of the things that makes human beings so very easy to con is that they get embarrassed rather easily. It is embarrassing to be taken for a sucker.

It’s also something you don’t forget easily. I rather like Rhys Rhysson’s little line in Terry Pratchett’s Raising Steam: “I see embarrassment among all of you. That’s good. The thing about being embarrassed is that sooner or later you aren’t, but you remember that you were.”

The other thing about being embarrassed is that it is akin to being ashamed: you really don’t want anyone else to know how it happened. So, human beings, well, we hide our shame and our embarrassment and let the con-artists and blackmailers carry on with what they are doing, not letting other people know, even to warn them, because to do so would reveal we are suckers.

One of the best things about artificial systems is that they can compartmentalize what they should be feeling and turn it into action. Instead of being ashamed or embarrassed about being taken for a sucker, an artificial entity can broadcast it far and wide. The end result? The con-artist or blackmailer is less likely to succeed with other such systems (or even people). This is, in fact, partly why things like the Cyber Information Sharing and Collaboration Program could work.

Whilst I’m talking about this kind of thing, I would be remiss if I didn’t talk a little more about how people are problematic just because they are people. And people live in a society which can be harsh. Society can also attack the people who are in it, and trust systems like reputation or recommendation systems can amplify this. One way this might happen is by stamping on differences, or on new ways of doing things – someone proposing such new things might end up with a particularly bad reputation if those with power and a vested interest in the status quo feel threatened. Likewise, there is always the problem of a vocal minority: that small subset of a population that actually does leave reviews and ratings might skew the findings of a person looking for accurate information.

There’s no real end to these things.

I promised that we’d return to our handsome prince. As you may have spotted, the prince was able to get into Rapunzel’s tower and seduce her regardless of the fact that there was no door and only a window many, many feet up in the air. To put it simply, there is never a totally secure system. There is never a system that can’t be successfully attacked. If there were such a thing, there wouldn’t be so many security consultants. Since we’re paraphrasing books and plays, let’s do one more – as William Goldman’s Dread Pirate Roberts says, “Life is pain, Highness. Anyone who says differently is selling something.”

There’s always a handsome prince, and they can always get where you don’t want them to and do what you don’t want them to.

A final word or two for this chapter is in order. Trust systems can be attacked. There are also some defences. Unfortunately the defences, like most security procedures, somewhat get in the way of the very tasks you might want to accomplish with the system. Let’s go back to our example at the start of the chapter – entering the system as a new person to whitewash your reputation. To defend against this, as we’ve learned, you can require references from friends before being allowed to enter, or deposits, or any number of other things that affect the cost of entering the system. All of these are valid, and every one of them has a side-effect: to make the system less accessible to the bona fide new members.

We can’t get around this: defending systems makes them more challenging to use for the people who want to use them legitimately. Our task is to determine the best defence in any circumstance that has the least impact on the people or computational systems that want to use them legitimately.


  1. Bet you know the story now! Darn, the secret is out!
  2. It’s a thing – traditionally it’s a pretty powerful way to keep intelligent women in their place.
  3. Steal. Slang...
  4. Neither should it – after all, yon husband was stealing her stuff, but nobody thinks about that.
  5. There is so much wrong with this story, but it is what it is.
  6. And as you already found out in the Trustless Systems chapter, the more time goes on, the more likely it is that a blockchain using Proof of Work can indeed be compromised in this way because mining becomes, well, not worth it.
  7. Yes, it is from 2007, but the general attacks haven't changed much, in the sense that they still happen and we still haven't managed to make things much better.

License


Trust Systems Copyright © 2021 by Stephen Marsh is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
