A Pact with the Devil – Bond and Danezis, June 6th 2006
With thanks to Joshua Corman and David Etue for pointing this paper out to me during discussions at the GOTO London conference.
Does that app really need all those permissions? And why can’t permissions be finer-grained or temporary? For example – I’m happy for you to access this photo, but I might not want you to have access to all my photos for all time; I’m happy for you to use the camera to capture an image, but once it’s captured I don’t want you to be able to use the camera whenever you want from now on, and so on. Reading today’s paper choice, ‘A Pact with the Devil,’ makes you think about these kinds of issues in a new light!
In studying how virus propagation interacts with user incentives, we discover the propagation continuum. Viruses, botnets, peer-to-peer systems, and conventional applications are not dichotomous: they are data points on a scale of malice. They all attempt to survive on end-user systems by providing positive functionality, imposing penalties for removal, and often exhibiting some negative side effects. Do the benefits of a peer-to-peer client such as the ad-ware ridden KaZaA outweigh the problems? Will a user coexist in relative harmony with a search-hijacking, click-counting tool because of the risk of damage to their OS during a fudged uninstallation? When someone sends you a Word document that you cannot read with your viewer, is Microsoft Word propagating itself in the same way?
Bond and Danezis present a model whereby malware can provide enough incentives for a user to willingly keep the malware on their system (and enough disincentives to deter them from uninstalling it).
Users can therefore enter into “a pact with the devil” that confers on them certain powers, which the virus shares with them, but also, as they soon realise, some heavy responsibilities. Not surprisingly, it is the darker human traits that such malware seeks to foster and exploit – greed, curiosity, need for power, fear, shame, lust, to name but a few.
This family of viruses is called ‘Satan Viruses:’ they carry the malice of the devil and will employ the most ruthless techniques to achieve their ends. Satan Viruses use two fundamental design principles: carrot, and stick.
The Carrot Principle:
First, the virus convinces the user to execute it by conferring a certain advantage on him or her. This advantage is real and tangible: it is backed up by evidence that clearly demonstrates it can be provided, and should ultimately satisfy the user. There is no deception involved at this stage – the user knowingly “sells his soul to the devil” to acquire this advantage. As long as he honours his side of the “pact”, the advantage is provided. This first principle gives the user incentives to execute the virus and keep it alive.
The Stick Principle:
Second, the virus, in its co-existence with the user, gathers information about the user’s activities, lifestyle and habits. It then tells the user that if an attempt is made to remove the virus, the gathered information will be used to hurt the user. This provides further disincentives for the user to remove the virus
(or, somewhere along this continuum, the virus simply sells the gathered information, or uses it to target advertising etc.).
The paper gives a worked demonstration of an instance of a Satan Virus. It assumes a PC setting, with access to another user’s files as the carrot, and revelation of the access to the party spied upon as the stick. There are four key phases: temptation, monitoring, blackmail, and propagation.
The virus sends an email from Alice to Bob, offering access to all of Alice’s emails and documents. To make the offer more enticing, extracts from these documents containing Bob’s name, or other interesting keywords, can be included. Bob can choose to accept this offer by downloading the virus (which can be hosted on Alice’s computer or bundled with the email) and executing it. As a result he gains full access to Alice’s documents, with a search interface to help locate files of interest.
The challenge here is incentive design. The virus architect must consider both appropriate lures, and calibration to spread with the correct amplifying effect – e.g. up and down a management hierarchy, or sideways across peers.
As soon as the virus has installed itself, it starts recording everything that Bob does, and in particular his accesses to Alice’s information. Crucially, this includes the search queries performed as well as logs of the documents retrieved. This information is sent back to Alice or another infected third party (which can be known through Alice) for safekeeping, but it is not revealed. The key intuition is that the virus avoids the hard problem of automatic detection of ‘blackmail’ material on Bob’s computer, by collecting evidence of the unsavoury act of spying that it has tempted Bob to commit.
During the monitoring phase, if Bob suspects he is also being spied on, he may try to delete incriminating information from his computer. This is perfect for the virus, as it tells it what might be the most valuable information! The ‘virtuous sinner’ paradox applies here: “if Bob does not perceive that he has something to hide, genuinely or because he has deleted the information, he will be more tempted to spy on Alice, since he is less worried about it happening to him.”
When a critical mass of incriminating evidence of unauthorised accesses from Bob to Alice’s machine has been gathered, the virus emails Bob with a warning. The warning specifies that if an attempt is made to remove the virus the information gathered will be revealed…
Propagation can take two paths: voluntary or involuntary.
Bob is asked by the virus to provide a target to which it might spread. Bob selects Charlie. Bob is told that Charlie would gain the ability to read Alice’s documents (not Bob’s), and that he himself would gain the ability to read Charlie’s documents. The ‘invitation’ will appear to come from the virus residing on Alice’s machine, in the form of an email tempting Charlie to read her documents. Thus the incentives are aligned for Bob to assist, and the virus propagates…

If the virus has not propagated enough through the addresses provided by Bob, it considers that Bob has breached his side of the “pact”, and sends itself to Bob’s contacts, as harvested from emails, contact lists, documents, etc. The virus now encourages recipients to install it, using the incentive of access to Bob’s files.
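The four phases of the worked example can be summarised as a simple state machine. Here is a minimal, purely illustrative sketch in Python – the phase names come from the paper, but the transition function and the evidence threshold are my own assumptions, not anything the authors specify:

```python
from enum import Enum, auto

class Phase(Enum):
    TEMPTATION = auto()   # lure Bob with access to Alice's files
    MONITORING = auto()   # log Bob's searches and document accesses
    BLACKMAIL = auto()    # warn Bob: remove me and the logs are revealed
    PROPAGATION = auto()  # spread to Charlie, voluntarily or not

def next_phase(phase: Phase, evidence_items: int, threshold: int = 10) -> Phase:
    """Advance the lifecycle once enough incriminating evidence exists."""
    if phase is Phase.TEMPTATION:
        return Phase.MONITORING        # Bob has executed the virus
    if phase is Phase.MONITORING and evidence_items >= threshold:
        return Phase.BLACKMAIL         # critical mass of evidence reached
    if phase is Phase.BLACKMAIL:
        return Phase.PROPAGATION       # ask Bob for a new target
    return phase                       # otherwise, keep gathering evidence
```

The point of the sketch is that the blackmail transition is gated purely on evidence volume – the virus never needs to understand the content it has logged, only to count Bob’s acts of spying.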
Build your own Satan Virus
The authors provide a taxonomy of rewards and threats that could be used in constructing a Satan virus. Reward possibilities include:
- Threat enactment – “I’ll carry out threat X on Y and you can watch”
- Privacy invasion – “You can browse X’s hard drive / read X’s email / watch webcam or mic”
- Revelation – “I’ll tell you what X said to Y, I’ll tell you what I found on X’s hard drive”
- Fabrication – “You can forge emails from Y”
- Mischief – “You can take control of X’s PC”
- Virtual Goods – “You’ll get tons of free porn / software”
- Real-world Goods – “You’ll get free goods to your door”
- Innovation – “You can use this really cool feature”
- Unsubstantiated – “You’ll get 7 years bad luck…”
(and the modern one, “This app is free in the app store” 😉 ).
Threats can include data destruction, privacy invasion, revelation, fabrication (“I’ll make up an email telling X you slept with Y…”), desecration, framing, hardware damage, security exposure, real-world actions (e.g. ordering on your credit card), unsubstantiated threats, nuisance threats, unwanted goods, reporting, access denial, or any combination of the above.
Once a Satan Virus has awareness of the social structure of the human network above its hosts, it can start to approach humans to help it spread.
To do this, the virus spawns “demons” – communicating conversational clients which interact with a specific user via email or instant messaging, with the goal of obtaining their assistance, and additionally enacting whatever overall payload the virus may carry. The demons manipulate users through a system of rewards and threats, harvested from the compromised nodes the virus controls. If a user does not react to rewards, maybe she will react to threats. When a computer is compromised, the virus scans its content and produces “threat modules” and “reward modules” based on the content and resources found. These broadcast their existence via the peer-to-peer system, and the demons barter for them.
Each demon binds to one person, is instantiated when the system determines a good chance of success in manipulating that person, and is responsible for controlling their fortunes.
Nodes accept incoming reward and threat modules, which may be time locked, or locked with some cryptographic key. When a reward package is unlocked it may confer a simple benefit, for instance releasing an MP3 or pornographic image to the owner of the host, or it might release some more sophisticated reward, such as access to the email archives of another user close by in the social network.
Nodes continuously scan for potential new outgoing reward and threat modules to construct.
Maybe the only long term solution is to give users the proper tools to compartmentalise their PCs, and maintain their privacy. Initiatives such as trusted computing may both help and hinder – they could provide hard boundaries to keep a virus from illicit access to address books and privately marked files, however they could also protect the virus, rendering it invulnerable to reverse engineering, and entrenching it more deeply.
Until that day…
Once research into phishing, anti-spam and anti-spyware technology integrates more fully with virus defence, we may enter a new realm of content approval and adjudication. Therein lie fascinating new avenues for research that could help users properly gain control of their PCs, and understand what they are letting themselves in for when they click ‘yes’ – be it on temptation from the Satan Virus, or on a twenty-page licence agreement. One thing remains certain: until this day comes, running other people’s software will remain an activity to be undertaken with caution.