Attribution, Economics, and 'The Criminality Premium'

I started putting together a piece on the concept of a 'criminality premium' some time ago, but was drawn away to other topics for a while.  I was brought back to it after reading a blog post by Phil Kernick of CQR Consulting titled "Attribution is Easy."  I'm not sure whether the title is intended to be serious or to provoke debate, but if you're really interested in attribution, the US held hearings on the topic before the Subcommittee on Technology and Innovation, Committee on Science and Technology of the United States House of Representatives, in 2010.  While obviously a few years old now, the content remains excellent and is a must-read for cyber-security folks.

My personal favourite is this submission: Untangling Attribution: Moving to Accountability in Cyberspace, by Robert K. Knake, International Affairs Fellow in Residence, The Council on Foreign Relations.

The following diagram from Knake's submission presents a neat and tidy summary of the key challenges in attribution, varying by the type of incident/attack one is trying to attribute.  I would suggest that attribution isn't "easy", but that in some cases it is a problem whose sub-elements can definitely be resolved.

attribution.png

While the CQR blog entry's example of Alice and Chuck - Chuck peering over Alice's fence with a telephoto lens - is hardly the epitome of 'cyber war', the mechanism of attribution, based on the "small number of capable actors" (ie who could see the designs) and "using out-of-band investigative and intelligence capabilities", is a pretty good match for the above.

The CQR blog also included the following line which raised my eyebrows:

"This is an economic market working perfectly - if it is cheaper to steal the design than license it, economic theory drives theft, until the cost of theft is greater than the cost of licensing." 

While the underlying economic premise here may well be correct, it only holds in a world where the only 'cost' of theft to the thief is the actual financial cost of the resources used to steal.  Ignoring the potential for civil or criminal liability for copyright breach (and whatever other miscellaneous crimes may have occurred in the process) renders the example of little use in the real world.

Where this does become relevant, however, is in the concept of a 'criminality premium', which first arose in a discussion about crowdsourced security testing and Bugcrowd (for whom I am an Advisor).

The realisation I had is that crowdsourced testing aligns the reward process for the good guys with the reward process for the bad guys.  That is, the bad guys don't get 'paid' (ie, don't receive an economic reward) for the time they invest in looking for vulnerabilities in systems; they only get 'paid' when they find a vulnerability (generally, by exploiting it).  Crowdsourcing aligns the reward system so that the good guys get rewarded for doing the same thing as the bad guys.

This, in turn, got me wondering whether this similarity in reward structure helps level the playing field: the good guys no longer have the economic advantage of stable earnings (ie being paid for time rather than results) and are instead paid like the bad guys - on delivery of results.

Taking this a step further, if we're presenting the same fundamental task (finding security weaknesses) and the same economic incentive structure to both the good guys and the bad guys, then the only reason someone would choose between the two is the size of the reward.  I also assume it is not as simple as matching the size of the 'good guy' reward pool to the potential size of the criminal 'reward pool'; logically there is a 'criminality premium', in that given two choices:

  1. Earn $50 legally;
  2. Earn $50 illegally for doing exactly the same thing;

Anyone making rational decisions will choose 1, as there is a 'cost' associated with (2) arising from the potential for punishment for the illegal act.

Therefore, the question is simply how big we think this criminality premium is.  If you have a database of 40,000 credit card numbers, which for argument's sake are worth about 50c each on the black market, the potential 'payment' for accessing that database and selling the contents is $20,000.

How much do you need to pay for the economically rational person who identifies the vulnerability allowing access to that data to choose the legal disclosure path rather than the illegal one?  (Acknowledging that this concept requires almost everyone in the world to run a tacit, ongoing bug bounty program!)

$5,000?  Seems unlikely.

$10,000?  Must be getting close.  $10,000 without any worries about the feds kicking in your door would seem a better idea than $20,000 from illegal exploitation of that data set (since there are all the usual 'non-payment' risks that also arise in the black market). 

$15,000?  Surely.
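To make the break-even reasoning concrete, here is a rough sketch in Python. The 40,000-card data set and the ~50c price are from the example above; the chance of actually being paid on the black market, the chance of prosecution, and the cost of being caught are purely illustrative assumptions, not real figures.

```python
# A rough sketch of the break-even reasoning above. The data-set size and
# per-card price come from the example; the probability of actually being
# paid, the probability of prosecution, and the cost of being caught are
# illustrative assumptions only.

CARDS = 40_000
PRICE_PER_CARD = 0.50      # ~50c each on the black market
P_PAID = 0.7               # assumed chance the black-market buyer actually pays
P_CAUGHT = 0.1             # assumed chance of prosecution
COST_IF_CAUGHT = 30_000    # assumed cost (fines, lost earnings) if caught

def expected_illegal_payout() -> float:
    """Expected value of stealing and selling the data set."""
    gross = CARDS * PRICE_PER_CARD                     # $20,000 face value
    return P_PAID * gross - P_CAUGHT * COST_IF_CAUGHT  # ~$11,000 under these assumptions

if __name__ == "__main__":
    ev_illegal = expected_illegal_payout()
    print(f"Expected illegal payout: ${ev_illegal:,.0f}")
    for bounty in (5_000, 10_000, 15_000):
        verdict = "legal path wins" if bounty >= ev_illegal else "not enough yet"
        print(f"Bounty ${bounty:,}: {verdict}")
```

Under those (made-up) assumptions the bounty needs to be somewhere north of $10,000 before the legal path clearly wins, which roughly matches the intuition above; the real premium depends entirely on how a given finder weighs the risk of punishment and of non-payment.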

If we can successfully remove the economic incentive to be a 'black hat' rather than a 'white hat', we're just left with the criminally insane and the purely vindictive (ie not economically motivated) attackers to worry about.  

And whether organisations have a grip on the potential economic value of their data to an attacker, in order to put together a program sufficient to take economically rational hackers out of the pool of bad guys, is a different question again.

Crowdsourcing & the Prisoner's Dilemma

One of the common questions raised about the crowdsourced testing process (eg Bugcrowd) is how it's possible to manage the risk of a tester identifying vulnerabilities and then disclosing, selling, or using them outside the parameters of the officially sanctioned test.

While crowdsourced testing presents an alternative to penetration testing in many cases, it is somewhat more useful to consider the model in the context of the bug bounty programs run by companies like Google.

The reason for the distinction is that bug bounty programs are aimed at achieving two related, but distinct, goals:

  1. To have vulnerabilities that would have been identified anyway (ie through unauthorised testing, or through incidental testing for a third party), be responsibly disclosed; and
  2. To have additional vulnerabilities identified by encouraging additional testing, and corresponding responsible disclosure.

That first group is often not considered a goal of a penetration test - the likelihood that any system of interest is constantly being subjected to security analysis by Internet-based users with varying shades of grey or black hats seems too often to be overlooked.

At the risk of stating the obvious, the reality is that every vulnerability in a given system is already in that system.  Identifying vulnerabilities in a system does not create those weaknesses - although it may increase the risk associated with a vulnerability as it transitions from being 'unknown' to being 'known', depending on who knows it.


To use Donald Rumsfeld's categorisation, we could consider the three groups as follows:

  1. Known Knowns: Vulnerabilities we know exist and are known in the outside world (publicly disclosed or identified through compromise);
  2. Known Unknowns: Vulnerabilities that we know exist, and are unsure if they are known in the outside world (either identified by us; or privately disclosed to us);
  3. Unknown Unknowns: Vulnerabilities that we don't know exist, and are unsure if they are known in the outside world (which is the state of most systems, most of the time).

What crowdsourcing seeks to do is reduce the size of the 'unknown unknown' vulnerability population by moving more of them into the 'known unknown' population, so that companies can manage them.  The threat of a 'known unknown' is significantly lower than the threat of an 'unknown unknown'.
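As a toy illustration of the taxonomy (the names and types below are invented for the example, not taken from any particular tool), the category a vulnerability falls into turns on two facts: whether we know about it, and whether we can be confident the outside world does.

```python
# Toy illustration of the Rumsfeld-style categories above.
from enum import Enum

class Category(Enum):
    KNOWN_KNOWN = "we know it exists, and it is known externally"
    KNOWN_UNKNOWN = "we know it exists, external knowledge uncertain"
    UNKNOWN_UNKNOWN = "we don't know it exists"

def classify(we_know: bool, known_externally: bool) -> Category:
    if not we_know:
        return Category.UNKNOWN_UNKNOWN
    return Category.KNOWN_KNOWN if known_externally else Category.KNOWN_UNKNOWN

# Crowdsourced testing moves vulnerabilities from UNKNOWN_UNKNOWN to
# KNOWN_UNKNOWN: once reported, we know about them and can fix them.
print(classify(we_know=False, known_externally=False))  # Category.UNKNOWN_UNKNOWN
print(classify(we_know=True, known_externally=False))   # Category.KNOWN_UNKNOWN
```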

Which brings us to the risk that a vulnerability identified through a crowdsourced test is not reported, and hence remains an 'unknown unknown' to us.  That risk of non-disclosure is effectively mitigated by game theory - the situation is somewhat similar to the classic 'Prisoner's Dilemma'.

The Prisoner's Dilemma is a classic of game theory, demonstrating why individuals may not cooperate, even if it is in their best interests to do so.  The Dilemma goes like this:

"Two members of a criminal gang are arrested and imprisoned. Each prisoner is in solitary confinement with no means of speaking to or exchanging messages with the other. The police admit they don't have enough evidence to convict the pair on the principal charge. They plan to sentence both to a year in prison on a lesser charge. Simultaneously, the police offer each prisoner a Faustian bargain. If he testifies against his partner, he will go free while the partner will get three years in prison on the main charge. Oh, yes, there is a catch ... If both prisoners testify against each other, both will be sentenced to two years in jail."

Effectively, the options are as presented in this table:

prisonersdilemma.png

The beauty of the dilemma is that, as they cannot communicate, each prisoner must evaluate his own actions without knowing the actions of the other.  And each prisoner gets a better outcome by betraying the other.  For Prisoner A looking at his options: if Prisoner B keeps quiet, Prisoner A has the choice of 1 year in jail (if he also keeps quiet) or no jail time at all (if he testifies against Prisoner B), so testifying gives a better outcome.  And if Prisoner B testifies against him, Prisoner A has the choice of 3 years in jail (if he keeps quiet) or 2 years in jail (if he also testifies)... again, testifying gives a better outcome.

Hence, economically rational prisoners will not cooperate, and both prisoners will serve 2 years in prison, despite that appearing to be a sub-optimal outcome.
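The same dominance argument can be checked mechanically. Here is a small Python sketch of the payoff matrix, using the sentences from the quoted version of the dilemma (lower numbers are better):

```python
# Payoff matrix from the quoted dilemma:
# (A's choice, B's choice) -> (A's years in prison, B's years in prison)
YEARS = {
    ("quiet",   "quiet"):   (1, 1),
    ("quiet",   "testify"): (3, 0),
    ("testify", "quiet"):   (0, 3),
    ("testify", "testify"): (2, 2),
}

# Whatever B does, A's sentence is shorter if A testifies - so rational
# prisoners both testify and both serve 2 years.
for b_choice in ("quiet", "testify"):
    best_for_a = min(("quiet", "testify"), key=lambda a: YEARS[(a, b_choice)][0])
    print(f"If B chooses {b_choice!r}, A's best move is {best_for_a!r}")
```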

What does this have to do with crowdsourcing?

In crowdsourcing there are obviously far more than two participants, but the decision table we are interested in is the one facing any individual tester.  The situation they face is this:

crowdtest2.png

Essentially, each tester only knows the vulnerabilities they have identified.  They do not know who else is testing, or what those other testers have discovered.

Only the first tester to report a vulnerability gets rewarded.

Any tester seeking to 'hold' an identified vulnerability for future sale/exploitation (as opposed to payment via the bounty system) has to be confident that the vulnerability was not identified by anyone else during the test, since otherwise they are likely to end up with nothing - the vulnerability gets patched, plus they don't get any reward.  

Since Bugcrowd tests to date have had large numbers of participants, and have found that over 95% of vulnerabilities are reported by more than one tester, this is a gamble that will rarely pay off.
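Put in expected-value terms, the tester's choice looks something like the sketch below. The bounty, resale value, and chance of being first to report are illustrative assumptions; the ~95% duplicate-discovery rate is the figure mentioned above.

```python
# Sketch of the hold-vs-disclose decision facing an individual tester.
P_DUPLICATE = 0.95    # chance at least one other tester also finds the bug (figure cited above)
P_FIRST = 0.5         # assumed chance of being first to report if you disclose now
BOUNTY = 1_500        # assumed reward for the first valid report
RESALE_VALUE = 5_000  # assumed later sale/exploitation value if the bug survives the test

ev_disclose = P_FIRST * BOUNTY              # paid only if you report it first
ev_hold = (1 - P_DUPLICATE) * RESALE_VALUE  # the bug only keeps its value if nobody else finds it

print(f"Disclose now:   ~${ev_disclose:,.0f} expected")
print(f"Hold for later: ~${ev_hold:,.0f} expected")
```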

As a result, economically rational testers will disclose the vulnerabilities they find, as quickly as possible.  

For organisations getting tested, cliched as it is, the crowd truly does provide safety in numbers.

Disclaimer: I'm an Advisor to Bugcrowd.