Attribution, Economics, and 'The Criminality Premium'

I started putting together a piece on the concept of a 'criminality premium' some time ago, but was drawn away to other topics for a while.  I was brought back to it after reading a blog post by Phil Kernick, of CQR Consulting, titled "Attribution is Easy."  I'm not sure whether the title is intended to be serious or to provoke debate, but if you're really interested in attribution, the US held hearings into the topic before the Subcommittee on Technology and Innovation, Committee on Science and Technology of the United States House of Representatives, in 2010.  While obviously a few years old now, the content remains excellent and is a must-read for cyber-security folks.

My personal favourite is this submission: Untangling Attribution: Moving to Accountability in Cyberspace, by Robert K. Knake, International Affairs Fellow in Residence, The Council on Foreign Relations.

The following diagram from Knake's submission presents a neat and tidy summary of the key challenges in attribution, varying by the type of incident/attack one is trying to attribute.  I would suggest that attribution isn't "easy", but in some cases is a problem with sub-elements which can definitely be resolved.

attribution.png

While the CQR blog entry example of Alice and Chuck, and Chuck peering over Alice's fence with a telephoto lens, is hardly the epitome of 'cyber war', the mechanism of attribution - based on the "small number of capable actors" (ie who could see the designs) and "using out-of-band investigative and intelligence capabilities" is a pretty good match for the above.  

The CQR blog also included the following line which raised my eyebrows:

"This is an economic market working perfectly - if it is cheaper to steal the design than license it, economic theory drives theft, until the cost of theft is greater than the cost of licensing." 

While the underlying economic premise here may well be correct, it only holds in a world where the only 'cost' of theft to the thief is the actual financial cost of the resources used to steal.  The lack of consideration of the potential for either civil or criminal liability for copyright breach (and whatever other miscellaneous crimes may have occurred in the process) renders the example of little use in the real world.

Where this does become relevant, however, is in the consideration of the concept of a 'criminality premium', which first arose after a discussion about crowd sourced security testing, and bugcrowd (for whom I am an Advisor).  

The realisation that I had is that crowdsourced testing aligns the reward process for the good guys with the reward process for the bad guys.  That is, the bad guys don’t get 'paid' (ie, don't receive an economic reward) for the time they invest in finding vulnerabilities in systems; they only get 'paid' when they find a vulnerability (generally, through exploiting it).  Crowdsourcing aligns the reward system so that the good guys get rewarded for doing the same thing as the bad guys.  

This, in turn, got me wondering about whether this economic similarity in reward structure somehow helps level the playing field because the good guys no longer have the economic advantage of stability of earnings (ie getting paid for time, rather than results) and instead are paid like the bad guys - on delivery of results.

Taking this a step further, if we're presenting the same fundamental task (finding security weaknesses), and the same economic incentive structure to both the good guys and the bad guys, then the only reason someone would choose between the two is the size of the reward.  I also assume that it is not as simple as just converging the size of the 'good guy' reward pool with the potential size of the criminal 'reward pool', but that logically there is a 'criminality premium', in that given two choices:

  1. Earn $50 legally;
  2. Earn $50 illegally for doing exactly the same thing;

Anyone making rational decisions will choose 1, as there is a 'cost' associated with (2): the potential for punishment for the illegal act.

Therefore, the question is simply how big we think this criminality premium is.  If you have a database of 40,000 credit card numbers, which for argument's sake are worth about 50c each on the black market, the potential 'payment' for accessing that database and selling the contents, is $20,000.

How much do you need to pay, for the person identifying the vulnerability allowing access to that data, who is economically rational, to choose the legal disclosure path rather than the illegal disclosure path?  (Acknowledging that this concept requires almost everyone in the world having a tacit ongoing bug bounty program!)

$5,000?  Seems unlikely.

$10,000?  Must be getting close.  $10,000 without any worries about the feds kicking in your door would seem a better idea than $20,000 from illegal exploitation of that data set (since there are all the usual 'non-payment' risks that also arise in the black market). 

$15,000?  Surely.
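To make the trade-off concrete, the choice above can be sketched as an expected-value comparison.  This is a minimal sketch: the non-payment risk, chance of getting caught, and penalty cost below are illustrative assumptions, not real data.

```python
# A sketch of the 'criminality premium' as an expected-value comparison.
# All probabilities and costs below are illustrative assumptions.

def illegal_expected_value(black_market_value, p_nonpayment, p_caught, penalty_cost):
    """Expected payoff of the illegal path: discount the black-market price
    for the risk of non-payment, then subtract the expected penalty."""
    return black_market_value * (1 - p_nonpayment) - p_caught * penalty_cost

def minimum_bounty(black_market_value, p_nonpayment, p_caught, penalty_cost):
    """Smallest legal bounty at which a rational actor prefers disclosure."""
    return max(0, illegal_expected_value(black_market_value, p_nonpayment,
                                         p_caught, penalty_cost))

# 40,000 cards at 50c each = $20,000 gross on the black market
print(minimum_bounty(20_000, p_nonpayment=0.3, p_caught=0.1,
                     penalty_cost=50_000))  # about $9,000 on these assumptions
```

On these made-up numbers the break-even bounty sits well below the $20,000 face value, which is consistent with the intuition that $10,000-$15,000 "must be getting close".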

If we can successfully remove the economic incentive to be a 'black hat' rather than a 'white hat', we're just left with the criminally insane and the purely vindictive (ie not economically motivated) attackers to worry about.  

And whether organisations have a grip on the potential economic value of their data to an attacker, in order to put together a program that is sufficient to take economically rational hackers out of the pool of bad guys, is a different question again.

Crowdsourcing & the Prisoner's Dilemma

One of the common questions that gets raised in the crowdsourced testing process (eg Bugcrowd) is how it's possible to manage the risk of a tester identifying vulnerabilities, then disclosing them or selling them or using them, outside the parameters of the officially sanctioned test.

While crowdsourced testing presents an alternative to penetration testing in many cases, it is somewhat more useful to consider the model in the context of the bug bounty programs run by companies like Google.  

The reason for the distinction is that bug bounty programs are aimed at achieving two related, but distinct, goals:

  1. To have vulnerabilities that would have been identified anyway (ie through unauthorised testing, or through incidental testing for a third party), be responsibly disclosed; and
  2. To have additional vulnerabilities identified by encouraging additional testing, and corresponding responsible disclosure.

That first group is often not considered as a goal of a penetration test - the likelihood that any system of interest is constantly being subjected to security analysis by Internet-based users with varying shades of grey or black hats seems often to be overlooked.  

At the risk of stating the obvious, the reality is that every vulnerability in a given system is already in that system.  Identifying vulnerabilities in a system does not create those weaknesses - but it is true that it may increase the risk associated with a vulnerability as it transitions from being 'unknown' to being 'known' - depending on who knows it.  


To use Donald Rumsfeld's categorisation, we could consider the three groups as follows:

  1. Known Knowns: Vulnerabilities we know exist and are known in the outside world (publicly disclosed or identified through compromise);
  2. Known Unknowns: Vulnerabilities that we know exist, and are unsure if they are known in the outside world (either identified by us; or privately disclosed to us);
  3. Unknown Unknowns: Vulnerabilities that we don't know exist, and are unsure if they are known in the outside world (which is the state of most systems, most of the time).

What crowdsourcing seeks to do, is to reduce the size of the 'unknown unknown' vulnerability population, by moving more of them to the 'known unknown' population so that companies can manage them.  The threat of a 'known unknown' is significantly lower than the threat of an 'unknown unknown'.

Which brings us to the risk that a vulnerability identified through a crowdsourced test is not reported, and hence remains an 'unknown unknown' to us.  The risk of non-disclosure of vulnerabilities identified through a crowdsourced test is effectively mitigated by game theory - it is somewhat similar to the classic 'Prisoner's Dilemma'.

The Prisoner's Dilemma is a classic of game theory, demonstrating why individuals may not cooperate, even if it is in their best interests to do so.  The Dilemma goes like this:

"Two members of a criminal gang are arrested and imprisoned. Each prisoner is in solitary confinement with no means of speaking to or exchanging messages with the other. The police admit they don't have enough evidence to convict the pair on the principal charge. They plan to sentence both to a year in prison on a lesser charge. Simultaneously, the police offer each prisoner a Faustian bargain. If he testifies against his partner, he will go free while the partner will get three years in prison on the main charge. Oh, yes, there is a catch ... If both prisoners testify against each other, both will be sentenced to two years in jail."

Effectively, the options are as presented in this table:

prisonersdilemma.png

The beauty of the dilemma is that, as they cannot communicate, each prisoner must evaluate his own actions without knowing the actions of the other.  And each prisoner gets a better outcome by betraying the other.  For Prisoner A looking at his options: if Prisoner B keeps quiet, Prisoner A has the choice of 1 year in jail (if he also keeps quiet) or no jail time at all (if he testifies against Prisoner B), so testifying gives a better outcome.  And if Prisoner B testifies against him, Prisoner A has the choice of 3 years in jail (if he keeps quiet) or 2 years in jail (if he also testifies)... again, testifying gives a better outcome.

Hence, economically rational prisoners will not cooperate, and both prisoners will serve 2 years in prison, despite that appearing to be a sub-optimal outcome.
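The dominance argument above can be checked mechanically.  A minimal sketch, using the jail terms from the quoted version of the dilemma (lower is better):

```python
# Payoff table from the quoted dilemma, expressed as years in jail.
# years[(a_action, b_action)] -> (a_years, b_years)
years = {
    ("quiet",   "quiet"):   (1, 1),
    ("quiet",   "testify"): (3, 0),
    ("testify", "quiet"):   (0, 3),
    ("testify", "testify"): (2, 2),
}

def best_response(other_action):
    """Prisoner A's best action given B's action (fewest years in jail)."""
    return min(("quiet", "testify"), key=lambda a: years[(a, other_action)][0])

# Testifying is the best response to either choice by the other prisoner,
# so both rational prisoners testify and serve 2 years each.
print(best_response("quiet"), best_response("testify"))  # testify testify
```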

What does this have to do with crowdsourcing?

In crowdsourcing there are obviously far more than 2 participants; but the decision table we are interested in is the one facing any particular tester.  The situation they face is this:

crowdtest2.png

Essentially, each tester only knows the vulnerabilities they have identified.  They do not know who else is testing, or what those other testers have discovered.

Only the first tester to report a vulnerability gets rewarded.

Any tester seeking to 'hold' an identified vulnerability for future sale/exploitation (as opposed to payment via the bounty system) has to be confident that the vulnerability was not identified by anyone else during the test, since otherwise they are likely to end up with nothing - the vulnerability gets patched, plus they don't get any reward.  

Since Bugcrowd tests to date have had large numbers of participants, and have found that over 95% of vulnerabilities are reported by more than one tester, this is a risk that will rarely pay off.

As a result, economically rational testers will disclose the vulnerabilities they find, as quickly as possible.  
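The same expected-value logic can be sketched for the tester's decision.  The 95% duplicate-report figure comes from the Bugcrowd experience cited above; the black-market value and bounty amount are illustrative assumptions:

```python
# Why 'holding' a vulnerability rarely pays in a crowdsourced test.
# If anyone else finds and reports the same bug, it gets patched and
# the holder ends up with nothing.

def expected_hold_value(black_market_value, p_duplicate):
    """Expected payoff of withholding a bug for later sale/exploitation,
    given the probability that another tester also reports it."""
    return (1 - p_duplicate) * black_market_value

bounty = 1_000                                 # assumed bounty on offer
hold = expected_hold_value(10_000, 0.95)       # ~95% duplicate-report rate
print(hold < bounty)                           # disclosing beats holding
```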

For organisations getting tested, cliched as it is, the crowd truly does provide safety in numbers.

Disclaimer: I'm an Advisor to Bugcrowd.  

Penetration testing market analysis: where is all the revenue?

I was recently sitting at the Australian Technology Park having a cup of coffee with Casey Ellis, co-founder of Bugcrowd, chatting about upcoming investor presentations.  We worked our way on to market sizing, and found that we had both had the same experience when attempting to do a 'bottom up' sizing of the penetration testing market in Australia.  The problem that we both came across was that even using fairly conservative numbers as to the amount companies are spending on penetration testing, the amount of theoretical penetration testing revenue sloshing about in the market simply does not align with the revenue of the service providers in this space, or with the number of testers providing these services.

[Incidentally, I had brief flashbacks to my case-study interviews with strategy consulting firms before I started SIFT... where I had awesome questions like: 

  • "Estimate the size of the market for salmon in the United Kingdom"; and
  • "Estimate the number of PCs imported to Australia each year".]

Back to the penetration testing market... 

200-300.png

Let's start with the big guys.

ASX 20

Of the ASX20, which includes companies in financial services, materials/mining, energy, consumer staples, telecommunications and healthcare, my back-of-the-envelope estimates would suggest that the biggest spenders would spend about $4 million annually on penetration testing, and the lowest spenders would spend about $100K annually.  Putting together the expenditure of the whole group, I estimate it works out at pretty close to a neat $20 million across the 20 companies.

And of course, the ASX20 is - as its name suggests - just the 20 largest companies by market capitalisation on the ASX.  There are a total of 2,157 companies listed on the ASX (when I downloaded the list a moment ago), all of whom you could argue have some degree of obligation to their shareholders to ensure the security of their data and systems, with penetration testing being a pretty common response to that obligation.  For argument's sake, let's say less than half of them do anything, so 1,000 companies.  And let's assume that averaged across that many organisations, the average spend on penetration testing is $50K per annum.  That's another $50 million into the annual penetration testing market.

Let's look at some other big-spending sectors where some reasonably neat figures are available (about the size of the sector; if not the amount spent):

Financial Services

I'd estimate that about 60-70% of the ASX20 spend is coming from the financial services companies in the group, who were some of the earliest adopters of penetration testing as a service, and continue to be the 'anchor tenant' for the industry.

According to APRA, at the end of 2012, there were 19 Australian banks, 8 foreign subsidiary banks, and 40 branches of foreign banks.  On top of these, there were 91 credit unions and 9 building societies.  There are also a handful of 'miscellaneous' companies like payments clearing, 'specialist credit card institutions' and 'purchased payments facilities' who are also significant market participants.

So that's an extra 170-ish financial services companies who are probably getting penetration testing completed to a greater or lesser extent.  Even if we rule out the 'branches of foreign banks' (as many of them will have their penetration testing managed by the global head office and hence delivered from overseas), we've still got about 130.  Chop out the group already counted in the ASX20, and we've got about 125.  Now let's be super-conservative and say that they will spend only 10% of the amount that the larger companies will spend; or a meagre $100K per institution.  That's another $12.5 million into the annual penetration testing market.

Take a moment to consider that according to the Australian Bureau of Statistics, at the end of the 2010-11 year, there were over 164,000 businesses in Australia classified as 'financial and insurance services'.  In the calculations above we covered about 200 of them; admittedly the biggest, but it still leaves a vast number who have data to protect, and some of whom certainly have some penetration testing done.  (If just 2% of them spend just $5K each, that's another $15 million into the budget).

Government

Federal, State and even Local Government are covered by a range of policies explicitly requiring independent penetration testing.  One of the most succinct is that of the Victorian Government - SEC STD: Penetration testing which states that:

vicgov.png

According to vic.gov.au's Contacts & Services directory, there are 521 distinct entities within the Victorian Government, for which 259 unique URLs are provided.  For example, the letter 'A'...  

vicdepts.png

As per policy, each of these needs at least annual independent penetration testing.  Let's use our average across the set (covering both infrastructure and applications) of just $20K per annum.  That gives us about another $6 million for our penetration testing budget.

To avoid the pain of digging out the numbers for all the other states and territories, let's make a broad assumption that all the other state and territory governments added together sum to three times the size of Victoria's, in terms of Internet-facing infrastructure (which, given it includes NSW & QLD plus the rest, seems reasonable).  Let's also assume that they have a similar intent to test everything annually.  So that's another $18 million to the budget.  That number feels high, so let's include all local government, councils etc across the country as well in that figure.

And of course there is also Federal Government.  It's possible to download a list of all registered contracts with keywords like 'penetration testing' or 'security testing' at https://www.tenders.gov.au/?event=public.CN.search, but these lists are woefully incomplete when trying to get a picture of the size of the market.  The Federal Government side of things is also somewhat obscured by the fact that at least some of the vulnerability assessment and penetration testing completed is performed by the Defence Signals Directorate (DSD).  Rather than tie myself in knots trying to work it through, I'll take a short-cut and assume it's the same as Victoria: $6 million annually, across all government agencies including the Defence Department.

E-Commerce / Payments

The Payment Card Industry Data Security Standard (PCI DSS) requires penetration testing to be completed at least annually for in-scope systems and organisations. 

There are approximately 200,000 websites in the .au domain space with 'shopping cart' functions.  Many of those will be using PCI compliant externally-hosted shopping carts, so probably don't get penetration testing completed themselves.  But let's say just 10% of e-commerce websites with 'shopping cart' functions get penetration tested each year.  That's 20,000 websites.  Most of these are probably pretty small, so let's say they are just $10K penetration tests.  That's another $20 million in the budget.

We'll assume that the vast number of companies covered by PCI DSS, but who don't have a distinct 'shopping cart' function so aren't included in the figures above, are covered elsewhere in one of the figures we've already looked at.

Education 

There are 44 universities in Australia, and another half-a-dozen miscellaneous self-accrediting higher education institutions (ie theological colleges, maritime college etc), giving us a nice neat 50.

There are then at least another 100 state and territory accredited educational organisations, plus TAFEs and the like.  There are thousands of schools.

Given universities'... errr... 'creative' student population, they have a bigger need than most of the others here.  Let's assume $100K per annum for the universities, which is $5 million in total to the budget.

For the thousands of schools, TAFEs, and other miscellaneous bodies, it's hard to know where to start, so let's just allocate the entire sector $25 million and be done with it.  If there are 5,000 schools across the country that's only $5K of testing per school, so pretty conservative, although I'm cognisant of the fact that far-flung country-shed classrooms are unlikely to be having this testing done.

Information & Communications Technology (inc Software)

One of the larger consumers of penetration testing services is the broad and large ICT industry - and in this I also include companies developing software for sale to others, who therefore have a requirement for security assurance of that product prior to taking it to market.  It is also the fourth largest industry sector contributing to Australian GDP, and employs 291,000 people in Australia.  According to the Australian Bureau of Statistics, at the end of the 2010-11 year, there were 18,854 businesses operating in the Information, Media & Technology classification.

Let's just say 1% of these companies spend $100K annually on penetration testing.  That's close enough to another $20 million.

The rest

And we haven't even touched industry sectors like healthcare, resources (in the midst of all the 'China APT' news), legal, accounting, professional services, let alone the hundreds of thousands of small and medium sized businesses in this country, at least some of whom are spending some money on penetration testing.  

Adding it all up

pentest-source.png

So using this logic, there's a spend of something like $200-300 million on penetration testing, annually, in Australia.  Given the massive slabs of Australian business that are not covered in the figures above, even with the odd wayward assumption or double counting here and there, it seems reasonable.
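Rolling up the back-of-the-envelope figures from the sections above (all of them the rough assumptions stated earlier, in AUD millions per annum) gives a quick sanity check on that total:

```python
# Sector-by-sector estimates from the article's own assumptions,
# in AUD millions per annum. These are rough guesses, not data.
estimates = {
    "ASX20":                     20.0,
    "Other ASX-listed":          50.0,
    "Other financial services":  12.5 + 15.0,   # APRA-regulated + 2% of the rest
    "Victorian Government":       6.0,
    "Other states/territories":  18.0,          # incl. local government
    "Federal Government":         6.0,
    "E-commerce / payments":     20.0,
    "Education":                  5.0 + 25.0,   # universities + schools/TAFEs
    "ICT / software":            20.0,
}

total = sum(estimates.values())
print(f"${total:.1f}M")  # just under $200M, before the uncounted sectors
```

With healthcare, resources, legal, professional services and the SME long tail left out entirely, landing in the $200-300 million range doesn't seem like a stretch.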

610.png

And this is where the trouble starts.  Where is it going?

Many jurisdictions have bodies similar to the ACCC which are responsible for monitoring the misuse of market power.  In some of these jurisdictions, they have put numbers to what 'substantial market power' means, and a 'minimum' threshold for considering a company to have an influential market position.  The best figures I could find are from Hong Kong, which discusses using 40% as an indicator of 'substantial market power', and 25% as the 'minimum' threshold before being particularly interested in a company's market position.  Working with these:

  • Taking the 40% figure, we'd be looking for a company with $80-120 million in penetration testing revenue, annually, in Australia.  They don't exist.  No big deal, it just means we don't have a company with 'substantial market power'.
  • Taking the 25% figure, we'd be looking for a company with $50-75 million in penetration testing revenue, annually, in Australia.  They still don't exist.  So we don't have any real competition concerns in the market, which is healthy.
  • For argument's sake, let's take a 10% figure, so we'd be looking for a company with $20-30 million in penetration testing revenue, annually, in Australia.  I'm still doubtful any service provider in Australia operates at that level.

If I'm right, and there is not a single company in Australia with 10% of the penetration testing market, who is delivering all these penetration tests?  Or is it that the numbers above are fundamentally incorrect because organisations just don't do as much penetration testing as they should (under policy, regulation, best practice etc)?

Let's take another angle on this.  Using $200 million as the market size, and a pretty standard average consulting rate of $1,500/day, there are about 133,333 days' worth of consulting-level penetration testing to be delivered each year, which would require about 610 full-time penetration testers in service provider organisations.  They aren't there either.
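The arithmetic behind that headcount, assuming roughly 220 billable days per tester per year (an assumption; the 610 figure implies a similar utilisation):

```python
# Market size divided by a standard day rate gives delivery days;
# delivery days divided by an assumed utilisation gives headcount.
market_size = 200_000_000        # AUD per annum
day_rate = 1_500                 # standard consulting day rate
billable_days_per_tester = 220   # assumed utilisation per tester per year

delivery_days = market_size / day_rate
testers_needed = delivery_days / billable_days_per_tester
print(round(delivery_days), round(testers_needed))
```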

One thing I am confident of is that there is also an extremely long tail when it comes to suppliers of these services.  That is, there is a very large set of companies who each provide a very low portion of the services overall consumed in the market.  A great many miscellaneous ICT service providers (of which as per above there are many thousands) provide security related services such as penetration testing to their existing client base, with varying levels of quality.  Because of the large numbers, if 1,000 of these companies provide $100K of penetration testing services each, that could make up $100 million of the market total.

Another interesting question is how big the market would be if everyone was following 'best practice'.  At present, there is far from anything like consistency when it comes to the amount that organisations are spending on IT security, let alone on a sub-set of the topic such as penetration testing.  Near-identical banks can quite plausibly be spending amounts on penetration testing that are out by a factor of 10.  Where one bank spends $2 million; another spends $200,000.  There are also a great many companies - including those no doubt in lists like the ASX 200 - who simply do not have penetration testing completed at any meaningful level.

If all Government agencies were following policies and had every system tested annually; and all PCI-relevant organisations had penetration testing completed annually; and all ICT companies had their software and hardware tested before releasing it to market... etc, then the figures above could easily double to $500 million plus, annually.

under10.png

So we have a $200-300 million market (much of which is probably only now coming to market for the first time), with a half-billion dollar opportunity, with no company in a position of market dominance, and an under-supply of qualified penetration testers to deliver the services.  

Pretty compelling.  Want to buy a penetration testing company?  Call me.