Cybersecurity – keep your head

The Australian Attorney-General’s Department released the 2012 Cyber Crime and Security Survey on 18 February. Reading the press that accompanied it (eg “Cyber criminals struck one in five top Australian businesses”), and similar surveys in past years, you might be forgiven for thinking that we are on the precipice of a cyber armageddon!

There is no denying that the threat, vulnerability and consequence of cyber attack on organisations are increasing steeply.
Luckily, all is not lost: organisations can minimise their attack surface significantly. How? By taking a holistic approach to information security that blends appropriate physical, personnel and IT security mitigations. This, combined with a well-thought-out response and recovery plan, can produce layered security and lead to a resilient organisation able to sail the ‘cyber seas’ with confidence.

In the IT space, doing the basics well can protect against all but the most sophisticated attacks

The survey in question was conducted on behalf of the Australian Computer Emergency Response Team (CERT.au), part of the Attorney-General’s Department. CERT.au’s 450 client organisations were sent the survey and 255 responded. Whilst the survey numbers are small, and therefore become statistically unreliable very quickly, the clients of CERT.au are vital to Australia. Generally, CERT.au client organisations are part of Australia’s critical infrastructure: utilities, telecommunications providers, financial institutions and mining companies.

That said, there are some interesting figures.

  • 22% of respondents (around 55) said that they knew they had had a cyber incident in the last 12 months. Of more concern, 9% of respondents reported that they “didn’t know”.
  • 50% of respondents (ie 127) considered that they had been subjected to targeted attacks.

The most common reported cyber incident was ‘loss of a notebook / mobile device’, followed by virus infection; trojan/rootkit; unauthorised access; theft/breach of confidential information; and denial of service attack. This seems odd: I find it difficult to reconcile the loss of a laptop with hackers sitting in bunkers outside Shanghai targeting key espionage targets. The concerning question is whether respondent companies are only seeing the easy-to-spot attacks (a missing laptop, a computer not working because of a virus) and not the more sophisticated, stealthy attacks that exfiltrate data to foreign lands.

The survey authors also reiterate an oft made point about the ‘trusted insider’ that

“Many companies spend the majority of their IT security budget on protection from external attacks. But the figures above serve as a reminder that internal controls and measures are also important, to ensure that internal risks are also managed”.
This is a relic of the perimeter approach to information security, the ‘us and them’ approach. It no longer works because the network has no discernible boundary in the modern interconnected organisation.

Delving further into the report it is interesting to look at contributing factors to attacks. The relevant table is replicated here. Almost all of the contributing factors can be wholly mitigated, with the possible exception of “attractiveness of your organisation to attack” and arguably “Sophisticated attacker skill which defeated counter-measures in place”.

Source www.cert.gov.au  – Cyber Crime and Security Survey Report 2012

In any case, we sometimes forget that the spectrum of resilience involves prevention, preparation, response and recovery. Organisations need to be agile: they need to work hard to prevent and prepare for loss or compromise of sensitive information, but accept that it is not possible to repel every attack. For this reason, resources need to be allocated to response and recovery.

Another important point is about the vital role of computer emergency response teams (CERTs). CERTs are like the white blood cells in our bodies: they share information which helps their clients protect themselves.
The other way to think about it is that the bad guys take advantage of the information superhighway by sharing information at the speed of light about vulnerabilities in different systems and new attack techniques, so why shouldn’t the good guys? I’ve written about this previously. The problem, always, is that the bad guys have the advantage. As the IRA said after the Brighton bombing in 1984, which almost wiped out the then UK Prime Minister Margaret Thatcher…

“Today we were unlucky, but remember we only have to be lucky once”

So do the hackers.

Alex


Cyber threat, vulnerability and consequence trends 2013

I was asked to give a snapshot about what I thought the big risks for organisations were likely to be in the cyber world in 2013. Below are eight trends that I think are more likely than not to be important in the next twelve months.

1 Boards continue to struggle to consider cyber risks in a holistic manner

With the exception of technology-based companies, most government and private sector boards lack directors with a good understanding of their cyber risks. However, all organisations are becoming more dependent on electronic information and commerce. This brings with it both opportunities and threats which are not well understood by boards. Good risk management depends on the board setting the risk tolerance for the organisation. Risk and reward are two sides of the same coin.

Senior Management must create a culture where they acknowledge that cyber risk is evolving and encourage sharing of incident information with trusted partners in government, police, industry and with their service providers. Moreover, if boards see problems in sharing information, they should lobby governments to  improve the conditions for sharing.

2 BYOD goes ballistic – deperimeterisation is forced upon organisations, even when they aren’t ready

Many organisations are in denial about the risk that ‘bring your own device’ (BYOD) policies expose them to. Together, BYOD and cloud technologies will force deperimeterisation on organisations. The pressure will come primarily from within, as profit centres demand more connectivity to develop new and rapidly changing business relationships.

In the long-term, this is likely to be positive because it will drive down costs and increase flexibility for organisations. But only the resilient will survive the transition. Even resilient organisations will not go through this deperimeterisation unchanged. This process is likely to cause rude shocks for those organisations  and their boards that are not prepared and do not invest prudently in technology and more importantly people to transition smoothly.

3 Attacks that intentionally destroy data

The other threat which may arise is where the attacker intentionally destroys data, usually after stealing it. This may be an act of protest by an issue-motivated group (the opposite of Wikileaks, if you think about it), or it could be undertaken by organised crime against either government agencies or business. Attacks of this nature could cripple many organisations that do not have hot backup; even then, the question of data integrity comes into play. Boards will need to think carefully about the ‘three-cornered stool’ of confidentiality, integrity and availability relative to their organisations.

Ransomware, where data is encrypted by an attacker so that it is inaccessible to the owner until a ransom is paid, will increase. However, the problem is likely to remain primarily at the home user and SME level. This is less due to technical difficulties with the attacks and more because of the standard problem for such scams: how to extract money when the authorities have been alerted and are on the hunt. Technologies such as Bitcoin will find increasing use here.

4 More sophisticated attacks by organised crime and nation states

Here’s an easy one. I am more certain of this prediction than any of the others. We are in a cyber arms race between the attackers and the defenders. The advantage currently lies with the attackers. Since the possibility of an international agreement to curb cyberattacks is negligible, as per my cyber law of the sea post, I see no let-up in 2013.

5 Privacy continues to increase as a concern for governments in most western countries

In Australia, the Parliament passed the Privacy Amendment (Enhancing Privacy Protection) Bill 2012 in November, tightening the Commonwealth Privacy Act 1988, which applies to Commonwealth agencies and private sector organisations. A summary of the changes is here.

In the same month, the European Network and Information Security Agency (ENISA) published a report about the right to be forgotten. This report proposes a regulation that would allow a European citizen to have their personal data destroyed on request unless there were legitimate grounds for retention.

Large multinationals like Facebook are going to continue to face scrutiny from privacy advocates and governments around the world about the data that they collect and mine. The new version of Microsoft Internet Explorer set the cat among the pigeons when it shipped with the ‘do not track’ setting on by default. The Digital Advertising Alliance issued a statement that “Machine-driven do not track does not represent user choice; it represents browser-manufacturer choice”.

It will be interesting to see who wins. Consumers have shown themselves to be willing to choose services which commercialise their information in return for real value. The key here is choice.

6 Failure by government to protect private sector organisations causes more of them to create CERTs

In a number of countries, national Computer Emergency Response Teams have been created with much fanfare, with the aim of sharing information between government and industry about threats to critical infrastructure. In general it hasn’t worked well. Western economies depend on infrastructure that is primarily in the hands of private enterprise, so all the players understand that neither government nor industry can ‘solve cyber’ on their own. In a federal system like Australia or the US, the problem is harder still.

At its heart, the problem is not technical; it is trust. Security and law enforcement agencies have long come to the CERTs with their hands out, asking for information but unwilling to share what they knew. Industry doesn’t trust government or its competitors. Meanwhile, the attackers make hay.

In a similar way to international negotiations, when multilateral agreements fail, bilateral ones can take their place (messily). Increasingly we are likely to see technology dependent organisations setting up their own CERTs and working at the technical level with like organisations, at the same time, bypassing central government CERTs and inward focussed intelligence organisations.

7 Organisations start to concern themselves more with cyber-dependencies

Organisations have long understood in the physical world that if their supply chain is attacked or degraded, their ability to function is impeded. Without wheels from Factory A, Factory B can’t assemble cars. Factory B is therefore keen to ensure that Factory A survives, but it’s also keen to make sure that the wheels from Factory A don’t cause car accidents. A company’s dependencies do not stop at its front door.

This principle needs to be extended actively into the cyber space. Most organisations do not develop all their technology in house. Vulnerabilities in hardware and software operated by their suppliers are of prime importance. Defence companies have long needed to take this into account, but this thinking will expand to more parts of the economy.

8 Developing trusted identities continues to challenge governments and organisations

With deperimeterisation upon us, organisations must assume that attackers can enter their networks. Only through good identity and access management can an organisation potentially protect itself. My post, Trusted Identity – a primer, took a longer look at this trend.

If an organisation has no perimeter, it becomes impossible to work out who should access what unless a good identity system is in place. Governments are realising the same. Essentially, if they are to provide the services that their citizens want, then they must have ways of identifying with certainty what those citizens are entitled to.
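As a sketch of what this means in practice (the resource names, policy and attributes below are entirely hypothetical, invented for illustration, not any real government system), an identity-centric access check grants or denies each request based on verified attributes rather than network location:

```python
# Hypothetical sketch: with no perimeter, every request is authorised
# against the requester's verified identity attributes, not their
# network location. All names here are invented for illustration.

RESOURCE_POLICY = {
    # resource -> attributes a requester must have proven
    "tax_return_lodgement": {"citizen", "identity_proofed"},
    "public_forms": set(),  # open to anyone
}

def authorise(identity, resource):
    """Deny by default; grant only if the identity's verified
    attributes cover everything the resource's policy requires."""
    required = RESOURCE_POLICY.get(resource)
    if required is None:
        return False
    return required.issubset(identity.get("verified_attributes", set()))

alice = {"verified_attributes": {"citizen", "identity_proofed"}}
anonymous = {"verified_attributes": set()}

print(authorise(alice, "tax_return_lodgement"))      # True
print(authorise(anonymous, "tax_return_lodgement"))  # False
```

The point of the design is that the decision depends entirely on who is asking, so it works identically whether the request originates inside or outside the old network boundary.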

In 2013, we will see some results from the US efforts (NSTIC) to pilot programs that develop trusted identities. Business is taking a big part in this, with leadership from the likes of PayPal.

In Australia, there are varying signals coming from the Commonwealth Government. E-health is moving forward, albeit slowly, and so is online Service Delivery Reform, which will also depend on identity at its core. There has not been much news of late about the cyber white paper, which was due in the second half of 2012.

 

Pizza Hut Australia got hacked – what they did right!

Last week Pizza Hut Australia admitted that its cyber-defences had been breached, and unfortunately the attackers did get hold of customers’ names and addresses. Technically, it seems that Pizza Hut itself wasn’t breached; the website provider that hosts its site was.

From a privacy perspective, it’s time to use the ‘pub’ test. That is, what would your level of unhappiness be if the world knew that you liked the meatlover’s supreme with extra cheese and lived at 32 Rosegardens Road, Morphett Vale? I think not very high. The important thing is that the company claims credit card details weren’t stolen.

Pizza – not Pizza Hut

My sources tell me that the hackers didn’t get credit card details because this information is held in a separate and better-protected database by a specialised payment gateway. I hope they’re right!

There are a couple of important lessons for organisations to learn from this breach. Firstly, by developing granular controls that separate data by its value and what it is used for, organisations like Pizza Hut can develop security protocols that give the best mix of data availability and confidentiality. As an example, far more parts of the business benefit from knowing where customers live and who they are than need the credit card details. If the data isn’t separated, the organisation can’t make the best use of the data and ensure security at the same time; it has to choose one or the other. But with granular controls, the marketing department can use addresses and telephone numbers to plan promotions, and the planning department can work out where to open the next store, without either needing to know credit card details.
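To make the granular-controls idea concrete, here is a minimal sketch (the field names, roles and rules are hypothetical, invented for this example, not Pizza Hut’s actual systems) of giving each business role only the fields it needs:

```python
# Hypothetical sketch: separating data by sensitivity and granting
# each business role access only to the fields it needs.

CUSTOMER_RECORD = {
    "name": "J. Citizen",            # low sensitivity
    "address": "32 Rosegardens Rd",  # low sensitivity
    "phone": "08 5550 0000",         # low sensitivity
    "card_number": "4111-XXXX",      # high sensitivity, separate store
}

# Each role gets a whitelist of fields, never the whole record.
ROLE_FIELDS = {
    "marketing": {"name", "address", "phone"},
    "store_planning": {"address"},
    "payments": {"card_number"},
}

def view_for(role, record):
    """Return only the fields this role is entitled to see."""
    allowed = ROLE_FIELDS.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

# Marketing sees contact details but never the card number.
print(view_for("marketing", CUSTOMER_RECORD))
```

In a real deployment the high-sensitivity fields would live in a physically separate, better-protected store (as Pizza Hut’s payment gateway apparently did), but the principle of per-role whitelisting is the same.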

The other point is about risk transfer. Although transferring risk to a third party is an acceptable mitigation according to the risk management standard ISO 31000, organisational reputation can’t be transferred. If your company wants to keep its good name when it gets hacked, it needs to have thought about recovery and restoration. Blaming the web provider won’t cut it with customers if your organisation is anything bigger than the local fish and chippery. Generally, the larger the company, the bigger the reputation; more so for government.

There has been a gradual, but definite change in the way that cybersecurity professionals talk about breaches. Until around 2001, people talked about the possibility of being breached online. Now this has changed from ‘if you get breached’ to ‘when you get breached’.

Essentially, if information is available on Internet facing systems, then it is more a matter of time and luck as to when your system gets done over. This is something security professionals need to communicate with the senior management of the organisation.

For Pizza Hut, this recent event will probably contribute to its longevity and improve its resilience. Research is showing that organisations that undergo small shocks are more ready for the black swans of the future.

However, they should not rest on their laurels. In the aftermath of any breach, an organisation needs to examine how to reduce the risk of further breaches. Some of the questions I ask in such situations are (in no particular order):

  • Does the organisation need to think further about the balance between the confidentiality of personal information and its availability to internet-facing systems?
  • From a marketing and public relations perspective, is the organisation talking to its customers to show that it takes their personal information seriously?
  • What changes does the organisation need to make in terms of digital evidence gathering – was it adequate to deter future attacks? In the long term, the rule of law is the only way to reduce the power of the attackers.
  • Did the organisation understand how to respond to the breach, and does this need regular exercising?
  • Was there an agreed direction from senior management in the event of a breach, so that the technical staff could ‘get on with the job’ as quickly as possible?
  • Are the relationships with service providers adequate, and were the levels of service and the measures taken to recover sufficient?

It is important to recognise that the best value gains for the organisation come not from IT changes like forensics, but business process rearrangement.

Online trusted identities – a primer

“Trust is the currency of the new economy”

You may have heard recently about the efforts by the USA and Australia, among others, to promote trusted online identities. There are also significant efforts in the private sector to develop online trust systems.

Trust will be the currency of the new economy, as it was in the mediaeval village. During the late 19th and early 20th centuries, formal identity credentials gradually replaced more informal systems of identifying the people we interacted with. Increasing population and technology drove this change: it was simply impossible to know everybody you might deal with, and so societies began to rely on commonly used credentials such as driver’s licences to prove identity and ‘place’ in society. Of course, driver’s licences say little if anything about reputation. But think about high-value financial transactions: you establish your identity and then you provide a mechanism to pay for the transaction. Although in most cases it wouldn’t matter who you are, it gives the vendor some comfort that the name on your driver’s licence is the same as on your credit card, and makes it just that bit more difficult to commit fraud on the vendor if the credit card isn’t legitimate. This isn’t the case with interbank lending, however. Most of it is done on a trust basis within the ‘club’ of banks, and it is only later that the financials are tallied up for the day.

You can’t trust who or what is on the other end of the keyboard just because of what they say

What is a trusted ID?

Most simply, trusted online identity systems are the online equivalent of a physical credential such as a driver’s licence, used to give evidence of identity online. They can (but don’t have to) also be the basis for online reputation. They may also say something about the rights of the credential holder, such as that they are a resident of a particular country.

Which countries are developing trusted identity systems?

The program in the USA is called NSTIC – the National Strategy for Trusted Identities in Cyberspace. In Australia, the Prime Minister’s department has been investigating the possibility of a trusted identity system as part of its work on a cyber policy paper which was due to be released ‘early in 2012’. At the same time, Australia has undertaken a number of reform processes – service delivery reform, government 2.0 and e-health – all without necessarily solving the problem of identifying whom they are dealing with online. The USA has gone beyond the planning stage and announced that it will move forward on development. As I mentioned recently, NIST has announced grants for pilot projects in NSTIC.

Some countries have already implemented online identity systems simply by migrating their physical identity cards online and allowing these to be used as trusted online systems. A number of Asian countries, including Malaysia, Hong Kong and Singapore, make proportions of their online services available through such means. Estonia probably leads the world in online service delivery, with around 90% of the population having access to an online ID card and around 98% of banking transactions conducted via the Internet. More information is at the Estonia EU website.

While NSTIC was issued by the USA government, it calls for the private sector to lead the development of an Identity Ecosystem that can replace passwords, allow people to prove online that they are who they claim to be, and enhance privacy. A tall order, which runs the risk of creating an oligopoly of identity systems driven by corporate interests rather than one which suits users. It may be a signal of things to come that Citibank and PayPal have recently been accepted to lead development of the NSTIC. There are also a number of private sector initiatives which come at the issue from a different perspective. Beyond PayPal, Google Wallet and the recently announced Apple Passbook are interesting initiatives which give some of the attributes of a trusted identity.

Why might we want one?

As more services go online from both government and business, and more people want to use them, there will be increased demand for a way of proving who you are online without having to repeat the process separately with each service provider. In some ways this is already happening when we use PayPal to buy products not only on eBay, where it originated, but also on Wiggle.co.uk and many others. The problem is that different services need different levels of trust between the vendor and the purchaser. Thinking about a transaction in terms of risk: the majority of private sector transactions online carry equal risk for both the vendor and the customer, in that the customer risks not getting the product or service and the vendor risks not getting the cash. Here online escrow services such as Transpact, or PayPal, can help.

Where this doesn’t work well is where there is complexity in the transaction. The banking and government services sectors are key areas where this is the case; here the vendor must know their customer. One area might be analysing whether a customer can pay for a service on credit. Another is applying for a passport: you need to prove that you are a citizen and pay a fee. However, the intrinsic value of the passport is far greater than its face value, as shown by the black market price. The cost to the government if it issues a passport to the wrong person is not the nominal fee, but closer to the black market value of the passport.

As a result, we are at an impasse online: for more ‘high trust’ services to go online, the community has to have more trust that people are who they say they are.

Who might need a trusted identity?

If you take the Estonian example, 90% of the population. Most of us carry around some form of identity that we can present if required. In some countries, it is the law that a citizen must carry their identity card with them; in Australia, Canada and other countries, it’s a bit more relaxed. In the end the question will be whether a trusted ID is used by customers and required by vendors. This will be influenced by whether there are alternative ways of conveying trust between people and institutions which are independent of the concept of identity in the traditional sense of the word.

Next time:

What are the security and safety implications of a trusted identity, and a discussion of social footprint and whether it may overtake government efforts.

 

He has shifty eyes, but at least we know who he is

A legislative approach that defines as ‘sensitive’ any biometric measurement shows a lack of common sense and understanding of the science.

A better approach would be to protect those aspects of sensitive personal information (eg sexuality, political opinion, racial/ethnic origin) collected by any means, making legislation independent of technology.

An interesting paper was published in the most recent International Journal of Biometrics. Finnish scientists have developed a biometric measure using saccade eye movements – the involuntary movements in which both eyes move quickly in one direction. Using a video camera to record movement, this biometric measure can be highly correlated to an individual.

What is important is that there are large numbers of these life (bio) measurements (metrics) being discovered as scientists look more closely at human physiology and behaviour.

The use of biometric identification technologies sees biometric information (eg eye movement) converted into a series of digits (a hash), which can be statistically compared against another series of digits previously collected during the enrolment of an individual into a system (eg building access control). A biometric ‘match’ is a comparison of the number derived from the biometric collected at enrolment with the number elicited during verification. In the real world, these ‘numbers’ are nearly always slightly different. The challenge is to build a system that allows a legitimate individual to get a match when seeking verification while ensuring that the bad guy is repelled.

Generally speaking, biometric identity systems are not designed to capture sensitive personal information. Nor is it practical to reverse-engineer the biometric, because of the intentional use of one-way mathematical functions and the degradation of the quantity of data collected. This means that a person with access to this series of digits would be hard pressed to elicit any information that might be used to discriminate against another.
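As a toy illustration of this tolerance-based matching (the feature values and threshold here are invented for the sketch; real systems use far richer feature vectors and carefully tuned thresholds), verification compares the enrolled and presented samples for closeness rather than exact equality:

```python
import math

def distance(a, b):
    """Euclidean distance between two biometric feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def verify(enrolled, presented, threshold=0.5):
    """Accept if the presented sample is 'close enough' to the
    enrolment template. The threshold trades false rejects
    (genuine users turned away) against false accepts (impostors
    let in)."""
    return distance(enrolled, presented) <= threshold

enrolled = [0.12, 0.85, 0.40]     # template stored at enrolment
same_person = [0.15, 0.82, 0.43]  # noisy re-measurement, close by
impostor = [0.90, 0.10, 0.70]     # a different person's features

print(verify(enrolled, same_person))  # True – within tolerance
print(verify(enrolled, impostor))     # False – rejected
```

Setting the threshold is the whole game: too tight and legitimate users are rejected, too loose and the bad guy gets a match.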

The word ‘biometric’ seems to send shivers down the spines of some privacy advocates; I suggest this is because most, if not all, are lawyers rather than scientists. But these biometric systems are just the current technology. Many critics of biometrics forget that, like any tool, it depends on how it is used. The old saying that fire is ‘a good servant but a bad master’ is equally true of biometrics.

What seems lacking in common sense is that legislation in several countries (including Australia) puts up a barrier to the use of biometrics for purposes that protect the privacy and safety of people and organisations.

The information that a biometric collects is not necessarily sensitive information: I don’t really care if you know how often I blink. In fact, a photo of me is more likely to give you information about me that I am sensitive about.

The danger with this approach is that people focus on the technology being ‘bad’ and not on the fact that it is the sensitive information which is potentially harmful.  Biometrics can be privacy enhancing, particularly as they can add additional layers to securing claims about identity and be used to protect individuals and organisations from becoming victims of identity fraud.

Disaggregating biometrics from ‘sensitive information’, and considering technology on the basis of what sensitive information (gender, medical information, religious affiliation etc) it collects about an individual, would more appropriately provide a means of protecting personal information. This would also avoid stifling the practical application of technology.

 

The journal article can be found here

Martti Juhola et al., Biometric verification of subjects using saccade eye movements, International Journal of Biometrics, 2012, 4, 317-337

 



Why the world needs the cyber equivalent of an international law of the sea

Islands of order in a sea of chaos

I’ve been thinking for the best part of the last decade about Internet governance and its impact on national security. In that time, little has changed to improve security for users.

The Internet as we know it today can be compared in many ways to the high seas during the swashbuckling, so-called Golden Age of Piracy, between around 1650 and 1730, when pirates ruled the Caribbean.

Why is this comparison valid? Because on the Internet today, as on the high seas of yesteryear, there are islands of order surrounded by seas of chaos. The islands of order are the corporate networks like Facebook, Google, Amazon and eBay, and those run by competent governments for their citizens. Between these orderly Internet islands, however, are large areas where there are no rules and where pirates and vagabonds thrive. An additional similarity is that some of the most competent and successful historical pirates operated with explicit support from countries seeking to further their national aims.

Even those who govern the orderly Internet islands are subject to bold attacks from agents of chaos if they are not vigilant! Witness the compromise of LinkedIn earlier this year; and very few governments have escaped some significant compromise affecting their operations.

On the high seas, piracy has been reduced significantly since the 18th century. With the exception of places like the coast of Somalia, there are now far fewer places with a significant piracy problem. There are a number of reasons for this success, not least the development of the law of the high seas.

In cyberspace, the world also needs to move on from the swashbuckling days. Internet criminals need to be hunted down in whichever corner of the Internet they lurk. Additionally, the notion that some countries could give free rein to local cyber-criminals, as long as they don’t terrorise their own countrymen and women, is anathema in the 21st century.

The long-term solution, in my view, remains a cyber version of the UN Convention on the Law of the Sea. UNCLOS is the international agreement, concluded in 1982, that governs the behaviour of ships in international waters. Among other things, the convention deals with acts of piracy committed in international waters.

In the same way, a similar international cybercrime convention could deal with acts where, for example, the victim was from the USA, the criminal from the Vatican, and the offence committed on a server in South Korea.

It would seem that at the moment any move towards a UN convention has gone off the boil. A proposal was shot down in 2010 over disagreements around national sovereignty and human rights. As well, the European Union and USA position was that a new treaty on cyber crime was not needed, since the Council of Europe Convention on Cybercrime had already been in place for 10 years and had been signed or ratified by 46 countries since 2001.

As I recently noted, wariness by both USA and China continues and means that any international agreement which includes Western countries and the BRICs will be a long time coming. China, Russia and other countries submitted a Document of International Code of Conduct for Information Security to the United Nations in 2011 which the USA seems to have dismissed out of hand.

A code of conduct is nice and the Council of Europe convention is a good start, but they need to be supported by some sort of international cyber ‘muscle’ in the long term.

However, all is not lost. In the meantime, working to coordinate the defences of the orderly organisations that I wrote about before is a practical step that organisations and governments should be doing more of. This is the cyber equivalent of escorting ships through dangerous waters and passing them from one island of order to another.

There’s a good reason for this, and here’s the resilience message. An organisation’s cyber-security does not begin and end at its firewall or outer perimeter. Whilst in most cases an organisation cannot force the organisations it is connected to to change, it can maintain vigilance over areas outside its direct sphere of control. This gives the organisation more time to adapt to its changing environment and, of course, a chain is only as strong as its weakest link.

The other step to be taken is to help emerging nations, and organisations with poor online security, to improve their cyber-defences. If the first step was like escorting ships between the orderly islands, this second step is the equivalent of helping nearby islands to improve their battlements so that the pirates don’t take over and then attack us! This work has been going on for some time. I chaired a number of seminars on cyber security and the need for computer emergency response teams for the APEC Telecommunications and Information Working Group, which began this work in 2003. It has been carried on by a number of countries around the world in fits and starts, but we need more.

Alex

In cyberspace, if you don’t share, you don’t survive

This might seem a brave call when talking about cyber-security threat information. But the truth is that the cyber world forces a new paradigm on security. Tools that are familiar in the offline world for providing elements of security, such as obscurity, tend to benefit attackers rather than defenders, because the very advantages of the online world, such as search and constant availability, are also its greatest weaknesses. What matters most online is not what you know, but how fast you know it and make use of the information you have.

I’ve been reading the Cyber Security Task Force: Public-Private Information Sharing report, and I think it’s worth promoting what it says. It presents a call to action for government and companies in the US to improve information-sharing to counter the increasing risk of cyber attacks on organisations, both public and private. The work was clearly done with a view to helping the passage of legislation being proposed in the USA; however, most, if not all, of the findings could be extrapolated to every advanced democracy around the world.

If you are familiar with this field, much of what has been written will not be new, as we have been calling for the sorts of measures proposed in the report since at least 2002. That does not mean the authors haven’t made a valuable contribution, because they have made recommendations about how to solve the problem. Specifically, they recommend removing legislative impediments to sharing whilst maintaining protections on personal information.

According to the authors, from October 2011 through February 2012, over 50,000 cyber attacks on private and government networks were reported to the Department of Homeland Security (DHS), with 86 of those attacks taking place on critical infrastructure networks. As they rightly point out, the number reported represents a tiny fraction of the total occurrences.

As is the case in many areas of security, the lack of an evidence base is at the core of the problem, because it creates a cycle where there is resistance to change and adaptation to fix the problems efficiently and effectively.

Of course, the other thing that happens is that organisations don’t sustain an even level of focus or resourcing on the problem because, most of the time, like an iceberg, the part of the problem that you can ‘see’ is comparatively small.

To make matters worse, new research tells us that we are optimistically biased when making predictions about the future; that is, people generally underestimate the likelihood of negative events. So without ‘hard’ data, and given the choice between underestimating the size of a problem and overestimating it, the humans who make decisions in organisations and governments are likely to underestimate the likelihood of bad things happening. You can find out more about the optimism bias in a talk by Tali Sharot on TED.com.

The cost differential for organisations that don’t build in cyber security, are unable to mitigate risks and then need to recover from cyber attacks is significant. This cost is felt most by the organisations affected, but its effects ripple across an economy.

So what can be done to break this cycle of complacency? Government and industry experts have long spoken about the need for better sharing of information about cyberthreats. I was talking in public fora about this ten years ago.

The devil is in the detail of the ‘what’ and the ‘how’. Inside the ‘what’ is also the ‘who’. I’ll explain below.

What should be shared, who should do the sharing – and with whom?

Both government and industry generally agree enthusiastically that there should be sharing, but each thinks the other party should be doing more of it, and then comes up with any number of excuses as to why it can’t! For fans of iconic 80s TV, it reminds me of the Yes Minister episode where the PM wants a policy of promoting women, and in cabinet each minister enthusiastically agrees that it should be done whilst explaining why it wouldn’t be possible in his own department. In government, the spooks will tell you that they have ‘concerns’ with sharing, ie they want to spy on other countries and don’t want to give up any potential tools. It’s no better in industry: companies have no incentive to share specific data, because their competitors might gain some kind of advantage.

The UK has developed perhaps the most mature approach to this. UK organisations have been subject to a number of significant cyber attacks, and government officials attempt to ‘share what is shareable’. The ability to do this may stem from the close relationship between the UK government and industry, developed initially during the Troubles in Northern Ireland and maintained in one form or another through the terrorism crises of this century. It remains to be seen whether the government can maintain these relationships, and whether UK industry will see value in them, as the UK and Europe struggle with short-termism brought on by the fiscal situation.

Australia has also attempted to share what is shareable; however, as the government computer emergency response team sits directly within a department of state, this is very difficult. It seems that the CERT does not have a clear mission. Is it an arbiter of cyber-policy and a disseminator of information, or an operational organisation that facilitates information exchange on cyber issues between government and industry?

This quandary has not been solved completely by any G20 country. Indeed, it will never be solved; it is a journey without end. It is possible that New Zealand has come closest, but this seems to be because of the small size of the country and the ability to develop individual relationships between key people in industry and government. Another country doing reasonably well is South Korea – mainly because it has to: it has the greatest percentage of broadband users of any country, and North Korea is just a telephone line away. The Korea Internet & Security Agency (KISA) brings together industry development, Internet policy, personal information protection, government security, and incident prevention and response under one umbrella.

For larger countries, I am of the view that a national CERT should be a quasi-government organisation that is controlled by a board comprised of:

  • companies that are subject to attack (including critical infrastructure);
  • network providers;
  • government security agencies; and
  • government policy agencies.

In this way, the CERT would strive to serve the country more fully. There would be more incentive from government to share information with industry and industry to share information with government. With this template, it is possible to create a national cyber-defence strategy that benefits all parts of the society and provides defence-in-depth to those parts of the community that we are most dependent on, ie the critical infrastructure and government.

Ensuring two-way information flow within the broader community and with industry has the potential to provide direct benefits for national cyber-security and for the community more broadly. Firstly, by helping businesses and the community to protect themselves. Secondly, by helping government, telecommunications providers and the critical infrastructure to develop sentinel systems in the community which, like the proverbial canary in the coal mine, signal danger if they are compromised. Thirdly, by improving the evidence base through increased quality and quantity of incident reporting – which is so often overlooked.

Governments can easily encourage two-way communication by ‘sharing first’. Industry often questions the value of information exchanges: they turn up to these events at their own expense, some government bigwig opens proceedings and says ‘let there be sharing’, and then there is silence, because the operatives from the three-letter security agencies don’t have the seniority to share anything and the senior ones don’t understand the technical issues. I am not the first person to say that in many cases (I think 90%), the technical details that can help organisations protect their networks do not need to include the sensitive ‘sources and methods’ discussion. By that I mean: if a trust relationship exists, or is developed, between organisations in government and industry, and one party passes a piece of information to the other and says “Do x and be protected from y damage”, then the likelihood that the receiving party will act depends on how much it trusts the provider. Sources and methods information is useful in determining trustworthiness, but it is not usually essential to fixing the problem.

As the Public-Private Information Sharing report suggests, many of the complex discussions about information classification, over-classification and national security clearances can be left behind. Don’t get me wrong; having developed the Australian Government’s current protective security policy framework, I think there is a vital place for security clearances and information classification. However, both are vastly overrated in a speed-of-light environment where the winner is not the side with the most information, but the side that can operationalise it most quickly and effectively. Security clearances and information classification get in the way of this, and potentially deliver more benefit to the enemy by stopping the good guys from getting the right information in time. We come back to the question of balancing confidentiality, integrity and availability – sensitive information is more perishable than ever.

How should cyber threat information be shared?

This brings me to the next area of concern: how information is shared between industry and government or, more importantly, the speed with which it is shared. In an era when cyber attacks are automated, defence systems are still primarily manual and, amazingly, in some cases rely on paper-based systems to exchange threat signatures. There is an opportunity for national CERTs to significantly improve the current systems by sharing unclassified threat information automatically. Ideally, these systems would be designed so that threat information goes out to organisations from the national CERT, and information about possible data breaches returns immediately to be analysed.
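To make the idea concrete, here is a minimal sketch of the receiving end of such a system: an organisation pulls a machine-readable indicator feed and checks its own logs against it automatically, rather than waiting for a paper advisory. The feed format and field names here are my own illustrative assumptions, not any real CERT’s interface.

```python
import json

def parse_feed(feed_json):
    """Parse a (hypothetical) JSON indicator feed into a set of bad IPs."""
    feed = json.loads(feed_json)
    return {item["indicator"] for item in feed["indicators"]
            if item["type"] == "ipv4"}

def scan_logs(log_lines, bad_ips):
    """Return the log lines that mention a known-bad IP address."""
    return [line for line in log_lines
            if any(ip in line for ip in bad_ips)]

# Example feed as a national CERT might publish it (assumed format).
feed = json.dumps({"indicators": [
    {"type": "ipv4", "indicator": "203.0.113.7"},
    {"type": "domain", "indicator": "bad.example"},
]})

hits = scan_logs(
    ["GET /login from 198.51.100.1", "GET /admin from 203.0.113.7"],
    parse_feed(feed),
)
# 'hits' now contains only the log line touching the known-bad address.
```

In a real deployment the feed would arrive over an authenticated channel and the matches would flow straight back to the CERT for analysis – that return path is the part most current systems lack.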

Of course, the other benefit of well-designed automated systems is that they could automatically strip customers’ private information out of any communications. As with the sources and methods information, people’s details are not important (spear phishing being an exception). In most cases, I’d rather have a machine automatically removing my private details than some representative of my ‘friendly’ telecommunications provider or other organisation.
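A minimal sketch of what such automated redaction might look like: strip anything that resembles an email address or phone number from an incident report before it is shared. The patterns are illustrative assumptions, not production-grade PII detection.

```python
import re

# Crude, illustrative patterns for two common kinds of personal detail.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def redact(text):
    """Replace email addresses and phone numbers with placeholders."""
    text = EMAIL.sub("[email removed]", text)
    return PHONE.sub("[phone removed]", text)

report = "Victim jane.doe@example.com called +61 2 9999 0000 about phishing."
clean = redact(report)
# The shared version keeps the incident facts but not the person's details.
```

The point is that the redaction is deterministic and happens before any human sees the report – exactly the property you want when the sharing pipeline is automated end to end.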

These things are all technically possible; the impediments are only organisational. Isn’t it funny: people are inherently optimistic, but don’t trust each other. It’s surprising civilisation has got this far.

CERTs – Computer Emergency Response Teams


References

1. Sharot, T., Guitart-Masip, M., Korn, C., Chowdhury, R. & Dolan, R. ‘How Dopamine Enhances an Optimism Bias in Humans’. Current Biology, July 2012. www.cell.com

2. Yes Minister, Series 3, Episode 1, ‘Equal Opportunities’ (1982)

A useful trick to add a nick to your Google+ account leads to the dark side

I’ve been updating the Resilience Outcomes Google+ site. A friend asked me what the site URL is, but Google in its wisdom has not made this easy. The site reference is https://plus.google.com/103380459753062778553 !!! What a mouthful, and not really a set of numbers I want to dedicate my diminishing neurons to. A partial answer is http://gplus.to/ . Using this site you can get a nickname, or vanity URL, for your Google+ site.

So now I can give out the URL http://gplus.to/resilienceoutcomes and you get to the Resilience Outcomes Google+ site!

My friend Karl H. is going to point out that this is not very resilient, because although Google has a reputation for fairly bulletproof infrastructure, I know nothing about gplus.to. Karl, you’re absolutely right – it demonstrates why thinking about resilience is so difficult… The dark side has cookies! Literally, in the case of gplus.to.

http://gplus.to/ is almost certainly more fragile than google.com, or .Google, or whatever it will call itself next month. If http://gplus.to/ goes down, then all of Google’s efforts to support its systems are for naught in my case. As such, I face on a small scale the choice faced by all who wish to become more resilient while maintaining mainstream security. Do I increase the accessibility of the site whilst reducing integrity and confidentiality, or not? The question is not an either/or here, and rarely is it ever. In my case, the answer may be that http://gplus.to/ is used when friends ask me verbally what the site is, but that I always write the full URL in posts.

🙂