Resilience Blogpost

Complexity and organisational resilience

On the face of it, complex systems might seem more resilient than simple ones, because they can have more safeguards and more redundancy built in.

However, this is not supported by real-world observation. Simply put, more complexity means more things can go wrong. In both nature and human society, complex controls work well at maintaining systems within tight tolerances and in expected scenarios. However, complex systems do not cope well when they have to respond to circumstances which fall outside their design parameters.

In the natural world, one place where complex systems fail is the immune system. Anaphylactic shock, where the body over-reacts because of an allergy to a food such as peanuts, is a good example. Peanuts are, of course, not pathogens; they are food, and the immune system should not react to them. However, our immune systems are made up of a number of complex systems built on top of each other over many millions of years of evolution. One of these systems is particularly liable to overreact to peanuts. In the worst case this causes death through anaphylaxis: the release of chemicals which are meant to protect the body, but which do exactly the opposite. It is an example of a safety system becoming a vulnerability when it is engaged outside normal parameters.

We are beginning to see the resilience of complex systems such as the Great Barrier Reef severely tested by climate change. Researchers have found that the reef is made of complex interactions between sea fauna and flora, built upon other still more complex interactions. This makes it nigh on impossible for researchers to tie particular effects to exact causes, because the causes are so many and varied. Whilst researchers can confidently say that climate change is having a negative effect on the coral, and that bleaching will become more common as the climate warms, they cannot say with much certainty how great other compounding effects, such as excess nutrients from farm runoff or the removal of particular fish species, might be. This is not a criticism of the science, but an observation that predicting the future with absolute certainty, when there are multiple complex factors at play, is extremely difficult.

These natural systems are what some might call ‘robust yet fragile’. Within their design parameters they are strong and have longevity. Such systems tend to be good at dealing with anticipated events such as cyclones in the case of the Great Barrier Reef. However, when presented with particular challenges outside the standard model, they can fail.

Social systems and machines are not immune from the vulnerabilities that complexity introduces, and can likewise be strong in some ways and brittle in others.

The troubles with the global financial system are a good example. Banking has become very complex and banking regulation has kept up with this trend. That might seem logical, but the complex rules may in themselves be causing people to calibrate the financial system to meet the rules, focussing on the administrivia of their fine print rather than the broad aims the rules were trying to achieve. As an example, one important set of banking regulations is the Basel framework. The Basel I banking regulations were 30 pages long, the Basel II regulations were 347 pages and the Basel III regulations are 616 pages. One McKinsey estimate suggests that compliance might cost a mid-sized bank as many as 200 jobs. If a bank needs to employ 200 people to cope with increased regulation, then the regulator will need some number of employees to keep up with the banks producing more regulatory reports, and so the merry-go-round begins!

The British banking regulator Andrew Haldane is now one of a number of people who question whether this has gone too far, and whether banks and banking regulation have become too complex to understand. In an interesting talk he gave in 2012 at Jackson Hole, Wyoming, titled ‘The Dog and the Frisbee’, Haldane uses the analogy of a dog catching a frisbee to suggest that there are hard ways and easy ways to work out how to catch one. The hard way involves some complex physics; the easy way involves the simple rules of thumb that dogs actually use. Haldane points out that dogs are, in general, better at catching frisbees than physicists! I would also suggest that the chances of predicting outlier events, what Nassim Nicholas Taleb calls ‘Black Swans’, are greater using the simple predictive model.
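Haldane’s point can be illustrated with a toy experiment (my own sketch, not from his paper): take a fundamentally simple process observed with noise, then compare a crude rule of thumb against an over-engineered model when each has to predict the underlying signal. The numbers and models here are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten noisy observations of a simple underlying process (y = 2x).
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.2, x_train.size)

# The unseen future: the true signal over the same range.
x_test = np.linspace(0, 1, 100)
y_test = 2 * x_test

simple = np.polyfit(x_train, y_train, 1)    # the dog's rule of thumb
complex_ = np.polyfit(x_train, y_train, 9)  # the "exact physics" of the sample

err_simple = np.mean((np.polyval(simple, x_test) - y_test) ** 2)
err_complex = np.mean((np.polyval(complex_, x_test) - y_test) ** 2)

# The elaborate model fits the noise, not the process, so it predicts worse.
print(err_simple < err_complex)
```

The degree-9 polynomial passes through every noisy training point and so mistakes the noise for structure; the straight line averages the noise away and generalises better, which is essentially Haldane’s argument for simple rules under uncertainty.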

This is in some ways a challenge to the traditional thinking behind risk modelling. When I did my risk course, it was all very formulaic: list threats, list vulnerabilities and consequences, discuss tolerance for risk, develop controls, monitor, and so on. I naively thought that risk assessment would save the world. But it can’t. Simple risk management just can’t work in a complex system. Firstly, it is impossible to identify all risks. To (mis)quote Donald Rumsfeld, there are known risks, unknown risks, risks that we know we have but can’t quantify, and unknown risks that we can neither quantify nor even know about.
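The formulaic process can be caricatured in a few lines: score each risk as likelihood times consequence and treat whatever exceeds your tolerance. The threats, scores and threshold below are all invented for illustration; the point is how little of a complex system this captures.

```python
# Toy risk register: score = likelihood x consequence, on 1-5 scales,
# treated if the score exceeds a stated tolerance. All values illustrative.
RISK_TOLERANCE = 6

risks = [
    {"threat": "power outage", "likelihood": 3, "consequence": 4},
    {"threat": "data breach",  "likelihood": 2, "consequence": 5},
    {"threat": "flu season",   "likelihood": 4, "consequence": 1},
]

for r in risks:
    r["score"] = r["likelihood"] * r["consequence"]

needs_treatment = [r["threat"] for r in risks if r["score"] > RISK_TOLERANCE]
print(needs_treatment)  # → ['power outage', 'data breach']
```

Notice what the model assumes: that every risk is on the list, that likelihoods are knowable, and that risks are independent. Each assumption fails in a complex system, which is the argument of this post.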

Added to this is the complex interaction between risks and the observation that elements of complex systems under stress can completely change their function (for better or worse). An analogy might be where one city under stress spontaneously finds that its citizens begin looting homes and another intensifies its neighbourhood watch program.

Thus risk assessment of complex systems is in itself risky. In addition, in a complex system the aim is homeostasis: the risk model responds to each raindrop-sized problem, correcting the system minutely so there are minimal shocks and the system can run as efficiently as possible. A resilience approach might instead try to develop ways to allow the system/organisation/community to be presented with minor shocks, in the hope that when the black swan event arrives, the system has learnt to cope with at least some ‘off-white’ events!

Societies are also becoming more complex. There are more interconnected yet separately functioning parts of a community than there were in the past. This brings efficiency and speed to the way things are done when everything is working well. However, in a crisis there are more points of failure. If community B is used to coping without electricity for several hours a day, it develops ways to adapt over months and years. If that community then finds it has no power for a week, it is better prepared to cope than community A, which has always been able to depend on reliable power. Community B is less efficient than community A, but it is also less brittle.

This does, however, illustrate a foible of humanity. Humans have evolved to be generally good at coping with crises (some better than others), but they are not good at dealing with creeping catastrophes such as climate change or systemic problems in the banking and finance sector.

Most people see these things as problems, but think that the problems are so far away that they can be left whilst other more pressing needs are dealt with.

Sometimes you just need a good crisis to get on and fix long-term complex problems. Just hope the crisis isn’t too big.

Video Donald Rumsfeld – Known Knowns

Online trusted identities – a primer

“Trust is the currency of the new economy”

You may have heard recently about efforts by the USA and Australia, amongst others, to promote trusted online identities. There are also significant efforts in the private sector to develop online trust systems.

Trust will be the currency of the new economy, as it was in the mediaeval village. During the late 19th and early 20th centuries, formal identity credentials gradually replaced the more informal ways in which we identified the people we interacted with. Growing populations and new technology drove this change. It was simply impossible to know everybody you might deal with, so societies began to rely on commonly used credentials such as drivers’ licences to prove identity and ‘place’ in society. Of course, drivers’ licences say little if anything about reputation. But think about high-value financial transactions: you establish your identity and then you provide a mechanism to pay. Although in most cases it wouldn’t matter who you are, it gives the vendor some comfort that the name on your driver’s licence matches the one on your credit card, and it makes it just that bit more difficult to defraud the vendor if the credit card isn’t legit. However, this isn’t the case with interbank lending. Most of that is done on a trust basis within the ‘club’ of banks, and it is only later that the financials are tallied up for the day.

You can’t trust who or what is on the other end of the keyboard just because of what they say

What is a trusted ID?

Most simply, trusted online identity systems are the online equivalent of a physical credential such as a driver’s licence, used to give evidence of identity online. They can (but don’t have to) also be the basis for online reputation. They may also say something about the rights of the credential holder, such as that they are a resident of a particular country.

Which countries are developing trusted identity systems?

The program in the USA is called NSTIC – the National Strategy for Trusted Identities in Cyberspace. In Australia, the Prime Minister’s department has been investigating the possibility of a trusted identity system as part of its work on a cyber policy paper which was due to be released ‘early in 2012’. At the same time, Australia has undertaken a number of reform processes around service delivery, government 2.0 and e-health, all without necessarily solving the problem of identifying whom the government is dealing with online. The USA has gone beyond the planning stage and announced that it will move forward on development. As I mentioned recently, NIST has announced grants for pilot projects under NSTIC.

Some countries have already implemented online identity systems simply by migrating their physical identity cards online and allowing these to be used as trusted online credentials. A number of Asian jurisdictions, including Malaysia, Hong Kong and Singapore, make a proportion of their online services available through such means. Estonia probably leads the world in online service delivery, with around 90% of the population having access to an online ID card and around 98% of banking transactions conducted via the Internet. More information is at the Estonia EU website.

While NSTIC was issued by the USA government, it calls for the private sector to lead the development of an Identity Ecosystem that can replace passwords, allow people to prove online that they are who they claim to be, and enhance privacy. A tall order, and one which runs the risk of creating an oligopoly of identity systems driven by corporate interests rather than one which suits users. It may be a signal of things to come that Citibank and PayPal have recently been accepted to lead development of the NSTIC. There are also a number of private sector initiatives which come at the issue from a different perspective. Beyond PayPal, Google Wallet and the recently announced Apple Passbook are interesting initiatives which provide some of the attributes of a trusted identity.

Why might we want one?

As more services from both government and business go online, and more people want to use them, there will be increasing demand for a way of proving who you are online without having to repeat the process separately with each service provider. In some ways this is already happening when we use PayPal to buy products not only on eBay, where it originated, but also on Wiggle.co.uk and many others. The problem is that different services need different levels of trust between the vendor and the purchaser. Thinking about a transaction in terms of risk, the majority of private sector transactions online carry equal risk for both parties: the customer risks not getting the product or service, and the vendor risks not getting the cash. Here online escrow services such as Transpact, or PayPal, can help.

Where this doesn’t work well is where there is complexity to the transaction. The banking and government services sectors are key areas where this is the case. Here the vendor must know their customer. One example is analysing whether a customer can pay for a service on credit. Another is applying for a passport: you need to prove that you are a citizen and pay a fee. However, the intrinsic value of the passport is far greater than its face value, as shown by the black market price. The cost to the government of issuing a passport to the wrong person is not the nominal fee, but something closer to the passport’s black market value.

As a result, we are at an impasse online: for more ‘high trust’ services to go online, the community has to have more trust that people are who they say they are.

Who might need a trusted identity?

If you take the Estonian example, 90% of the population. Most of us carry around some form of identity that we can present if required. In some countries, it’s the law that a citizen must carry their identity card with them; in Australia, Canada and other countries, it’s a bit more relaxed. In the end the question will be whether a trusted ID is used by customers and required by vendors. This will be influenced by whether there are alternative ways of conveying trust between people and institutions which are independent of the concept of identity in the traditional sense of the word.

Next time:

What are the security and safety implications of a trusted identity, and a discussion about social footprint and whether this may overtake government efforts.


He has shifty eyes, but at least we know who he is

A legislative approach that defines any biometric measurement as ‘sensitive’ shows a lack of common sense and understanding of the science.

A better approach would be to protect those aspects of sensitive personal information (eg sexuality, political opinion, racial/ethnic origin) collected by any means, making legislation independent of technology.

An interesting paper was published in the most recent International Journal of Biometrics. Finnish scientists have developed a biometric measure using saccade eye movements – the involuntary movements in which both eyes move quickly in the same direction. Using a video camera to record the movement, this biometric measure can be matched to an individual with high confidence.

What is important is that there are large numbers of these life (bio) measurements (metrics) being discovered as scientists look more closely at human physiology and behaviour.

The use of biometric identification technologies sees biometric information (eg eye movement) converted into a series of digits (a hash), which can be statistically compared against another series of digits collected previously, when an individual enrolled to use a system (eg building access control). A biometric ‘match’ is a comparison of the number derived from the biometric collected at enrolment with the number elicited during verification. In the real world, these ‘numbers’ are nearly always slightly different. The challenge is to build a system that allows a genuine individual to get a match when he or she seeks verification, while ensuring that the bad guy is repelled.
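The enrol-then-verify comparison can be sketched in a few lines. This is a deliberately simplified illustration, not any real biometric algorithm: the feature values are invented, and real systems use far richer templates and carefully tuned thresholds. The core idea survives the simplification: because a fresh sample never exactly equals the enrolled template, matching is a distance test against a threshold.

```python
import numpy as np

# Threshold trades off false accepts against false rejects (illustrative value).
THRESHOLD = 0.5

enrolled = np.array([0.91, 0.13, 0.55, 0.72])  # template stored at enrolment
genuine  = np.array([0.88, 0.16, 0.57, 0.70])  # same person, fresh capture
impostor = np.array([0.20, 0.80, 0.10, 0.95])  # a different person

def verify(template, sample, threshold=THRESHOLD):
    """Match if the sample is 'close enough' to the enrolled template."""
    return bool(np.linalg.norm(template - sample) < threshold)

print(verify(enrolled, genuine))   # → True  (slightly different, still a match)
print(verify(enrolled, impostor))  # → False (too far from the template)
```

Setting the threshold is the hard part of the challenge described above: too tight and genuine users are rejected, too loose and the bad guy gets in.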

Generally speaking, biometric identity systems are not primarily designed to determine information that might be used to elicit sensitive personal information. Nor is it practical to reverse-engineer the biometric because of the intentional use of one-way mathematical functions and the degradation of data quantity collected. This means that one person would be hard pressed to elicit any information that might be used to discriminate against another with access to this series of digits.

The word ‘biometric’ seems to send shivers down the spines of some privacy advocates. I suggest this is because most, if not all, are not scientists but lawyers. But these biometric systems are just the current technology. Many critics of biometrics forget that, like any tool, it depends on how it is used. The old saying that fire is ‘a good servant, but a bad master’ is equally true of biometrics.

What seems lacking in common sense is that legislation in several countries (including in Australia) puts up a barrier for the use of biometrics for purposes that protect the privacy and safety of people and organisations.

The information that a biometric collects is not necessarily sensitive information – I don’t really care if you know how often I blink. In fact, a photo of me is more likely to give you information about me that I am sensitive about.

The danger with this approach is that people focus on the technology being ‘bad’ and not on the fact that it is the sensitive information which is potentially harmful.  Biometrics can be privacy enhancing, particularly as they can add additional layers to securing claims about identity and be used to protect individuals and organisations from becoming victims of identity fraud.

Disaggregating biometrics from ‘sensitive information’, and considering technology on the basis of what sensitive information (gender, medical information, religious affiliation etc) it collects about an individual, would more appropriately provide a means of protecting personal information. It would also avoid stifling the practical application of technology.


The journal article can be found here

Martti Juhola et al., ‘Biometric verification of subjects using saccade eye movements’, International Journal of Biometrics, 2012, 4, 317–337.




Why the world needs the cyber equivalent of an international law of the sea

Islands of order in a sea of chaos

I’ve been thinking for the best part of the last decade about Internet governance and its impact on national security. In that time, little has changed to improve security for users.

The Internet as we know it today can be compared in many ways to the high seas during the swashbuckling, so-called Golden Age of Piracy, between around 1650 and 1730, when pirates ruled the Caribbean.

Why is this comparison valid? Because on the Internet today, as on the high seas of yesteryear, there are islands of order surrounded by seas of chaos. The islands of order are the corporate networks like Facebook, Google, Amazon, eBay etc, and those run by competent governments for their citizens. However, between these orderly Internet islands are large areas where there are no rules and where pirates and vagabonds thrive. An additional similarity is that some of the most competent and successful historical pirates operated with the explicit support of countries seeking to further their national aims.

Even those who govern the orderly Internet islands are subject to bold attacks from agents of chaos if they are not vigilant! Witness the compromise of LinkedIn earlier this year; and very few governments have not had some significant compromise affect their operations.

On the high seas, piracy has been reduced significantly since the 18th Century. With the exception of places like the coast of Somalia, there are now far fewer areas with a significant piracy problem. There are a number of reasons for this success, not least the development of the law of the high seas.

In cyberspace, the world also needs to move on from the swashbuckling days. Internet criminals need to be hunted down in whichever corner of the Internet they lurk. Additionally, the notion that some countries can give free rein to local cyber-criminals, as long as they don’t terrorise their own countrymen and women, is anathema in the 21st Century.

The long term solution, in my view, remains a cyber version of the UN Convention on the Law of the Sea. UNCLOS is the international agreement, concluded in 1982, that governs the behaviour of ships in international waters. Among other things, the convention deals with acts of piracy committed in international waters.

In the same way, a similar international cybercrime convention could deal with acts where, for example, the victim was from the USA, the criminal from the Vatican, and the offence committed on a server in South Korea.

It would seem that at the moment any move towards a UN convention has gone off the boil. A proposal was shot down in 2010 over disagreements around national sovereignty and human rights. As well, the European Union and USA position was that a new treaty on cybercrime was not needed, since the Council of Europe Convention on Cybercrime had already been in place for 10 years and has been signed or ratified by 46 countries since 2001.

As I recently noted, wariness by both USA and China continues and means that any international agreement which includes Western countries and the BRICs will be a long time coming. China, Russia and other countries submitted a Document of International Code of Conduct for Information Security to the United Nations in 2011 which the USA seems to have dismissed out of hand.

A code of conduct is nice and the Council of Europe convention is a good start, but they need to be supported by some sort of international cyber ‘muscle’ in the long term.

However, all is not lost. In the meantime, working to coordinate the defences of the orderly organisations that I wrote about before is a practical step that organisations and governments should be taking more often. This is the cyber equivalent of escorting ships through dangerous waters and passing them from one island of order to another.

There’s a good reason for this, and here’s the resilience message: the cyber-security of an organisation does not begin and end at its firewall or outer perimeter. Whilst in most cases an organisation cannot force the other organisations to which it is connected to change, it can maintain vigilance over areas outside its direct sphere of control. This gives the organisation more time to adapt to its changing environment; and of course, a chain is only as strong as its weakest link.

The other step is to help emerging nations, and organisations with poor online security, to improve their cyber-defences. If the first step was like escorting ships between the orderly islands, this second step is the equivalent of helping nearby islands to improve their battlements so that the pirates don’t take over and then attack us! This work has been going on for some time: I chaired a number of seminars on cyber security and the need for computer emergency response teams for the APEC telecommunication and information working group, which began this work in 2003, and it has been carried on by a number of countries around the world in fits and starts. But we need more.

Alex

Visualising organisational resilience

Resilience

I’ve been trying to summarise organisational resilience into a form that can be visualised for some of the people who I’m working with. The key has been to summarise the thinking on resilience as succinctly as possible.

Apart from the diagram you can see, the text below attempts to give concise answers to the following questions:

  1. What is it (Resilience)?
  2. Why should my organisation care about resilience?
  3. Why is detailed planning not working anymore (if it ever did)?
  4. What’s the recipe for resilience?
  5. How does an organisation develop these characteristics?
  6. Resilience before and after (a crisis)
  7. How does nature do resilience?


Resilience in a mindmap

Visualising resilience is itself an exercise in complexity

The diagram should be printed at A3, so you can download a PDF version here: resilience in a mindmap PDF.

Let me take you on a journey …

What is it?

Resilience is about the ability to adapt for the future and to survive, whether for an organisation, a country or an individual.
What sometimes seems forgotten is that the adaptation is best done before a crisis!
And here Resilience is more an organisational strategic management approach than a security protocol. In this sense, Resilience is the ‘why’ to Change Management’s ‘how’.

Why should my organisation care about resilience?

Research shows that the average lifespan of large organisations is shrinking, from around 35 years in 1965 to around 15 years in 1995. Organisations that want to stick around need to adapt to their changing environment.

Organisations know that they need to change to survive, but today’s urgency overrides the vague need to do something about a long-term problem. For this reason, crises can be the catalyst for change.

Resilience is about dealing with organisational inertia, because the environment will change. The more successful an organisation has been in the past, the more difficult it will be to make change, and so it becomes susceptible to abrupt failure. Danny Miller coined the term ‘Icarus Paradox‘ to describe the effect and wrote a book by the same name. In Greek myth, Icarus escaped Crete on wings of feathers and wax made by his father Daedalus, but died when he flew too close to the sun and the wax melted, causing the feathers to fall from the wings.

Eastman Kodak is perhaps the best example of this trait. An organisation that was very successful between 1880 and 2007, Kodak failed to make the transition to digital and to move out of film fast enough.

Why is detailed planning not working?

Simply put, the world is too complex and the outliers are becoming more common:

  1. increasing connectedness – interdependencies leading to increasing brittleness of society and organisations, just-in-time process management, and risks that may, in rare instances, become highly correlated even if they have shown independence in the past
  2. speed of communication forces speedier decision-making
  3. increasing complexity compounds the effect of any variability in data, and therefore the uncertainty for decision-makers
  4. biology – we build systems with an optimism bias. Almost all humans are more optimistic about their future than is statistically possible. We plan for a future which is better than it will be and do not correctly recognise the chances of outlier events. Additionally, we plan using (somewhat biased) rational thought, but respond to crises with our emotions.

So if

  • we can’t predict the outlier events, and
  • this makes most strategy less useful – especially strategy which is written down and gathers dust without being lived,

maybe we can be more resilient when we run into the outliers – what Taleb calls Black Swans, in the book of the same name.

Taleb’s book is available from Book Depository and is well worth the read, even if he can’t help repeating himself and dropping hints about fabulous wealth.

What’s the recipe for resilience?

Bad news: there isn’t a hard recipe for a resilient organisation, just as there isn’t one for a successful company. But resilient organisations do seem to share some common attributes, such as:

  • agility and the ability to recover quickly from an event; and
  • an awareness of their changing environment and the willingness to evolve with it.

How does an organisation develop these characteristics?

It is a combination of many things:

  • developing an organisational culture which recognises these attributes, supported and facilitated from the top of the organisation;
  • partnering with other organisations to increase knowledge and reach when an event comes; and
  • lastly, engaging in the debate and learning about best practices.

Resilience before and after (a crisis)

But is resilience just one set of behaviours, or several? When we think of resilient organisations and communities, our minds tend to go to the brave community, people or organisation that rose up after a high consequence event and overcame adversity. These people and organisations persist in the face of natural and manmade threats. Examples include New York after the September 2001 attacks, Brisbane after the 2011 floods, and the communities affected by the 2004 Asian tsunami.

However, there is another set of actions which is in many ways more difficult to achieve: the capacity to mitigate high consequence, low likelihood events, or the creeping disaster, before a crisis is experienced. The US behaved admirably in responding to the 9/11 terrorist disaster after it had occurred but, as the 9/11 Commission Report notes, terrorists had attempted on numerous occasions to bring down the World Trade Center and had come quite close to succeeding.

In this thought may be one of the best arguments for blue sky research. Serendipity – wandering through the universe with your eyes open to observe what’s happening around you, rather than head down and focussed on only one task – is this the secret to innovation?

How does nature do resilience?

Life becomes resilient through wild replication: many copies exist, so that if some fail, life continues. Individual creatures carry DNA, which is all that needs to be replicated. Those creatures compete with each other and with the environment to become more and more efficient. An individual creature may or may not be resilient, but the DNA is almost immortal.

How an organisation achieves this is the challenge that every management team needs to address. Over the next posts I will expand on this more.

😉


In cyberspace, if you don’t share, you don’t survive

This might seem a brave call when talking about cyber-security threat information. But the truth is that the cyber world forces a new paradigm on security. The tools that are familiar in the offline world for providing elements of security, such as obscurity, tend to benefit the attackers rather than the defenders, because the very advantages of the online world, things like search and constant availability, are also its greatest weaknesses. What matters most in the online world is not what you know, but how fast you know it and make use of the information you have.

I’ve been reading the Cyber Security Task Force: Public-Private Information Sharing report, and I think it’s worth promoting what it says. It presents a call to action for government and companies in the US to improve information-sharing to counter the increasing risk of cyber attacks on organisations, both public and private. The work was clearly done with a view to helping the passage of legislation proposed in the USA. However…

Most, if not all, of the findings could be extrapolated to every advanced democracy around the world.

If you are familiar with this field, much of what has been written will not be new; we have been calling for the sorts of measures proposed in the report since at least 2002. That does not mean the authors haven’t made a valuable contribution, because they have made recommendations about how to solve the problem. Specifically, they recommend removing legislative impediments to sharing whilst maintaining protections on personal information.

According to the authors, from October 2011 through February 2012, over 50,000 cyber attacks on private and government networks were reported to the Department of Homeland Security (DHS), with 86 of those attacks taking place on critical infrastructure networks. As they rightly point out, the number reported represents a tiny fraction of the total occurrences.

As is the case in many areas of security, the lack of an evidence base is at the core of the problem, because it creates a cycle where there is resistance to change and adaptation to fix the problems efficiently and effectively.

Of course, the other thing that happens is that organisations don’t support an even level of focus or resourcing on the problem because, most of the time, like an iceberg, the part of the problem that you can ‘see’ is comparatively small.

To make matters worse, new research tells us that we are optimistically biased when making predictions about the future; that is, people generally underestimate the likelihood of negative events. So without ‘hard’ data, and given the choice between underestimating the size of a problem and overestimating it, the humans who make decisions in organisations and governments are likely to underestimate the likelihood of bad things happening. You can find out more about the optimism bias in a talk by Tali Sharot on TED.com.

The cost differential is significant for organisations that don’t build in cyber security, find themselves unable to mitigate risks, and then need to recover from cyber attacks. This cost is felt most by the organisations affected, but its effects are passed across an economy.

So what can be done to break this cycle of complacency? Government and industry experts have long spoken about the need for better sharing of information about cyber threats; I was talking in public fora about this ten years ago.

The devil is in the detail of the ‘what’ and the ‘how’. Inside the ‘what’ is also the ‘who’. I’ll explain below.

What should be shared, who should do the sharing – and with whom?

Both government and industry, whilst they generally agree enthusiastically that there should be sharing, think that the other party should be doing more of it, and then come up with any number of excuses as to why they can’t! For fans of iconic 80s TV, it recalls the Yes Minister episode in which the PM wants a policy of promoting women, and in cabinet each minister enthusiastically agrees that it should be done whilst explaining why it wouldn’t be possible in his department. In government, the spooks will tell you that they have ‘concerns’ with sharing, i.e. they want to spy on other countries and don’t want to give up any potential tools. It’s no better in industry: companies don’t have an incentive to share specific data, because their competitors might gain some kind of advantage.

The UK has developed perhaps the most mature approach to this. UK organisations have been subject to a number of significant cyber attacks, and government officials attempt to ‘share what is shareable’. The ability to do this may stem from the close relationship between the UK government and industry, developed initially during the Troubles in Northern Ireland and maintained in one form or another through the terrorism crises of this century. It remains to be seen whether the government can maintain these relationships, and whether UK industry will continue to see value in them, as the UK and Europe struggle with the short-termism brought on by the fiscal situation.

Australia has also attempted to share what is shareable; however, as the government computer emergency response team sits directly within a department of state, this is very difficult. It seems that the CERT does not have a clear mission: is it an arbiter of cyber policy and a disseminator of information, or an operational organisation that facilitates information exchange on cyber issues between government and industry?

This quandary has not been solved completely by any G20 country. Indeed, it will never be solved; it is a journey without end. New Zealand has possibly come closest, but this seems to be a function of the country’s small size and the ability to develop individual relationships between key people in industry and government. Another country doing reasonably well is South Korea, mainly because it has to: it has the greatest percentage of broadband users of any country, and North Korea is just a telephone line away. The Korea Internet & Security Agency (KISA) brings together industry development, Internet policy, personal information protection, government security, and incident prevention and response under one umbrella.

For larger countries, I am of the view that a national CERT should be a quasi-government organisation that is controlled by a board comprised of:

  • companies that are subject to attack (including critical infrastructure);
  • network providers;
  • government security agencies; and
  • government policy agencies.

In this way, the CERT would strive to serve the country more fully. Government would have more incentive to share information with industry, and industry with government. With this template, it is possible to create a national cyber-defence strategy that benefits all parts of society and provides defence-in-depth to the parts of the community we depend on most, i.e. critical infrastructure and government.

Ensuring two-way information flow within the broader community and with industry has the potential to provide direct benefits for national cyber-security and for the community more broadly. Firstly, by helping business and the community to protect themselves. Secondly, by allowing government, telecommunications providers and critical infrastructure to develop sentinel systems in the community which, like the proverbial canary in the coal mine, signal danger if they are compromised. Thirdly, by improving the evidence base through increased quality and quantity of incident reporting, which is so often overlooked.

Governments can easily encourage two-way communication by ‘sharing first’. Industry often questions the value of information exchanges: they turn up to these events at their own expense, some government bigwig opens with ‘let there be sharing’, and then there is silence, because the operatives from the three-letter security agencies don’t have the seniority to share anything and the senior ones don’t understand the technical issues. I am not the first person to say that in many cases (I think 90%), technical details that can help organisations protect their networks do not need to include the sensitive ‘sources and methods’ discussion. By that I mean: if a trust relationship exists, or is developed, between organisations in government and industry, and one party passes a piece of information to the other and says “Do x and be protected from y damage”, then the likelihood that the receiving party will act depends on how much it trusts the provider. Sources-and-methods information is useful for establishing trustworthiness, but it is not usually essential to fixing the problem.

As the Public-Private Information Sharing report suggests, many of the complex discussions about information classification, over-classification and national security clearances can be left behind. Don’t get me wrong; having developed the Australian Government’s current protective security policy framework, I think there is a vital place for security clearances and information classification. However, they are vastly overrated in a speed-of-light environment where the winner is not the side with the most information, but the side that can operationalise it most quickly and effectively. Security clearances and information classification get in the way of this, and potentially deliver more benefit to the enemy by stopping the good guys from getting the right information in time. We come back to the question of balancing confidentiality, integrity and availability: sensitive information is more perishable than ever.

How should cyber threat information be shared?

This brings me to the next area of concern: how information is shared between industry and government, or more importantly, the speed with which it is shared. In an era when cyber attacks are automated, defence systems are still primarily manual and, amazingly, in some cases rely on paper-based systems to exchange threat signatures. There is an opportunity for national CERTs to significantly improve on the current systems by sharing unclassified threat information automatically. Ideally, these systems would be designed so that threat information goes out to organisations from the national CERT, and information about possible data breaches returns immediately to be analysed.
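To make the idea concrete, here is a minimal sketch of what machine-readable threat sharing could look like. The field names, identifiers and feed structure are purely illustrative, not any real CERT’s format: an indicator carries just enough to act on (“do x, be protected from y”), with no sources-and-methods baggage, and a subscriber can match it against its own logs automatically.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ThreatIndicator:
    """A minimal, unclassified threat indicator (illustrative schema)."""
    indicator_id: str
    indicator_type: str   # e.g. "ip", "domain", "file_hash"
    value: str
    recommended_action: str

def match_events(indicator: ThreatIndicator, events: list) -> list:
    """Return the log events that contain the indicator's value."""
    return [e for e in events if e.get(indicator.indicator_type) == indicator.value]

# A national CERT could publish indicators as plain JSON...
feed = json.dumps([asdict(ThreatIndicator(
    indicator_id="CERT-2012-0001",
    indicator_type="domain",
    value="bad.example.com",
    recommended_action="block at DNS resolver",
))])

# ...and a subscribing organisation could parse the feed and
# check its own logs, reporting any hits straight back to the CERT.
indicators = [ThreatIndicator(**d) for d in json.loads(feed)]
logs = [{"domain": "bad.example.com", "host": "pc-17"},
        {"domain": "good.example.org", "host": "pc-04"}]
hits = match_events(indicators[0], logs)
print(len(hits))  # 1 matching event to report back
```

Real-world equivalents of this pattern exist in standards such as STIX/TAXII; the point of the sketch is only that the out-and-back loop can be entirely automatic.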

Of course, the other benefit of well-designed automated systems is that they could automatically strip customers’ private information out of any communications. As with the sources-and-methods information, people’s details are not important (spear phishing being an exception). In most cases, I’d rather have a machine automatically removing my private details than some representative of my ‘friendly’ telecommunications provider or other organisation.

These things are all technically possible; the impediments are only organisational. Isn’t it funny: people are inherently optimistic, but don’t trust each other. It’s surprising civilisation has got this far.

CERTs – Computer Emergency Response Teams


References

1. Sharot, T.; Guitart-Masip, M.; Korn, C.; Chowdhury, R.; Dolan, R. “How Dopamine Enhances an Optimism Bias in Humans.” Current Biology, July 2012. www.cell.com

2. Yes Minister, Series 3, Episode 1, “Equal Opportunities” (1982).

“It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is most adaptable to change”

The quote above has often been misattributed to Charles Darwin. But according to the Darwin Correspondence Project, it actually comes from Leon Megginson*, paraphrasing Darwin in a management journal in the 1960s.

Now that I have done my bit to put that meme to bed, it is worth considering whether there is value in the concept or whether it is a dangerous oversimplification. And the answer is…

.. It depends!

You didn’t really think there was a black-or-white answer to this. The facts, such as we have them, are that very few companies around today are in the same form they once were. Indeed, Mark Perry, in his excellent economics blog Carpe Diem, presents a chilling comparison of the US Fortune 500 lists from 1955 and 2011.

Of the 500 companies on the list in 1955, fewer than one in seven were still on the list in 2012, only 57 years later. I say only 57 years because it is less than the lifespan of the average Westerner.

So what happened to the rest, the other six in seven? They have gone bankrupt, been privatised, merged, or seen their fortunes go south to the point that they have dropped below the Fortune 500.

The parallels between evolution and raw capitalism are hard to resist. Indeed, and this may be a bridge too far, there may even be a parallel between evolutionary eras such as the Cambrian Explosion and the current communications-technology-fuelled business environment. Certainly, the life expectancy of companies seems to be shrinking as the speed of global communications increases. Steve Jobs is quoted in Forbes Magazine suggesting why decline happens at great companies: “The company does a great job, innovates and becomes a monopoly or close to it in some field, and then the quality of the product becomes less important. The company starts valuing the great salesman, because they’re the ones who can move the needle on revenues.” So salesmen are put in charge, and product engineers and designers feel demoted: their efforts are no longer at the white-hot centre of the company’s daily life. They “turn off.”

Maybe another way to say this is that all organisations must have purpose, whether government agency or company. The widgets (for want of a better description) might be policy or law in the case of a government agency, cars in the case of a car company, or services in the case of a services organisation. If the organisation maintains its focus on why it exists, then perhaps it can adapt and survive beyond the average. However, this is hard work, and most will end up like trilobites: ubiquitous one day, fossils the next.

Alex

Trilobite fossil – Photo by kevinzim – http://www.flickr.com/photos/[email protected]/43243889/

*Megginson, L. C. (1964). “Key to Competition is Management.” Petroleum Management, 36(1): 91-95.

 

A useful trick to add a nickname to your Google+ account leads to the dark side

I’ve been updating the Resilience Outcomes Google+ site. A friend asked me what the site URL is, but Google in its wisdom has not made this easy. The site reference is https://plus.google.com/103380459753062778553 !!! What a mouthful, and not really a set of numbers I want to dedicate my diminishing neurons to. A partial answer is http://gplus.to/ . Using this site you can get a nickname, or vanity URL, for your Google+ site.

So now I can give the url http://gplus.to/resilienceoutcomes and you get to the Resilience Outcomes Google+ site!

My friend Karl H. is going to point out that this is not very resilient: although Google has a reputation for fairly bulletproof infrastructure, I know nothing about gplus.to. Karl, you’re absolutely right. It demonstrates why thinking about resilience is so difficult… the dark side has cookies! Literally, in the case of gplus.to.

http://gplus.to/ is almost certainly more fragile than google.com (or .google, or whatever Google will call itself next month). If http://gplus.to/ goes down, then all of Google’s efforts to support its systems come to naught in my case. So I face, on a small scale, the choice faced by everyone who wishes to become more resilient and to mainstream security: do I increase the accessibility of the site whilst reducing integrity and confidentiality, or not? In this case, the question is not an either/or, and it rarely ever is. In my case, the answer may be that http://gplus.to/ is used when friends ask me verbally what the site is, but that I always write the full URL in posts.

🙂