Behavioural Economics as a tool for enterprise security

Have you ever wondered why your electricity bill shows your household’s usage against the average two-, three- or four-person household, telling you whether you are over or under? How does that make you feel?

The term behavioural economics has been around for several decades, and the marketing profession has been using the techniques it describes for even longer to get you to buy a particular brand. However, the use of behavioural economics as a tool for enterprise security is only just emerging.

It is time for security professionals to start using these techniques to help protect organisations, not just to influence people to buy a particular soap or car, or to follow a particular sporting code.

What is behavioural economics?

Behavioural economics looks at the relationship between the decisions that we make and the psychological and social factors that influence them. A significant amount of study in this area has been on people’s economic decisions, but the tools and techniques that have been tested can be applied in many other contexts.

Daniel Kahneman and his late research partner Amos Tversky are the two research psychologists most associated with behavioural economics. In 2002, Kahneman shared the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel for this work, and his 2011 book “Thinking, Fast and Slow” explains many of the concepts in accessible terms. Kahneman and Tversky built on earlier studies that dismantled an idea that now sounds quaint: that humans, at the population or large-group level, act entirely rationally. Even so, this idea was at the heart of much classical economic thinking.

At first glance this might not seem related to enterprise security. However, if you consider that the premise of behavioural economics is that people do not always make entirely rational decisions, the connection becomes clear. In addition, the idea that small (and sometimes even intangible) incentives and disincentives can be used to guide individual actions on a large scale is also very important. It is this second aspect which is of greatest use to the enterprise security practitioner.

Behaviour is at the heart of enterprise security, because people are every organisation’s greatest asset and often also its greatest risk. At its simplest, the key aim of good enterprise security is encouraging individuals to make decisions that benefit their organisation.

Behavioural economics works by assuming that in many cases, people making the ‘wrong’ decision within an organisation do so because they have imperfect information or lack the right incentives or disincentives.

Psychologists have also found that people often exhibit a strong inclination to conform to social norms, and those norms change with the social groups we participate in. Essentially, we often do things because our friends, colleagues, or those we admire, do. Our friends and colleagues provide us with informational social influence, or social proof. In plain English, we like to follow our herd and keep up with the Joneses.

Curiously though, we seem to struggle more with changing our minds than with coming to a decision in the first place. The idea that when the facts change, people change their minds is a tricky one for many. Relatedly, researchers from Harvard Business School have claimed that we tend to think we are more moral than we actually are, inhabiting an “ethical mirage”. This can mean there is a disconnect between how we describe our decisions and how we actually behave. If we accept this somewhat unflattering portrait of human behaviour, it means that once we have made a decision, we tend to take a position that justifies our actions, whatever they were. And we want more justification to change our minds than we needed to come to the decision in the first place!

But what if we could get people to make the ‘right’ decision in the first place? Then they would not have to justify wrong decisions. This is where the research findings of behavioural economics are being tested at organisational and national scale.

Behavioural economics concepts are being applied at the public policy level by governments wanting to encourage certain behaviour without going to the expense of legislating compliance. It is expensive to make something illegal. Sometimes it is absolutely necessary, as with murder, but society then has to create enforcement systems, pay the enforcers, and work out who watches the watchers. Some enlightened government agencies are instead experimenting with behavioural economics to achieve high levels of compliance.

In the UK, and more recently in Australia, the tax authorities have been applying behavioural economics techniques. So-called ‘nudge units’ have been set up to coax people into doing their taxes using social proof methods. Informing taxpayers who are late paying that “90% of people pay their taxes on time” increases the rate of taxpayer compliance. This achieves the policy objective of timely tax payments in a way that won’t generate negative headlines, and in turn allows the tax agency to focus on individuals who are intentionally breaking the law, rather than those who are late simply because life got in the way.

Another recent example has been the introduction of the “No Jab, No Pay” policy by the Australian Government where parents do not get all their family tax benefits unless they are willing to vaccinate their children. Rather than making it illegal for children to remain unvaccinated, the government has incentivised parents to vaccinate. This, added to significant social pressure from almost all the medical community, means that Australia’s childhood vaccination rates are generally very high and we see fewer distressing pictures of children with whooping cough around the country.

One interesting way that companies are using social proof is in encouraging households to save water and electricity. Increasingly, utility bills show householders where they stand in comparison to their suburb in terms of water or electricity use. The householder can then consider whether they want to moderate their behaviour. Literally keeping up with the Joneses!

Marketing firms use many behavioural economics techniques to encourage us to use particular products. Many of us take advantage of airline frequent flyer programs that reward members for the flights they take. The extremely successful travel website Tripadvisor awards points to users for the travel reviews they produce. However, Tripadvisor points have absolutely no dollar value. They are valuable only as social proof to that community that a member is a well-seasoned traveller. You may have realised that the majority of social media operates in a similar way.

Why should enterprise security professionals consider using behavioural economics in their organisation?

It is expensive and time consuming to maintain rules for the increasingly complex environment that organisations operate in. Rules are difficult to write well and often only work in limited circumstances. The more detail, the more exceptions need to be built. Quite often rules also create a culture where individuals only follow the letter, not the spirit of the rules. This can contribute to the creation of a workplace which is not adaptable and where security is blamed for the problems of the organisation.

This can lead to situations where workers sometimes choose to circumvent organisational rules in order to achieve local goals. A worker might shortcut a process to ensure that their team are able to complete it faster. The individual might rationalise this as being good for their company in that the job is completed faster and good for themselves in that they can go home earlier. However, the decision that they have rationally come to might be the ‘wrong’ decision from the perspective of their organisation. The shortcuts that have been introduced may decrease organisational security.

How do organisations change this? By changing the calculation the worker makes at the point of decision. This is very much the place of behavioural economics in enterprise security. Organisational messaging which demonstrates the organisation’s social norms from a security perspective is vital. So too are tools and procedures which endeavour, wherever possible, to make the secure decision the easiest one to make.

In many ways the decision is very much linked to the ‘security culture’ of the organisation. The security culture is effectively the customs and practices of the organisation for whom the individual works.

Organisations are increasingly moving to principles- and risk-based frameworks in many areas, including security, because they otherwise find the sheer complexity of business overwhelming. This was one of the main drivers for the creation of the Australian Government’s Protective Security Policy Framework (PSPF), which tries to get government agencies to focus on their security outcomes rather than on process.

 

Brain scan of white matter fibers, brainstem and above.  Laboratory of Neuro Imaging at UCLA and Martinos Center for Biomedical Imaging at MGH / www.humanconnectomeproject.org
Enterprise security professionals should be asking where they can apply these behavioural economics techniques in their organisations. The possibilities are many and varied. One financial institution, for example, has used behavioural economics to nudge staff on personnel security, improving the reporting of changes of circumstance with the simple message that “most people in our organisation report their change of personal circumstances within four weeks”.

In the government space, there has been debate about whether it is possible to create an ‘information classification market’ which balances the need to classify information appropriately against the costs to the organisation of over-classification, in terms of long-term storage and the devaluation of security markings. Such a market could work by incentivising managers to ensure that staff classify information as accurately as possible. As always, the trick would be to ensure that the incentives matched the risk profile of the organisation.

Every organisation is different and so are the opportunities for using these techniques to improve your enterprise security.

For more information:

http://www.nobelprize.org/nobel_prizes/economic-sciences/laureates/2002/advanced-economicsciences2002.pdf

http://theconversation.com/the-potential-of-behavioural-economics-beyond-the-nudge-43535

http://www.immunise.health.gov.au/internet/immunise/publishing.nsf/Content/clinical-updates-and-news/$File/Update-No-Jab-No-Pay-Immunisation-Catch-Up-Arrangements(D15-1126865).pdf

www.hbs.edu/faculty/Publication%20Files/08-012.pdf

https://en.wikipedia.org/wiki/Social_proof

 

An earlier version of this article appeared in the 100th edition of Security Solutions magazine: http://www.securitysolutionsmagazine.biz/

Climate sustainability and resilience

An organisation’s resilience is bound up with its ability to adapt to climate change, in both the short and the long term.

A review of US public companies shows a number of climate-related risks and costs. Companies’ ability to adapt and become resilient to climate change is starting to affect their finances.

S&P 500 companies are seeing climate change related risks increase in urgency, likelihood and frequency, with many describing significant impacts already affecting their business operations, according to a new report from CDP, which collects environmental performance information on behalf of investors.


Threats include damage to facilities, reduced product demand, lost productivity and forced write-offs. When these threats are realised, the costs can reach millions of dollars.

Importantly, these threats are near-term. 45% of the risks S&P 500 companies face from extreme weather and climate change are current, or expected to fall within the next one to five years, up from 26% just three years ago. 50% of these risks range from “more likely than not” to “virtually certain”, up from 34% three years ago.

Around 60 companies describe current and potential future risks and their associated costs in the research, which highlights excerpts from the companies’ disclosures to their investors between 2011 and 2013. Ironically, even News Corp made the following contribution to the report.

“Climate projection models make it difficult to know exactly how business might be impacted by episodic weather events. However, it is clear from past severe weather events that some of News Corporation’s businesses are susceptible to such extreme weather.”(p6)

The media release accompanying the report asserts that:

Dealing with climate change is now a cost of doing business

Investing in climate-related resilience planning, both in their own operations and in their supply chains, has become crucial for corporations seeking to manage this increasing risk.

Resilience Outcomes has the skills and expertise to help your organisation develop an organisational resilience strategy that takes into account how it will adapt to the changing environment. Contact us via the form below or at [email protected] to discuss your needs.

Download the full report here

CDP is an international, not-for-profit organisation providing the only global system for companies and cities to measure, disclose, manage and share vital environmental information. It works with market forces to motivate companies to disclose their impacts on the environment and natural resources and to take action to reduce them.

 

SCADA CERT practice guide

ENISA has released a good practice guide for CERTs that are tasked with protecting industrial control systems (ICS/SCADA).

The European Union Agency for Network and Information Security (ENISA) publishes a lot of advice and recommendations on good practice in information security. Necessarily, it has a European focus, but almost all the advice is applicable to systems in any region.

This SCADA CERT practice guide focuses on how Computer Emergency Response Teams (CERTs) should support Industrial Control Systems (ICS). The terms ‘ICS’ and ‘SCADA’ (Supervisory Control and Data Acquisition) are often used interchangeably, although strictly speaking SCADA systems are one type of ICS.

SCADA systems were around before the Internet. The first systems were driven by mainframes and installed to control water and electricity networks. Since then, SCADA has become ubiquitous and systems that were initially designed to work on independent networks have been connected to the Internet.

Connecting SCADA to the Internet has many advantages: it increases system availability and reduces the cost of connecting geographically disparate systems. At the same time, it decreases system confidentiality and, more importantly in this context, system integrity.

Industrial Control Systems support every aspect of our daily lives. Photo CC WorldBank Photo Collection

The ENISA ICS guide tries to put together, in one document, a guide for CERTs that are required to protect SCADA/ICS systems. Importantly, it doesn’t just focus on the technical capabilities required for operations, but also on organisational capabilities and what it terms ‘co-operational capabilities’. This last part is important, as computer emergency response teams can forget that they are part of a system, and the system is only as strong as its weakest link. Preparation for things going wrong involves identifying the people, resources and stakeholders that will be required, and developing relationships with other organisations always pays dividends when an emergency occurs. This is where the ENISA advice is in some ways superior to the advice from the US DOE, although I acknowledge the attractive simplicity of some of the DOE’s guidance.

It is good that the authors acknowledge that this is an area with limited accumulated experience and that the guide should be considered a ‘living document’. As usual in cyber-security, both technical expertise and organisational/management guidance are required.

 

More information available from ENISA

US DOE SCADA guide

 

 

NSA/GCHQ built vulnerabilities into encryption?

Have the NSA and GCHQ been building vulnerabilities into commercial encryption products?

If this is true, another argument for open source software has been made. Articles in the New York Times and the Guardian alleged that the NSA has been deliberately weakening the international encryption standards adopted by developers. One goal in the agency’s 2013 budget request was to “influence policies, standards and specifications for commercial public key technologies”.

The problem with this approach is that the NSA and GCHQ have two roles, and it would seem that they have failed to balance them. This is the question of intelligence equities. These organisations are charged with revealing the secrets of their enemies, but also with protecting the information of their own countries. By building back doors into software and hardware sold to unsuspecting customers, they are doing exactly what they have accused the Chinese of doing.

Moreover, the fact that these backdoor vulnerabilities exist means that others can find and use them: not just the NSA and GCHQ, but also cyber criminals.

It is the ultimate hubris to think that the NSA and GCHQ are the only ones capable of discovering and exploiting these vulnerabilities. “If you want to keep a secret, you must also hide it from yourself,” wrote George Orwell in Nineteen Eighty-Four. No organisation as large as the NSA can do this forever.

The USA tried under President Clinton to make all manufacturers insert a hardware ‘Clipper’ chip into their devices, but the backlash was such that the US government withdrew support for the idea. What this information is telling us is that the NSA didn’t give up; it found alternative means to realise the concept.

The only logical conclusion from this revelation is that the signals intelligence agencies are unable to both reveal the enemies’ secrets and protect those of their citizens at the same time. They should be split. The information assurance role should come under the control of the trade, infrastructure and industry portfolios.

 

You can find the NYT article here – http://www.nytimes.com/2013/09/06/us/nsa-foils-much-internet-encryption.html?pagewanted=all 

You can find the Guardian article here – http://www.theguardian.com/world/2013/sep/05/nsa-gchq-encryption-codes-security

Cyber-Security doesn’t stop at the virtual perimeter

News that the New York Times was hacked by the Syrian Electronic Army is interesting not because the NYT was hacked, but because of the method used to gain access.

According to this article, information security at the NYT fell over because they forgot that cyber-security doesn’t stop at the perimeter. It would seem that Melbourne IT, the Australian domain registrar used by both Twitter and the NYT, was breached. This allowed the Syrian Electronic Army to gain access to the DNS records of domains owned by Twitter and the NYT, which they then proceeded to change.

A number of quick conclusions:

  1. This was a well-planned attack that almost certainly took some time to conceive, research and operationalise.
  2. You should assume your organisation will be hacked. Work out how to detect the breach and recover quickly.
  3. Cyber-security is an evolutionary struggle between those who wish to break systems and those who wish to stop systems being broken. Quite often it’s the same people, e.g. the NSA.
  4. 80-90% of the differences between good cyber-security and great cyber-security are not in the IT, they are in the organisational approach and culture.
  5. In this hack, a variety of methods seem to have been used, including phishing and attacking the DNS servers via privilege escalation.
  6. Cyber-security requires expertise in managing information, risk and developing resilient organisational frameworks, something often forgotten.
  7. Everybody is your neighbour on the Internet, the good guys and the bad.
  8.  Cyber-security practitioners need to consider the risks to high-value systems that they are protecting from connected suppliers and customers.
  9. This requires cyber-security practitioners who are good people influencers, because the vulnerabilities tend to be at human interfaces.

Further technical details have been posted here.
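Point 2 above (assume you will be hacked; work out how to detect it) can be made concrete for this style of attack. The sketch below is a minimal, illustrative example of DNS drift detection: it compares a recorded known-good baseline of A records against live answers and flags any change, which is exactly the signal a registrar-level hijack produces. The domain names and addresses are hypothetical, and the resolver is passed in as a function so the sketch stays self-contained; in practice it might wrap `socket.gethostbyname_ex()` or a dnspython query run on a schedule.

```python
# A minimal sketch of DNS drift detection. Baseline records are captured when
# the zone is known-good; any later divergence is reported for investigation.

def detect_dns_drift(baseline, resolve):
    """Return {domain: observed_records} for every domain whose live DNS
    answers differ from the recorded known-good baseline."""
    anomalies = {}
    for domain, known_records in baseline.items():
        observed = set(resolve(domain))
        if observed != set(known_records):
            anomalies[domain] = sorted(observed)
    return anomalies

if __name__ == "__main__":
    # Known-good A records (illustrative), captured at the last verification.
    baseline = {
        "www.example-news.com": ["198.51.100.10"],
        "www.example-blog.com": ["198.51.100.20"],
    }

    # A stand-in resolver simulating a hijack of one domain.
    tampered = {
        "www.example-news.com": ["203.0.113.66"],   # redirected!
        "www.example-blog.com": ["198.51.100.20"],  # unchanged
    }
    print(detect_dns_drift(baseline, tampered.get))
    # → {'www.example-news.com': ['203.0.113.66']}
```

The design point is that detection lives outside the registrar: even if the registrar account is compromised, an independent monitor comparing answers against your own baseline will still see the change.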

New York Times, photo by ATorrenegra (http://www.flickr.com/photos/alextorrenegra/)

 

Contact Resilience Outcomes to discuss how we can help your organisation become more resilient at [email protected]

 

 

Organisational resilience – biological approaches

A biological approach to organisational resilience

By a lapsed microbiologist
 “Organisational resilience is only achievable through adaptability”
Wattle flower
Flowers are just an adaptation of the normal leaf: a combination of genes normally responsible for forming new shoots. Photo by AWebling 2013
Too many leaders start believing their own press and thinking that they are able to predict the future. Whilst it is true that the best indicators of the future are the events of the past, it is also true that the past is not an absolute guide, because our view of the past is limited by our record of it. Some events are so rare that they are not recorded, yet they may have extreme consequences if they occur. So if we cannot predict the future with certainty, how is longevity possible for organisations? The answer is resilience, and at the core of resilience is adaptability.

The lesson from biology is that it is adaptation to the environment that has allowed organisms to survive and thrive. However large and seemingly terrible[1] an organism is, if it is not adapted to its environment it will become extinct. The vast majority of species that have ever existed are not around today.

The same is true for organisations.

The vast majority of organisations that have ever existed are not around today

In simple terms, the story is the same for each failed organisation: it was unable to adapt to the business environment before it ran out of resources. Those that survive a crisis do so for one of two reasons:

1               They have the resources (capital, personnel, leadership and so on) to manage themselves out of a crisis once it hits, emerging weaker but alive; or

2               They are prepared to adapt if a crisis arises and have developed a broad set of principles which will work with minimal change in most eventualities. These companies still suffer from the crisis at first, but emerge stronger in the longer term.

By my reckoning, 99% of companies that manage to survive a crisis are in the first category. In most cases, those companies are then consigned to a slow death (MySpace, anyone?). Sometimes, however, the first crisis weakens them, but they then become more resilient and bounce back to ride out future crises.

This is an era of organisational accelerated extinction

What is more, the ‘extinction rate’ for companies is becoming faster as society and technology changes more rapidly.

I think we all understand that small businesses come and go, but this lesson is true for large organisations as well. Of the top 25 companies on the US Fortune 500 in 1961, only six remained there in 2011.

Research carried out on Fortune 500 companies in the USA shows[2] that the turnover of large organisations is accelerating: the average tenure of a company on the list fell from around 35 years in 1965 to around 15 years in 1995.

If you think about how much the world has changed since 1995 when Facebook barely existed and Google just did search, you might agree with the idea that organisations that want to stick around need to adapt with the changing environment.

So give me the recipe!

Bad news: there isn’t a hard recipe for a resilient organisation, just as there isn’t one for a successful company. Resilient organisations do, however, seem to share some common attributes: agility, the ability to recover quickly from an event, awareness of their changing environment, and the willingness to evolve with it. This is difficult for a number of reasons.

1               Increasing connectedness. Interdependencies, such as just-in-time process management, make society and organisations more brittle; risks that have shown independence in the past may, in rare instances, become highly correlated.

2               Increasing speed of communication, which forces speedier decision making.

3               Increasing complexity, which compounds the effect of any variability in data and therefore the uncertainty for decision makers.

4               Biology. Organisations operate with an optimism bias[3]: almost all humans are more optimistic about their future than is statistically warranted. We plan for a future which is better than it turns out to be and do not correctly recognise the chances of outlier events. Additionally, we plan using (somewhat biased) rational thought, but respond to crises with our emotions.

5               Organisational inertia. The unwillingness to change organisational culture to adapt to a change in the environment.
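The first of these reasons, risks becoming correlated, can be illustrated with a toy redundancy calculation. The sketch below compares a triple-redundant system under the usual independence assumption with the same system once a small common-cause failure mode (a shared supplier, a shared network, a shared assumption) is admitted. All probabilities are illustrative assumptions, not data.

```python
# Toy illustration: redundancy helps enormously while failures are
# independent, but a small common-cause probability erodes most of the
# benefit, because one shared event takes out every component at once.

def system_failure_prob(p_component, n_redundant, p_common_cause=0.0):
    """Probability that all n redundant components fail, given a common-cause
    event (probability p_common_cause) that disables every component."""
    independent_all_fail = p_component ** n_redundant
    return p_common_cause + (1 - p_common_cause) * independent_all_fail

p = 0.01   # each component fails 1% of the time (illustrative)
n = 3      # triple redundancy

print(system_failure_prob(p, n))           # ≈ 1e-06: independence looks safe
print(system_failure_prob(p, n, 0.005))    # ≈ 0.005: common cause dominates
```

The point of the sketch is that the second number is driven almost entirely by the common-cause term: three nines of engineered redundancy are swamped by a half-percent shared-mode risk that the independence model simply cannot see.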

Organisational culture and resilience

When discussing culture, resilience is more a strategic management concern than a security protocol. In this sense, resilience is the ‘why’ to change management’s ‘how’, but both are focused on organisational culture.

Organisations, particularly large organisations, all have their own way of doing things. Organisational culture is built up because individuals within the organisation find reward in undertaking tasks in a certain way. This is the same whether we are talking about security culture or indeed financial practice. Organisational culture goes bad when the reward structure in the organisation encourages people to do things that are immoral or illegal.

Larger organisations have more inertia and so take longer to move from good to bad culture and vice versa. Generally most organisations that are larger than about 150[4] staff have a mix of cultures.

The more successful an organisation has been in the past, the more inertia there will be against change, and so it becomes susceptible to abrupt failure. Miller coined the term ‘Icarus Paradox‘ to describe this effect and wrote a book by the same name. In Greek myth, Icarus’s father Daedalus made them both wings of feathers and wax; Icarus died when he flew too close to the sun and the wax melted, causing the wings to fall apart.

The Kodak company is perhaps the best example of this. An organisation that had been very successful for more than 100 years (1880-2007), Kodak failed to make the transition from film to digital as fast as its competitors. The irony is that it was Kodak researchers who invented the first digital camera in the 1970s, thus sowing the seeds of the company’s doom forty years later.

Where does my organisation start on the path?

So what is the answer? How do we make sure that our organisations adapt faster than an environment that is changing more rapidly every time we look around? The only way is to begin adapting before crises arise. This requires making decisions with less than 100% certainty and taking risk. The alternative is to attempt to change after a crisis arises, which historically carries higher risk for organisations.

It is a combination of many things:

  • developing an organisational culture which recognises these attributes and which is supported and facilitated from the top of the organisation;
  • partnering with other organisations to increase knowledge and reach when an event comes; and
  • engaging in the debate and learning about best practice.

Are there two sorts of resilience?

But is resilience just one set of behaviours, or several? When we think of resilient organisations and communities, our minds tend to go to the brave community, people or organisation that rose up after a high consequence event and overcame adversity. These people and organisations persist in the face of natural and manmade threats. Examples include New York after the September 2001 attacks, Brisbane after the 2011 floods, and the communities affected by the 2004 Asian tsunami.

However, there is another set of actions which is in many ways more difficult to achieve: the capacity to mitigate high consequence, low likelihood events, or the creeping disaster, before a crisis is experienced. The US responded admirably to the 9/11 terrorist attacks after they had occurred, but as the 9/11 Commission Report notes, terrorists had attempted before to bring down the World Trade Center and had come quite close to succeeding.

Last Thoughts

Life achieves resilience through replication: many copies exist, so that if some fail, life continues. Individual creatures carry DNA, which is all that needs to be replicated, and those creatures compete with each other and the environment to become ever more efficient. An individual creature may or may not be resilient, but the DNA is almost immortal.

How an organisation achieves this is the challenge that every management team needs to address if they want to achieve longevity.

If you wish to discuss any of the issues in this whitepaper, please contact us



[1] noting that the word dinosaur is directly translated as terrible lizard

[2] http://www.kauffman.org/uploadedFiles/fortune_500_turnover.pdf

[4] Dunbar’s number

Complexity and organisational resilience

On the face of it, complex systems might seem more resilient than simple ones, because they can have more safeguards and more redundancy built in.

However, this is not supported by real-world observation. Simply put, more complexity means more things can go wrong. In both nature and human society, complex controls work well at maintaining systems within tight tolerances and in expected scenarios. However, complex systems do not work well when they have to respond to circumstances which fall outside their design parameters.

In the natural world, one place where complex systems fail is the immune system. Anaphylactic shock, where the body over-reacts because of an allergy to a food such as peanuts, is a good example. Peanuts are, of course, not pathogens; they are food, and the immune system should not react to them. However, our immune systems are made up of a number of complex systems built on top of each other over many millions of years of evolution, and one of these systems is particularly liable to overreact to peanuts. In the worst case this causes death through anaphylaxis: the release of chemicals which are meant to protect the body, but which do exactly the opposite. This is an example of a safety system becoming a vulnerability when it is engaged outside normal parameters.

We are beginning to see the resilience of complex systems such as the Great Barrier Reef severely tested by climate change. Researchers have found that the reef is made up of complex interactions between sea fauna and flora, built upon other complex interactions. This makes it nigh on impossible for researchers to find exact causes for particular effects, because the causes are so many and varied. Whilst the researchers can confidently say that climate change is having a negative effect on the coral, and that bleaching will become more common as the climate warms, they cannot say with great certainty how large other compounding effects, such as excess nutrients from farm runoff or the removal of particular fish species, might be. This is not a criticism of the science, but an observation that predicting the future with absolute certainty is extremely difficult when there are multiple complex factors at play.

These natural systems are what some might call ‘robust yet fragile’. Within their design parameters they are strong and have longevity. Such systems tend to be good at dealing with anticipated events such as cyclones in the case of the Great Barrier Reef. However, when presented with particular challenges outside the standard model, they can fail.

Social systems and machines are not immune from the vulnerabilities that complexity introduces, and they too can be strong in some ways and brittle in others.

The troubles with the global financial system are a good example. Banking has become very complex, and banking regulation has kept pace with this trend. That might seem logical, but the complex rules may themselves be causing people to calibrate the financial system to meet the rules, focussing on the administrivia of their fine print rather than the broad aims the rules were trying to achieve. One important set of banking regulations is the Basel regulations: Basel I was 30 pages long, Basel II was 347 pages and Basel III runs to 616 pages. One estimate by McKinsey says that compliance for a mid-sized bank might cost as much as 200 jobs. If a bank needs to employ 200 people to cope with increased regulation, then the regulator will need more employees to keep up with the banks producing more regulatory reports, and so the merry-go-round begins!

Andrew Haldane, a British banking regulator, is now one of a number of people who question whether this has gone too far and whether banks and banking regulation have become too complex to understand. In an interesting talk he gave in 2012 at Jackson Hole, Wyoming, USA, titled 'The Dog and the Frisbee', Haldane uses the analogy of a dog catching a frisbee to suggest that there are hard ways and easy ways to work out how to catch one. The hard way involves some complex physics; the easy way involves the simple rules that dogs use. Haldane points out that dogs are, in general, better at catching frisbees than physicists! I would also suggest that the chances of predicting outlier events, what Nassim Nicholas Taleb calls 'Black Swans', are greater using the simple predictive model.
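The dog's simple rule can even be sketched in a few lines of code. The toy simulation below (all numbers invented, drag and lift ignored, so it illustrates the idea rather than modelling real frisbee flight) compares the 'hard way', solving the projectile physics for the landing point, with the 'easy way': a dog that knows no physics and simply moves so that its angle of gaze to the frisbee stays constant.

```python
import math

# Toy illustration of Haldane's point: the "hard way" to catch a frisbee
# solves the projectile physics; the "easy way" is a simple rule, the
# gaze heuristic -- move so the angle of gaze to the frisbee stays constant.

G, DT = 9.81, 0.01  # gravity (m/s^2) and simulation timestep (s)

def physics_landing_x(x0, z0, vx, vz):
    """Hard way: closed-form ballistic landing point."""
    t_land = (vz + math.sqrt(vz ** 2 + 2 * G * z0)) / G
    return x0 + vx * t_land

def gaze_heuristic_chase(x0, z0, vx, vz, dog_x, dog_speed=20.0):
    """Easy way: the dog knows no physics. Each timestep it steps toward
    or away from the frisbee so that its gaze angle stays constant.
    Assumes the dog starts beyond the eventual landing point."""
    x, z, vz_now = x0, z0, vz
    theta0 = math.atan2(z0, dog_x - x0)       # gaze angle to hold
    while z > 0:
        x += vx * DT                          # frisbee flies on
        vz_now -= G * DT
        z += vz_now * DT
        theta = math.atan2(z, dog_x - x)      # current gaze angle
        # Angle rising -> the frisbee will overshoot: back away.
        # Angle falling -> it will land short: close the gap.
        dog_x += dog_speed * DT if theta > theta0 else -dog_speed * DT
    return dog_x

landing = physics_landing_x(0.0, 4.0, 8.0, 0.0)  # frisbee released 4 m up
dog_end = gaze_heuristic_chase(0.0, 4.0, 8.0, 0.0, dog_x=15.0)
print(f"physics says {landing:.1f} m, the dog ends up at {dog_end:.1f} m")
```

Run it and the dog finishes close to where the physics says the frisbee lands, even though its rule contains no model of the flight at all. That is the appeal of simple rules in complex environments: they degrade gracefully when the assumptions behind the complex model break down.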

This is in some ways a challenge to the traditional thinking behind risk modelling. When I did my risk course, it was all very formulaic: list threats, list vulnerabilities and consequences, discuss tolerance for risk, develop controls, monitor, and so on. I naively thought that risk assessment would save the world. But it can't. Simple risk management just can't work in a complex system. Firstly, it is impossible to identify all risks. To (mis)quote Donald Rumsfeld, there are known risks, unknown risks, risks that we know we have but can't quantify, and unknown risks that we can neither quantify nor know about.

Added to this is the complex interaction between risks, and the observation that elements of complex systems under stress can completely change their function (for better or worse). An analogy might be one city under stress spontaneously finding that its citizens begin looting homes, while another intensifies its neighbourhood watch program.

Thus risk assessment of complex systems is in itself risky. In addition, the aim of the traditional risk model is homeostasis: the model responds to each raindrop-sized problem, correcting the system minutely so that there are minimal shocks and the system can run as efficiently as possible. A resilience approach might instead develop ways to present the system/organisation/community with minor shocks, in the hope that when the black swan event arrives, the system has learnt to cope with at least some 'off-white' events!

Societies are also becoming more complex. There are more interconnected yet separately functioning parts of a community than there were in the past. This brings efficiency and speed to the way things are done when everything is working well. However, when there is a crisis, there are more points of failure. If community B is used to coping without electricity for several hours a day, it develops ways to adapt over months and years. If that community then finds it has no power for a week, it is better prepared to cope than community A, which has always been able to depend on reliable power. Community B is less efficient than community A, but it is also less brittle.

This does, however, illustrate a foible of humanity. Humans have evolved to be generally good at coping with crises (some better than others), but they are not good at dealing with creeping catastrophes such as climate change or systemic problems in the banking and finance sector.

Most people see these things as problems, but think that the problems are so far away that they can be left whilst other more pressing needs are dealt with.

Sometimes you just need a good crisis to get on and fix long-term complex problems. Just hope the crisis isn’t too big.

Video Donald Rumsfeld – Known Knowns

Visualising organisational resilience


I’ve been trying to summarise organisational resilience into a form that can be visualised for some of the people who I’m working with. The key has been to summarise the thinking on resilience as succinctly as possible.

Apart from the diagram you can see, the text below attempts to give concise answers to the following questions:

  1. What is it (Resilience)?
  2. Why should my organisation care about resilience?
  3. Why is detailed planning not working anymore (if it ever did)?
  4. What’s the recipe for resilience?
  5. How does an organisation develop these characteristics?
  6. Resilience before and after (a crisis)
  7. How does nature do resilience?

 

Resilience in a mindmap

Visualising resilience is itself an exercise in complexity

The diagram is best viewed at A3 size, so you can download a PDF version here: resilience in a mindmap PDF

Let me take you on a journey …

What is it?

Resilience is about the ability to adapt for the future and to survive, whether for an organisation, a country or an individual.
What sometimes seems forgotten is that the adaptation is best done before a crisis!
Here, resilience is more an organisational strategic management approach than a security protocol. In this sense, resilience is the 'why' to change management's 'how'.

Why should my organisation care about resilience?

Research shows that the average lifespan of large organisations is shrinking, from around 35 years in 1965 to around 15 years in 1995. Organisations that want to stick around need to adapt to their changing environment.

Organisations know that they need to change to survive, but today's urgency overrides the vague need to do something about a long-term problem. For this reason, crises can be the catalyst for change.

Resilience is about dealing with organisational inertia, because the environment will change. The more successful an organisation has been in the past, the more difficult change becomes, and so the organisation becomes susceptible to abrupt failure. Miller coined the term 'Icarus Paradox' to describe the effect, and wrote a book by the same name. In the Greek myth, Icarus's father Daedalus made them both wings of feathers and wax; Icarus died when he flew too close to the sun, the wax melted and the wings fell apart.

Eastman Kodak is possibly the best example of this trait: an organisation that was very successful between 1880 and 2007, Kodak failed to make the transition to digital and to move out of film fast enough.

Why is detailed planning not working?

Simply put, the world is too complex and outliers are becoming more common:

  1. Increasing connectedness – interdependencies lead to increasing brittleness of societies and organisations (think just-in-time process management), and risks may, in rare instances, become highly correlated even if they have shown independence in the past.
  2. The speed of communication forces speedier decision-making.
  3. Increasing complexity compounds the effect of any variability in data, and therefore the uncertainty for decision-makers.
  4. Biology – we build systems with an optimism bias. Almost all humans are more optimistic about their future than is statistically possible. We plan for a future which is better than it will be and do not correctly recognise the chances of outlier events. Additionally, we plan using (somewhat biased) rational thought, but respond to crises with our emotions.

So if

  • we can't predict the outlier events, and
  • this makes most strategy less useful – especially strategy which is written down and gathers dust without being lived,

maybe we can be more resilient when we run into the outliers – what Taleb calls Black Swans, in the book of the same name.

Taleb’s book is available from Book Depository and is well worth the read, even if he can’t help repeating himself and dropping hints about fabulous wealth.

What’s the recipe for resilience?

Bad news: there isn't a hard recipe for a resilient organisation, just as there isn't one for a successful company, but resilient organisations all seem to share some common attributes, amongst others:

  • agility and the ability to recover quickly from an event; and
  • an awareness of their changing environment and a willingness to evolve with it.

How does an organisation develop these characteristics?

It is a combination of many things –

  • developing an organisational culture which recognises these attributes, supported and facilitated from the top of the organisation;
  • partnering with other organisations to increase knowledge and reach when an event comes; and
  • lastly, engaging in the debate and learning about best practice.

 Resilience before and after (a crisis)

But is resilience just one set of behaviours, or several? When we think of resilient organisations and communities, our minds tend to go to the brave community / people / organisation that rose up after a high-consequence event and overcame adversity. These people and organisations persist in the face of natural and man-made threats. Examples include New York after the September 2001 attacks, Brisbane after the floods in 2011, and the communities affected by the Asian tsunami in 2004.

However, there is another set of actions which is in many ways more difficult to achieve: the capacity to mitigate high-consequence, low-likelihood events, or the creeping disaster, before a crisis is experienced. The US behaved admirably in responding to the 9/11 terrorist disaster after it had occurred, but as the 9/11 Commission Report notes, terrorists had attempted before to bring down the World Trade Center and had come quite close to succeeding.

In this thought may be one of the best arguments for blue-sky research. Serendipity – wandering through the universe with your eyes open to observe what's happening around you, rather than head down and focussed on only one task – is this the secret to innovation?

How does nature do resilience ?

Life is resilient in that it replicates wildly, so that many copies exist and, if some number fail, life can continue. Individual creatures carry DNA, which is all that needs to be replicated. Those creatures compete with each other and with the environment to become more and more efficient. An individual creature may or may not be resilient, but the DNA is almost immortal.

How an organisation achieves this is the challenge that every management team needs to address. Over the next few posts I will expand on this.

😉
