Behavioural Economics as a tool for enterprise security

Have you ever wondered why on your electricity bill there is a representation of your household’s usage against the average 2, 3 or 4-person household telling you whether you are over or under? How does it make you feel?

The term behavioural economics has been around for maybe two decades. The marketing profession has been using the techniques it describes for even longer to get you to buy their brand. However, the use of behavioural economics as a tool for enterprise security is just emerging.

It is time for security professionals to start using these techniques to help protect organisations, not just to influence people to buy a particular soap or car, or to follow a particular sporting code.

What is behavioural economics?

Behavioural economics looks at the relationship between the decisions that we make and the psychological and social factors that influence them. A significant amount of study in this area has been on people’s economic decisions, but the tools and techniques that have been tested can be applied in many other contexts.

Daniel Kahneman and his late research partner Amos Tversky are the two research psychologists most associated with behavioural economics. In 2002, Kahneman shared the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel for this work. Kahneman’s 2011 book “Thinking, Fast and Slow” explains many of the concepts in accessible terms. Kahneman and Tversky built on earlier studies that undermined an idea that now sounds quaint: that humans, at the population or large-group level, act entirely rationally. Even so, this idea was at the heart of much classical economic thinking.

At first glance this might not seem related to enterprise security. However, once you consider that the premise of behavioural economics is that people do not always make entirely rational decisions, the connection becomes clear. In addition, the idea that small (and sometimes even intangible) incentives and disincentives can be used to guide individual actions on a large scale is also very important. It is this second aspect which is of greatest use to the enterprise security practitioner.

Behaviour is at the heart of enterprise security, because people are every organisation’s greatest asset and often also their greatest risk. At its simplest, the key aim of good enterprise security is ensuring that individuals are encouraged to make the right decisions that benefit their organisation.

Behavioural economics works by assuming that in many cases, people making the ‘wrong’ decision within an organisation do so because they have imperfect information or lack the right incentives or disincentives.

Psychologists have also found that people often exhibit a strong inclination to conform to social norms, and those norms change with the social groups that we participate in. Essentially, we often do things because our friends, colleagues, or those we admire, do. Our friends and colleagues provide us with informational social influence or social proof. In plain English, we like to follow our herd and keep up with the Joneses.

Curiously though, we seem to struggle more with changing our minds than with coming to a decision in the first place. The idea that when the facts change, people change their minds is a bit tricky for many. Relatedly, researchers from Harvard Business School have claimed that we tend to think we are more moral than we actually are and inhabit an “ethical mirage”. This can mean there’s a disconnect between how we describe our decisions and how we actually behave. If we accept this somewhat unflattering portrait of human behaviour, it means that once we’ve made a decision, we tend to take a position that justifies our actions, whatever they were. And we want more justification to change our minds than we needed to come to the decision in the first place!

But what if we could get people to make the ‘right’ decision in the first place? Then they wouldn’t have to justify wrong decisions. This is where the research findings of behavioural economics are being tested at organisational and national scale.

Behavioural economics concepts are being applied at the public policy level by governments wanting to encourage certain behaviour without going to the expense of legislating compliance. It is expensive to make something illegal. Sometimes it is absolutely necessary (murder, for example), but society then has to create enforcement systems, pay the enforcers, and work out who watches the watchers. Some enlightened government agencies are experimenting with behavioural economics to achieve high levels of compliance.

In the UK, and latterly also in Australia, the tax authorities have been attempting to use behavioural economics techniques. So-called ‘nudge units’ have been set up to coax people to do their taxes by using social proof methods. Informing taxpayers who are late paying that “90% of people pay their taxes on time” increases the rate of taxpayer compliance. This achieves the policy objective of getting timely tax payments, but does it in a way that won’t generate negative headlines. This in turn allows the tax agency to focus on individuals who are intentionally breaking the law, rather than those who were late because life got in the way.

Another recent example has been the introduction of the “No Jab, No Pay” policy by the Australian Government where parents do not get all their family tax benefits unless they are willing to vaccinate their children. Rather than making it illegal for children to remain unvaccinated, the government has incentivised parents to vaccinate. This, added to significant social pressure from almost all the medical community, means that Australia’s childhood vaccination rates are generally very high and we see fewer distressing pictures of children with whooping cough around the country.

One interesting way that companies are using social proof is in encouraging households to save water and electricity. Increasingly, utility bills show householders where they stand in comparison to their suburb in terms of water or electricity use. The householder can then consider whether they want to moderate their behaviour. Literally, keeping up with the Joneses!
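The mechanics of such a comparison message are simple, but a sketch makes the nudge concrete. The function name and figures below are invented for illustration; they are not taken from any utility’s actual billing system.

```python
# Illustrative sketch of a social-proof nudge on a utility bill:
# compare a household's usage to the suburb average and phrase the
# result as a comparison with the neighbours. Purely hypothetical.

def usage_nudge(household_kwh: float, suburb_avg_kwh: float) -> str:
    """Return a social-proof message comparing usage to the suburb average."""
    ratio = household_kwh / suburb_avg_kwh
    if ratio <= 1.0:
        return (f"Great work: you used {household_kwh:.0f} kWh, "
                f"{(1 - ratio) * 100:.0f}% less than the average home in your suburb.")
    return (f"You used {household_kwh:.0f} kWh, "
            f"{(ratio - 1) * 100:.0f}% more than the average home in your suburb.")

print(usage_nudge(450, 500))  # under the average
print(usage_nudge(600, 500))  # over the average
```

Note that the message never mentions price: the lever is purely the comparison with the herd.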

Marketing firms use many behavioural economics techniques to encourage us to use particular products. Many of us take advantage of airline frequent flyer programs that reward members for the flights they take. The extremely successful travel website Tripadvisor awards points to its users for the travel reviews that they produce. However, Tripadvisor points have absolutely no dollar value. They are valuable only as social proof to that community that a member is a well-seasoned traveller. You may have realised that most social media operates in a similar way.

Why should enterprise security professionals consider using behavioural economics in their organisation?

It is expensive and time consuming to maintain rules for the increasingly complex environment that organisations operate in. Rules are difficult to write well and often only work in limited circumstances. The more detail, the more exceptions need to be built. Quite often rules also create a culture where individuals only follow the letter, not the spirit of the rules. This can contribute to the creation of a workplace which is not adaptable and where security is blamed for the problems of the organisation.

This can lead to situations where workers sometimes choose to circumvent organisational rules in order to achieve local goals. A worker might shortcut a process to ensure that their team are able to complete it faster. The individual might rationalise this as being good for their company in that the job is completed faster and good for themselves in that they can go home earlier. However, the decision that they have rationally come to might be the ‘wrong’ decision from the perspective of their organisation. The shortcuts that have been introduced may decrease organisational security.

How do organisations change this? By changing the decision equation the worker faces at the moment of choice. This is very much the place of behavioural economics in enterprise security. Organisational messaging which demonstrates the social norms of the organisation from a security perspective is vital. So too are tools and procedures which endeavour, where possible, to make the secure decision the easiest one to make.

In many ways the decision is very much linked to the ‘security culture’ of the organisation. The security culture is effectively the customs and practices of the organisation for whom the individual works.

Organisations are increasingly moving to principles and risk based frameworks in many areas including security because they find the sheer complexity of business overwhelming otherwise. This was one of the main drivers for the creation of the Australian Government’s Protective Security Policy Framework. The PSPF tries to get government agencies to focus on their security outcomes, rather than on process.

 

Brain scan of white matter fibers, brainstem and above.  Laboratory of Neuro Imaging at UCLA and Martinos Center for Biomedical Imaging at MGH / www.humanconnectomeproject.org
Enterprise security professionals should be asking where they can apply these behavioural economics techniques in their organisations. The possibilities are many and varied. One financial institution, for example, has used behavioural economics to give staff nudges regarding personnel security: it improved reporting of changes of circumstances by giving staff the simple message that “most people in our organisation report their change of personal circumstances within four weeks”.

In the government space, there has been debate about whether it is possible to create an ‘information classification market’ which balances the need to classify information appropriately against the costs to organisations of over-classification, in terms of long-term storage and devaluation of security markings. Such a market could work by incentivising managers to ensure that staff classify information as accurately as possible. As always, the trick would be to ensure that the incentives matched the risk profile of the organisation.

Every organisation is different and so are the opportunities for using these techniques to improve your enterprise security.

For more information:

http://www.nobelprize.org/nobel_prizes/economic-sciences/laureates/2002/advanced-economicsciences2002.pdf

http://theconversation.com/the-potential-of-behavioural-economics-beyond-the-nudge-43535

http://www.immunise.health.gov.au/internet/immunise/publishing.nsf/Content/clinical-updates-and-news/$File/Update-No-Jab-No-Pay-Immunisation-Catch-Up-Arrangements(D15-1126865).pdf

www.hbs.edu/faculty/Publication%20Files/08-012.pdf

https://en.wikipedia.org/wiki/Social_proof

 

An earlier version of this article appeared in the 100th edition of Security Solutions magazine: http://www.securitysolutionsmagazine.biz/

Security Professionalisation in Australasia


Security Professionalisation is an issue that all who are involved or care about societal resilience should be concerned about. I’ve just written an article for Security Solutions Magazine talking about the efforts that a new organisation, Security Professionals Australasia (SPA) is undertaking to work with the security industry and governments to improve the state of affairs.

The article has been published in the latest edition of Security Solutions Magazine (Nov/Dec 2015), which is available at http://www.securitysolutionsmagazine.biz/

(Disclosure of interest: Alex Webling is a member of SPA)

Complexity and Resilience in an Information Centric World

Complexity and Resilience

How do organisations develop resilience in the complex environment that is the 21st century information centric world?

The lifeblood of the modern organisation is information. Every organisation, from small business to government department depends on information being passed to the right place at the right time.

Organisations and society are becoming more complex, but that doesn’t mean that they are more resilient. Complexity and resilience are more often enemies than friends!

Complex Organisations in the 21st Century

The opportunities posed by increased information flows are enormous.

Information is being gathered, stored and manipulated in larger quantities, at higher speeds, and analysed in more detail by organisations and society, with the aim of driving greater efficiencies and providing new and improved services. The information revolution allows organisations to become larger and more complex, and to develop more complex systems and processes to support their organisational models.

[Image: the $1 billion Carbanak hack]

The threats are also enormous

But the opportunity to become larger and therefore more complex often comes with a downside for organisational resilience and longevity. Complex systems are prone to catastrophic failure as small problems cascade and become enormous.

Leaked or lost information is damaging organisations. Organisations are struggling to cope, and governments are struggling to keep their own data secure. In other cases, too little information is passed to the places that need it. Organisational strategy is a delicate balancing act!

Survival and resilience

Why do organisations fail? Organisations are by definition self-organising systems, and when a self-organising system loses the capacity to self-organise, it is dead. Broadly, the story is similar for each one: the organisation was unable to adapt to the business environment before it ran out of resources. The end is often brought about by an acute event, but in many ways such an event is really just the final straw that breaks the camel’s back.

The Australian Government’s resilience strategy shows Australia’s leadership in resilience thinking. It identifies four options for an organisation:

  • Decline;
  • Survive;
  • Bounce Back;
  • Bounce Forward. 

However, in practice I think this may be too gentle. Taken over the longer term, organisations either live or die; there is no middle ground. Organisations that survive crises are able to do so for two reasons:

  1. They have the resources (capital, personnel, leadership, etc.) to manage themselves out of a crisis once it hits, emerging weaker but alive; or
  2. They are prepared to adapt if a crisis arises and have developed a broad set of principles which will work with minimal change in most eventualities.

It is this second group which are truly resilient and survive long term. They still suffer from crises, but emerge stronger over the long term as they adapt to their new environment.

ICT is a two edged sword in the quest for resilience

As organisations become more complex, they rely more and more on information technology and systems to help them understand themselves and their environment. Organisations can become more efficient as a result. However, most organisations do not have control of their ICT infrastructure, and it is increasingly difficult to understand how information flows within an organisation. It is also important to realise that efficiency and resilience are not the same. In fact, some efficiency practices may increase organisational fragility.

Are the tools that organisations are using to try to understand their own organisations becoming in themselves part of the problem?

Possibly, though the deeper issue is complexity. There are a number of other factors:

Speed of change

The speed at which societies change is accelerating as technology advances. This means that organisations need to be able to adapt faster in order to keep up.

Interdependence

Organisations are more interdependent than ever, and the trend will continue. Countries, too, are more interdependent than ever. During the Cold War, sanctions didn’t affect Russia nearly as much as they do now. This is positive from a global political perspective: no country can survive without others, not even the USA or China, and it is even forcing Iran to make compromises. In some ways this trade interdependency may be an alternative to the Mutually Assured Destruction (MAD) that nuclear weapons threatened for the USA and Russia during the Cold War.

However, interdependency inherently leads to complexity, and that is not a characteristic of resilience. Most organisations are increasingly dependent on long supply chains for materials and services, meaning that failure at one end of the supply chain can be expensive or time-consuming at the other. On the other hand, international supply chains are extremely reliable … until they aren’t.

Everyone’s your neighbour

Because everyone is connected, organisations can get closer to their customers and suppliers via the Internet. At the same time, criminals and competitors are able to get closer to their target organisations as well.

Some organisations have been struggling. Sony Corporation is one of the most prominent, but it is by no means the only one.

[Image: Sony hacked again. From http://blogs.umb.edu/itnews/2015/01/06/the-sony-hack/]


Affecting organisational longevity?

The evidence seems to be showing that organisational longevity is being reduced by a number of factors, not least the ones I’ve written about above.

This graph produced by Innosight plots the average company lifespan on the USA Standard and Poor’s company index from 1958 to 2012 and extrapolates this out to 2030.

[Graph: average company lifespan on the S&P 500 (Innosight)]

US corporations in the S&P 500 in 1958 remained in the index for an average of 61 years. By 1980, the average tenure was 25 years. By 2011, it had been cut to 18 years. In other words, the churn rate of companies has been accelerating over the last century. On average, one S&P 500 company drops off the index every two weeks! In total, 23 companies were removed from the S&P in 2011, due either to:

  • declines in market value – e.g. Radio Shack’s stock no longer qualified in June 2011; or
  • acquisition – e.g. National Semiconductor was bought by Texas Instruments in September 2011.

At the current churn rate, 75% of the S&P 500 organisations that were on the index in 2011 will no longer be there in 2027.
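The churn figures above can be sanity-checked with simple arithmetic. Assuming an index of 500 companies and the quoted average tenure of 18 years:

```python
# Back-of-envelope check of the churn figures quoted above.
index_size = 500          # companies in the S&P 500
avg_tenure_years = 18     # average tenure as of 2011

turnover_per_year = index_size / avg_tenure_years  # companies replaced per year
weeks_between_exits = 52 / turnover_per_year       # average weeks between departures

print(f"{turnover_per_year:.1f} companies leave per year")
print(f"one departure roughly every {weeks_between_exits:.1f} weeks")
```

This gives roughly 28 departures per year, or one every two weeks or so, consistent with both the “every two weeks” claim and the 23 removals observed in 2011.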

The flaws in simple risk

Risk assessment loses specificity with complexity. That is, the larger and more complex the organisation, the less accurate the risk assessment can be. This is also true when we think about societal risks.

An organisation’s overall risk is greater than the sum of its parts.

It is hubris to think that an organisation or society can know all its risks. There will be risks faced by an organisation that are either unknown, unquantifiable or both. Moreover:

  • The organisational environment continues to change rapidly. This means that risk owners (i.e. company boards) have less time for consideration, and risk assessments need to adapt to the changing circumstances.
  • Perception bias is a significant problem. Gardner talks about bounded rationality in risk – suffice to say we downplay the risk of things that we think we understand. Taleb argued in The Black Swan that people focus on the simple things they can understand.

In a complex organisation, people tend to focus on problems in parts of the organisation, rather than the organisation as a whole.

Different risk events

We see these issues playing out in different events that affect organisations, whether it is an

acute failure

such as the
– Deepwater Horizon oil spill, which may yet cause BP’s demise, but seems to have been caused by a failure in the relationship with its drilling contractor, Halliburton; or the
– Target (USA) hack, which saw tens of millions of credit cards stolen due to weaknesses in service provider security;

or a chronic failure

such as Kodak’s failure over decades to manage the transition to digital imaging, despite the fact that its own researchers had discovered the technologies in the 1970s.

A resilient approach

Resilience is the capacity for complex systems to survive, adapt, evolve and grow in the face of turbulent change. Resilient enterprises are risk intelligent, flexible and agile.
(Adapted from www.compete.org)

A ‘resilience approach’ does not ignore risk assessment and management; it builds upon them to address weaknesses in dealing with unknowns (known and unknown) and perception bias. In particular, it addresses the ‘high consequence, low likelihood’ events – the black swans – that sit untreated at the bottom of any risk assessment, or fall off the bottom because nobody wants to think about them, or are not acute but sit in the chronic, creeping ‘must deal with it sometime’ category. Worse still, they may be completely unknown.

A resilience approach allows enterprises to put in place mechanisms to ‘deal with the gaps’ in the risk approach – those things that have been missed or underestimated.

As the world becomes more complex, and organisations become more complex themselves, a resilience approach is the only option.

The resilient organisation

It develops organisational adaptability – a culture of making things work in spite of adversity. This creates a capacity to deal with adverse events and to adapt to the rapid onset of shocks. It also analyses adversity to see whether improvements can be made from it.

Organisations look for mitigations that are able to treat a range of threats, because these techniques are likely to be more adaptable than highly specialised methodologies.

Testing – Organisations test systems to breaking point and beyond in the most realistic scenarios possible.

Resilience from Chaos (Monkey)

An example of testing to breaking point in a real environment is the ‘chaos monkey’ tool developed by Netflix. This application randomly turns off parts of the Netflix production environment, simulating the failure of different parts of their infrastructure. It is set to do this only during working hours, when engineers are around to respond. In this way, the system is tested in the best manner possible short of the real thing.
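The core idea can be sketched in a few lines. This is not Netflix’s actual code; the function, instance names and working-hours window below are invented for illustration, and a real agent would call a cloud provider’s API to terminate the chosen instance.

```python
# Toy sketch of the 'chaos monkey' idea described above -- NOT the real tool.
# During working hours, pick one running instance at random to terminate;
# outside working hours, do nothing so engineers aren't paged at 3am.
import random
from datetime import datetime

WORKING_HOURS = range(9, 17)  # hypothetical 9am-5pm window

def pick_victim(instances, now, rng=random):
    """Return one instance to terminate, or None outside working hours."""
    if now.hour not in WORKING_HOURS or not instances:
        return None
    return rng.choice(instances)

# Mid-morning: the monkey picks a victim from the fleet.
print(pick_victim(["web-1", "web-2", "api-1"], datetime(2015, 6, 1, 10, 30)))

# 3am: the monkey stays asleep.
print(pick_victim(["web-1"], datetime(2015, 6, 1, 3, 0)))  # None
```

The design choice worth noting is the working-hours guard: the point is not to cause outages, but to cause survivable failures while the people who can learn from them are watching.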

Chaos Monkey Released Into the Wild


This post is based on a presentation I gave in Singapore. Here are my slides


Resilience Outcomes would like to acknowledge the assistance of Emirates Airlines for getting Alex to and from Singapore in great comfort.

Security Standards are important

Security Standards are vital to our society

That’s why Alex Webling has accepted a nomination to join the Australian Standards Committee for Security Standards and to join the Australian Delegation to ISO TC292, Morioka, Japan in March 2015.

We congratulate Alex on this recognition of his security knowledge and expertise, particularly in the areas of enterprise security and resilience, and his work in the Australasian Council of Security Professionals and its successor, Security Professionals Australasia.

The Technical Committee will have the following provisional title and scope:

Title: Security

Scope: Standardization in the field of security, including but not limited to general security management, business continuity management, resilience and emergency management, fraud countermeasures and controls, security services, and homeland security.
Excluded: Sector specific security projects developed in other relevant ISO committees and standards developed in ISO/TC 262 and ISO/PC 278.
The committee’s temporary structure covers the following areas:

ISO/TC 223/WG 1 – Framework standard on societal security management
ISO/TC 223/WG 2 – Terminology
ISO/TC 223/WG 3 – Emergency management
ISO/TC 223/WG 4 – Resilience and continuity
ISO/TC 223/WG 6 – Mass evacuation
ISO/TC 223/AHG – Professional development
ISO/TC 223/AHG – Information exchange
ISO/TC 223/AHG – Continuity management
ISO/TC 223/AHG – Revision of ISO 22320
ISO/TC 223 TF – Task force on strategic dialogue
ISO/TC 223/AHG 4 – Communication group
ISO/TC 223 DCCG, Developing countries contact group
ISO/TC 247/WG 1 – MSS for security assurance
ISO/TC 247/WG 2 – Terminology
ISO/TC 247/WG 3 – Guidelines for interoperable object and related authentication systems to deter counterfeiting and illicit trade
ISO/TC 247/WG 4 – Product Fraud Countermeasures and Controls
ISO/TC 247/WG 5 – Document Fraud Countermeasures and Controls
ISO/PC 284/WG 1 – Management system for private security operations – Requirements with guidance

—-
We also wish to thank IAPPANZ and the Attorney-General’s Department for supporting Alex’s nomination.

Sydney Siege

The siege in a chocolate shop in Sydney’s CBD ended early this morning AEST. Three people died, including the gunman, purported to be Haron Monis.

There will necessarily be intense scrutiny on the forces used to resolve a violent event. However, it is important to remember that they do not happen in isolation.

The factors that lead to these events are always complex and often have geo-political, sociological and psychological underpinnings. In this case, the gunman was a convicted criminal and seems to have latched on to the idea of violent jihad to justify his own failings.

This is the time for cool heads. It is far more effective and efficient to invest in efforts which counter radicalism before it descends into violence. To that end, we should remember the quiet work of those who enfranchise the disenfranchised and seek to strengthen social cohesion.

It is these people who make our way of life so great.

Governments at all levels must lead in these efforts. Politicians must remember, whatever their political colour, that radicalism is a complex societal issue, not a sound bite. Otherwise we descend into barbarism.

As a society, we must remember that the work of all members of civil society needs to be focussed on countering radicalism.

This event received so much coverage precisely because it is uncommon in Australia

Just remember that the reason this event received so much coverage in the media is precisely because it is so rare. And of course, it happened across the road from the HQ of one of the big Australian TV channels.

Yet at the same time, across the world, six people died, one was wounded, and the gunman escaped in a shooting in Philadelphia. In that case, it seems the gunman was a mentally disturbed ex-soldier.

Yet, although it was reported, multiple shootings are depressingly common in the US. They are even more common in parts of Africa, and often the reports don’t even make it beyond the local news.

It all comes back to risk and societal resilience, because when citizens are allowed to panic, governments start using extreme measures in our names. Professionalism in risk and security is about understanding the difference between perception and reality and taking an evidence based approach to dealing with the issues.

More information

http://www.abc.net.au/news/2014-12-15/sydney-siege-hostages-cafe-martin-place-police-operation/5967232

http://www.nbcphiladelphia.com/news/local/Lansdale-Shooting-285800521.html

http://www.nytimes.com/2014/12/15/us/politics/cheney-senate-report-on-torture.html?_r=0

http://link.springer.com/search?facet-author=%22Roy+Gardner%22

Trusted Insider cont.

Trusted Insider continued

Part 2 of 2, talking about the trusted insider and how organisations can address the problem at an organisational level.

In part 1, we talked about who trusted insiders are, why organisations are concerned, and what the motivations of the trusted insider are. Part 1 is here – https://www.resilienceoutcomes.com/identity/trusted-insider/

In this part, we talk about some approaches to the trusted insider problem.

Organisations are asking “How can we stop employees becoming the next Edward Snowden?”

I think the question we should ask is why there aren’t more people like Edward Snowden. It is worth noting that the NSA is huge, with an unconfirmed staff count in the order of 30,000–40,000. One or even ten ‘rogue insiders’ is, as a percentage, very small – even though the damage to the USA and its allies has been very significant.

Organisations, including intelligence organisations, develop very rigorous and reliable procedures to ensure that people who shouldn’t be trusted don’t join them. Good recruitment practices, which exclude people who won’t fit and don’t let them become insiders in the first place, are the best defence. However, one of the hardest issues is dealing with people who gradually become disgruntled after they’ve been working in an organisation for a while.

Of course, organisations can use infosec procedures such as internal surveillance mechanisms and information compartmentalisation. These can reduce the consequences wrought by trusted insiders. However, these mechanisms can inhibit the rest of the employee body from working at their full potential, and can affect staff morale if not carefully marketed. Interestingly, SIG attendees were told that the Attorney-General’s Department was considering the possibility of a continuous disclosure regime for security clearances, which would in real or near-real time provide information to security officials about whether employees were undertaking activities that might raise eyebrows.

A Sharing economy model?

An organisational ‘sharing economy’ model might help when considering the trusted insider threat. The employee/employer relationship is one of mutual benefit. It can also be one of mutual harm.


Employees work for their organisation and their identity becomes entwined in the reputation and identity of that organisation. As mentioned previously, the trusted insider that does the wrong thing by their organisation does so for a number of reasons. The most dangerous reason has always been those who are motivated not by money or greed, but by a grievance or revenge.

If we extrapolate using the NSA/Snowden example: the NSA has built up an impressive reputation over many years for technical excellence, but maybe some of its employees believed their employer’s propaganda. More importantly, it would seem that NSA’s management failed to make clear to their employees that intelligence agencies live in a grey world and do things that are morally grey. Consequently, people working inside the NSA seem to have been surprised when they found that some of the things it was doing were dark. Unfortunately for the NSA, brilliant people became disillusioned and turned against it.

This explanation is probably not the whole answer. However a couple of thoughts arise both of which may help to prevent future events:

  • Is it possible to develop an internal organisational market for the reputation of the organisation?
  • A meaningful alternative chain of reporting, allowing people to vent frustrations, is vital.

A market of organisational reputation

Many private and public organisations spend significant sums to monitor their public relations posture. There is benefit in understanding what the organisation thinks about itself as well. An anonymous reporting mechanism can allow an organisation to get some information about whether it is ‘on the nose’. Such data might also be combined with metrics such as the number of relevant social media postings.


An alternative chain of reporting

Both the USA and Australia now have whistle-blower mechanisms for their intelligence services. In Australia, the Inspector-General of Intelligence and Security performs this role.

Many organisations, in both the private and public sectors, could consider the benefits of adopting aspects of this system. It obviously doesn’t work perfectly, but it certainly contributes to protecting the intelligence agencies from trusted insiders.

Mr Snowden has claimed that “he had raised alarms at multiple levels about the NSA’s broad collection of phone, email and Internet connections.” However, this is disputed by the USA. Whatever the truth of the matter, it seems that Snowden felt he wasn’t being listened to. So maybe the take-home from this aspect is that the ‘alternate chain’ of reporting needs real teeth to force change where genuine problems are identified. Balancing natural justice against the consequences of a breach is incredibly important, not only for the individual concerned but for the organisation itself, because people in organisations gossip about each other!


This is of course a governance issue, which makes it very tricky to get right. This is where Resilience Outcomes Australia can help your organisation, because resilience and longevity of organisations is what we do.

Further reading:

Managing the insider threat to your business – a personnel security handbook (PDF) from the Australian Attorney-General’s Department is a good place to start.

Australian IGIS – Inspector-General of Intelligence and Security – the reports are worth having a look at.

USA Department of Defense Whistleblower Program is part of the Office of the Inspector General of the US Department of Defense. One of the sub-programmes it runs is specifically for the US Intelligence Community.


The trusted insider


Helping organisations protect themselves against trusted insiders

I attended the Security in Government (SIG) conference in Canberra earlier this month. I am somewhat biased, but I think that SIG is probably the best annual security related gathering in Australia.

If you compare it to a lot of international gatherings, SIG certainly holds its own. Although the US and German conferences in particular have glitz and size, the quality of the discussion and the more intimate nature of SIG are refreshing. SIG, as you may have guessed, is primarily targeted at government, but there are good lessons there for all organisations. OK, enough of the fanboy…

The 2014 SIG theme was the ‘trusted insider’. Whilst the discussions were often very good, I wondered whether there are additional approaches to reducing the trusted insider problem: approaches that focus more on the relationship between employees and their organisations.


Who are the trusted insiders?

A trusted insider is somebody who uses their privileged access to cause harm to their employer or its interests. I’ll be a bit controversial here and note that whether these people are traitors, spies or whistle-blowers depends somewhat on perspective. In any case, these people evoke strong, almost visceral emotions in many people.

Why are organisations so concerned about the trusted insider?

Despite fears about rogue hackers attacking organisations from the outside, the trusted insider is still considered the biggest threat to an organisation. In Australia and overseas, trusted insiders ‘going rogue’ have caused significant damage to national security, government agencies and private organisations. The harm can range from loss of secrets to loss of money or even life.

Secrets: The most glaring examples in the information security space have probably come out of the USA in recent times. People like Edward Snowden and Chelsea (Bradley) Manning spring to mind in the national security sphere. However, some Swiss banks have also been stung by Bradley Birkenfeld, whom some in those establishments might call a trusted insider and the US tax agency would call a whistle-blower!


Money: Fraud is probably the most significant threat to private organisations from trusted insiders, particularly those in the finance and insurance industries. Sometimes the size of a single event can be enormous, such as when $2 billion was lost in 2011 through ‘unauthorised transactions’ at a Swiss bank.


Life and property: Whilst we often focus on loss of information confidentiality, trusted insiders were also responsible for assassinating the Indian Prime Minister Indira Gandhi in the 1980s and for shooting fellow soldiers in the USA and Afghanistan in the last decade. There have also been a number of cases of ‘issue-motivated’ insiders harming organisations by damaging plant and equipment.


What motivates the trusted insider? C.R.I.M.E.S.

The motivations of trusted insiders are varied; however, they broadly fit under the standard drivers of criminal behaviour described by the mnemonic ‘CRIMES’.

Coercion – being forced, blackmailed or intimidated

Revenge – for a real or perceived wrong; it could be about disaffection or a grudge

Ideology – radicalisation or the advancement of an ideological or religious objective

Money – for cash, profit, dosh, moolah – whatever you call it

Exhilaration or Ego – for the excitement, or because they think they are in some way cleverer than their compatriots. Christopher Cook seemed driven by the excitement, while Robert Hanssen, the USA’s “worst intelligence disaster”, might be described as an egomaniac.

Sex and personal relationships – the combination of sex and coercion is a lethal one.

Of course, some trusted insiders are also mentally fragile and may not have a motivation that is clear to others.

End of part 1

In part 2, we will look at some approaches to the trusted insider problem.

Cyber security focus in the oil and gas sector to increase significantly

Energy companies will need to significantly increase their focus on cyber security in the next three to five years if they wish to keep ahead of the increasing risks to their business from direct cyber attack and malware.

Oil and Gas 32 by Michael Dance http://www.flickr.com/photos/gpmarsh/page4/

The oil and gas sector will need to invest around US$1.87 billion in upgrading its SCADA* and general corporate systems to defend against direct cyber attack and malware, according to technology intelligence company ABI Research.

There have been several attacks targeted at oil and gas firms in the last two years, including:

  • Night Dragon in 2011, which according to McAfee originated in China. The attacks were a mixture of social engineering and unsophisticated hacks aimed at gaining access to corporate forecasts and market intelligence from petrochemical firms. Most alarming was McAfee’s assertion that the attacks had gone undetected for up to four years.
  • Shamoon targeted Saudi Aramco in 2012, taking out up to 30,000 workstations. This attack has been linked to (and disputed by) Iranian interests.

The examples given are of attacks on energy companies’ corporate systems. The fear is that issue-motivated groups or nation states might now choose to attack poorly protected SCADA systems owned by oil and gas companies. The ability to do this has been demonstrated in the wild with Stuxnet, although not on energy installations.

 

What are the key security issues surrounding SCADA systems?

  • SCADA systems are generally built for throughput, with security bolted on as an afterthought rather than built in at the design stage.
  • An overemphasis on security through obscurity, with the belief that specialised protocols and proprietary hardware provide more than cursory protection against cyber-attack. It is better to assume the enemy knows, or will come to know, the system.
  • Over-reliance on physical security to provide protection.
  • An assumption that the SCADA system can be kept unattached to the Internet and will therefore be secure.
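That last assumption is worth testing rather than trusting. As a purely illustrative sketch (the port list is a small assumed sample, and real audits use far more thorough tooling), a few lines of Python can check whether well-known industrial-protocol ports are reachable from a given network position:

```python
import socket

# A few well-known industrial-protocol ports (illustrative, not exhaustive).
COMMON_ICS_PORTS = {
    502: "Modbus/TCP",
    102: "Siemens S7 (ISO-TSAP)",
    20000: "DNP3",
    44818: "EtherNet/IP",
}

def exposed_ics_ports(host, timeout=0.5):
    """Return the subset of COMMON_ICS_PORTS accepting TCP connections on host."""
    exposed = {}
    for port, proto in sorted(COMMON_ICS_PORTS.items()):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                exposed[port] = proto
    return exposed

if __name__ == "__main__":
    # Only probe hosts you are authorised to test.
    print(exposed_ics_ports("127.0.0.1"))
```

If any of these ports answer from the corporate LAN, the ‘unattached to the Internet’ assumption deserves a much closer look.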

A bit of background.

SCADA systems have been around since the mainframe era. However, these systems were based on proprietary hardware and software, and they weren’t connected to open systems. The main threat to these systems was the ‘trusted insider’, as when a disgruntled contractor, Vitek Boden, used his knowledge and some ‘acquired’ proprietary hardware to cause sewage to overflow at a plant in Maroochy Shire, Queensland.

In the 1990s, SCADA systems began to be built using the same technology as the Internet (TCP/IP), and early this century companies began to connect these systems to the Internet. In 2010, Stuxnet apparently caused centrifuges to spin out of control and self-destruct at the nuclear processing plant at Natanz in Iran. Attribution is difficult, but the finger is alternately pointed at Israel and the USA (or both).

What next?

Organisations, particularly in the oil and gas industry, need to change their approach to cybersecurity and take a holistic and strategic view. This starts at board level and requires cultural change. It does not necessarily mean buying the latest machine that goes ‘ping’; it does mean thinking about how to integrate security at the core of the business, just like finance and HR.

 ———-

More info from ABI research

SCADA – Supervisory Control and Data Acquisition

Claude Shannon’s maxim: “The enemy knows the system.”

Photo: Matthew Dance, used under creative commons – http://www.flickr.com/photos/gpmarsh/page4/ 

Security is your business 2

I was at the launch last Thursday of ‘Security is your business 2’. If you are interested in, or responsible for, enterprise risk management at a practical level, then this DVD will help your organisation.

The DVD includes interviews with Australian and overseas (mostly UK) security literati talking about a number of issues related to ERM. It builds upon the well-regarded ‘Security is your business’ but stands alone.

Apart from the fact that I know and respect a number of the talking heads on the DVD, I have no association with the enterprise.

More information from

http://www.securityisyourbusiness.com/Security_Is_Your_Business/Home.html

 

Complexity and organisational resilience

On the face of it, complex systems might have more resilience than simple ones because they can have more safeguards and redundancy built in.

However, this is not supported by real-world observation. Simply put, more complexity means more things can go wrong. In both nature and human society, complex controls work well at maintaining systems within tight tolerances and in expected scenarios. However, complex systems do not respond well to circumstances that fall outside their design parameters.

In the natural world, one place where complex systems fail is the immune system. Anaphylactic shock, where the body over-reacts because of an allergy to a food such as peanuts, is a good example. Peanuts are, of course, not pathogens; they are food, and the immune system should not react to them. However, people’s immune systems are made up of a number of complex systems layered on top of each other over many millions of years of evolution, and one of these systems is particularly liable to overreact to peanuts. In the worst case this causes death through anaphylaxis: the release of chemicals that are meant to protect the body but do exactly the opposite. It is an example of a safety system becoming a vulnerability when it is engaged outside normal parameters.

We are also beginning to see the resilience of complex systems such as the Great Barrier Reef severely tested by climate change. Researchers have found that the reef is made up of complex interactions between sea fauna and flora, built upon other complex interactions. This makes it nigh on impossible for researchers to find exact causes for particular effects, because the causes are so many and varied. Whilst researchers can confidently say that climate change is having a negative effect on the coral, and that bleaching will become more common as the climate warms, they cannot say with great certainty how large other compounding effects, such as excess nutrients from farm runoff or the removal of particular fish species, might be. This is not a criticism of the science, but an observation that predicting the future with absolute certainty, when multiple complex factors are at play, is extremely difficult.

These natural systems are what some might call ‘robust yet fragile’. Within their design parameters they are strong and have longevity. Such systems tend to be good at dealing with anticipated events such as cyclones in the case of the Great Barrier Reef. However, when presented with particular challenges outside the standard model, they can fail.

Social systems and machines are not immune from the vulnerabilities that complexity can introduce into systems and can also be strong in some ways and brittle in others.

The troubles with the global financial system are a good example. Banking has become very complex, and banking regulation has kept pace with this trend. That might seem logical, but the complex rules may themselves be causing people to calibrate the financial system to meet the rules, focussing on the administrivia of the fine print rather than the broad aims the rules were trying to achieve. As an example, one important set of banking regulations is the Basel framework: the Basel I banking regulations were 30 pages long, the Basel II regulations were 347 pages, and the Basel III regulations are 616 pages. One McKinsey estimate suggests that compliance for a mid-sized bank might cost as much as 200 jobs. If a bank needs to employ 200 people to cope with increased regulation, then the regulator will need more employees to keep up with the banks producing more regulatory reports, and so the merry-go-round begins!

A British banking regulator, Andrew Haldane, is now one of a number of people who question whether this has gone too far, and whether banks and banking regulation have become too complex to understand. In an interesting talk he gave in 2012 at Jackson Hole, Wyoming, USA, titled ‘The Dog and the Frisbee’, Haldane uses the analogy of a dog catching a frisbee to suggest that there are hard ways and easy ways to work out how to catch one. The hard way involves some complex physics; the easy way involves the simple rules that dogs actually use. Haldane points out that dogs are generally better at catching frisbees than physicists! I would also suggest that the chances of coping with outlier events, what Nassim Nicholas Taleb calls ‘Black Swans’, are greater using the simple model.
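Haldane’s point can be illustrated with a toy simulation (my own construction, not from the talk): a ‘physicist’ who pre-computes the landing spot using the drag-free textbook formula, and a ‘dog’ that simply runs toward the point beneath the ball at each instant, a crude stand-in for the simple rules dogs actually use. All the numbers here are arbitrary assumptions.

```python
import math

G = 9.81  # gravity, m/s^2

def simulate(drag, dt=0.001):
    """Return (physicist_error, dog_error) in metres for one throw.

    The ball is integrated with linear air drag. The 'physicist' stands at
    the drag-free textbook landing point; the 'dog' just runs toward the
    point beneath the ball at a fixed top speed.
    """
    x = y = 0.0
    vx = vy = 8.0                        # launch velocity components, m/s
    physicist = vx * (2.0 * vy / G)      # vacuum range formula: where he waits
    dog, dog_speed = 15.0, 6.0           # dog starts 15 m downfield
    first = True
    while y > 0.0 or first:
        first = False
        # Euler step for the ball under gravity and linear drag
        x += vx * dt
        y += vy * dt
        vx += -drag * vx * dt
        vy += (-G - drag * vy) * dt
        # the dog's simple rule: chase the point directly beneath the ball
        gap = x - dog
        if gap:
            dog += math.copysign(min(dog_speed * dt, abs(gap)), gap)
    return abs(physicist - x), abs(dog - x)
```

In a drag-free world, `simulate(0.0)` favours the physicist; add unmodelled air resistance with `simulate(0.3)` and the dog’s simple rule wins, because it never relied on the model being right. That graceful degradation under model error is exactly Haldane’s argument.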

This in some ways challenges the traditional thinking behind risk modelling. When I did my risk course, it was all very formulaic: list threats, list vulnerabilities and consequences, discuss tolerance for risk, develop controls, monitor, and so on. I naively thought that risk assessment would save the world. But it can’t. Simple risk management just can’t work in a complex system. Firstly, it is impossible to identify all risks. To (mis)quote Donald Rumsfeld, there are known risks, unknown risks, risks that we know we have but can’t quantify, and unknown risks that we can neither quantify nor know.

Added to this is the complex interaction between risks, and the observation that elements of complex systems under stress can completely change their function (for better or worse). An analogy might be one city that, under stress, finds its citizens looting homes, while another intensifies its neighbourhood watch program.

Thus risk assessment of complex systems is in itself risky. In addition, in a complex system the aim is homeostasis: the risk model responds to each raindrop-sized problem, correcting the system minutely so there are minimal shocks and the system can run as efficiently as possible. A resilience approach might instead develop ways to present the system/organisation/community with minor shocks, in the hope that when the black swan event arrives, the system has learnt to cope with at least some ‘off-white’ events!
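This ‘minor shocks’ idea resembles what software engineers call chaos engineering: deliberately injecting small failures so that the big one isn’t the first test. A minimal sketch, with the data shape entirely assumed and none of the safeguards a real programme needs:

```python
import random

def schedule_drills(components, weeks, seed=None):
    """Pick one non-critical component per week to take offline briefly.

    components: list of {"name": str, "critical": bool} dicts (assumed shape).
    Real chaos-engineering programmes (e.g. Netflix's Chaos Monkey) add
    opt-outs, blast-radius limits and monitoring that this sketch omits.
    """
    rng = random.Random(seed)  # seedable for repeatable drill plans
    eligible = [c for c in components if not c.get("critical")]
    if not eligible:
        raise ValueError("no non-critical components to drill")
    return [rng.choice(eligible)["name"] for _ in range(weeks)]
```

The point is not the code but the posture: a system regularly exercised by small, survivable failures is less likely to shatter on its first large one.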

Societies are also becoming more complex. There are more interconnected yet separately functioning parts of a community than there were in the past. This brings efficiency and speed to the way things are done when everything is working well; however, when there is a crisis, there are more points of failure. If community B is used to coping without electricity for several hours a day, it develops ways to adapt over months and years. If that community then finds it has no power for a week, it is better prepared to cope than community A, which has always been able to depend on reliable power. Community B is less efficient than community A, but it is also less brittle.

This does, however, illustrate a foible of humanity. Humans have evolved to be generally good at coping with crises (some better than others), but they are not good at dealing with creeping catastrophes such as climate change or systemic problems in the banking and finance sector.

Most people see these things as problems, but think that the problems are so far away that they can be left whilst other more pressing needs are dealt with.

Sometimes you just need a good crisis to get on and fix long-term complex problems. Just hope the crisis isn’t too big.

Video: Donald Rumsfeld – Known Knowns