
What is it?

Security measures that staff devise to manage security to the best of their knowledge and ability, bypassing official security policies and mechanisms that get in the way of their tasks and reduce productivity.

Why is it important?

Shadow security practices reflect the best compromise staff can find between getting their job done and managing the risks to the assets they use. These practices present an opportunity for the organization to learn how to maintain both security and productivity.

Why does a business professional need to know this?

Shadow security emerges in organizations where: (1) employees have reasons to comply with security and are motivated to do so, but (2) security mechanisms are not fit to support their work goals. As a result: (3) a significant amount of security mediation takes place at the team level, and (4) employees become isolated from the security division.

Although not compliant with official policy, and sometimes not as secure as employees believe, shadow security practices reflect a working compromise between security and getting the job done. Their occurrence signals the presence of unusable security mechanisms, which can lead to errors and workarounds that create vulnerabilities, to people ignoring security advice, and to systemic non-compliance, all of which act as noise that makes genuine cybersecurity attacks harder to detect.

Security management should not ignore shadow security. Organizations must be able to recognize when, where, and how shadow security practices are created. Once identified, these practices should not be treated as a problem but as an opportunity to identify shortfalls in current security implementations, which can then be addressed with more effective security solutions.

This can be done by taking the following steps:

  • Simplifying compliance with security
  • Measuring the effectiveness of security mechanisms after deployment
  • Engaging users when designing security solutions
  • Giving team managers the responsibility of acting as mediators for security and as a conduit for user feedback on how well security solutions support productive tasks

References

  • (Kirlappos 2014) Learning from “Shadow Security”: Why understanding noncompliant behaviors provides the basis for effective security: Kirlappos, Iacovos, Simon Parkin, and M. Angela Sasse (2014). Workshop on Usable Security, San Diego, CA. PDF. Proceedings paper. doi:10.14722/usec.2014.23. Analysis of in-depth interviews with employees of multinational organizations about security noncompliance. Reveals instances in which employees created alternative shadow security mechanisms that allowed them to complete their work and feel like they were working securely, despite not following official policies and procedures. Suggests that lessons learned from shadow security workarounds can be used to create more workable security solutions in the future.
  • (Kirlappos 2015) “Shadow Security” as a tool for the learning organization.: Kirlappos, Iacovos, Simon Parkin, and M. Angela Sasse (2015). ACM SIGCAS Computers and Society, 45 (1), 29-37. PDF. doi:10.1145/2738210.2738216.
  • (Jon L 2017) People: the unsung heroes of cyber security: Jon L. (2017), National Cyber Security Centre. Video. Discusses the need to make cybersecurity people-centered in order to defeat cybercriminals. Argues for the importance of exceptional user experiences to help make it easy for employees to comply with cybersecurity guidelines, rules, and regulations.

About Iacovos Kirlappos


Iacovos Kirlappos is an information security and risk professional with strong academic and industry credentials. He obtained his bachelor of arts in computer science from the University of Cambridge, UK, and his master of science in human-computer interaction, master of research in security science, and PhD in information security from University College London.

Term: Shadow Security

Email: iacovos.kirlappos@gmail.com

Twitter: @ikirlappos

LinkedIn: linkedin.com/in/iacovos-kirlappos-phd-89477b18

What is it?

The psychological state one reaches when security decisions become too numerous and/or too complex, thus inhibiting good security practices.

Why is it important?

Security fatigue can cause weariness, hopelessness, frustration, and devaluation, all of which can result in poor security practices.

Why does a business professional need to know this?

Security fatigue — feeling tired, turned off, or overwhelmed in response to online security — makes users more likely to ignore security advice and engage in online behaviors that put them at risk. Users favor following practices that make things easier and less complicated, even if they recognize that these practices may not be as secure.

Security fatigue presents a significant challenge to efforts to promote online security and online privacy. The ability to make decisions is a finite resource. Security fatigue is a cost that users experience when bombarded with security messages, advice, and demands for compliance.

Too often, individuals are inundated with security choices and asked to make more security decisions than they are able to process. Adopting security advice is an ongoing cost that users continue to experience. When faced with this fatigue and ongoing security cost, users fall back on heuristics and cognitive biases such as the following:

  • Avoiding unnecessary decisions
  • Choosing the easiest available option
  • Making decisions driven by immediate motivations
  • Choosing to use a simplified algorithm
  • Behaving impulsively
  • Resignation

Understanding how the public thinks about and approaches cybersecurity provides us with a better understanding of how to help users be more secure in their online interactions. The following steps can help users adopt more secure online practices:

  • Limit the decisions users have to make for security
  • Make it easy for users to do the right thing related to security
  • Provide consistency (whenever possible) in the decisions users need to make
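The first and third steps above can be sketched in code. In this hypothetical illustration (the class and field names are assumptions, not from the source), an application ships one consistent, secure default policy instead of asking each user to make those decisions:

```python
# Illustrative sketch: reduce security fatigue by making secure
# settings the default rather than user-facing decisions.
from dataclasses import dataclass


@dataclass(frozen=True)
class AccountSecurity:
    """One consistent policy applied everywhere, so users never
    face these choices themselves."""
    min_password_length: int = 12      # enforced, not user-chosen
    require_mfa: bool = True           # on by default
    session_timeout_minutes: int = 15  # consistent across applications


def default_policy() -> AccountSecurity:
    """Return the single organization-wide default policy."""
    return AccountSecurity()


policy = default_policy()
```

Because the dataclass is frozen and its fields have secure defaults, every part of the system sees the same policy, which is exactly the consistency the third step asks for.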


About Mary Frances Theofanos


Mary Theofanos is a computer scientist with the National Institute of Standards and Technology, Materials Measurement Laboratory, where she performs research on usability and human factors of systems. Mary is the principal architect of the Usability and Security Program, evaluating the human factors and usability of cybersecurity and biometric systems. She represents NIST on the ISO JTC1 SC7 TAG and is co-convener of Working Group 28 on the usability of software systems.

Term: Security Fatigue

Email: mary.theofanos@nist.gov

Website: nist.gov/topics/cybersecurity


What is it?

A human-centric manipulation technique that uses deceptive tactics to trigger emotionally driven actions that are in the interests of a cybercriminal or attacker.

Why is it important?

Exploiting people can be an effective means for criminals to bypass security processes and technology controls. Social engineering can be used to create a point of entry into a computing device, application, or network via an unsuspecting person.

Why does a business professional need to know this?

Social engineering attacks can cost millions of dollars. In 2017, MacEwan University was the victim of a phishing attack (Huffington Post 2017) that fooled employees into changing banking information for a major vendor. As a result, nearly $12 million was transferred to the attackers.

Social engineering can take many forms. It includes phone scams, face-to-face manipulation and deception, email-based phishing attacks, targeted spear phishing of specific individuals, and whaling attacks, which are aimed at senior executives. Social engineering poses a tangible business risk for security professionals, executives, and boards of directors alike.

Social engineering through phishing is a growing threat to individuals and organizations of all types. According to the 2016 Verizon Data Breach Investigations Report (Verizon 2016), 30 percent of targeted individuals opened a phishing email message, and 12 percent went on to open attachments or click links that may contain malicious code.

Over the past two years, a new type of social engineering attack targeting senior executives and financial departments has emerged. Known as whaling (because big fish are the targets), these attacks seek to deceive employees into authorizing six-, seven-, and even eight-figure fraudulent wire transfers.

Countering social engineering requires organizations to think beyond technology-based defenses such as email filtering, firewalls, or endpoint detection. An effective technique to defend against social engineering is to identify and manage employees at risk and create an educated workforce that is aware of all forms of social engineering.
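As an illustration of why filtering alone is not enough, consider a naive keyword-based scorer; the signal phrases and categories below are assumptions for this sketch, not a real filter. Attackers who avoid the listed phrases slip straight past it, which is why an aware workforce remains the essential second layer:

```python
# Illustrative sketch (not a production email filter): score a message
# for common phishing indicators that awareness training teaches
# people to recognize. Phrase lists here are assumptions.
PHISHING_SIGNALS = {
    "urgency": ["urgent", "immediately", "within 24 hours"],
    "credentials": ["verify your password", "confirm your account"],
    "payment": ["wire transfer", "update banking information"],
}


def phishing_score(subject: str, body: str) -> int:
    """Count how many known signal phrases appear in the message."""
    text = f"{subject} {body}".lower()
    return sum(
        1
        for phrases in PHISHING_SIGNALS.values()
        for phrase in phrases
        if phrase in text
    )


score = phishing_score(
    "Urgent: vendor payment",
    "Please update banking information and wire transfer immediately.",
)
```

The sample message trips four signals, but a carefully reworded whaling email could score zero, so a trained employee who questions an unusual payment request catches what the code cannot.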

Engaging leadership and employees in managing the risks of succumbing to social engineering attacks can be an effective proactive strategy. Further, this creates a critical cultural shift from cybersecurity as an IT-centric service to cybersecurity as a shared responsibility.

References

  • (Beauceron) Social Engineering: Beauceron Security. Web page with resources and definitions related to social engineering.
  • (Huffington Post 2017) MacEwan University defrauded of $11.8M in online phishing scam: Canadian Broadcasting Corporation (2017). Describes how a Canadian university was defrauded of $11.8 million after staffers fell prey to an online phishing scam.
  • (Verizon 2016) 2016 Data Breach Investigations Report: Executive Summary: Verizon (2016). PDF. Detailed analysis of more than 100,000 cybersecurity incidents in 2015, including 2,260 confirmed data breaches in 82 countries.
  • (Alperovitch 2016) Bears in the Midst: Intrusion into the Democratic National Committee: Alperovitch, Dmitri (2016). Crowdstrike. Analysis and findings identifying two separate Russian-intelligence-affiliated adversaries, Cozy Bear and Fancy Bear, present in the computer network of the US Democratic National Committee (DNC) in May 2016. Discusses details of the attacks and provides links to related articles on the subject.

About David Shipley


David Shipley is a recognized Canadian leader in cybersecurity, frequently appearing in local, regional, and national media and speaking at public and private events across North America. He is a Certified Information Security Manager (CISM) and holds a bachelor of arts in information and communications studies as well as a master of business administration from the University of New Brunswick (UNB).  

David helped lead the multi-year effort to transform UNB’s approach to cybersecurity.  He led UNB's threat intelligence, cybersecurity awareness, and incident response practices. His experience in managing awareness programs, risk management, and incident response helped shape the vision for the Beauceron platform. 

Term: Social Engineering

Email: david@beauceronsecurity.com

Website: beauceronsecurity.com

LinkedIn: linkedin.com/in/dbshipley

The Language of Cybersecurity Term of the Week postings will begin on July 31, 2018, and continue for one year. Each week, we will post a new term on this site.

You can follow our RSS feed or watch the XML Press twitter feed to get an announcement each time we post a term.

During the month of August, the book will be on sale at the XML Press eBook store for $14.95 (retail $19.95).