
People Do People Things: The Future of Security is Human

by CIO AXIS

By Dr. Margaret Cunningham, Principal Research Scientist at Forcepoint

• Workarounds, shortcuts, and creative work strategies are simultaneously a celebration of human creativity and a risk for organizations that are desperately trying to maintain visibility of their assets.

• Learn more about what motivates behavior, and commit to designing and implementing security practices and tools that work with humans rather than against them.

• 2020 demanded that humans find innovative ways to keep organizations running. 2021 will continue to reflect human resilience and ingenuity.

As 2020 comes to an end, the importance of understanding the relationship between humans and technology is at an all-time high. Widespread shifts in the fabric of our society, prompted by the ongoing pandemic, exposed weaknesses in security tools and protocols for remote workers, highlighted issues of network reliability and accessibility, and demanded that humans find innovative ways to keep organizations running. While the fallout from the pandemic is unignorable, the ability of people to respond to seemingly endless challenges has been nothing short of remarkable.

The year 2021 will continue to reflect human resilience and ingenuity. It will be the year of workarounds and self-serving insider threats, where people find ways to accomplish their goals despite dealing with personal and professional adversity. Workarounds, shortcuts, and creative work strategies are simultaneously a celebration of human creativity and a risk for organizations that are desperately trying to maintain visibility of their assets. Ultimately, people sharing data and accessing corporate networks in new and potentially unsanctioned ways carries quite a bit of risk – especially for organizations that are new to managing remote workers.

The result of these changes is that successful cybersecurity strategies will stop trying to use technology as a unilateral force to control human behavior. Rather, organizations will come to terms with the reality that adding more and more technology or security does not lead to behavioral conformity, especially not conformity that aligns with security principles and adequate cyber hygiene. In fact, additional layers of security may push more people outside the guardrails due to increasingly aggravating security friction that blocks them from completing tasks or easily accessing critical organizational assets.

Understanding Precedes Predicting
In light of this, understanding how people adapt to, respond to, and inform their environments is critical for organizations heading into the new year. For far too long, the tech world has created products with the assumption that people will use them in an expected or uniform way, or that people will conform to the rules and constraints laid out by well-meaning engineering teams. If we’ve learned anything from 2020, it is that people are not always predictable, and making assumptions about human behavior is a dangerous game to play. What has surfaced is that expectations, guidelines, best practices, and even commands will yield every type of behavioral response – from rigid compliance to retaliatory noncompliance.

What can we do? We can learn more about what motivates behavior, and how people ultimately choose to behave. We can also commit to designing and implementing security practices and tools that work with humans instead of against them. To do this, however, we have to focus on measuring and understanding behavior instead of focusing exclusively on detecting compromises and vulnerabilities.

For instance, we know that people’s immediate needs often outweigh potential negative consequences – especially when the consequences do not have a direct, individual, and immediate impact. This means that when we need to accomplish our goals, we often take the easiest route. Unfortunately, the easiest route is often riskier than the “ideal” route. When faced with frustrating, security-heavy file and data sharing tools, we may turn to sharing via personal cloud applications. Making rules to stop people from engaging in this type of behavior is not working; instead, we have to better understand these behaviors to find ways to mitigate their risk to organizations and organizational assets.
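As a purely illustrative sketch of what “mitigate rather than forbid” can look like, the Python snippet below triages an outbound file transfer instead of hard-blocking it: sanctioned destinations pass, confidential data is escalated for human review, and everything else is logged so the security team can see where approved tooling is creating friction. The domain list, event fields, and categories are assumptions made for the example, not a description of any particular product.

from dataclasses import dataclass

# Hypothetical list of file-sharing services the organization sanctions.
SANCTIONED_DOMAINS = {"sharepoint.example.com", "drive.corp.example.com"}

@dataclass
class FileTransferEvent:
    user: str
    destination_domain: str
    file_sensitivity: str  # e.g. "public", "internal", "confidential"

def triage(event: FileTransferEvent) -> str:
    """Decide how to respond to an upload instead of blocking it outright."""
    if event.destination_domain in SANCTIONED_DOMAINS:
        return "allow"
    if event.file_sensitivity == "confidential":
        return "escalate"      # route to human review, not an automatic block
    return "log_and_coach"     # record it and nudge the user toward sanctioned tools

# Example: an internal document shared through a personal cloud service.
print(triage(FileTransferEvent("analyst_b", "personal-cloud.example.net", "internal")))
# -> "log_and_coach"

The design choice here is that visibility, not prohibition, is the default response – blocking everything unsanctioned simply pushes people toward the next workaround.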

Building Behavioral Understanding Into Systems
Within the cybersecurity industry, observing and understanding behaviors must come with context. What may appear at first glance to be an obviously malicious act likely to lead to data loss – for example, an engineer requesting access to multiple sensitive data repositories over the course of two days – could simply be a person getting their job done. Our engineer may be doing this because she’s been added to several new projects and needs to be able to collaborate with her new team.
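To make the notion of “context” concrete, here is a minimal, hypothetical sketch in Python: the same burst of repository access requests produces a high risk score when no context is available, and a much lower one once recent project assignments and manager sponsorship are factored in. The field names, weights, and thresholds are illustrative assumptions only, not Forcepoint’s scoring model.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class AccessRequest:
    """A single request for access to a sensitive data repository."""
    user: str
    repository: str
    timestamp: datetime

@dataclass
class UserContext:
    """Contextual signals about the requester (all fields hypothetical)."""
    recent_project_assignments: int = 0   # new projects joined in the last 30 days
    typical_repos_per_week: float = 1.0   # this user's historical baseline
    manager_approved: bool = False        # was the access sponsored by a manager?

def risk_score(requests: list[AccessRequest], context: UserContext) -> float:
    """Score a burst of access requests, discounting for benign context."""
    burst = len(requests)
    # How far above the user's own baseline is this burst?
    score = max(0.0, burst - context.typical_repos_per_week)
    # Discount when there is a plausible business reason for the access.
    if context.recent_project_assignments > 0:
        score *= 0.5
    if context.manager_approved:
        score *= 0.25
    return score

now = datetime.now()
requests = [AccessRequest("engineer_a", f"repo_{i}", now) for i in range(6)]

# The same six requests, scored with and without explanatory context.
print(risk_score(requests, UserContext()))                       # 5.0 – looks anomalous
print(risk_score(requests, UserContext(recent_project_assignments=2,
                                       manager_approved=True)))  # 0.625 – likely benign

The point is not the specific numbers but the shape of the decision: the raw signal alone cannot distinguish data hoarding from someone ramping up on new projects.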

We want people to be able to do their jobs within the constraints of our corporate network and policies, so blocking them would only encourage the human tendency to find an easier (and less secure!) route for getting their jobs done. With an interdisciplinary research team that brings together experts from security, counterintelligence, IT, and the behavioral sciences, behavioral understanding can be built into cybersecurity systems. This is the first important step toward finally moving cybersecurity left of breach – designing security for the human element.

