Book Preview: Knowledge without Action is a Wasted Opportunity

by Jennifer Czaplewski

On October 19, 2021, we published the book "Modern Cybersecurity: Tales from the Near-Distant Future". This is an excerpt from one of its chapters.

Watching my niece learn how to dive is hilarious. First, she will stand at the top of the diving board, deep in thought for a few minutes. Then she will get down, shake her arms, do a couple of practice jumps, and get back on the diving board to stand still again. I totally get it; taking action can be hard when you don’t know how it’s going to end. In diving, the worst-case scenario is somewhere between embarrassment and a painful belly flop. This hesitancy to take action also applies in the field of cybersecurity; it’s impossible to solve hard security problems when you are frozen in place.

There are several important skills needed to lead successful cybersecurity teams, but two in particular matter most. The first is the ability to take action with limited information: leaders need the willingness to experiment in order to solve the hardest security problems. The second is the recognition that security teams should stop sending mounds of data to stakeholders and expecting them to know what to do. Instead, cybersecurity teams should give stakeholders the unambiguous actions we expect them to take.

A Bias for Action

Let’s dive a little deeper into the first position: the most successful cybersecurity leaders and teams are willing to experiment to address the hardest security problems. Some companies call this a “bias for action.” It is a core value of many organizations because in business, speed matters. We can all think of colleagues who have a lot of knowledge but struggle to translate that knowledge into meaningful change. The caveat, of course, is that acting without enough information is foolish. Doing a reasonable amount of research, investigation, and critical thinking is the cornerstone of any good decision, but many business decisions and actions are reversible, which lowers the cost of acting before you have perfect information.

Taking calculated risks to test a hypothesis sets the great leaders apart from those invested in the status quo. I recently watched a documentary on the rise and fall of Blockbuster video stores [1]. Most analyses agree that several factors contributed to Blockbuster’s decline, but one pivotal error was “analysis paralysis”: an unwillingness to invest in a small proof of concept for online video rental. Blockbuster waited too long and ultimately was unable to pivot to compete with a new business model.

Bias for Action is Not Enough

Bias for action within the cybersecurity team is not enough; we must also be extremely explicit about what actions we expect our customers to take. For most application security functions, the customers are the teams using its products and services (e.g., developers, product owners, team leaders, auditors). This clarity is vitally important; the security team cannot be everywhere, so we rely on other teams to build products that are secure.

Clarity around the actions to take and how to measure success saves time and protects the organization. Important as it is, being direct and clear with customers when they ask, “What action should I take with this information?” can be tough to do. This is often evident after a penetration test: many developers read the findings but struggle to know what to do next.

The Role of Psychological Safety

There can’t be a conversation about taking smart risks without acknowledging the role psychological safety plays in a team’s risk tolerance. At its core, psychological safety is the belief that you won't be punished for trying new things and making mistakes. As cybersecurity leaders, if we want our teams to innovate and think creatively to solve decades-old problems, we need to establish safety and give teams room to fail.

Entire volumes have been written on building teams with psychological safety. A few of the strategies I utilize to build teams with psychological safety are:

  • Never make an employee regret bringing you a problem. This means being an active listener and not blaming or shaming someone if they actually created the problem. I’ve heard the adage that some leaders believe you should never bring forward a problem without a proposed solution [2]; I disagree with that statement. If you only want people to bring forward problems coupled with proposed solutions, they’ll only bring forward problems that are within their sphere of influence to solve. Teams with great psychological safety feel comfortable bringing really BIG problems forward that will require a lot of collaboration to tackle.
  • Focus on team dynamics. Do your teams have running jokes or operating rhythms that foster collaboration beyond the task at hand? It may seem counterintuitive when talking about getting things done, but these intra-team behaviors are critical for a sense of team and psychological safety. Here are a few simple examples of fun team rituals that build cohesion:
      ◦ Whiteboards (or Slack polls) with a question of the day, ranging from “what kind of pie do you like most” to “what are you scared of.”
      ◦ A “soup off,” which works even with teams working remotely: everyone makes or brings their favorite soup to a virtual lunch meeting and shares recipes or restaurant links.
      ◦ For all-day meetings, starting with something fun to set the tone (playing the game Catch Phrase is a long-standing tradition).
  • Emphasize the success of the team over the success of an individual. On successful teams, the collective achievement of the team is paramount and teams with strong psychological safety also find personal growth as the team achieves its goals.
  • Take swift action when individuals damage the team's safety. Collaborative work is not for everyone; one person can tear down the psychological safety of an entire team. Be mindful of how each member of the team contributes to or detracts from the team's safety, and step in early and often to set clear expectations.

Real Life Example #1: Product Intelligence

Many security professionals agree that one of the most common security challenges is effectively empowering application teams to build secure systems. Applications generally exist to serve a business function, and unless your business is security, building an application securely can detract from the features and functions that are part of the core business. For a security-motivated developer, navigating the priority of security findings, regulatory expectations, system vulnerabilities, good development practices, and tech debt can be a struggle; that doesn’t even include the developers who are less inclined to care about security, or those who care but don’t know how to build an application securely.

As a member of the cybersecurity community, I regularly engage with peers, industry leaders, and vendors to learn what is happening across the industry. From those discussions about overall security issues, a few themes stand out at many organizations.

  1. Teams outside the security organization don’t have the same understanding of what “secure” means. Some organizations have policies or statements that outline what needs to be done (e.g., “ensure separation of duties”), but how to achieve these directives leaves a lot of room for interpretation.
  2. Leaders across our technology teams often have a hard time comparing the security health of different types of systems, so dashboards about security are hard to understand and don’t answer the question “is this secure enough?”
  3. There is often a lack of consistent priority from security teams, especially in large organizations. It’s not uncommon for a development team to be contacted by several different areas of security, each asking it to take action; because those security teams aren’t aware of one another’s requests, the development team has to make its own decision about priority.

To address many of these systemic issues, Target sought to provide a simple way for a team to know how secure its application is, along with clear actions to take to improve its security health. We built a system called Product Intelligence with the goal of empowering and incentivizing teams to build secure applications. This system is unique in a lot of ways [4], and it has made a huge difference in the way we communicate with our development community.

About Product Intelligence (PI)

The Product Intelligence system (PI for short) is a collection of different types of security data. There are over 30 sources of data; some primary elements are the taxonomy (all the applications and how they are grouped), security findings, security vulnerabilities, and whether or not applications are using key security services. The data is normalized so different kinds of data still make sense to development teams. Each logical grouping of applications (called a “Product”) is given a Product Intelligence score, which lets any engineer, manager, or C-level leader know the relative security of each application. For familiarity, the score is based on the same scale as a personal credit score, ranging from 300 to 850. It is generated by an in-house algorithm that evaluates observable data about each system, with some characteristics carrying more weight than others.
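To make the scoring idea concrete, here is a minimal sketch of how a weighted, credit-style score could be computed. Target’s actual algorithm is proprietary and built in-house, so the signal names, weights, and normalization below are illustrative assumptions rather than the real Product Intelligence logic.

```python
# Illustrative sketch only: the real Product Intelligence algorithm is
# proprietary. Signal names, weights, and normalization are assumptions.

SCORE_MIN, SCORE_MAX = 300, 850  # same scale as a personal credit score

# Hypothetical weights: characteristics that matter more carry more weight.
WEIGHTS = {
    "open_critical_findings": 0.40,     # share of critical findings closed
    "unpatched_vulnerabilities": 0.30,  # share of known vulns remediated
    "security_services_adopted": 0.20,  # adoption of key security services
    "recent_pen_test": 0.10,            # has a recent penetration test
}

def product_score(signals: dict[str, float]) -> int:
    """Combine normalized signals (each 0.0-1.0, where 1.0 is best)
    into a single credit-style score for a Product."""
    weighted = sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)
    return round(SCORE_MIN + weighted * (SCORE_MAX - SCORE_MIN))

# Example: a Product with good service adoption but no recent pen test.
example = {
    "open_critical_findings": 0.7,
    "unpatched_vulnerabilities": 0.6,
    "security_services_adopted": 1.0,
    "recent_pen_test": 0.0,
}
print(product_score(example))  # prints 663
```

In this sketch, missing data defaults to the worst value, which keeps the score conservative; the real system may handle data gaps differently.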

Each year, the leaders in Target’s technology team come together and collectively decide the minimum score goal for all applications, so each team knows what to strive for. See Figure 1 below for a snapshot of a fake product. Teams can see their score in the upper right corner along with their score trend for the past 12 months.

Figure 1: a screenshot of the Product Intelligence user interface

Knowing your score is great, but what has been most impactful is that the system outlines every action a team should take to improve it. This could include requesting a new penetration test, closing findings, completing a security training, or other security-related actions. You can see the actions to take for this fake product in Figure 1 as well.
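To illustrate how the system can turn data into next steps, here is a small hypothetical sketch that maps weak or missing signals to the kinds of actions mentioned above. The action catalog and thresholds are assumptions, not the actual rules used by Product Intelligence.

```python
# Illustrative sketch only: the action catalog and thresholds are assumptions.

def recommended_actions(signals: dict[str, float]) -> list[str]:
    """Translate weak or missing signals (0.0-1.0, where 1.0 is best)
    into the concrete next steps a team could take to improve its score."""
    actions = []
    if signals.get("recent_pen_test", 0.0) < 1.0:
        actions.append("Request a new penetration test")
    if signals.get("open_critical_findings", 0.0) < 1.0:
        actions.append("Close remaining critical findings")
    if signals.get("security_services_adopted", 0.0) < 1.0:
        actions.append("Onboard the remaining key security services")
    if signals.get("security_training_complete", 0.0) < 1.0:
        actions.append("Complete the assigned security training")
    return actions

print(recommended_actions({"recent_pen_test": 0.0, "open_critical_findings": 0.7}))
# ['Request a new penetration test', 'Close remaining critical findings',
#  'Onboard the remaining key security services',
#  'Complete the assigned security training']
```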

Establishing Trust in PI

When you build a system that measures the security health of applications and set organization-wide goals, data accuracy and trust are incredibly important. We work to achieve and maintain that trust in several ways.

  • Avoid surprising your customers with new data.  When we onboard a new source of data, that data is displayed to teams in a beta version before it is included in the score.  This way teams have the right visibility and can work through any data accuracy issues before their score is officially impacted.  When we rolled out this system, we showed teams a lot of data they had never seen before, so we spent almost the entire first year partnering with security and application teams to improve data accuracy.  
  • Avoid surprising your customers when changing the algorithm. We refresh the algorithm only once per year; the data feeding into the system can change multiple times a day, but we change the algorithm behind the Product Intelligence score only annually. When we change the score, we also issue a beta version first; teams can see how they are performing and validate the accuracy of the data for a minimum of three months before the beta score becomes the actual Product Intelligence score (a sketch of this beta-versus-official pattern follows this list).
  • Build customer trust with thoughtful consideration about priority.  In the security world it is easy to overwhelm teams with all the actions they could take.  In the Product Intelligence system, we have limited the areas of action to what really matters.  Today our customers trust that when we specify something needs action, it truly does.
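As a rough illustration of the beta-versus-official pattern described above, the sketch below shows one way a preview score could be surfaced next to the score teams are currently held to. The field names and structure are assumptions, not the actual PI data model.

```python
# Illustrative sketch only: field names and structure are assumptions.
from dataclasses import dataclass

@dataclass
class ProductScores:
    official: int            # the score teams are held to today
    beta: int | None = None  # preview score including new data or algorithm

    def summary(self) -> str:
        if self.beta is None:
            return f"Official score: {self.official}"
        delta = self.beta - self.official
        return (f"Official score: {self.official} | "
                f"Beta preview: {self.beta} ({delta:+d}); "
                f"validate data accuracy before the beta becomes official")

print(ProductScores(official=720, beta=685).summary())
# Official score: 720 | Beta preview: 685 (-35); validate data accuracy ...
```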

Figure 2 below shows more representative data displayed to customers.

Figure 2: additional (fake) information that can be found in the Product Intelligence system

Since we launched this system, it’s hard to overstate the impact it has had on the relationship between security and product teams. Each development team has its security data at its fingertips and knows whether it meets the minimum threshold its leaders have set. Teams also know exactly what steps to take to improve the security of their applications. The system has allowed the security team to focus more on risk and to form a tighter partnership built on collaboration and accountability.

This has been an excerpt from Jennifer Czaplewski's chapter, "Knowledge without Action is a Wasted Opportunity," in the newly released book "Modern Cybersecurity: Tales from the Near-Distant Future". The book is published as a free digital download for the community and is available in hard copy on Amazon.

Jennifer Czaplewski

Jennifer Czaplewski is a cybersecurity executive known for building and leading critical security functions. She is an industry leader in DevSecOps and shares insights at thought leadership forums like RSA, the Cyber Security Summit, DevOps.com, and All Day DevOps. She is a patent holder, and her work has been featured in SC Magazine and Dark Reading. Jennifer is currently a Senior Director at Target.
