On October 19, 2021, we published the book, "Modern Cybersecurity: Tales from the Near-Distant Future". This is an excerpt from one of the chapters.
Fear Driven Development
"We'll make teams accountable to security. If they cause a breach, we identify the guilty developer and terminate them. Now we are DevSecOps!"
In organizations with legacy testing and security practices, it's common to find legacy postmortem practices. "Document what happened and who caused it!" In 2017, Equifax experienced a data breach that exposed sensitive data for 145 million accounts. When former Equifax CEO Richard Smith was questioned by Congress, he said it was the fault of a single person. Smith testified, "Both the human deployment of the patch and the scanning deployment did not work. The protocol was followed. The human error was that the individual who's responsible for communicating in the organization to apply the patch, did not."
This, of course, is nonsense. The only time a failure is a single person's responsibility is when only a single person is involved in delivering value. The failures were guaranteed by how Equifax organized its work, but the statement gives us insight into the company's culture and why a breach was probably inevitable.
What insight can we get from this? Fear Driven Development is the scapegoat culture of "if something bad happens, we will identify the person responsible and hold them accountable." This generates a culture of hiding problems, either because we are afraid of being blamed for something we didn't do, or because we did do it and the consequences of admitting it and trying to fix it are too high. Lack of trust in the value stream does not yield better value. Lack of trust means that if someone identifies a problem, the safest thing to do is ignore it and hope someone else gets blamed. This doesn't make us safer.
We need to create an environment of trust so that when failure happens, and it will, we can use postmortems to identify the failure in the system instead of finding a scapegoat.
Security Hobbyists
We want to push testing and security further left in the value stream and we want them embedded into everything we do. We have a problem, though: this is a new way of working. We shouldn't simply direct it to be so and expect it to happen, yet that happens all too frequently. When bad outcomes occur, the response is "developers don't care about testing" and "developers have no interest in security". Is that really true?
Who decides what developers care about? The leadership in the organization. Everyone cares about what they are incentivized to care about. Incentives can be intrinsic, but if the organization has operated with testing and security as "not development", even those who care can get discouraged and fall into line with what leadership actually rewards. Now we want them to care and we've changed the incentives, but where is the support they need? We are depending on them to be good at it, but are we investing in making that happen or depending on them to just pick it up on the job? On-the-job training is fine, as long as the majority of the organization isn't trying to pick up the same skills at the same time.
"I spent years as a developer before I ever saw a testing framework. Even then, management did not value testing. "We are falling behind on our deliverables. We can worry about testing later!" was an all too common response to the perceived slowdown from writing "extra code". I was never taught effective testing techniques. Instead, I found myself being "quality curious".
"Learning to test became my hobby because I wanted to understand how to implement a continuous delivery workflow. I knew that automated validation of delivery fitness was core to CD, but I had no real experience beyond a few simple tutorials on unit testing and some coding exercises using test-driven development. I was left to sort out the good and bad information about testing on my own. I even fell into the trap of thinking that 100% test coverage meant that an application was well tested.
"It took years of trying and failing to become competent. I look around and all of the advances I've seen from other developers in testing have been from the same hobbyist process. I wonder how much further we could have come and how much better the company outcomes could have been if our company had invested in growing our knowledge with dedicated training from reliable sources instead of expecting us to pick it up at home after hours?" -- Bryan Finster
Stories like Bryan's are the norm, not the exception. Even Google went through a phase of ignoring this problem until they finally assembled the "Test Mercenaries" team and created the "Test Certified" program to systematically upskill their development teams instead of hoping hobbyists learned the right things and helped spread good practices.
This same problem occurs with security. Developers are generally expected to be security hobbyists. Guided, self-paced learning can be effective, but a fully self-guided learning path seldom is. Which information is good and which is bad? Which is outdated or doesn't apply to the current problem? Functional and performance testing are solved problems. The patterns for effective testing are well understood. While it requires good resources to learn the best practices, the practices themselves have been proven over time and only the tools see significant changes. Security is a whole other ballgame.
"Modern Cybersecurity: Tales from the Near-Distant Future" is available as a free download, or can be found on Amazon, where you can leave a review.