In Ireland, police are called "peelers." In Britain, they are called "bobbies." A crime novelist I spoke to a little while back explained to me that both of these terms are rooted in the name of the same guy, Sir Robert Peel, the man who modernized British policing in the early 1800s.
Security researcher Brendan O’Connor looked to Peel’s principles for policing as he developed a mental model for how security teams can do better work within the businesses that they have been hired to protect. He presented his principles at O’Reilly Security on Wednesday in Manhattan.
It’s no surprise to forward-thinking security pros that teams need to interact with product teams and executives better, but “simply shouting: ‘Be less terrible to everyone’—while correct—hasn’t worked so well so far,” as O’Connor put it. So here’s a paraphrased list of his principles:
- The basic work of security teams is prevention.
Security professionals are experts in known risks, and it's their job to minimize those risks wherever reasonable. On the other hand, it's not reasonable to try to prevent every risk. "As you are probably aware, the United States has lost its mind on risk," O'Connor said. Complex failures will still happen in well-engineered systems. That doesn't mean that security teams failed; it means the world is a crazy place.
- Public approval of the security team is key to its ability to do its work.
“Security is not the mission of the organization,” O’Connor said, adding that this is even true of organizations making security-oriented products. The safest thing is for no one to ever do anything ever, but that’s not a reasonable way of living in the world. Security teams need to internalize the fact that their work supports a larger mission, and it does not trump that mission.
- Security teams need to secure the willing compliance of those they are protecting.
“I’ve had clients come to me with exquisitely self-inflicted wounds,” he said. Security experts are known for mocking the short-sighted things non-technical people do, but he cautioned that the behavior creates its own vulnerability. “Remember that when you rant about how awful non-technical people are in public, you run the risk that you make it more unlikely that users will bring things to you.” O’Connor said he’s seen non-technical staff alert security teams to major vulnerabilities that they could never have seen looking at lines of code on a server, such as supervisors collecting the passwords of staff on paper.
- Cooperation will diminish with the use of coercion.
O'Connor took a page from improv comedy for this point. In that art form, actors create an imagined scenario. No one ever rejects the scenario; instead, each actor adds to it. It's called "Yes, and…" Taking the same approach when helping an engineering team build its product builds political capital for the security team. "If every time someone asks a question, the answer is no, people stop asking," O'Connor said. Instead, if a security team helps teams understand what it takes to securely implement a potentially dangerous idea, the security team will be seen as a resource. And the product team might even listen and scale back the danger. It's more work in the moment for a security expert, but it should also lead to less triage down the line.
- Good humor and friendship to all.
The business wants what it wants. "You are expected to go the extra mile to make things work," O'Connor said. Putting on a happy face, even with difficult collaborators, can have long-tail payoffs.
- Minimize coercion.
Security teams end up having a lot of power and a lot of visibility on their networks. This power is best used judiciously. It works best when security teams make clear how much reach they have and what their policies are for using that reach, and then rigidly follow those policies. For example, if a team can remotely wipe a laptop, it should be very clear about when and whether it would use that power, rather than just letting it loom as a constant threat. Teams should demonstrate that they "know the special access you've been given is both a trust and a burden," he said.
- There is no “us” or “them” (everyone is a part of a larger team).
Security professionals can gain a lot of political capital by pitching in on the product side as much as possible. This can help with revealing vulnerabilities, too. Doing some work alongside engineering teams can illuminate pain points that might otherwise lead staff to create security workarounds.
- Security teams aren’t in charge.
Another way of saying this: "I only work here." If a security team gives good security advice and the organization doesn't follow it, at the end of the day, that's someone else's responsibility. This restates the point above: security is in a support role. It's not the organization's mission.
- Security can blend in.
Users should be able to go throughout their day without thinking about keeping everything safe, even though that is partly everyone’s responsibility. “We don’t win when users feel like security is always watching them,” he said.
In a civil society, it is everyone’s job to help prevent crime and disorder, but the police are the only ones paid to watch out for it all the time. In an organization, it’s everyone’s job not to create vulnerabilities and look out for dangerous behavior, but only security teams get paid to worry about that constantly.
Security teams, O’Connor suggests, will have far fewer headaches in the end if they treat the people in the companies they have been hired to protect less like the causes of headaches and more like collaborators in preventing them.