Tanya Janca was depressed.

As a member of a team writing online articles and creating videos to help consumers use her employer's products, she found her contributions were getting far fewer hits than a colleague's.

She asked him why. “I post links to my stuff on Reddit,” he replied.

But when she did the same, for some reason it didn't help. So she asked the team to find out how much time users were spending on each article. It turned out they were spending about a minute and a half on hers, suggesting they were actually reading them, versus mere seconds on the pieces by the colleague she envied.

The lesson, she told the SecTor 2020 virtual conference this week, is that collecting the right data is vital.

Janca, now head of She Hacks Purple, a Victoria, B.C.-based security training company, says infosec teams need metrics badly. And not just for reports to management, she added.

“We’ll never have enough time and money to do all we want so we need metrics to work smarter,” she said. “We have to make the best decisions we can, and for that we want metrics.”

The question is, which metrics? There’s no shortage of numbers that can be pulled out of logs or user surveys. So, Janca said, choose metrics that are relevant to your organization, and choose them wisely.

Beware of vanity metrics, she said: numbers that look impressive but mean nothing. She recalled going into a meeting with a colleague who'd just been appointed temporary head of their team, determined to present a pie chart boldly titled the "Threat Chart." It showed the high, medium and low threats the organization faced.

Janca warned her leader the chart didn't answer important questions, such as whether the threats were increasing or decreasing. "Executives love pictures," the leader said dismissively. Instead, their boss asked penetrating questions and wound up saying, "You realize this is crap." The takeaway lesson: sometimes management is smarter than you.

A specialist in application security, Janca said it’s important appsec teams do more than count the number of vulnerabilities discovered in code. If more bugs are found it may not mean coders are doing a worse job than before; it can mean the testing is better, or a new testing tool is discovering different vulnerabilities.

What developers need to know is whether the same types of vulnerabilities are being found and whether that number is going up, which could be a sign developers aren't learning from their mistakes. Timing in capturing metrics is important, too. If developers receive reminders or training about a particular vulnerability, measuring the rate of discovery of those bugs after training will show whether the lesson was effective.
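That before-and-after comparison can be a simple tally. A minimal sketch, assuming hypothetical scanner findings tagged with a vulnerability category and a discovery date (the data and the training date below are invented for illustration):

```python
from collections import Counter
from datetime import date

# Hypothetical scanner findings: (vulnerability category, date discovered)
findings = [
    ("XSS", date(2020, 3, 2)),
    ("XSS", date(2020, 3, 20)),
    ("SQLi", date(2020, 4, 1)),
    ("XSS", date(2020, 6, 15)),
]

training_date = date(2020, 5, 1)  # e.g. the day of an XSS training session

# Count findings per category before and after the training
before = Counter(cat for cat, d in findings if d < training_date)
after = Counter(cat for cat, d in findings if d >= training_date)

# If the post-training count for a category drops, the lesson likely stuck
print("XSS before:", before["XSS"], "after:", after["XSS"])  # XSS before: 2 after: 1
```

In practice the counts would be normalized by the amount of code scanned in each period, but the shape of the comparison is the same.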

Important appsec metrics include:

  • Time to detection.
  • Time to remediation/patch.
  • Where are you in your baseline security posture, vs where you expect to be?
  • Are you meeting service level agreements with developers and operations (and vice versa)?
  • Is the average number of vulnerabilities per system or app going up/down over time?
  • Whether the same vulnerabilities keep recurring, or new ones are appearing.
  • The after-effect of education and training.
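The first two metrics on that list fall out directly once detection and fix timestamps are recorded per vulnerability. A sketch with made-up records:

```python
from datetime import datetime
from statistics import mean

# Hypothetical records: (date reported, date detected by the team, date remediated)
vulns = [
    (datetime(2020, 1, 1), datetime(2020, 1, 4), datetime(2020, 1, 10)),
    (datetime(2020, 2, 1), datetime(2020, 2, 2), datetime(2020, 2, 5)),
]

# Mean time to detection and mean time to remediation, in days
mean_detect = mean((det - rep).days for rep, det, _ in vulns)
mean_fix = mean((fix - det).days for _, det, fix in vulns)

print(f"mean time to detection: {mean_detect} days")   # 2 days
print(f"mean time to remediation: {mean_fix} days")    # 4.5 days
```

Tracking these averages over time, rather than as one-off numbers, is what shows whether the program is improving.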

It’s important infosec leaders set goals for any security program, Janca said, and then measure progress.

One problem can be tools that measure vulnerabilities in different ways: one may show CVSS scores, while another may rate them on a 1 to 10 scale and another high-medium-low. In that case, create a chart saying "this equals that."
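Such a "this equals that" chart can be as simple as a lookup that maps each tool's scale onto one shared set of labels. A hypothetical mapping; the tool names and thresholds here are illustrative, not a standard:

```python
def normalize(tool: str, value) -> str:
    """Map each scanner's severity rating onto one shared high/medium/low scale."""
    if tool == "cvss":            # tool reporting a 0.0-10.0 CVSS base score
        return "high" if value >= 7.0 else "medium" if value >= 4.0 else "low"
    if tool == "scale_1_10":      # vendor tool with its own 1-10 rating
        return "high" if value >= 8 else "medium" if value >= 5 else "low"
    if tool == "hml":             # tool that already reports high/medium/low
        return value.lower()
    raise ValueError(f"unknown tool: {tool}")

print(normalize("cvss", 9.8))        # high
print(normalize("scale_1_10", 6))    # medium
print(normalize("hml", "Low"))       # low
```

Once every finding carries the same label, counts and trends from different tools can be combined on one chart.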

The best way, she said, is to create an in-house scoring system based on several factors, including:

  • Ease of exploit (if it's hard, maybe you can wait until a patch is released).
  • Whether it's on a public website or an internal system (risk is lower if internal, so you don't have to work all night fixing it).
  • Whether it allows escalation to privileged access.
  • How long the vulnerability has been known (generally you need to fix older ones faster because they're easier to exploit).
  • Whether it's a specific risk to your organization.
  • How sensitive the affected data is.
  • Whether the confidentiality, integrity or availability (CIA) of data is affected.
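An in-house score along those lines might combine the factors into a single weighted number used to prioritize fixes. The weights, factor names and cutoffs below are invented for illustration, not a recommended scheme:

```python
def risk_score(
    easy_to_exploit: bool,
    internet_facing: bool,
    privilege_escalation: bool,
    days_known: int,
    sensitive_data: bool,
    cia_impact: int,   # 0-3: how many of confidentiality/integrity/availability are hit
) -> int:
    """Toy in-house score: higher means fix sooner. Weights are arbitrary examples."""
    score = 0
    score += 3 if easy_to_exploit else 0
    score += 3 if internet_facing else 1       # internal systems still carry some risk
    score += 2 if privilege_escalation else 0
    score += 2 if days_known > 90 else 0       # older bugs tend to have public exploits
    score += 2 if sensitive_data else 0
    score += cia_impact                        # one point per CIA property affected
    return score

# A long-known, easily exploited bug on a public site affecting all of CIA:
print(risk_score(True, True, True, 120, True, 3))  # 15
```

The point of such a scheme is consistency: every team rates every finding the same way, so the resulting numbers can be compared and trended.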

These and other metrics will help infosec pros determine whether incidents are being repeated or new ones are popping up. Security incident metrics that matter include:

  • Was the incident response procedure followed?
  • Were tools needed by staff available?
  • What was the time to detect, time to diagnose and resolve the incident?
  • What was the type/category of the incident?
  • What were the cost and damage of the incident?
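Given incident records, the timing and cost metrics above fall out of a simple per-category aggregation. A sketch over hypothetical data:

```python
from statistics import mean

# Hypothetical incident log: category, hours to detect, hours to resolve, cost
incidents = [
    {"category": "phishing", "detect_h": 2, "resolve_h": 8, "cost": 5000},
    {"category": "phishing", "detect_h": 1, "resolve_h": 4, "cost": 2000},
    {"category": "insecure_app", "detect_h": 48, "resolve_h": 72, "cost": 40000},
]

# Group incidents by category
by_category = {}
for inc in incidents:
    by_category.setdefault(inc["category"], []).append(inc)

# Summarize count, mean time to detect, and total cost per category
for cat, incs in by_category.items():
    print(cat,
          "count:", len(incs),
          "mean detect (h):", mean(i["detect_h"] for i in incs),
          "total cost:", sum(i["cost"] for i in incs))
```

Category counts reveal which incident types keep recurring, and category costs are exactly the kind of figure Janca used later to justify a program's budget.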

Where possible, automate the collection of metrics, Janca advised. While many managers use Excel spreadsheets, there are vulnerability management tools available. Some business intelligence tools may be useful, although they may need a specialist to run them. Ideally, data is funnelled into one place and can be seen on a single dashboard. Some cloud dashboards, such as Azure's, she said, can ingest data from several sources.

Finally, there is using metrics in reports to management. Early in her career, when Janca was in charge of incident response at a company, she noticed insecure software was the source of the largest number of incidents. To get funding for a basic application security program, she explained how much incidents cost the organization, and what it would cost to run a program that would catch at least 80 per cent of the vulnerabilities.

She got her funding.
