08 November 2018

Why the big Mean-Time-To-Detection?


“197 days: The average length of time it takes for organisations to identify a data breach” [1]

This statistic, Mean Time To Detection (MTTD), comes up every year. 197 days is the 2018 figure, but it is always large, and it is marginally up from the previous year's 191 days. This raises two questions: why does it take so long to identify breaches, and are we even identifying all of them?

There are two ways to identify a breach:

  1. Someone spots some kind of unusual behaviour in the network, on devices, or in applications, and tracks it down to malicious activity.
  2. The results of the breach become public.

The first of these, spotting unusual behaviour, typically requires highly sophisticated tools: log centralisation, SIEM, Intrusion Detection and Prevention Systems (IDPS), and so on. Just as important as the tools are the people to run them. Breach detection is a highly skilled activity, and more sophisticated tools require more sophisticated users and analysts. Even with the best tools and the best people, there is still a huge number of alerts to analyse every day. (And don't forget that most companies cannot afford state-of-the-art tooling, and cannot recruit, never mind retain, top analysts.) It is not at all surprising that companies do not spot everything.
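To make the idea concrete, here is a minimal sketch of the kind of rule a SIEM might run over centralised logs: flagging accounts that log in from an unusually large number of distinct addresses. The function name, data shape, and threshold are illustrative assumptions, not any vendor's actual method:

```python
from collections import defaultdict

def flag_suspicious_logins(events, max_distinct_ips=3):
    """Flag users who log in from more distinct IPs than a threshold.

    `events` is a list of (user, source_ip) pairs, as might be extracted
    from a centralised authentication log. The threshold is arbitrary.
    """
    ips_per_user = defaultdict(set)
    for user, ip in events:
        ips_per_user[user].add(ip)
    # Return flagged users in a stable order for easy review.
    return sorted(u for u, ips in ips_per_user.items()
                  if len(ips) > max_distinct_ips)

# Hypothetical log extract: bob logs in from four different networks.
events = [
    ("alice", "10.0.0.1"), ("alice", "10.0.0.1"),
    ("bob", "10.0.0.2"), ("bob", "198.51.100.7"),
    ("bob", "203.0.113.9"), ("bob", "192.0.2.44"),
]
print(flag_suspicious_logins(events))  # → ['bob']
```

Even this toy rule shows the problem: every threshold is a trade-off between missed breaches and a flood of false-positive alerts for analysts to triage.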

Which means it is quite common for breaches to be detected by somebody outside the company, who then reports it. According to a 2018 report [2], in 38% of breaches the company finds out from an external source. That seems like a high number, but it is in fact significantly down from 53% in 2017. Often, this will simply be users (commercial customers or private individuals) who have discovered their data is public; they tell the company, which then investigates and identifies the breach.

All of which leads to an inexorable conclusion: it is likely that large numbers of breaches go completely undetected. In some cases, it may never be noticed that the data was stolen. In other cases, stolen data may be in the public domain, but it will not be clear where it was stolen from. At this point, I'm sure you'd like me to provide some wisdom on how we address this. Unfortunately, there is no silver bullet.

That said, we at IDECSI do think that breach detection can be improved. Far too much monitoring and detection depends on smart people and deep (and expensive) technology. We think you can be more precise in breach detection, identifying activity at the user and application level, which allows almost instant identification of breaches. It is a very different approach from the AI of some SIEM and IDPS solutions, but also very effective. You can find out more by reading about IDECSI MyDataSecurity.

A few words about Ben Miller
Ben Miller is an experienced technologist and entrepreneur with a background in mathematics and software engineering. He focuses on bringing to market new technologies that change conventional thinking. In cyber security, we have long been used to complaining about users and driving more work into the security team. Ben's particular focus today is technologies that challenge this approach and instead make user empowerment a key part of the cyber discussion.

[1] IBM, "How much does a data breach cost?"

[2] FireEye, Special Report, "M-Trends 2018"
