In the Preparation posts of this Incident Management Roadmap, we covered the foundations: Governance (your emergency manual), Technology (your locks and cameras), Communication (your crisis messaging), and People (your response team). All of that groundwork serves one purpose: enabling you to act decisively when something goes wrong.
Now we move from preparation to action. The Identification phase is where theory meets reality. Something has happened, or might be happening, and you need to figure out what it is, whether it's real, and how serious it is.
Think of it like this: your office building has smoke detectors, security cameras, and a reception desk where visitors check in. The Identification phase is the moment when one of those systems activates. Maybe a smoke detector goes off. Maybe a security guard notices someone behaving strangely. Maybe an employee reports that their badge isn't working. The question isn't just "did something happen?" It's "what exactly happened, how bad is it, and what do we do next?"
The faster your organization can answer those questions, the less damage an attacker can do. Every hour of delay is an hour the attacker has to dig deeper, move through your systems, and access more sensitive data.
Where Incidents Come From
The three detection channels
Incidents don't announce themselves with a flashing red light and a siren. They emerge from multiple sources, often as fragments of information that need to be pieced together. As a leader, you don't need to understand the technical details of every security tool, but you do need to understand the channels through which incidents surface, because each requires different management attention.
Channel 1: Your Security Systems
Your IT team operates various monitoring tools that watch for suspicious activity, much like a building's security system watches for intruders. These tools generate alerts when something looks wrong: an employee logging in from an unusual location, a computer behaving strangely, data moving in unexpected ways.
The management challenge here isn't the technology itself. It's that these systems generate a lot of alerts, and most turn out to be nothing (false alarms). Your security team spends significant time separating genuine threats from background noise. If they're overwhelmed, real incidents get missed. If they're under-resourced, they burn out. This is a staffing and process question, not a technology question.
Channel 2: Your People
Your employees are sensors distributed throughout the organization. They notice things that automated systems miss: a colleague asking unusual questions, an email that doesn't quite look right, a customer complaint about something they didn't authorize, a system behaving differently than expected.
This channel requires active cultivation. Do your employees know how to report something suspicious? Do they feel comfortable doing so, or do they worry about looking foolish? When they do report something, does anyone respond, or does their concern disappear into a queue?
A practical insight: the way someone reports an issue often signals its urgency. If they send an email, they're accepting they might not hear back today. An instant message suggests they'd like a faster response. A phone call means they think this is serious. Make sure your team reads these signals.
Channel 3: External Parties
Sometimes the first indication of a problem comes from outside your organization:
- Law enforcement contacts you because your company appeared in a criminal investigation
- A security researcher discovered a vulnerability in your systems and is reporting it responsibly
- A business partner notices unusual activity involving shared systems
- A customer reports unauthorized transactions on their account
- An industry group alerts members to a threat affecting your sector
External notifications require careful handling. Verify the source before taking action, but don't dismiss them out of hand. Some of the worst breaches in history were first reported by outsiders while internal teams saw nothing wrong.
Validating What You're Seeing
Is this real?
Here's the uncomfortable truth about security monitoring: most alerts are false positives, the security equivalent of a smoke detector triggered by burnt toast rather than an actual fire.
The validation phase is about determining whether you're looking at a real incident or a false alarm. This matters because responding to every alert as if it were a major breach would exhaust your team within days. But dismissing alerts too quickly means missing real attacks hiding among the noise.
What validation looks like in practice:
Your security team receives an alert. Before escalating it to incident status, they need to answer several questions. Is this an isolated event or part of a pattern? Have we seen similar activity elsewhere in our environment? Does the affected system contain sensitive data? Is the timing suspicious (weekends, holidays, outside business hours)? Could there be an innocent explanation?
This is detective work. A single data point rarely tells the full story. A failed login attempt is routine. A hundred failed login attempts from the same source is an attempted break-in. A successful login following those failures, from a location the user has never accessed before, is a likely compromise.
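To make that detective work concrete, here is a minimal sketch of such a correlation rule in Python. Everything in it is an illustrative assumption rather than the behavior of any real security product: the event fields, the threshold of one hundred failures, and the known-locations lookup would all come from your own environment.

```python
from collections import defaultdict

# Assumed threshold for "a hundred failed login attempts"; tune to your environment.
FAILED_LOGIN_THRESHOLD = 100

def triage_logins(events, known_locations):
    """Flag the pattern described above: a burst of failed logins followed
    by a success from a location the user has never accessed before.

    `events` is assumed to be time-ordered dicts with "user", "source_ip",
    "location", and "outcome" keys; `known_locations` maps each user to the
    set of locations they normally log in from. Both are hypothetical.
    """
    failures = defaultdict(int)  # (user, source_ip) -> consecutive failures
    findings = []
    for e in events:
        key = (e["user"], e["source_ip"])
        if e["outcome"] == "failure":
            failures[key] += 1
            if failures[key] == FAILED_LOGIN_THRESHOLD:
                findings.append(("attempted_break_in", key))
        elif e["outcome"] == "success":
            new_location = e["location"] not in known_locations.get(e["user"], set())
            if failures[key] >= FAILED_LOGIN_THRESHOLD and new_location:
                findings.append(("likely_compromise", key))
            failures[key] = 0  # a successful login ends the failure streak
    return findings
```

Note that a single failed login never fires; only the combination of volume and an unusual location escalates. That is exactly the "pattern over single data point" reasoning above, expressed as a rule.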
The "declare high" principle:
Experienced incident managers know it's better to declare an incident as serious and then downgrade that assessment, rather than underestimate and lose critical response time. You can always stand down. You can't get back the hours lost to a slow start.
This requires a culture where escalating a concern that turns out to be nothing is acceptable, even encouraged. If your team fears being blamed for "crying wolf," they'll hesitate when speed matters most.
The dangerous assumption:
One common mistake deserves special attention. Finding that a system was vulnerable doesn't mean it was compromised. Conversely, finding that a system is fully protected doesn't mean it's clean.
Here's how this plays out: IT discovers a server missing security updates. They apply the updates and close the ticket, never checking whether anyone exploited the gap before it was closed. The real status of that system remains unknown. Alternatively, a security alert fires, IT runs a scan that finds no vulnerabilities, and they dismiss the alert as a false positive, when actually the compromise came through a tricked employee rather than a technical flaw.
Determining Severity
Not All Incidents Are Equal
Once you've confirmed something real is happening, you need to determine how serious it is. Resources are finite. You can't treat every incident as a five-alarm fire, but you can't afford to underestimate the ones that are.
Two categories to understand:
Security professionals distinguish between warning signs and active incidents:
- Warning signs indicate that an attack may be coming. Someone probing your external systems, testing for weaknesses, gathering information. These deserve attention and monitoring but don't require emergency response.
- Active incidents indicate that an attack is happening or has happened. Malicious software detected, unauthorized access confirmed, data being extracted. These require immediate action.
Four factors for prioritization:
When determining how to prioritize an incident, consider:
- Reliability of the information: How confident are we that this is real? An alert confirmed by multiple sources carries more weight than a single anomaly. Direct evidence of compromise trumps suspicious-but-ambiguous indicators.
- Importance of what's affected: A compromised laptop in a branch office is serious. A compromised system that controls access to your entire network is catastrophic. Know which systems matter most to your operations and weight your response accordingly.
- Clarity of malicious intent: Some activities are unambiguously bad: files being encrypted by ransomware, sensitive data being sent to unknown external locations. Others are suspicious but could have innocent explanations. Clear malicious activity demands immediate action; ambiguous situations may allow for more measured investigation.
- Potential business impact: What's the worst-case scenario if this is real and you don't act quickly? Think about data sensitivity, regulatory implications, operational disruption, and reputational damage. The higher the potential impact, the faster you need to move.
Document your severity levels in advance!
Your incident handling plan should define what constitutes a low, medium, high, or critical incident, and what response each level triggers. This removes ambiguity in the moment and ensures consistent handling regardless of who's on duty. When the building is on fire, it's too late to debate what constitutes an emergency.
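As a sketch of what "defined in advance" can look like, here is one possible scheme in Python that rates each of the four factors above from 0 to 3 and maps the weighted total to a predefined level and response. The weights, thresholds, and response actions are all placeholders; your own incident handling plan supplies the real values.

```python
# Illustrative weights for the four prioritization factors (all assumed).
FACTOR_WEIGHTS = {
    "reliability": 1.0,       # how confident are we that this is real?
    "asset_importance": 1.5,  # how critical is the affected system?
    "malicious_intent": 1.0,  # how unambiguous is the activity?
    "business_impact": 1.5,   # worst case if we don't act quickly?
}

# Severity levels and the response each one triggers, agreed in advance.
# Thresholds and actions are placeholders for your plan's definitions.
SEVERITY_LEVELS = [
    (12.0, "critical", "activate the full incident response team now"),
    (8.0, "high", "page the incident manager within the hour"),
    (4.0, "medium", "investigate during the current shift"),
    (0.0, "low", "log, monitor, and review at the next team meeting"),
]

def classify_incident(ratings):
    """Turn 0-3 ratings of the four factors into a level and response."""
    score = sum(FACTOR_WEIGHTS[f] * ratings[f] for f in FACTOR_WEIGHTS)
    for threshold, level, response in SEVERITY_LEVELS:
        if score >= threshold:
            return level, response

# Example: multiply-confirmed alert (3) on a network-wide access system (3),
# clearly malicious (3), severe potential impact (3) -> "critical".
print(classify_incident({"reliability": 3, "asset_importance": 3,
                         "malicious_intent": 3, "business_impact": 3}))
```

The point isn't the arithmetic. It's that the arithmetic was agreed on before the incident, so whoever is on duty reaches the same answer.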
Understanding the Scope
What are we actually dealing with?
Prioritization tells you how urgently to respond. Scoping tells you what you're responding to. Before you can contain an incident, you need to understand its boundaries.
Key scoping questions:
What's affected? Start with what you know and expand outward. If one system triggered the alert, has the problem spread to others? Your IT team's ability to answer this depends on how well you've documented your environment. You can't assess damage to assets you don't know you have.
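Here is a toy illustration of why that documentation matters. Every field name and entry is a hypothetical example; in practice this knowledge lives in an asset inventory or CMDB, not in code.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    owner: str             # who to contact when it's affected
    data_sensitivity: str  # e.g. "public", "internal", "regulated"
    connects_to: list[str] = field(default_factory=list)

# A hypothetical two-entry inventory.
inventory = {
    "crm-prod": Asset("crm-prod", "sales-it@example.com", "regulated",
                      connects_to=["billing-db"]),
    "billing-db": Asset("billing-db", "finance-it@example.com", "regulated"),
}

def blast_radius(inventory, start):
    """Expand outward from the system that triggered the alert to every
    system it can reach, mirroring the scoping question above."""
    seen, queue = set(), [start]
    while queue:
        name = queue.pop()
        if name in seen:
            continue
        seen.add(name)
        if name in inventory:
            queue.extend(inventory[name].connects_to)
        # A name missing from the inventory is a dead end: you can't
        # assess damage to assets you don't know you have.
    return seen

print(blast_radius(inventory, "crm-prod"))  # {'crm-prod', 'billing-db'}
```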
What data might be at risk? Identify what sensitive information the affected systems have access to. Customer data? Financial records? Employee information? Intellectual property? This shapes both your technical response and your communication obligations.
How long has this been going on? The timeline matters enormously. An attacker who gained access yesterday has had hours to explore. An attacker who's been present for months has had time to establish themselves deeply, map your entire environment, and position themselves for maximum damage. This is why logging and record-keeping matter: without historical data, timeline reconstruction is guesswork.
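A small sketch of what timeline reconstruction depends on, assuming a hypothetical plain-text log format ("timestamp source-ip user action") and an attacker's IP address as the indicator. The specifics are invented, but the dependency is not: without retained logs there is nothing to search.

```python
import re
from datetime import datetime

# Assumed log format: "2024-05-01T03:12:45 203.0.113.7 jdoe login_success"
LOG_LINE = re.compile(r"^(\S+)\s+(\S+)\s+(\S+)\s+(.+)$")

def first_and_last_seen(log_lines, indicator_ip):
    """Return the earliest and latest times the indicator appears in the
    retained logs -- the raw material of any incident timeline."""
    timestamps = []
    for line in log_lines:
        m = LOG_LINE.match(line)
        if m and m.group(2) == indicator_ip:
            timestamps.append(datetime.fromisoformat(m.group(1)))
    if not timestamps:
        return None, None  # no retained evidence: the timeline is guesswork
    return min(timestamps), max(timestamps)
```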
How did they get in? Understanding how the attacker gained access helps you close that door and look for others who might have entered the same way. Was it a tricked employee? An unpatched system? Stolen credentials? A malicious insider?
Who needs to know? Based on what you've learned, identify which stakeholders need to be informed. Technical teams need details to act. Executives need business impact to make decisions. Legal needs to assess notification obligations. The scoping phase generates the information that drives these communications.
The Management Investment
What Makes Identification Work
Strong identification capability doesn't happen by accident. It requires deliberate investment in four areas:
Know what you have. You can't detect problems with assets you don't know exist. Maintain an up-to-date inventory of your systems, understand how they connect, and know where your sensitive data lives. This knowledge is the foundation everything else builds on. It's not glamorous work, but without it, your security team is navigating blind.
Staff appropriately. Alert monitoring and validation require skilled people with enough time to do the job properly. If your security team is drowning in alerts, they'll miss the real incidents. If they're covering too many hours with too few people, they'll burn out. This is a budget and headcount decision.
Create a reporting culture. Technical systems catch technical attacks. People catch social engineering and notice things that seem "off." Invest in security awareness, make reporting easy, and respond visibly when employees raise concerns. If people feel their reports go into a black hole, they'll stop reporting.
Consider external support. For many organizations, maintaining around-the-clock monitoring capability in-house is impractical. External security monitoring services can extend your reach, providing expert coverage during nights, weekends, and holidays. They're not a replacement for internal capability, but they can be a valuable supplement.
Wrapping Up
The Identification phase is where speed matters most. Every hour between an attacker gaining access and your team detecting them is an hour they spend digging deeper, accessing more data, and positioning themselves for greater impact.
Effective identification requires three things working together: detection channels that cover your environment (automated monitoring, employee reporting, external notifications), validation processes that separate real incidents from false alarms, and prioritization frameworks that focus your limited resources on what matters most.
None of this works without the preparation we covered in earlier posts. Your asset inventory tells you what's affected. Your logging infrastructure provides the data. Your trained team knows how to interpret it. Your incident handling plan defines the response. Preparation enables identification; identification enables response.
In the next post of the Incident Management Roadmap, we'll move to Containment: how to stop the bleeding, limit the damage, and prevent the attacker from expanding their foothold while you work to remove them completely.
Five Questions Every C-Level Executive Should Ask
1. If an attacker was inside our network right now, how long would it take us to notice, and do we actually know that number?
→ Industry research consistently shows that attackers often remain undetected for weeks or months. Do we measure our detection time? Are we working to improve it, or just hoping our defenses hold?
2. When an employee reports something suspicious, what happens next, and how quickly do they hear back?
→ Our people are one of our best detection channels. If their reports disappear into a queue for days, we're wasting that advantage. Is there a fast path for security-relevant concerns?
3. Can our security team see a complete picture when investigating an alert, or are they looking at fragments?
→ Attackers don't stay in one place. If our team can only see activity in isolated pockets, they'll spot individual events but miss the overall pattern. Do we have the logging and tools to connect the dots?
4. Is our security team overwhelmed by false alarms, and what are we doing about it?
→ Alert fatigue is real and dangerous. If the team is drowning in noise, they'll miss the genuine threats. Are we investing in better tools, better processes, or more people to address this?
5. If a journalist, regulator, or law enforcement agency contacted us tomorrow about a breach, would we know where to start looking?
→ External notifications happen. Do we have the records, documentation, and investigative capability to respond credibly? Or would we be starting from scratch?
Let's Connect
Want to discuss how your organization approaches incident detection and identification? Whether you're evaluating your current capabilities, considering external monitoring services, or working to build a stronger reporting culture, feel free to reach out. We are always happy to talk through approaches that fit your organization's specific situation.