1 - Telemetry Dashboard

The Telemetry dashboard provides a simple, high-level view of your Managed Detection and Response (MDR) service telemetry metrics.

Summary Panels

Within the dashboard are various summary panels, which can be updated based on a specified time period and include:

  • Total number of events ingested into the SamurAI platform
  • Total log volume
  • Number of integrations (this reflects the current state and is not affected by the specified time period)
  • Unhealthy integrations (these integrations likely need action; review the Telemetry Monitoring article for further information)

Figure 1: Example summary panel

Time period

You can update the relevant panels to a specific date and time range, either by choosing one of the included Quick time ranges or by specifying a custom date and time period.

Figure 2: Date and time selection

Detail Panels

Additional panels provide event data based on products you have integrated with the SamurAI platform.

Events per product

Figure 3: Example events per product bar graph

Events per product

Figure 4: Example events per product pie chart

Data ingested per product

Figure 5: Example data ingested per product table

If you wish to drill down into the events, we recommend using the Advanced Query feature. Review Advanced Query Introduction for more information.

2 - Alerts Dashboard

The Alerts dashboard provides valuable insight into your organization’s security landscape. Although all alerts are handled by the SamurAI Security Operations Center (SOC), the dashboard gives you visibility into the volume of alerts that potentially lead to validated threats, which are reported to you as security incidents. Additionally, we provide transparency by categorizing alerts by detection engine and highlighting top threat signatures. While you do not need to act on these alerts, this information demonstrates the scale and effectiveness of the SamurAI MDR service.

Outlined below are examples and an explanation of each panel within the dashboard:

Monitoring, Detection and Response summary

The funnel outlines the telemetry (events) ingested by the SamurAI platform from your configured integrations, the security detections (alerts) made by the SamurAI platform detection engines and third-party vendors, which are triaged and investigated by the SamurAI SOC, and the number of security incidents reported to your organization. The funnel illustrates the value of the service: the data analyzed is focused on detecting and reporting threats to your organization.

Figure 1: Example summary

Number of alerts

The total number of alerts analyzed by the SamurAI platform and SOC analysts.

Figure 2: Example number of alerts

Number of unique signatures

The total number of unique alert signatures.

Figure 3: Example unique signatures

Alerts per detection method

Donut chart showing the alerts per detection method. For a brief explanation of the detection engines, please refer to Alerts.

Figure 4: Example alerts per detection method chart

Alerts timeline per detection method

Bar graph showing alerts over the time period per detection method.

Figure 5: Example alerts timeline per detection method graph

Top 10 signatures

Top 10 alert signatures from all detection methods.

Figure 6: Example top 10 signatures

Top 10 signatures for Hunting Engine

Top 10 alert signatures for the SamurAI hunting engine.

Figure 7: Example top 10 signatures for hunting engine

Top 10 signatures for Real-time Engine

Top 10 alert signatures for the SamurAI real-time engine.

Figure 8: Example top 10 signatures for real-time engine

Top 10 signatures for vendor

Top 10 alert signatures from your vendor product integrations.

Figure 9: Example top 10 signatures for vendor

3 - Security Incident Dashboard

The Security Incidents dashboard provides a simple, high-level view of your Managed Detection and Response service security incidents.

Current open security incidents by severity

For more information on severity definitions, refer to Security Incident Fields.

Figure 1: Example current open security incidents by severity

Current open security incidents by state

For more information on state definitions, refer to Security Incident Fields.

Figure 2: Example current open security incidents by state

Current open security incidents (days)

This graph helps you understand how long (in days) a security incident has remained open - this could be in the ‘Awaiting feedback’ or ‘Awaiting SOC’ state. The goal is to remediate and close a security incident as quickly as possible to mitigate risk.

Figure 3: Example current open security incidents (days)

New security incidents per month by severity

Figure 4: Example new security incidents per month by severity

Security incidents average closing time by severity (days)

This graph shows the average closing time (in days) of security incidents per severity. The goal is to keep this average closing time to a minimum.

Figure 5: Example security incidents average closing time by severity (days)

Security incidents total opened/closed per month

Figure 6: Example security incidents total opened/closed per month

4 - MDR Metrics

The MDR Metrics Dashboard provides clear, data‑driven insight into how SamurAI MDR detects, validates, notifies, and resolves security incidents in your environment.

All measurements are based entirely on actual incident activity reported to your organization. This gives you an accurate understanding of how the SamurAI MDR service performs, how incidents progress through their lifecycle, and where improvements in visibility or response processes may be valuable.

The MDR Metrics Dashboard helps you:

  • Understand how effectively SamurAI MDR identifies, validates, and communicates threats
  • Gain transparency into the investigation and notification processes
  • Support audit, governance, and compliance reporting with objective data
  • Observe how your internal teams respond to incidents over time
  • Build confidence in SamurAI MDR operations through measurable outcomes
  • Identify patterns in adversary behavior and security posture in your environment

Where to Find MDR Metrics

From the SamurAI Portal main menu, select Dashboard, then MDR Metrics.

Each panel includes hover-over definitions for clarity.

What the dashboard shows

The dashboard provides a complete view of incident activity and service performance using four key categories of insight:

1. Mean Time‑to‑X (MTTx) Metrics — Averages Over Time

  • These panels and charts show the average time taken for key stages of the incident lifecycle across all relevant incidents in the selected timeframe. This helps you understand general performance trends, rather than focusing on individual outliers.

2. Time‑to‑X (TTx) Metrics — Per Incident

  • Stacked bar graphs show the time taken for each stage of each individual incident.
  • This enables case‑by‑case understanding and highlights variations caused by incident complexity, evidence availability, or operational conditions.

3. Security Incidents by Severity

  • A distribution of incident severities (Low, Medium, High, Critical).
  • This helps assess overall security posture and the proportion of high‑impact incidents requiring greater attention.

4. MITRE ATT&CK Tactics

  • A visual breakdown of MITRE ATT&CK tactics identified across incidents in the selected timeframe.
  • Incidents may involve multiple tactics, offering insight into attacker behavior and common patterns relevant to your environment.

Key Metrics Explained

Time to Validate (TTV) & Mean Time to Validate (MTTV)

This represents how thoroughly and confidently SamurAI MDR confirms that a potential threat is a genuine security incident.

  • TTV measures the time from the first contributing alert visible to the SOC to incident creation in the SOC ticketing system.

  • This phase involves analyst investigation, evidence gathering, correlation across alerts, and ensuring the situation is fully understood.

  • TTV naturally varies depending on the complexity of the activity being analyzed. A longer TTV often indicates a more intricate or multi‑stage threat - not inefficiency.

  • MTTV is the average of these validation times across all incidents within the selected timeframe.

  • It provides a broad view of validation performance, smoothing out individual cases that required deeper investigation.

Time to Notify (TTN) & Mean Time to Notify (MTTN)

How quickly you are informed after an incident has been confirmed.

  • TTN measures the time from incident creation in the SOC ticketing system to publication (client notification) in the SamurAI Portal.
  • MTTN is the average notification time for all incidents within the selected timeframe. This metric helps you see the consistency of communication, accounting for variations caused by incident complexity, evidence volume, or analyst review.

Time to Client Close (TTCC) & Mean Time to Client Close (MTTCC)

How quickly your organization completes remediation and closes the incident.

  • TTCC measures the time from SamurAI Portal publication to the point your organization closes the incident in the SamurAI Portal. This duration reflects your internal remediation processes, approval flows, and operational workload.
  • MTTCC is the average client closure time for all incidents closed in the selected timeframe. MTTCC helps you identify broader trends in internal response efficiency, rather than focusing on any single outlier incident.

These metrics offer valuable insight into your organization’s remediation speed and support continuous improvement in your internal response processes.
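Each of these metrics is a simple timestamp difference, and each MTTx is the mean of those differences across incidents in the timeframe. The following sketch illustrates the arithmetic only; the record fields are hypothetical and do not represent the SamurAI platform's actual data model:

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records; field names are illustrative, not the SamurAI API.
incidents = [
    {"first_alert":   datetime(2024, 5, 1, 8, 0),    # first contributing alert visible to the SOC
     "created":       datetime(2024, 5, 1, 9, 30),   # incident created in the SOC ticketing system
     "published":     datetime(2024, 5, 1, 9, 45),   # published (client notified) in the SamurAI Portal
     "client_closed": datetime(2024, 5, 3, 9, 45)},  # closed by the client in the Portal
    {"first_alert":   datetime(2024, 5, 2, 14, 0),
     "created":       datetime(2024, 5, 2, 18, 0),
     "published":     datetime(2024, 5, 2, 18, 30),
     "client_closed": datetime(2024, 5, 4, 18, 30)},
]

def hours(delta):
    """Convert a timedelta to hours."""
    return delta.total_seconds() / 3600

# Per-incident metrics
ttv  = [hours(i["created"] - i["first_alert"]) for i in incidents]      # Time to Validate
ttn  = [hours(i["published"] - i["created"]) for i in incidents]        # Time to Notify
ttcc = [hours(i["client_closed"] - i["published"]) for i in incidents]  # Time to Client Close

# Mean Time-to-X: the average across all incidents in the selected timeframe
print(f"MTTV:  {mean(ttv):.2f} h")   # (1.5 + 4.0) / 2 = 2.75 h
print(f"MTTN:  {mean(ttn):.2f} h")   # (0.25 + 0.5) / 2 = 0.38 h
print(f"MTTCC: {mean(ttcc):.2f} h")  # 48 h for both incidents
```

Note how a single long validation (4 hours in the second incident) pulls the MTTV above both of the faster cases, which is why the per-incident TTx charts are useful alongside the averages.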

How Timeframes Work

Selecting a timeframe (e.g., last 3 or 6 months) updates every metric and chart.

Because incidents may fall inside or outside that timeframe:

  • Shorter periods may show greater variation due to fewer incidents
  • Longer periods provide more stable averages and clearer trends

This ensures metrics always reflect the specific window you are reviewing.
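The effect of window length on stability can be sketched as follows; the data and filtering logic are hypothetical, not the dashboard's implementation:

```python
from datetime import datetime, timedelta
from statistics import mean

# Hypothetical closed incidents: (closed_at, client closing time in days)
incidents = [
    (datetime(2024, 1, 10), 2.0),
    (datetime(2024, 2, 20), 3.5),
    (datetime(2024, 5, 5),  1.0),
    (datetime(2024, 6, 1),  9.0),  # a single slow outlier
]

def mttcc(window_start, window_end):
    """Average closing time over incidents closed inside the window only."""
    in_window = [days for closed, days in incidents
                 if window_start <= closed < window_end]
    return mean(in_window)

end = datetime(2024, 7, 1)
# A 6-month window averages over more incidents, smoothing the outlier...
print(mttcc(end - timedelta(days=182), end))  # (2.0 + 3.5 + 1.0 + 9.0) / 4 = 3.875
# ...while a 2-month window is dominated by it.
print(mttcc(end - timedelta(days=61), end))   # (1.0 + 9.0) / 2 = 5.0
```

The same outlier shifts the short-window average far more than the long-window one, which is the variation described above.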

What’s Included and What’s Not

To maintain quality, fairness, and accuracy, the dashboard only includes reported security incidents that meet measurement criteria.

Included:

  • Security incidents created by the SOC
  • Alerts and timestamps sourced directly from the SamurAI platform
  • Client‑initiated incident closure times

Excluded:

  • Consulting-driven incidents
  • Incidents without evidence (e.g. alerts; this is very rare but does occur if the SOC needs to notify you of a critical incident quickly)

These exclusions ensure that the metrics reflect real-world service performance rather than retrospective or atypical cases.

Interpreting MTTV and TTV

Validating whether a potential threat is a genuine security incident is a careful, evidence‑driven process. For this reason, Time to Validate (TTV) and Mean Time to Validate (MTTV) should not be interpreted as “how fast an analyst works.” Instead, these metrics show how long it takes before there is enough reliable evidence to confidently confirm that an incident is real.

Why MTTV should not be viewed solely as a speed metric

Analyst accuracy and confidence come first

  • Validation requires correlating multiple alerts, understanding context, reviewing telemetry, and sometimes waiting for additional evidence
  • Rushing this process would reduce the quality and reliability of incident reporting

Some incidents validate quickly; others require deeper analysis

  • Simple activity (e.g. brute‑force attempts) can be confirmed quickly, whereas identity‑based attacks, multi‑stage compromises, and “slow‑and‑low” behaviors naturally take longer

A longer TTV is often a sign of a complex or high‑fidelity investigation — not inefficiency

  • These cases often provide more complete insights and ensure the correct classification of the threat

MTTV varies because real‑world threats vary

  • The time required reflects the nature of the threat and the telemetry available, not analyst performance

What MTTV actually tells you

  • How effectively SamurAI MDR correlates signals and determines when an incident is genuine
  • The point at which sufficient evidence exists to confirm malicious activity
  • How visibility, alert quality, and environment context influence the validation process
  • Trends in how quickly incidents move from “potential issue” to “confirmed security incident” across the selected timeframe

What MTTV does not tell you

  • It does not measure analyst productivity or speed
  • It does not indicate whether an investigation was rushed or delayed
  • It does not penalize thorough investigations or legitimate deep‑dive cases
  • It does not represent alert-to-alert “reaction time.” For this reason, MTTV is considered a quality-based validation metric, not a stopwatch metric

Why this matters

By focusing on accuracy, context, and completeness, MTTV helps demonstrate:

  • Consistency in how SamurAI MDR identifies true security incidents
  • The benefit of strong telemetry integration
  • The role of investigative depth in producing reliable incident reports
  • How improved visibility or reduced alert noise can shorten validation time without compromising quality

In short, MTTV reflects the quality and completeness of an investigation, not analyst speed. It naturally varies with the complexity of each case, and a “longer” MTTV often means the SOC devoted the time needed to correctly understand and validate the threat.