Knowledgebase
1 - SamurAI MDR
1.1 - What are the Integration Table fields?
The Integration Table lists all of the products you have integrated with the SamurAI platform. The table has a number of fields, which are described below:
| Field Name | Description | Additional Info |
|---|---|---|
| Status | Color indication of integration status | Pending, Unknown, OK, Warning, Critical |
| Status Description | Description of the status | Pending: Telemetry components installing / provisioning or awaiting status. Unknown: The SamurAI platform is unable to determine a status. OK: All components healthy. Warning: The SamurAI platform has not seen any events within the defined warning threshold. Critical: The SamurAI platform has not seen any events within the defined critical threshold. |
| Info | Useful information displayed when hovering over the icon | |
| ID | Universally Unique Identifier (UUID) for integration | UUID utilized by the SamurAI Platform |
| Vendor | Vendor name of the product | unknown indicates the vendor is not currently supported by SamurAI |
| Product | Product name | unknown indicates the product is not currently supported by SamurAI |
| Type | Integration type used to gather or ingest telemetry | Potential entries displayed include: |
| HA Group | High Availability (HA) group of integrations. | Review High Availability and Integrations for more information. |
| Name | Integration name | The name provided during integration configuration or could denote the hostname if sending syslog to a Local Collector |
| IP Address | IP address of the host | |
| Collector | Collector name associated with the integration | The Collector used to ingest telemetry to the SamurAI platform |
| Description | Optional description provided during integration configuration | This description can be edited at any time within Integration Details |
| Last Event Seen | The last event seen from the telemetry source | Format is [yyyy:mm:dd], [hh:mm:ss] with time represented in Coordinated Universal Time (UTC) by default or as configured. |
| Created | Date and time of integration creation | Format is [yyyy:mm:dd], [hh:mm:ss] with time represented in Coordinated Universal Time (UTC) by default or as configured. |
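Taken together, these fields describe a single row in the Integration Table. Purely as an illustration, such a row could be represented as the record below; this is a minimal Python sketch in which every value (and the dictionary layout itself) is an invented example, not data exported from the SamurAI platform:

```python
# Illustrative only: an example Integration Table row expressed as a Python dict.
# The field names mirror the table above; all values are invented examples.
example_integration = {
    "status": "OK",                       # Pending | Unknown | OK | Warning | Critical
    "status_description": "All components healthy",
    "id": "3f2b1c9e-8a47-4d2a-9c31-5e6f7a8b9c0d",   # UUID used by the SamurAI platform
    "vendor": "ExampleVendor",            # 'unknown' if the vendor is not supported
    "product": "ExampleFirewall",         # 'unknown' if the product is not supported
    "type": "example-type",               # integration type used to gather/ingest telemetry (placeholder value)
    "ha_group": None,                     # High Availability group, if any
    "name": "fw-edge-01",                 # name given at configuration, or hostname when sending syslog
    "ip_address": "192.0.2.10",
    "collector": "local-collector-01",    # Collector ingesting this telemetry
    "description": "Edge firewall, HQ",   # optional, editable in Integration Details
    "last_event_seen": "2024:05:01, 13:37:00",   # [yyyy:mm:dd], [hh:mm:ss] in UTC by default
    "created": "2024:01:15, 09:00:00",
}
```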
1.2 - Best Practices for Custom Status Thresholds
Custom status thresholds allow you to fine-tune telemetry monitoring. This article provides practical guidance for setting thresholds that balance responsiveness with noise reduction.
Why it matters
- Low thresholds may result in status ‘flapping’ that does not reflect the integration’s true state
- High thresholds may result in delayed detection of telemetry ingestion/collection issues.
The goal is to align thresholds with integration behaviour, business impact, and maintenance patterns.
Best Practices
1. Start Conservative, Then Optimize
- Begin with the default thresholds; if they do not meet your operational needs, customize them.
- Monitor notification frequency and adjust to reduce false positives while maintaining reliability.
2. Understand the Integration’s Normal Behavior
- Review historical data through the events graph or via Advanced Query to determine typical ingest times or intervals.
- Avoid setting thresholds so aggressively that you are notified about minor, non-critical fluctuations in system behaviour (see the sketch after this list for one way to derive starting values).
3. Balance Sensitivity and Noise
- Warning Threshold: Set this to catch early signs of delay without triggering too often.
- Critical Threshold: Make this significantly higher than the warning threshold to indicate a real issue.
4. Consider Business Impact
- For mission-critical integrations use tighter thresholds.
- For non-critical integrations, allow more flexibility to avoid unnecessary notifications.
5. Account for Maintenance Windows
- If integrations have scheduled downtime or maintenance, adjust thresholds or disable notifications during those periods.
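Building on points 2 and 3 above, one practical way to derive starting values is from the source's historical event timestamps (for example, gathered via Advanced Query). The following is a minimal sketch in Python assuming a simple heuristic: warning at roughly 3x the typical gap between events and critical at 10x. The function name, the multipliers, and the sample timestamps are illustrative assumptions, not values used by the SamurAI platform:

```python
from datetime import datetime
from statistics import median

def suggest_thresholds(event_times: list[datetime]) -> tuple[float, float]:
    """Suggest warning/critical thresholds (in minutes) from historical event timestamps.

    Illustrative heuristic only: warning at 3x the typical gap between events,
    critical at 10x, so brief fluctuations do not notify anyone but a real outage does.
    """
    times = sorted(event_times)
    gaps = [(b - a).total_seconds() / 60 for a, b in zip(times, times[1:])]
    typical_gap = median(gaps)
    return typical_gap * 3, typical_gap * 10

# Example usage with made-up timestamps:
history = [datetime(2024, 5, 1, 12, m) for m in (0, 5, 11, 15, 22, 27)]
warning, critical = suggest_thresholds(history)
print(f"Suggested warning threshold: {warning:.0f} min, critical: {critical:.0f} min")
```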
Common Pitfalls
- Ignoring seasonal patterns: Traffic spikes or quiet periods can skew thresholds.
- One-size-fits-all settings: Different integrations need different thresholds.
- Not revisiting thresholds: Review quarterly or after major changes.
1.3 - Boost Scoring
Boost Scoring is a technique used by the SamurAI platform to improve the ability to find Advanced Persistent Threats (APTs). It uses a methodology that links seemingly unrelated events, allowing the platform to determine when a set of events becomes notable enough to warrant investigation as a threat.
It does this by identifying suspicious activities using the combined insights offered by multiple enrolled sources, irrespective of technology type or vendor. This enables detection using activities and events that would not normally be of significant interest by themselves but that, when seen in combination, represent individual aspects of a threat. Boost Scoring provides a method to link these events and strengthen their relevance when they are combined.
By grouping activities and events on a per user/entity and MITRE tactic basis, Boost Scoring enables identification of suspicious behaviors through these combined insights. The Boost score increases over time, providing progressively more accurate confidence and threat severity scoring for each group.

Figure 1: Boost scoring
By keeping the group state for a long period of time (typically over 60 days), SamurAI is able to detect evasive threats that stay dormant for longer periods after the initial breach, by linking additional events back to the initial breach attempt.
Once a Boost score reaches a predetermined level, an alert is generated and presented to SOC analysts. This suppresses single indicators from raising alerts on their own, and instead permits the gathering of evidence until a confidence threshold is reached at which raising an alert is justified.
This technique enables detection of dormant threats and slow-moving attacks (a traditional evasion technique). Suspicious activities are assessed in their entirety regardless of threat severity, time or log source.
Simply put, Boost Scoring helps to find the balance between too many alerts (false positives) and too few alerts (false negatives), and in that process surfaces the activity that really matters for identifying threat actor behavior.
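As a purely conceptual illustration of the mechanism described above, the sketch below accumulates a score per (entity, MITRE tactic) group and raises an alert only once the combined evidence becomes notable. The grouping key, event weights, and alert threshold are assumptions made for the example; they do not reflect the SamurAI platform's actual scoring model:

```python
from collections import defaultdict

ALERT_THRESHOLD = 10.0               # assumed value, purely illustrative
group_scores = defaultdict(float)    # keyed by (entity, MITRE tactic)

def boost(entity: str, tactic: str, event_weight: float) -> None:
    """Accumulate a score for a (user/entity, tactic) group and alert once it is notable.

    Individually weak events raise the group's score a little; only the combined
    evidence crossing the threshold results in an alert for SOC analysts.
    """
    key = (entity, tactic)
    group_scores[key] += event_weight
    if group_scores[key] >= ALERT_THRESHOLD:
        print(f"ALERT: combined activity for {key} warrants investigation")

# Example: several low-significance events for the same user and tactic
for weight in (2.0, 1.5, 3.0, 4.0):
    boost("user:alice", "TA0008 Lateral Movement", weight)
```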
1.4 - How do I know if my integration is functioning?
It is important to understand whether the integrations you have configured are working correctly and sending telemetry to the SamurAI platform.
Integration Health
You can easily get an overview of which of your integrations are not healthy (e.g. the integration status is not reported as OK) by viewing the Integrations Table or Telemetry Monitoring. Telemetry Monitoring provides a concise overview of any integrations which are considered unhealthy, in other words, integrations where the SamurAI platform has not seen events/data over specific time periods:

Figure 1: Example telemetry monitoring view
The fact that an integration is unhealthy doesn’t necessarily mean that there is a fault. For example, your integration may send telemetry intermittently. The SamurAI platform takes this into consideration through default status thresholds; however, as environments differ, you may consider updating the Status Thresholds to meet your requirements.
Telemetry Monitoring Notifications
By default the SamurAI platform will send email notifications to registered users if an integration is reported as Critical.
Notification settings can be fully customized to meet your requirements, refer to Notification Settings for more information.
Managing Integration Health
There are a few factors which could result in telemetry not being properly ingested. This article takes you through the main factors which could impact whether an integration is working or not, who is responsible for them, and how to address them.
In order for data to be ingested into the platform, the following main areas need to be functioning properly:
- Platform is available: We are responsible for making sure that the SamurAI platform is available.
- Data source configuration: Often the first place to check is that the source is correctly configured to send data. If your data source uses a Cloud Collector, you will also need to check that the Cloud Collector is functioning and healthy. Make sure that you have followed all of the configuration steps outlined in the configuration guide for the Integration.
- Connectivity: Any data sources using Local Collectors are dependent on:
- the required ports being open from the device to the Local Collector, as outlined in the configuration guide
- internet connectivity between your premises and the SamurAI platform. Check that your internet connection is available and that firewalls are configured to allow traffic (a basic connectivity check is sketched below, after this list).
- the Local Collector article, which provides a detailed explanation of all of the ports that a Local Collector needs to have open in order to function correctly.
- Local Collector: If your data source uses a Local Collector, you will need to ensure that the Local Collector is available. You will also need to ensure that the virtualization platform that hosts the Local Collector is healthy. For more information see the section on Local Collectors below.
- Cloud Collector: If your data source uses a Cloud Collector, the health of your integration is also dependent on the Cloud Collector being operational. If your Cloud Collector monitors a cloud storage account, ensure that your integrated sources are sending and storing data to the storage account. If your data source is correctly configured but remains unhealthy (e.g. the reported status is Pending and/or Unknown), it is our responsibility to ensure the Cloud Collector is operational for you.
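For the connectivity point in the list above, a quick way to confirm that a device or site can reach a Local Collector on a given port is a simple TCP connect test. The sketch below is a generic example; the hostname and port are placeholders, and the ports you actually need are those listed in the Local Collector article and your integration's configuration guide:

```python
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder values: replace with your Local Collector's address and the port
# required by your integration's configuration guide.
print(can_reach("local-collector.example.internal", 514))
```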
Local Collectors
If your integration is utilizing a Local Collector, first ensure it’s running as expected. If there is a problem with your Local Collector you should receive an email notification when the status is reported as Critical (by default) or per your configured user notification settings.
When you drill down into a Local Collector in the SamurAI Portal, you are provided a view which shows you the health of the Collector, together with all of the Integrations that are configured to use that Collector:

Figure 2: Example local collector view
For integrations that utilize a Cloud Collector you can jump directly to checking the Integration status.
Integration Status
Once you have confirmed that the Local Collector is healthy (communicating with the SamurAI platform), check the Integration status. From the Collectors menu (applicable to both Local Collectors and Cloud Collectors) you can view associated integrations and their state of health. Alternatively, navigate to the Integrations page. Refer to Integrations Details and Status.
In both cases you will see a column called ‘Last Event Seen’. This column provides a timestamp of the last received event, in the format [yyyy:mm:dd], [hh:mm:ss], in your timezone as set per Time Zone.
The SamurAI platform monitors for ‘Last Event Seen’ within specific timeframes that relate directly to the status. There are default status thresholds, but they can also be fully customized, therefore if you are troubleshooting problems, check for any custom thresholds. Refer to Status Thresholds for more information.
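To make the relationship between ‘Last Event Seen’ and status concrete, here is one way such logic could be expressed. This is an illustrative sketch only; the function and the example thresholds are assumptions, not the SamurAI platform's implementation, and your tenant may use default or customized Status Thresholds:

```python
from datetime import datetime, timedelta, timezone

def derive_status(last_event_seen: datetime,
                  warning_after: timedelta,
                  critical_after: timedelta,
                  now: datetime | None = None) -> str:
    """Illustrative mapping from the age of 'Last Event Seen' to an integration status."""
    now = now or datetime.now(timezone.utc)
    age = now - last_event_seen
    if age >= critical_after:
        return "Critical"
    if age >= warning_after:
        return "Warning"
    return "OK"

# Example with assumed thresholds of 1 hour (warning) and 6 hours (critical):
last_seen = datetime.now(timezone.utc) - timedelta(hours=2)
print(derive_status(last_seen, timedelta(hours=1), timedelta(hours=6)))  # -> Warning
```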
If, for some reason, the Integration is not healthy (e.g. not Green), then run through the Integration guide for your specific device and confirm there are no other controls blocking the traffic to the Local Collector or Cloud Collector.
If your Integration is of type Local or Cloud and is not healthy, then review the integration configuration to ensure it is correct and also ensure you followed the appropriate Integration guide for your device.
If you still have issues, please submit a ticket via the SamurAI Portal.
Querying the detail
If you would like to go into more detail about the events from your log sources, you can make use of Advanced Query to analyze the events stored in the SamurAI platform. This will help you to answer questions like:
Is my log source generating logs intermittently?
By querying your log source over a period of time, the graphical representation of events will quickly show you time periods when your log source was not generating logs:

Figure 3: Example query
When did my log source last generate an event and what was that event?
You can easily find the last time when a log source generated an event. This will be the same as the “Last Event Seen” field for the Integration. For instance, the following query shows the last log generated in the last 7 days:

Figure 4: Example query
Is my log source configured to generate correctly formatted logs?
Sometimes a configuration error on your log source might result in your log source generating incorrectly formatted logs. By examining the raw log content you can check that your logs are correctly formatted. This will assist in correcting any configuration errors which may have resulted in incorrectly formatted logs being sent.
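If you suspect formatting problems, it can also help to sanity-check a sample of raw log lines outside the platform before re-checking in Advanced Query. The sketch below is a generic illustration that assumes the source should emit JSON objects; substitute whatever format your integration guide specifies:

```python
import json

def looks_like_json(raw_line: str) -> bool:
    """Return True if a raw log line parses as a JSON object (illustrative check only)."""
    try:
        return isinstance(json.loads(raw_line), dict)
    except json.JSONDecodeError:
        return False

# Example with one well-formed line and one truncated line:
samples = [
    '{"timestamp": "2024-05-01T13:37:00Z", "action": "allow", "src": "192.0.2.10"}',
    '{"timestamp": "2024-05-01T13:38:00Z", "action": "deny", "src": ',
]
for line in samples:
    print(looks_like_json(line), line[:40])
```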
Is my log source sending the logs I need?
By checking the types of events generated, you can verify that you have configured the log source to send the logs you require, and that it is generating them. For instance, in this example, we are verifying that a device is generating DNS logs as expected:

Figure 5: Example query
1.5 - SamurAI Glossary of Terms
The definitions provided below are used within SamurAI documentation; all legal terms can be found under Legal.
Agent
A software component installed on an endpoint. The SamurAI Endpoint ‘Agent’ is a native software agent of the SamurAI Platform that enables telemetry collection and query execution in the delivery of the SamurAI Managed Detection and Response service.
Advanced Analytics
Detection capabilities, including machine learning, big data, and complex event processing analysis, that are used by the SamurAI platform.
Alert
A security detection made by the SamurAI platform, or by a third-party vendor from which we are ingesting telemetry.
Boost Scoring
Boost Scoring is a technique used by the SamurAI platform which improves the ability to find Advanced Persistent Threats (APTs) by using a methodology which helps to link seemingly unrelated events.
Collector
A Collector is responsible for ingesting telemetry (or logs) into the SamurAI platform. There are two types of Collector, Local Collectors and Cloud Collectors.
A Local Collector is a virtual appliance which is deployed in your environment. Typically you will use the Local Collector as the destination for syslog messages produced by your devices.
A Cloud Collector provides the ability to ingest telemetry from cloud platforms and services, and is hosted centrally as part of the SamurAI platform. For specific integrations we leverage a cloud collector to monitor public cloud storage and pull data to the SamurAI platform.
Correlation
The ability for our systems to find a common linkage in Logs or Events (via source or destination IP address, Common Vulnerabilities and Exposures identifier, or other attributes) and combine them within one Event to add context to an Alert.
Endpoint
A physical or virtual device (e.g. workstation, laptop, server) that connects to a computer network.
Enrichment
The process of adding contextual information (such as geolocation, evidence from packet captures or other data) to log information, either programmatically, or by a Security Analyst.
Event
All of the individual data points (Telemetry) ingested via Collectors into the SamurAI platform are known as Events. Through the use of Advanced Analytics, our systems are able to generate Alerts from Events which indicate the presence of threat actor activity. All events are stored in our data lake, and can be queried using Advanced Query.
Global Threat Intelligence Center (GTIC)
The organization within NTT Security Holdings responsible for threat research, vulnerability tracking, and the development, aggregation and curation of threat intelligence.
Integration
Integrations provide the mechanism to ingest telemetry (in other words logs and data) into the SamurAI platform.
Managed Detection and Response (MDR)
SamurAI Managed Detection and Response is a service which delivers cybersecurity insights, advanced threat detection, response, and protection capabilities via the ingestion of varied telemetry sources including cloud, network, compute and mobility sources. Supported telemetry combined with our proprietary Advanced Analytics, analyst threat hunting, and AI-based threat detection capabilities translate to faster, more accurate detections and most importantly reduced business risk.
MITRE ATT&CK Framework
MITRE ATT&CK® is a globally-accessible knowledge base of adversary tactics and techniques based on real-world observations. The ATT&CK knowledge base is used as a foundation for the development of specific threat models and methodologies in the private sector, in government, and in the cybersecurity product and service community.
Threats detected by the SamurAI platform are mapped against MITRE ATT&CK to assist in better understanding the nature of the activity detected, possible countermeasures and the urgency of response.
Network Traffic Analyzer (NTA)
A native component of the SamurAI Platform. It is a passively deployed sensor that monitors mirrored network traffic to provide real-time, agentless visibility into network activity. The NTA is designed to detect suspicious or malicious behavior across both perimeter and internal segments of a network in conjunction with the SamurAI Managed Detection and Response (MDR) service.
Node
A registered instance of the SamurAI Endpoint Agent with the SamurAI platform.
SamurAI Endpoint Agent
A native software agent of the SamurAI Platform. It is installed on an endpoint enabling telemetry collection and query execution in the delivery of the SamurAI Managed Detection and Response service.
SamurAI platform
SamurAI is a vendor-agnostic, cloud-native, scalable, API-driven, advanced threat detection and response platform.
SamurAI Hunting Engine
Intelligence-driven detection engine based on the Sigma project but customized by NTT with additional detection capabilities. The SamurAI Hunting Engine performs automated threat hunting to identify and alert on possible adversary activity.
SamurAI Real-time Engine
Proprietary NTT developed detection engine that leverages behaviour modeling, machine learning, and the latest threat research to automatically identify suspected threats during real-time analysis of ingested telemetry into SamurAI.
Security Incident
A notable threat to a client environment detected and validated via automation or by Security Analysts. Security Incidents may require a response to mitigate or eliminate the identified threat. Information related to Security Incidents is available via the SamurAI Portal and downloadable in PDF format as required.
Severity
Severity is the term used to describe the potential magnitude of impact of a detected threat which is presented as a Security Incident. Severity is presented as Unknown, Low, Medium, High or Critical.
Telemetry
In the context of SamurAI, Telemetry refers to the data, usually in the form of logs, collected from different security solutions and other sources, which is then ingested into the SamurAI platform. This includes but is not limited to network, firewall, DNS, email, endpoint, server, and cloud workloads.
Each telemetry source contains different types of activity data. The SamurAI platform is able to collect a wide variety of telemetry in order to detect and hunt for unknown threats and assist in forensic analysis.
Tenant
A tenant is the entity used to represent an organization using SamurAI. Individual users can be invited to one or more tenants.
1.6 - Telemetry Data Source Categorization
SamurAI telemetry support is categorized using the following three levels. These categories describe the estimated value that a specific telemetry data source is expected to add to the Managed Detection & Response (MDR) service, whilst providing clarity and expectations of threat detection capabilities.
1. Foundation
- Vendors and technologies with excellent threat detection, validation and hunting capabilities and where evidence collection is performed (such as IDS/IPS).
2. Detection
- Vendors and technologies with good threat detection capabilities and where evidence collection is performed (such as Sandbox). Although best offered in combination with Foundation sources, Detection level sources are sufficiently high value to be monitored in isolation.
3. Enrichment
- Vendors and technologies with no / limited threat detection / validation capabilities in isolation. Used mainly for correlation, Threat Hunting and Enrichment purposes in combination with Foundation/Detection sources.
Some examples:
- An IDS/IPS telemetry source where full API integration is available and evidence (e.g. Packet Capture - PCAP) is collected for analysis would be used for threat detection purposes. However, the same technology type without such an integration (e.g. syslog only) would only provide data with no actual detail in support of qualifying threats, and would therefore primarily be used for Enrichment purposes in relation to events from sources of a higher support level.
- A DHCP log adds no actual detection capability, but it can be used to identify the physical host in a network that uses dynamic address assignment.
For technologies consisting of a combination of data source types, our policy is that the highest level of support that a source reaches also determines the overall support category of the technology.
For example, a Unified Threat Management (UTM) data source consisting of multiple types (e.g. Firewall, URL, IDS/IPS, Sandbox) would, when evidence collection is supported (e.g. PCAP, Sandbox execution reports), be categorized as a Foundation source, as the IDS/IPS with PCAP collection is considered to be at that level.
A UTM consisting of the same source types, without evidence collection, would be categorized as Detection support as the highest level source would be at Detection level (e.g. URL/FW).
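Expressed as a minimal sketch (the category names come from this article; the ranking function and the example inputs are the only assumptions):

```python
# Ranked from lowest to highest support level, per this article.
CATEGORY_RANK = {"Enrichment": 1, "Detection": 2, "Foundation": 3}

def overall_category(source_type_categories: list[str]) -> str:
    """The highest-level category among a technology's data source types wins."""
    return max(source_type_categories, key=CATEGORY_RANK.__getitem__)

# Illustrative UTM example: with PCAP evidence the IDS/IPS part reaches Foundation,
# so the UTM as a whole is Foundation; without evidence collection the best part
# might only reach Detection.
print(overall_category(["Detection", "Detection", "Foundation", "Enrichment"]))  # Foundation
print(overall_category(["Detection", "Detection", "Enrichment"]))                # Detection
```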
All supported telemetry data sources with the assigned category can be found under Supported Integrations.
2 - Support
2.1 - Getting Help
Contacting Support
You can contact us by submitting a ticket from the SamurAI Portal. We will get back to you in accordance with our Support Policy.
General Tickets
Submit a ticket from the SamurAI Portal:
- Ensure you are signed in to the SamurAI Portal
- From the main menu at the top left click General Tickets
- Select Create Ticket
- Add a Title and Description that describes your issue or request.
- Click Create ticket
Emoji support is also available.
- Windows: Press Windows + . (period) to open the emoji picker.
- macOS: Press Control + Command + Space to open the emoji and symbol picker.
Tracking Tickets
- Ensure you are signed in to the SamurAI Portal
- From the main menu at the top left click General Tickets
General Ticket Status
- Awaiting SOC: Ticket is currently awaiting feedback / input from the SOC
- Awaiting Feedback: Ticket has been created or updated and is awaiting your feedback / response
- Closed: The Ticket is Closed.
Security Incidents
Review the following articles on Security Incidents and The Situation Room for further information on Security Incidents.
Tracking Security Incidents from the SamurAI Portal:
- Ensure you are signed in to the SamurAI Portal
- From the main menu at the top left click Security Incidents.
Security Incident Status
- Awaiting Feedback: Security Incident has been created or updated and is awaiting your feedback / response
- Awaiting SOC: Security Incident is currently awaiting feedback / input from the SOC
- Closed: The Security Incident is Closed.
How do I access documentation for the SamurAI Portal?
If you are reading this, you must already know! To access it from the portal:
- From the SamurAI Portal select Documentation on the main menu
An icon will be displayed if notifications are disabled.