Explore lead risk score

About risk score

Hunters Risk Score is a multi-layer evaluation of the urgency and fidelity of security threats in the organization. Hunters runs multiple scoring models that simultaneously calculate the risk level of each lead. The scoring models examine the activity and its results, the threat type, the nature and importance of the affected assets, the prevalence of the activity, and more. The models are based on the lead's activity and information, as well as on Hunters' enrichments and auto-investigation results.

Each scoring layer affects the Confidence of the lead (how likely it is to be malicious) or the Severity of the lead (what is the potential damage to the organization). All layers are combined to determine the Lead Risk level based on the Confidence and Severity levels.

In addition to the scoring models provided by Hunters, you can impact the Confidence of leads using Custom Scoring rules and Asset Tags.

Risk calculation

Risk is calculated based on a matrix composed of two axes: Confidence and Severity.

Confidence: The likelihood of the security event being malicious in the organization. There are five levels of Confidence:

- Very Likely
- Likely
- Possible
- Unlikely
- Very Unlikely

Severity: The potential impact and damage of the security event on the organization. There are five levels of Severity:

- Severe
- Major
- Moderate
- Minor
- Insignificant
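
To make the matrix idea concrete, here is a minimal sketch of a lookup over the two axes. The level names come from the lists above, but the actual mapping from (Confidence, Severity) pairs to risk levels is determined by Hunters; the bucketing rule below is purely a hypothetical example.

```python
# Illustrative sketch of a Confidence x Severity risk matrix lookup.
# The level names match the documentation above; the mapping itself is
# hypothetical and does NOT reflect Hunters' actual matrix.

CONFIDENCE_LEVELS = ["Very Unlikely", "Unlikely", "Possible", "Likely", "Very Likely"]
SEVERITY_LEVELS = ["Insignificant", "Minor", "Moderate", "Major", "Severe"]

def risk_level(confidence: str, severity: str) -> str:
    """Map a (confidence, severity) pair to a risk level.

    Hypothetical rule: average the two axis indices and bucket the result.
    """
    c = CONFIDENCE_LEVELS.index(confidence)   # 0 (lowest) .. 4 (highest)
    s = SEVERITY_LEVELS.index(severity)       # 0 (lowest) .. 4 (highest)
    score = (c + s) / 2                       # 0.0 .. 4.0
    if score >= 3.5:
        return "Critical"
    if score >= 2.5:
        return "High"
    if score >= 1.5:
        return "Medium"
    return "Low"

print(risk_level("Likely", "Major"))   # -> High (under this example mapping)
```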


Here's the general flow of the risk calculation process:

[Image: risk calculation flow diagram]

  1. Base Layer: The system calculates the initial values of the base confidence and base severity. The base severity is based on the severity levels of the MITRE ATT&CK TTPs (Tactics, Techniques, and Procedures) identified as part of the security event.

  2. Advanced Layer: Once the base values are set, the system applies any existing custom scoring rules and adjusts the confidence and severity values accordingly. The Hunters scoring model is applied during this phase.

    📘 Hierarchy in risk calculation

    There are several kinds of custom scoring rules you can use to adjust the risk score of your leads. If multiple rules match a lead, they are processed in the following order: Ignore rules, Set confidence rules, Increase/decrease rules.

    When multiple Set confidence rules match a lead, the rule with the highest Confidence level takes priority.

  3. Result: Final confidence and final severity are used to determine the final risk score of the lead.
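
To make the layered flow above concrete, here is a minimal sketch: a base layer derived from detected TTP severities, followed by custom rules applied in the documented order (Ignore rules, then Set confidence rules with the highest level winning, then Increase/decrease rules). All names, rule shapes, and the "take the highest TTP severity" choice are assumptions made for illustration; this is not Hunters' internal model or API.

```python
# Minimal sketch of the layered risk calculation described above.
# All class names, rule shapes, and values are hypothetical illustrations.

from dataclasses import dataclass

CONFIDENCE_LEVELS = ["Very Unlikely", "Unlikely", "Possible", "Likely", "Very Likely"]
SEVERITY_LEVELS = ["Insignificant", "Minor", "Moderate", "Major", "Severe"]

@dataclass
class CustomRule:
    kind: str                     # "ignore" | "set_confidence" | "adjust"
    set_level: str | None = None  # used by "set_confidence" rules
    delta: int = 0                # used by "adjust" rules: steps up (+) or down (-)

def base_severity(ttp_severities: list[str]) -> str:
    """Base layer (assumption): take the highest severity among detected TTPs."""
    return max(ttp_severities, key=SEVERITY_LEVELS.index)

def final_confidence(base: str, matched_rules: list[CustomRule]) -> str | None:
    """Advanced layer: apply matching custom rules in the documented order.

    Returns None when an Ignore rule matches, i.e. the lead is ignored.
    """
    # 1. Ignore rules are processed first and short-circuit everything else.
    if any(r.kind == "ignore" for r in matched_rules):
        return None

    idx = CONFIDENCE_LEVELS.index(base)

    # 2. Set confidence rules: the rule with the highest Confidence level wins.
    set_rules = [r for r in matched_rules if r.kind == "set_confidence"]
    if set_rules:
        idx = max(CONFIDENCE_LEVELS.index(r.set_level) for r in set_rules)

    # 3. Increase/decrease rules shift the result along the Confidence scale.
    for rule in matched_rules:
        if rule.kind == "adjust":
            idx = min(max(idx + rule.delta, 0), len(CONFIDENCE_LEVELS) - 1)

    return CONFIDENCE_LEVELS[idx]

rules = [CustomRule("set_confidence", set_level="Likely"), CustomRule("adjust", delta=-1)]
print(base_severity(["Moderate", "Major"]))   # -> Major
print(final_confidence("Possible", rules))    # -> Possible (set to Likely, then reduced one step)
```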


Final risk score

Hunters has four Risk levels that allow security analysts to prioritize Alerts and Leads:

- Low: The security event may have a negligible adverse effect on organizational operations or assets.
- Medium: The security event may have a limited adverse impact on organizational operations or assets.
- High: The security event may have a serious adverse effect on organizational operations or assets.
- Critical: The security event may result in severe or catastrophic consequences for organizational operations or assets.
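
Because the four levels are ordered, they can drive triage ordering directly. The snippet below is a small illustrative sketch; the field names and data shape are hypothetical and not a Hunters API.

```python
# Hypothetical triage ordering of leads by risk level (not a Hunters API).
RISK_ORDER = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}

leads = [
    {"id": "lead-1", "risk": "Medium"},
    {"id": "lead-2", "risk": "Critical"},
    {"id": "lead-3", "risk": "High"},
]

# Work the queue from Critical down to Low.
for lead in sorted(leads, key=lambda l: RISK_ORDER[l["risk"]]):
    print(lead["id"], lead["risk"])   # prints lead-2, lead-3, lead-1
```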

Explore lead risk score

To view the scoring models used on a particular lead, navigate to the Lead Details panel and scroll to the Confidence section. Each model includes an explanation of its logic. See the examples below for more information.

This example shows a detection that had a default Confidence score of Unlikely, and its score was reduced further by the Prevalent Values model. The model reduced the score because similar values in this lead were found in other leads in the environment, decreasing the likelihood of the activity being malicious and hinting that this may be known behavior in the environment.

This example shows a lead that had a default Confidence score of Unlikely, and its score was both raised and reduced by different scoring models. The first scoring model is the Domain Details model, which increased the score of the detection because it contained a relatively new domain (less than 90 days old).

The second model is the Threat Intel model, which reduced the score because the detection's IOC was very old, published more than 400 days before the detection took place.


This example shows a lead that had a default Confidence score of Possible, and its score was raised by a proprietary scoring model built specifically for the detector that detected this signal (Malicious use of HTA files). The model includes an explanation of why the score was raised.
