Rhino Security Labs

Risk Misconceptions in Social Engineering Testing

Intro: Risk Misconceptions in Social Engineering

Through thousands of social engineering engagements, our researchers have found two philosophies that dominate the way enterprises look at social engineering: a) it is considered an inherent risk that cannot be managed and isn’t worth measuring, or b) it is viewed as a non-risk because users are trained via a typical automated ‘email testing’ service. Both positions are incomplete in how they identify the risks of social engineering.

Social Engineering Risk Breakdown

Successful and effective risk management requires that an organization have a clear understanding of the risks the business faces in context. This involves identifying risks and characterizing them by their probability of occurrence and potential impact. That information must then be structured so that it can be viewed in terms relevant to the business and used as a basis for action.

When considering social engineering risk, exploit likelihood is usually reduced to ‘how likely are my users to click links?’

In this scenario, the risk picture looks like this:

           Risk = Exploit Likelihood (EL) * Potential Impact (PI)

While social engineering risk can be reduced via training, it cannot be eliminated. With increasingly sophisticated techniques, these attacks are becoming more difficult to spot, and they remain highly effective as an entry point to an organization when successful. This is a people-focused risk: people will continue to be part of the attack chain and, as a result, the system will continue to be vulnerable. Further, without an understanding of the other controls in the environment, PI cannot be accurately estimated.
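
As a rough illustration of how the two levers interact, the short Python sketch below uses purely hypothetical numbers (a made-up click rate and an impact score on a 0-100 scale) to compare training, which lowers EL but never to zero, against technical controls, which cap PI.

    # Hypothetical illustration of Risk = Exploit Likelihood (EL) * Potential Impact (PI).
    # All numbers are invented for demonstration; they are not measurements.

    def risk(exploit_likelihood: float, potential_impact: float) -> float:
        """Risk score: likelihood (0-1) multiplied by impact (0-100 scale)."""
        return exploit_likelihood * potential_impact

    # Baseline: 30% of users click, and a click means full compromise (impact 100).
    baseline = risk(0.30, 100)            # 30.0

    # Training alone: fewer clicks, but a successful click is still a full compromise.
    training_only = risk(0.10, 100)       # 10.0

    # Technical controls (2FA, host protections): clicks still happen, but impact is gated.
    controls_on_impact = risk(0.30, 20)   # 6.0

    print(baseline, training_only, controls_on_impact)

Even with optimistic training numbers, EL stays above zero, so the residual risk is ultimately bounded by how far the technical controls drive down PI.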

Evaluating Risk Through Potential Impact

While social engineering risk is often seen as “how likely are users to be duped”, we see that as only half the equation: the likelihood of attack. Instead, when advising clients we emphasize the potential impact (PI) of social engineering and the technical controls that determine what happens after a user is tricked into an action. These technical controls, from two-factor authentication (2FA) to effective antivirus, act as hurdles to a successful attack and can limit the damage to the organization.


While employees will sometimes be coaxed into unauthorized actions, that doesn’t mean a damaging compromise is inevitable. Even if we spearphish email credentials from a user (very common), our next steps are often limited if 2FA is enforced across the environment. Similarly, malicious Office document attacks (such as our own Word SubDoc discovery) can be rendered ineffective with the right security architecture. Every social engineering risk we identify has technical controls that can limit its security impact.

The difficulty we often encounter is that organizations believe user training or automated social engineering programs are enough to mitigate social engineering risks. They aren’t. In fact, they can build a false sense of security that leads to lax technical controls. Having these systems in place does nothing to ensure a layered defense model, and failing to proactively test your systems leads to complacency. Worse yet, they create a sense that ‘everything is fine.’ Until it isn’t. And then it’s too late.

Creating a Culture of Security Awareness

Another pitfall of increasing dependence on automated testing tools and employee training is that this approach often oversimplifies social engineering attacks.

There are a few hallmarks that are typical of these services:

  • A one-size-fits-all approach across engagements (automated testing and automated training)
  • Reliance on lower-level techniques, such as mass emails that hit a group or department in a ‘wave’, making them easier to spot
  • Lack of targeted, detailed attack scenarios based on significant reconnaissance

All too often, these types of programs create a culture of fear within an organization, where employees may even be reluctant to call IT if they receive or click on something they think might be suspicious. No one wants to be ‘that guy’ (or gal) best known around the office for failing to spot the trap.

With effective controls in place, every user can ‘be the hero’ by looking out for suspicious emails, notifying IT and, yes, even letting IT know if they are ‘patient zero’ in an attack. Because the potential impact is mitigated, users can focus on being transparent rather than hiding a potential incident.

Technical Controls Testing: What Happens After the ‘Click’?

In the case of technical controls, having 2FA or alerts on anomalous behavior (like unusual user logins) in place can gate an attack and provide early detection. On the host side, ask whether there are effective host protection tools in place. Can they be bypassed to install malware on a user’s machine? Once that malware is installed, is the user a local admin? Are there tools to proactively identify suspicious traffic and hunt for infections? Each of these questions represents a potential control that can constrain the impact of a social engineering attack. With effective technical controls, the question becomes less about the ‘phish’ itself and more about the post-click compromise and how devastating it can be.
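
As a conceptual sketch only (the control names and the equal-share scoring below are hypothetical, not an assessment methodology), the same post-click questions can be framed in Python as a checklist that estimates how much of the potential impact remains uncontrolled:

    # Hypothetical post-click controls and what each one gates (names are illustrative).
    POST_CLICK_CONTROLS = {
        "2fa_enforced":           "Stolen credentials alone cannot be used to log in",
        "anomalous_login_alerts": "Suspicious logins are flagged for early detection",
        "host_protection":        "Malware delivery and execution is blocked",
        "no_local_admin":         "Malware that does run lacks admin rights",
        "traffic_threat_hunting": "Suspicious traffic and infections are actively hunted",
    }

    def residual_impact(controls_in_place, base_impact=100.0):
        """Naive model: each missing control leaves an equal share of the impact exposed."""
        missing = [name for name in POST_CLICK_CONTROLS if name not in controls_in_place]
        return base_impact * len(missing) / len(POST_CLICK_CONTROLS)

    # Example: 2FA and host protection exist, but no alerting, admin restrictions, or hunting.
    print(residual_impact({"2fa_enforced", "host_protection"}))  # 60.0

The point is not the arithmetic but the framing: each unanswered question from the paragraph above maps to residual impact that in-depth testing should expose.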

In an environment with effective technical controls, in-depth social engineering testing moves beyond creating a ‘gotcha’ for your employees. It’s about uncovering areas of weakness, identifying risk, and mitigating impact through controls. If your environment lacks controls, your social engineering PI is high, and your overall risk is elevated as a result. With the right controls in place, tested for effectiveness, you can improve the health of the entire system.

With that in mind, our security engineers recommend the following:

  • Acknowledge that limiting risk through layered defenses is more effective than relying on user behavior
  • Challenge technical controls by engaging in proactive in-depth social engineering testing annually
  • Provide basic guidance and training to staff, but don’t gild the lily
  • Create a security culture that rewards users for identifying potential social engineering attempts rather than punishing or embarrassing staff

Conclusion

Mitigating risk from social engineering attacks begins with creating a nuanced view of the enterprise’s exposure and developing a risk breakdown. Correctly implemented, technical controls such as 2FA, effective host protections, and a responsive incident response (IR) team are effective ways to reduce the impact of a social engineering attack. By testing both the likelihood of social engineering (the click rate of users) and the potential impact of an attack, you can reduce social engineering risk for your environment.