The Static Analysis Paralysis

January 22, 2024

Barry Schwartz, in his book "The Paradox of Choice: Why More Is Less," defines "analysis paralysis" as a state where an individual becomes overwhelmed by the abundance of options and factors to consider, leading to difficulty in making a decision. This phenomenon occurs because, as the number of choices increases, the ease of making a decision decreases, while expectations for how satisfying the decision should be rise.

Having too many choices can lead to anxiety, indecision, paralysis, and dissatisfaction. Analysis paralysis occurs when an individual overanalyzes or overthinks a situation to the point that a decision or action is never taken, inhibiting outcomes and productivity. (Schwartz has a great TED talk on this!)

In the context of cybersecurity, nowhere is this more true than with static application security testing, or SAST as it is often called. It fits the 'analysis paralysis' definition perfectly, meeting its three key conditions and consequences.

Increased number of alerts

SAST tools at their core are rule-based checkers - they have thousands of rules on how software code ought to be written to be secure. SonarQube has 5000+ rules, Semgrep has 1500+ rules, and so on. While sound in theory, this approach has two fundamental issues given the current state of application security.

Firstly, the rules are intended to cover as wide a range of software projects as possible, and as a result they are generic and lack context. This means the analysis ends up looking for - and finding - vulnerabilities that may not be relevant.

Secondly, every analysis must balance accuracy against coverage. SAST tools sell on the number of vulnerabilities they can find, which incentivizes them to sacrifice accuracy: a NIST study put the median false positive rate for SAST tools at 37%. This leads to a large number of false positives, i.e., reported vulnerabilities that are not actual threats.

Statistics from the NIST Report on SAST performance

The result? Every analysis ends up with hundreds, if not thousands, of reported vulnerabilities, overwhelming developers and security teams.
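To see why context-free rules over-report, consider a toy sketch (purely illustrative - this is not any real tool's rule): a pattern that flags string interpolation into a SQL query fires whether the interpolated value is attacker-controlled or a hard-coded constant, because the rule has no data-flow context to tell the two apart.

```python
import re

# Toy, context-free SAST-style rule: flag any string formatting
# passed to cursor.execute(), regardless of where the value comes from.
SQL_CONCAT_RULE = re.compile(r"execute\(\s*f?[\"'].*(\{|%s|\+)")

snippets = {
    # Genuine issue: user-controlled input interpolated into SQL.
    "tainted": 'cursor.execute(f"SELECT * FROM users WHERE id = {request_id}")',
    # False positive: the interpolated value is a hard-coded constant,
    # but the pattern alone cannot know that.
    "constant": 'cursor.execute(f"SELECT * FROM users WHERE role = {ADMIN_ROLE}")',
}

findings = {name: bool(SQL_CONCAT_RULE.search(code)) for name, code in snippets.items()}
print(findings)  # → {'tainted': True, 'constant': True}
```

Both snippets are flagged, even though only the first is exploitable - multiplied across thousands of rules, this is how reports balloon into the hundreds or thousands.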

Difficulty of deciding which ones to patch

The sheer volume of vulnerabilities reported by SAST tools complicates the decision-making process on what to fix, but it is not the only factor. Results often lack meaningful prioritization, relying on generic scoring systems like CVSS to indicate which vulnerabilities are of highest concern. These scores are often criticized for lacking application-specific context and pragmatic estimates of exploitability, which makes them a distraction rather than an aid during triage.

Choosing which vulnerabilities to fix, and in what order, also requires an understanding of the development effort involved. Facing an inundation of supposed vulnerabilities and conflicting demands to build new features and functionality, developers often push back on the effort required for resolution. This pushback is even stronger when a fix risks breaking existing functionality, or when the exploitability of the vulnerability is non-obvious - a recipe for alert fatigue. Put together, these concerns add another layer of friction to the already daunting task of securing software.
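A hedged sketch of what context-aware triage might look like (the fields and weights here are illustrative assumptions, not any standard scoring model): discount findings that are unreachable in the application's actual control flow, and weigh remediation effort alongside raw severity.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    rule: str
    cvss: float        # generic severity score
    reachable: bool    # is the flagged code on an executed path? (app-specific context)
    fix_effort: int    # rough developer-hours to remediate (an estimate)

def triage_score(f: Finding) -> float:
    # Context-aware priority: unreachable findings are heavily discounted,
    # and cheap fixes are favored over costly ones at equal severity.
    reachability = 1.0 if f.reachable else 0.1
    return f.cvss * reachability / max(f.fix_effort, 1)

findings = [
    Finding("sql-injection", cvss=9.8, reachable=True, fix_effort=2),
    Finding("weak-hash-in-test-code", cvss=7.5, reachable=False, fix_effort=1),
    Finding("path-traversal", cvss=8.6, reachable=True, fix_effort=8),
]

for f in sorted(findings, key=triage_score, reverse=True):
    print(f.rule, round(triage_score(f), 2))
```

Note how the unreachable weak-hash finding drops below the lower-CVSS but reachable path-traversal issue - exactly the reordering a generic score alone cannot provide.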

Increased security expectations

There's a significant investment of both time and resources in static analysis tools. Spending six figures on subscriptions, training developers, integrating the tooling into IDEs, setting up checks and gates in the CI/CD pipeline, defining security policies, and generating periodic reports are all considerable commitments. With this investment comes the expectation that every identified vulnerability will be resolved, and resolved fast, fostering an unrealistic pursuit of 'perfect security' based on security policies and SLAs defined for an ideal state.

The reality on the ground invariably falls short of these expectations, which triggers a vicious downward spiral of reduced team morale and lowering of the security bar - further perpetuating the static analysis paralysis.

Overcoming the paralysis with Patched

The 'Analysis Paralysis' in AppSec, particularly with SAST, stems from two core challenges: the accuracy of results (especially false positives) and the constraints on development resources. Patched effectively tackles both.

Our approach begins by intelligently sorting through your static analyzer's output. We prioritize genuine security issues and set aside false positives or non-critical vulnerabilities. How do we achieve this? By employing a meticulous process that places each vulnerability in the context of your application's specific control and data flows, which significantly reduces the noise created by irrelevant alerts.

But identifying the real issues is just the start. Patched goes a step further by generating secure, compatible patches for these vulnerabilities. We don't stop at patch creation; we rigorously validate each patch to ensure it fully remediates the issue without disrupting your existing codebase. This process results in high-quality, reliable patches, boasting an acceptance rate over 80% - a stark contrast to the average 69.6% merge rate of more basic tools like Dependabot. With Patched, you're not just applying fixes; you're enhancing your code with well-considered, robust security solutions.

Are you facing the daunting task of sifting through endless security alerts? Or maybe you're just looking for a smarter way to manage application security? Try Patched for free today. Experience firsthand how we can transform your AppSec strategy from a paralyzing challenge into a streamlined, efficient process. Let's move beyond finding problems. Let's get Patched.

Stop Scanning. Start Patching.
Patched increases your security coverage, not your workload.
Get Patched