Static Analysis Evaluation Benchmark with OpenAI's GPT-4o Fine-Tuning

August 24, 2024

Update on 26th Aug

OpenAI invited Patched to fine-tune gpt-4o to build the SOTA model for this benchmark. We worked alongside their team to understand how performance can be improved by fine-tuning frontier models. All the code and data on how it was done is available on their GitHub.


The Static Analysis Evaluation Benchmark is a benchmark specifically designed to evaluate the performance of Large Language Models (LLMs) in fixing software vulnerabilities. This makes it an invaluable resource for researchers, developers, and security professionals looking to assess and improve AI-driven code repair tools.

The New Version: Raising the Bar

A new version of the benchmark was recently released, featuring more challenging instances than the previous iteration. This update was necessary due to the rapid progress in model capabilities over the past year.

Key Differences in the New Version:

  1. Larger Samples: While the previous version included vulnerable files of fewer than 512 tokens, the new version only includes files that are strictly between 512 and 1024 tokens.
  2. Increased Difficulty: The samples in the new version are deliberately chosen to be more challenging, pushing the boundaries of what current LLMs can handle.
  3. Broader Coverage: By including larger files, the benchmark now covers more complex and realistic scenarios that developers might encounter in real-world projects.

Methodology

The benchmark was created using a rigorous process:

  1. Data Collection: Scanning the top 100 Python repositories on GitHub to identify vulnerable files.
  2. Size Filtering: Keeping only files between 512 and 1024 tokens for a good balance of complexity and manageability.
  3. Vulnerability Verification: Analyzing each file using Semgrep to confirm the presence of exactly one vulnerability.
  4. Dataset Curation: Carefully selecting examples that represent real-world vulnerabilities in popular open-source projects.

The detailed script used for generating the dataset can be found here: _script_for_gen.py
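
To make the pipeline concrete, here is a minimal sketch of steps 1-3. It is not the actual _script_for_gen.py: the tokenizer choice (tiktoken with the cl100k_base encoding), the Semgrep ruleset (p/python), and the helper names are assumptions for illustration.

```python
import json
import subprocess
from pathlib import Path

import tiktoken  # assumed tokenizer; the real script may count tokens differently

ENCODER = tiktoken.get_encoding("cl100k_base")


def count_tokens(source: str) -> int:
    return len(ENCODER.encode(source))


def semgrep_findings(path: Path) -> list[dict]:
    """Run the Semgrep CLI on a single file and return its findings."""
    proc = subprocess.run(
        ["semgrep", "--config", "p/python", "--json", str(path)],
        capture_output=True, text=True, check=False,
    )
    return json.loads(proc.stdout).get("results", [])


def collect_candidates(repo_dir: Path) -> list[dict]:
    """Keep Python files in the 512-1024 token range with exactly one finding."""
    samples = []
    for path in repo_dir.rglob("*.py"):
        source = path.read_text(errors="ignore")
        if not 512 < count_tokens(source) < 1024:
            continue  # size filtering (step 2)
        findings = semgrep_findings(path)
        if len(findings) == 1:  # vulnerability verification (step 3)
            samples.append({"file": str(path), "source": source,
                            "rule": findings[0]["check_id"]})
    return samples
```

Running collect_candidates over clones of the scanned repositories and then curating the results by hand corresponds to step 4 above.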

Evaluation Process

The benchmark includes an evaluation script that allows users to test their models against the dataset. Here's a brief overview of how it works (a minimal sketch follows the list):

  • The script loads the vulnerable code samples from the dataset.
  • It then passes each sample to the model being evaluated, asking it to fix the vulnerability.
  • The fixed code produced by the model is then re-analyzed using Semgrep to check if the vulnerability has been successfully removed.
  • The script calculates a final score based on the percentage of vulnerabilities successfully fixed.
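
The sketch below illustrates that loop. It is not the benchmark's own evaluation script: the dataset identifier, split, and column name are assumptions, and model_fix stands in for whatever model or pipeline you want to evaluate.

```python
import json
import subprocess
import tempfile
from pathlib import Path

from datasets import load_dataset  # Hugging Face datasets


def still_vulnerable(source: str) -> bool:
    """Re-scan the model's output with Semgrep; True if any finding remains."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as tmp:
        tmp.write(source)
        tmp_path = tmp.name
    proc = subprocess.run(
        ["semgrep", "--config", "p/python", "--json", tmp_path],
        capture_output=True, text=True, check=False,
    )
    Path(tmp_path).unlink(missing_ok=True)
    return len(json.loads(proc.stdout).get("results", [])) > 0


def evaluate(model_fix) -> float:
    """Score = percentage of samples whose vulnerability the model removes."""
    # Dataset id, split, and column name are assumed for illustration.
    dataset = load_dataset("patched-codes/static-analysis-eval", split="train")
    fixed = sum(not still_vulnerable(model_fix(row["source"])) for row in dataset)
    return 100.0 * fixed / len(dataset)
```
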
Benchmark Results

Let's take a look at some of the key results from the new version of the benchmark:

See the interactive visualization of the results.

Key Insights:

  1. Additive Improvements: The results demonstrate that various techniques can be combined to achieve better performance. Starting from a base model, we can see improvements through few-shot prompting, retrieval-augmented generation (RAG), and fine-tuning; a sketch of how these layers compose follows this list.
  2. Fine-tuning Effectiveness: Fine-tuned models consistently outperform their base counterparts, highlighting the importance of domain-specific training data.
  3. RAG's Impact: Retrieval-augmented generation shows significant improvements across both model sizes, suggesting its potential in enhancing vulnerability-fixing capabilities.
  4. Model Size Matters: The larger GPT-4o model generally outperforms its mini counterpart, especially after applying optimization techniques.
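
As a rough illustration of how these layers compose, the sketch below folds few-shot examples and retrieved context into a single repair prompt before calling the model. The prompt wording, the shape of the few-shot pairs, and the retrieval step are all assumptions, not the exact setup behind the benchmark numbers.

```python
from openai import OpenAI  # official OpenAI Python SDK

client = OpenAI()

SYSTEM = "You are a security engineer. Return only the fixed version of the given code."


def build_messages(vulnerable_code: str, few_shot: list[dict], retrieved: list[str]) -> list[dict]:
    """Compose a repair prompt: instructions, few-shot pairs, retrieved context, target code."""
    messages = [{"role": "system", "content": SYSTEM}]
    for example in few_shot:  # e.g. vulnerable/fixed pairs drawn from a fix dataset
        messages.append({"role": "user", "content": example["vulnerable"]})
        messages.append({"role": "assistant", "content": example["fixed"]})
    context = "\n\n".join(retrieved)  # e.g. similar fixes returned by a retriever
    messages.append({
        "role": "user",
        "content": f"Relevant examples:\n{context}\n\nFix the vulnerability in this code:\n{vulnerable_code}",
    })
    return messages


def fix_with_prompting(vulnerable_code: str, few_shot, retrieved, model: str = "gpt-4o") -> str:
    response = client.chat.completions.create(
        model=model,
        messages=build_messages(vulnerable_code, few_shot, retrieved),
    )
    return response.choices[0].message.content
```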

The Power of Fine-Tuning: synth-vuln-fixes Dataset

A key component in achieving top performance on this benchmark is the use of the patched-codes/synth-vuln-fixes dataset for fine-tuning. This dataset has several notable features:

  • 203 High-Quality Examples: Each example in the dataset was carefully curated to represent realistic vulnerable and fixed code pairs.
  • Human Review: All examples underwent human review to ensure accuracy and relevance.
  • Static Analysis Verification: Each example was checked using Semgrep to confirm the presence of vulnerabilities in the original code and their absence in the fixed version.
  • Diverse Vulnerability Types: The dataset covers a wide range of common security issues, providing a comprehensive training resource.

The effectiveness of this dataset is evident in the performance boost seen in fine-tuned models across the board.
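
For reference, here is a hedged sketch of how a dataset like synth-vuln-fixes could be turned into an OpenAI fine-tuning job. The column names, system prompt, and model snapshot are assumptions; the setup used for the leaderboard models may differ.

```python
import json

from datasets import load_dataset
from openai import OpenAI

client = OpenAI()

# Column names ("vulnerable_code", "fixed_code") are assumed for illustration.
dataset = load_dataset("patched-codes/synth-vuln-fixes", split="train")

# Write the examples in the chat-format JSONL that OpenAI fine-tuning expects.
with open("vuln_fixes.jsonl", "w") as f:
    for row in dataset:
        record = {"messages": [
            {"role": "system", "content": "Fix the security vulnerability in the given code."},
            {"role": "user", "content": row["vulnerable_code"]},
            {"role": "assistant", "content": row["fixed_code"]},
        ]}
        f.write(json.dumps(record) + "\n")

# Upload the file and start a fine-tuning job on a gpt-4o snapshot.
training_file = client.files.create(file=open("vuln_fixes.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",  # assumed fine-tunable snapshot at the time of writing
)
print(job.id)
```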

Patched MOA: Optimizing Inference for Diverse Tasks

In addition to the techniques mentioned above, the benchmark also evaluated the performance of models using Patched MOA (Mixture of Agents), an innovative approach to optimizing inference for diverse software development tasks. Patched MOA is based on the idea of using multiple specialized agents to handle different aspects of a task, potentially leading to improved performance.

You can read more about Patched MOA in this detailed blog post: Patched MOA: Optimizing Inference for Diverse Software Development Tasks
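
A common way to realize the mixture-of-agents idea is to have several "proposer" calls draft candidate fixes and a final "aggregator" call merge them into one answer. The sketch below follows that generic pattern; it is our own simplification, not the actual Patched MOA implementation, and the prompts are assumptions.

```python
from openai import OpenAI

client = OpenAI()

# Assumed proposer personas; Patched MOA's actual agents and prompts may differ.
PROPOSER_PROMPTS = [
    "You are a security engineer. Fix the vulnerability and return only the code.",
    "You are a code reviewer. Rewrite the code so that static analysis reports no issues.",
]


def moa_fix(vulnerable_code: str, model: str = "gpt-4o") -> str:
    """One round of mixture-of-agents: several proposals, then one aggregation pass."""
    proposals = []
    for system in PROPOSER_PROMPTS:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "system", "content": system},
                      {"role": "user", "content": vulnerable_code}],
        )
        proposals.append(resp.choices[0].message.content)

    joined = "\n\n---\n\n".join(proposals)
    final = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "Combine the candidate fixes into one best fix. Return only code."},
            {"role": "user", "content": f"Original code:\n{vulnerable_code}\n\nCandidate fixes:\n{joined}"},
        ],
    )
    return final.choices[0].message.content
```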

Patched MOA Results

Key Insights from Patched MOA Results:

  1. Effectiveness of MOA: Even the base MOA-GPT-4o model outperforms the standard GPT-4o model, demonstrating the potential of the Mixture of Agents approach.
  2. Synergy with Other Techniques: MOA appears to work particularly well when combined with few-shot prompting and RAG, achieving the highest scores in the benchmark.
  3. Inference Optimization: These results show that post-inference optimization techniques like MOA can significantly improve performance without the need for fine-tuning, which can be computationally expensive.

Implications and Future Directions

The Static Analysis Evaluation Benchmark serves several crucial purposes in the field of AI-assisted software security:

  • Tool Validation: It provides a standardized way to evaluate and compare different AutoFix tools from various application security vendors.
  • Research Acceleration: Researchers can use this benchmark to quickly assess new models and techniques for vulnerability fixing.
  • Industry Standards: As the first of its kind, this benchmark could help establish industry standards for AI-driven code repair capabilities.
  • Continuous Improvement: As models improve, the benchmark can be updated to include even more challenging examples, ensuring it remains relevant.
Conclusion

The Static Analysis Evaluation Benchmark represents a significant step forward in our ability to measure and improve AI's capability to fix software vulnerabilities. By providing a standardized, challenging dataset and evaluation methodology, it enables researchers and developers to push the boundaries of what's possible in automated code repair.

As we continue to see rapid advancements in AI and machine learning, tools like this benchmark will play a crucial role in ensuring that these technologies can be effectively applied to critical tasks like maintaining software security. The combination of techniques demonstrated in the results – from few-shot learning to RAG, fine-tuning, and innovative approaches like Patched MOA – paints an exciting picture of the future of AI-assisted software development and security.

We encourage researchers, developers, and security professionals to explore the benchmark, contribute to its improvement, and use it to drive innovation in the field of AI-powered vulnerability fixing. Together, we can work towards a future where AI significantly enhances our ability to create and maintain secure software systems.
