AI-Powered Threat Modeling

AI Threat Modeler

Leverages Claude Code to perform threat modeling of GitHub repositories through three distinct phases: Ingestion, Prioritization, and Analysis. High-fidelity findings with zero false positives.

How It Works

Three distinct phases transform a repository into actionable, verified security findings.

Phase 01

Ingestion

Picks up review requests from an AWS SQS Queue, clones the repository to tmpfs, detects secrets with Trufflehog and redacts them, and builds a manifest and dependency graph.

  • Picks up review requests from an AWS SQS Queue
  • Clones the repository to tmpfs
  • Detects secrets with Trufflehog and redacts them
  • Builds a manifest and dependency graph
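
The redaction step above can be sketched in a few lines. This is an illustrative helper, not the actual implementation: it assumes the secret strings have already been extracted from a Trufflehog scan of the cloned repository.

```python
def redact_secrets(content: str, secrets: list[str]) -> str:
    """Replace every occurrence of a detected secret with a placeholder.

    `secrets` is assumed to hold the raw secret strings reported by a
    Trufflehog scan; the real pipeline may instead rewrite the files
    in place on tmpfs before any content leaves the host.
    """
    for secret in secrets:
        content = content.replace(secret, "[REDACTED]")
    return content
```

For example, `redact_secrets('key = "AKIA123"', ["AKIA123"])` returns `'key = "[REDACTED]"'`, so the secret never reaches the analysis phase.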

Phase 02

Prioritization

Intelligently decides what matters most for security analysis, approximating what an experienced security engineer would instinctively reach for first when dropped into an unknown codebase.

  • Builds a file manifest
  • Scores each file by security relevance
  • Applies penalty signals to deprioritize low-value files
  • Builds a dependency graph
  • Produces a prioritized send list

Phase 03

Analysis

Prompts Claude Code to assume the role of an experienced security engineer and perform a thorough analysis of the repository code, surfacing as many code issues and application threats as possible.

  • Claude Code assumes the role of an experienced security engineer
  • Performs thorough analysis of repository code for threats
  • Finds as many code issues and application threats as possible
  • Ensures findings are high fidelity, not false positives
  • Verifies each finding with a confidence score before reporting

Intelligent Prioritization

Approximates what an experienced security engineer would instinctively reach for first when dropped into an unknown codebase. Every file is scored and ranked so the analysis focuses on what matters.

File Manifest

Builds a complete manifest of every file in the repository to establish the full scope of analysis.
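
At its core, a manifest builder is a directory walk over the cloned repository. The sketch below is illustrative; the production manifest likely records additional metadata such as file size, language, and hashes.

```python
import os

def build_manifest(repo_root: str) -> list[str]:
    """List every file in the repository, relative to its root,
    skipping the .git metadata directory."""
    manifest = []
    for dirpath, dirnames, filenames in os.walk(repo_root):
        # Prune .git in place so os.walk never descends into it.
        dirnames[:] = [d for d in dirnames if d != ".git"]
        for name in filenames:
            full = os.path.join(dirpath, name)
            manifest.append(os.path.relpath(full, repo_root))
    return sorted(manifest)
```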

Security Relevance Scoring

Each file is scored based on its security relevance - authentication logic, input handling, API endpoints, and data access patterns rank highest.
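
A minimal version of such a scorer can weight path keywords. The signal names and weights below are hypothetical, chosen only to show the shape of the idea:

```python
# Hypothetical positive signals; the real scorer is more elaborate
# and likely inspects file contents, not just paths.
RELEVANCE_SIGNALS = {
    "auth": 50,      # authentication / authorization logic
    "login": 50,
    "api": 30,       # API endpoints
    "handler": 25,   # input handling
    "db": 20,        # data access
    "query": 20,
}

def relevance_score(path: str) -> int:
    """Score a file path by how likely it is to contain
    security-relevant code."""
    lowered = path.lower()
    return sum(w for kw, w in RELEVANCE_SIGNALS.items() if kw in lowered)
```

Under these example weights, `src/auth/login.py` scores 100 while `docs/notes.md` scores 0, pushing the authentication code to the front of the queue.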

Penalty Signals

Applies penalty signals to deprioritize files unlikely to yield security findings - generated code, test fixtures, vendored dependencies, and static assets.
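
Penalty signals are the mirror image of relevance scoring: pattern matches that subtract from a file's score. The patterns and weights here are illustrative assumptions:

```python
import re

# Hypothetical penalty patterns for files unlikely to yield findings.
PENALTY_PATTERNS = [
    (re.compile(r"(^|/)vendor/"), -40),            # vendored dependencies
    (re.compile(r"(^|/)node_modules/"), -40),
    (re.compile(r"_test\.|(^|/)fixtures/"), -25),  # tests and fixtures
    (re.compile(r"\.min\.js$|_pb2\.py$"), -30),    # generated code
    (re.compile(r"\.(png|jpg|svg|css)$"), -35),    # static assets
]

def penalty(path: str) -> int:
    """Sum every penalty whose pattern matches the file path."""
    return sum(p for pattern, p in PENALTY_PATTERNS if pattern.search(path))
```

Adding `relevance_score(path) + penalty(path)` yields a single rank per file, so a vendored copy of an auth library still sinks below first-party auth code.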

Dependency Graph

Maps how files relate to each other, so the analysis understands which components interact and where trust boundaries exist.
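
One simple way to build such a graph is to scan each file's import statements and keep only edges that point back into the repository. The sketch below handles plain Python `import x` / `from x import y` lines; a real graph builder would resolve packages, relative imports, and other languages too.

```python
import re

def dependency_graph(files: dict[str, str]) -> dict[str, set[str]]:
    """Map each module to the set of local modules it imports.

    `files` maps a module name to its source text.
    """
    pattern = re.compile(r"^\s*(?:from|import)\s+([\w.]+)", re.MULTILINE)
    graph = {}
    for name, source in files.items():
        imported = set(pattern.findall(source))
        # Keep only edges to modules that exist in this repository.
        graph[name] = {m for m in imported if m in files}
    return graph
```

An edge from `auth` to `db`, say, tells the analysis that findings in the data layer may be reachable from the authentication path.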

Prioritized Send List

Produces a final ordered list of files to analyze, ensuring Claude Code spends its context window on the code that matters most.

High-Fidelity Findings

Every finding is verified with a confidence score before reporting. The adjustable confidence threshold lets operators tune signal-to-noise ratio, and operator triage ensures only validated issues reach your team via Jira.

Confidence Scoring

Every finding is assigned a confidence score. Only findings that meet the threshold are reported, ensuring high signal and low noise.
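
Mechanically, the threshold is a filter over scored findings. This sketch uses hypothetical field names and an illustrative 0.8 default:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    confidence: float  # 0.0 to 1.0, assigned during verification

def report_worthy(findings: list[Finding],
                  threshold: float = 0.8) -> list[Finding]:
    """Keep only findings at or above the confidence threshold.

    The 0.8 default is illustrative; operators tune the real
    threshold to their risk tolerance.
    """
    return [f for f in findings if f.confidence >= threshold]
```

Raising the threshold yields fewer, higher-certainty findings; lowering it trades some noise for broader coverage, exactly the knob described below.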

Adjustable Confidence Level

Operators can tune the confidence threshold to match their risk tolerance - raise it for fewer, higher-certainty findings or lower it for broader coverage.

No False Positives

Each finding is verified before reporting. The analysis cross-references code context, data flow, and reachability to eliminate false positives.

Jira Integration

Findings are reported to Jira one by one after operator triage, creating actionable tickets with full context and remediation guidance.
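
Shaping a triaged finding into a Jira ticket might look like the following. The field layout follows Jira's standard create-issue schema, but the project key and finding fields are assumptions for illustration:

```python
def jira_issue_payload(finding: dict, project_key: str = "SEC") -> dict:
    """Turn one triaged finding into a Jira REST create-issue payload.

    `finding` is assumed to carry title, description, file, and
    remediation keys; the real integration may map more fields.
    """
    return {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": "Bug"},
            "summary": finding["title"],
            "description": (
                f"{finding['description']}\n\n"
                f"File: {finding['file']}\n"
                f"Remediation: {finding['remediation']}"
            ),
        }
    }
```

Because tickets are filed one by one after triage, each payload carries the full context an engineer needs to act without reopening the analysis.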

Operator Triage

Every finding passes through operator triage before being filed. This human-in-the-loop step ensures only validated issues reach your engineering team.

Full Visibility

No black boxes. The train of thought, high-level actions, and analysis plan are all visible in the UI so you understand exactly what the AI is doing and why.

Train of Thought

See exactly how the AI reasons about your codebase - every step of its thought process is visible in the UI for full transparency.

High-Level Actions

Track every high-level action the AI takes during analysis - which files it reads, what patterns it investigates, and how it builds its understanding.

Analysis Plan

View the structured plan the AI builds before diving into code - understand its strategy and priorities before results arrive.

Start Threat Modeling Your Repositories with AI