Three Stages
Stage 1: Static Analysis
Regex pattern scanning for 25+ vulnerability categories:

- SQL injection (string concatenation, format strings)
- Command injection (eval, exec, subprocess)
- Hardcoded secrets (API keys, passwords, private keys, connection strings)
- XSS (innerHTML, dangerouslySetInnerHTML)
- Path traversal (../ in file operations)
- Insecure deserialization (pickle, yaml.load)
- Weak cryptography (MD5, SHA-1)
- Resource abuse (infinite loops)
- Supply chain attacks (pipe-to-shell, insecure package indices)
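A minimal sketch of what this stage does, assuming a simplified rule table. The pattern strings and category names below are illustrative stand-ins; the real stage uses full regular expressions, not substring checks.

```rust
// Sketch of the static stage: scan source lines against a rule table.
// Patterns and category names here are illustrative; the actual
// implementation compiles real regexes for 25+ categories.
struct Finding {
    line: usize,
    category: &'static str,
}

fn scan(source: &str) -> Vec<Finding> {
    // (needle, category) pairs standing in for compiled regexes.
    let rules: &[(&str, &'static str)] = &[
        ("eval(", "command-injection"),
        ("pickle.loads", "insecure-deserialization"),
        ("md5", "weak-cryptography"),
        ("curl | sh", "supply-chain"),
    ];
    let mut findings = Vec::new();
    for (i, line) in source.lines().enumerate() {
        let lower = line.to_lowercase();
        for (needle, category) in rules {
            if lower.contains(needle) {
                findings.push(Finding { line: i + 1, category });
            }
        }
    }
    findings
}

fn main() {
    let sample = "import pickle\ndata = pickle.loads(blob)\nh = hashlib.md5(data)";
    for f in scan(sample) {
        println!("line {}: {}", f.line, f.category);
    }
}
```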
Stage 2: LLM Adversarial Review
A local Ollama model with an attacker-mindset prompt reviews the output, catching logic flaws and attack vectors that regex can't find.

Stage 3: Sandbox Execution
Runs code blocks in ephemeral Docker/Podman containers with strict security constraints:

| Constraint | Effect |
|---|---|
| `--network=none` | No network access |
| `--memory=128m` | Memory limit |
| `--cpus=0.5` | CPU limit |
| `--read-only` | Read-only root filesystem |
| `--cap-drop=ALL` | Drop all Linux capabilities |
| `--security-opt=no-new-privileges:true` | No privilege escalation |
| TTL timeout | Force-kills after configured seconds |
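The constraints above map directly onto `docker run` flags. A sketch of how the argument list could be assembled (the helper name is hypothetical, not the project's actual code; TTL enforcement is shown only as a comment because the runner kills the container from outside):

```rust
// Build the `docker run` argument list from the constraints table.
// Illustrative sketch only: the real runner also handles Podman and
// force-kills the container after `sandbox_ttl_secs`.
fn sandbox_args(image: &str) -> Vec<String> {
    vec![
        "run",
        "--rm",
        "--network=none",                       // no network access
        "--memory=128m",                        // memory limit
        "--cpus=0.5",                           // CPU limit
        "--read-only",                          // read-only root filesystem
        "--cap-drop=ALL",                       // drop all Linux capabilities
        "--security-opt=no-new-privileges:true", // no privilege escalation
        image,
    ]
    .into_iter()
    .map(String::from)
    .collect()
}

fn main() {
    // TTL is enforced on the host side: the runner force-kills the
    // `docker` process once the configured timeout elapses.
    println!("docker {}", sandbox_args("python:3.12-slim").join(" "));
}
```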
Usage
Configuration
Shadow is configured via `ShadowConfig` (JSON file at `~/.config/laminae/shadow.json`):
| Field | Default | Description |
|---|---|---|
| `enabled` | `true` | Master enable/disable |
| `aggressiveness` | `2` | 1 = static only, 2 = static + LLM, 3 = all stages |
| `llm_review_enabled` | `true` | Enable LLM adversarial reviewer |
| `sandbox_enabled` | `false` | Enable container sandbox (requires Docker/Podman) |
| `shadow_model` | `qwen2.5:14b` | Ollama model for LLM review |
| `sandbox_image` | `python:3.12-slim` | Docker image for sandbox |
| `sandbox_ttl_secs` | `30` | Max execution time per block |
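For reference, a `~/.config/laminae/shadow.json` that spells out the defaults from the table above:

```json
{
  "enabled": true,
  "aggressiveness": 2,
  "llm_review_enabled": true,
  "sandbox_enabled": false,
  "shadow_model": "qwen2.5:14b",
  "sandbox_image": "python:3.12-slim",
  "sandbox_ttl_secs": 30
}
```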
Custom Analyzers
Implement the `Analyzer` trait to add custom analysis stages:
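The trait's exact signature isn't reproduced here, so the trait definition and finding type below are assumptions written out for illustration; the real method names and types may differ.

```rust
// Hypothetical shape of the Analyzer trait and a custom stage built on
// it. The real trait's signature in laminae may differ.
struct Finding {
    category: String,
    message: String,
}

trait Analyzer {
    fn name(&self) -> &str;
    fn analyze(&self, source: &str) -> Vec<Finding>;
}

/// Example custom stage: flag TODO markers left in generated code.
struct TodoAnalyzer;

impl Analyzer for TodoAnalyzer {
    fn name(&self) -> &str {
        "todo-marker"
    }

    fn analyze(&self, source: &str) -> Vec<Finding> {
        source
            .lines()
            .enumerate()
            .filter(|(_, line)| line.contains("TODO"))
            .map(|(i, _)| Finding {
                category: "leftover-todo".into(),
                message: format!("TODO marker on line {}", i + 1),
            })
            .collect()
    }
}

fn main() {
    let analyzer = TodoAnalyzer;
    for f in analyzer.analyze("fn main() {\n    // TODO: handle errors\n}") {
        println!("[{}] {} ({})", analyzer.name(), f.message, f.category);
    }
}
```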

