## How It Works

1. **AI generates output.** Your LLM produces a response.
2. **User edits it.** They shorten, remove filler, change tone, etc.
3. **Cortex detects the pattern.** What kind of edit was this? Shortened? Removed AI phrases?
4. **The pattern becomes an instruction.** "Never start with I think" is injected into future prompts.
## 8 Pattern Types
| Pattern | What It Detects |
|---|---|
| Shortened | User made it shorter |
| Removed Questions | User stripped trailing questions |
| Stripped AI Phrases | User removed “furthermore”, “it’s worth noting” |
| Tone Shifts | User changed formality level |
| Added Content | User added information the AI missed |
| Simplified Language | User replaced jargon with plain words |
| Changed Openers | User rewrote the first sentence |
| Structural Change | User reorganized paragraphs |
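To make the table concrete, here is a minimal sketch of how two of these patterns might be recognized. These heuristics are illustrative only, not Cortex's internals: the word-count ratio, the 20% shortening threshold, and the `AI_PHRASES` list are all assumptions for the example.

```rust
// Hypothetical detectors for two pattern types; NOT the crate's actual logic.

/// Filler phrases commonly associated with AI-generated text (assumed list).
const AI_PHRASES: &[&str] = &["furthermore", "it's worth noting", "i think"];

/// "Shortened": the edit cut the word count by more than an assumed 20%.
fn is_shortened(original: &str, edited: &str) -> bool {
    let before = original.split_whitespace().count();
    let after = edited.split_whitespace().count();
    after < before && (after as f64) < (before as f64) * 0.8
}

/// "Stripped AI Phrases": a filler phrase present before the edit is gone after.
fn stripped_ai_phrases(original: &str, edited: &str) -> bool {
    let (orig, edit) = (original.to_lowercase(), edited.to_lowercase());
    AI_PHRASES.iter().any(|p| orig.contains(p) && !edit.contains(p))
}
```

Real detection would also need to separate overlapping signals (an edit can be both "Shortened" and "Stripped AI Phrases" at once, as in the Usage example below).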
## Usage
```rust
use laminae::cortex::{Cortex, CortexConfig};

let mut cortex = Cortex::new(CortexConfig::default());

// Track edits over time
cortex.track_edit(
    "It's worth noting that Rust is fast.",
    "Rust is fast.",
);
cortex.track_edit(
    "Furthermore, the type system is robust.",
    "The type system catches bugs.",
);

// Detect patterns
let patterns = cortex.detect_patterns();
// → [RemovedAiPhrases: 100%, Shortened: 100%]

// Get prompt block for LLM injection
let hints = cortex.get_prompt_block();
// → "--- USER PREFERENCES (learned from actual edits) ---
//    - Never use academic hedging phrases
//    - Keep sentences short and direct
//    ---"
```
## Instruction Deduplication
Instructions are ranked by reinforcement count. When a new instruction has >80% word overlap with an existing one, it increments the existing instruction’s count instead of creating a duplicate.
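The overlap check described above can be sketched as follows. This is not the crate's actual implementation: the Jaccard-style word-set comparison and the `dedupe_insert` helper are assumptions made for illustration; only the >80% threshold and the reinforce-instead-of-duplicate behavior come from the description above.

```rust
use std::collections::HashSet;

/// Fraction of words shared between two instructions (hypothetical metric:
/// shared words divided by the larger word set, case-insensitive).
fn word_overlap(a: &str, b: &str) -> f64 {
    let set_a: HashSet<String> =
        a.to_lowercase().split_whitespace().map(str::to_string).collect();
    let set_b: HashSet<String> =
        b.to_lowercase().split_whitespace().map(str::to_string).collect();
    if set_a.is_empty() || set_b.is_empty() {
        return 0.0;
    }
    let shared = set_a.intersection(&set_b).count();
    shared as f64 / set_a.len().max(set_b.len()) as f64
}

/// If a near-duplicate (>80% overlap) exists, bump its reinforcement count;
/// otherwise store the new instruction with a count of 1.
fn dedupe_insert(instructions: &mut Vec<(String, u32)>, new: &str) {
    for (text, count) in instructions.iter_mut() {
        if word_overlap(text, new) > 0.8 {
            *count += 1; // reinforce the existing instruction
            return;
        }
    }
    instructions.push((new.to_string(), 1));
}
```

Ranking instructions by their reinforcement count then keeps the most frequently re-learned preferences at the top of the prompt block.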
## Performance

| Operation | Time |
|---|---|
| `track_edit` (single) | ~85 ns |
| `track_edit` (100 edits) | ~9.3 µs |
| `detect_patterns` (100 edits) | ~426 µs |
| `detect_patterns` (500 edits) | ~2.2 ms |
Edit tracking is near-instant; pattern detection scales linearly with the number of tracked edits.
Cortex is WASM-compatible (edit tracking + pattern detection) and available via Python bindings. LLM-powered instruction learning requires Ollama (native only).