How It Works

1. **AI generates output.** Your LLM produces a response.
2. **User edits it.** They shorten, remove filler, change tone, etc.
3. **Cortex detects the pattern.** What kind of edit was this? Shortened? Removed AI phrases?
4. **Pattern becomes an instruction.** “Never start with I think” is injected into future prompts.
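
The four steps above can be sketched as a minimal loop. The types and heuristics here are illustrative placeholders, not the real `laminae::cortex` internals:

```rust
// Minimal sketch of the feedback loop: take an (original, edited) pair,
// classify the edit, and turn the pattern into a prompt instruction.
// All names and thresholds here are hypothetical simplifications.

#[derive(Debug, PartialEq)]
enum Pattern {
    Shortened,
    Other,
}

fn classify(original: &str, edited: &str) -> Pattern {
    // Step 3: detect what kind of edit this was (length-based heuristic).
    if edited.len() < original.len() {
        Pattern::Shortened
    } else {
        Pattern::Other
    }
}

fn instruction_for(pattern: &Pattern) -> &'static str {
    // Step 4: map the detected pattern to an instruction for future prompts.
    match pattern {
        Pattern::Shortened => "Keep responses short and direct",
        Pattern::Other => "No change needed",
    }
}

fn main() {
    // Steps 1-2: the LLM's output and the user's edit of it.
    let original = "It's worth noting that Rust is fast.";
    let edited = "Rust is fast.";
    let pattern = classify(original, edited);
    assert_eq!(pattern, Pattern::Shortened);
    println!("{}", instruction_for(&pattern));
}
```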

8 Pattern Types

| Pattern | What It Detects |
| --- | --- |
| Shortened | User made it shorter |
| Removed Questions | User stripped trailing questions |
| Stripped AI Phrases | User removed “furthermore”, “it’s worth noting” |
| Tone Shifts | User changed formality level |
| Added Content | User added information the AI missed |
| Simplified Language | User replaced jargon with plain words |
| Changed Openers | User rewrote the first sentence |
| Structural Change | User reorganized paragraphs |
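
As a rough illustration, two of these checks could be approximated as follows. The phrase list and the 0.8 length threshold are assumptions for the sketch, not Cortex's actual heuristics:

```rust
// Hypothetical filler-phrase list; Cortex's real list is not shown here.
const AI_PHRASES: &[&str] = &["furthermore", "it's worth noting", "moreover"];

/// "Shortened": the edited text is meaningfully shorter than the original.
fn is_shortened(original: &str, edited: &str) -> bool {
    (edited.len() as f64) < 0.8 * (original.len() as f64)
}

/// "Stripped AI Phrases": a known filler phrase was present and removed.
fn stripped_ai_phrases(original: &str, edited: &str) -> bool {
    let (orig, ed) = (original.to_lowercase(), edited.to_lowercase());
    AI_PHRASES
        .iter()
        .any(|&p| orig.contains(p) && !ed.contains(p))
}

fn main() {
    let before = "Furthermore, the type system is robust.";
    let after = "The type system catches bugs.";
    assert!(stripped_ai_phrases(before, after));
    assert!(is_shortened(before, after));
}
```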

Usage

```rust
use laminae::cortex::{Cortex, CortexConfig};

let mut cortex = Cortex::new(CortexConfig::default());

// Track edits over time
cortex.track_edit(
    "It's worth noting that Rust is fast.",
    "Rust is fast."
);
cortex.track_edit(
    "Furthermore, the type system is robust.",
    "The type system catches bugs."
);

// Detect patterns
let patterns = cortex.detect_patterns();
// → [RemovedAiPhrases: 100%, Shortened: 100%]

// Get prompt block for LLM injection
let hints = cortex.get_prompt_block();
// → "--- USER PREFERENCES (learned from actual edits) ---
//    - Never use academic hedging phrases
//    - Keep sentences short and direct
//    ---"
```

Instruction Deduplication

Instructions are ranked by reinforcement count. When a new instruction has more than 80% word overlap with an existing one, Cortex increments the existing instruction’s count instead of creating a duplicate.
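
A minimal sketch of that dedup rule, assuming "word overlap" means Jaccard similarity over whitespace-split words (the exact metric Cortex uses is not documented here):

```rust
use std::collections::HashSet;

/// Jaccard-style overlap between two instructions' word sets.
/// This is an assumed metric, not necessarily Cortex's own.
fn word_overlap(a: &str, b: &str) -> f64 {
    let wa: HashSet<&str> = a.split_whitespace().collect();
    let wb: HashSet<&str> = b.split_whitespace().collect();
    if wa.is_empty() && wb.is_empty() {
        return 1.0;
    }
    let shared = wa.intersection(&wb).count() as f64;
    shared / wa.union(&wb).count() as f64
}

/// Reinforce a near-duplicate instruction, or add a new one.
/// Each entry is (instruction text, reinforcement count).
fn add_instruction(instructions: &mut Vec<(String, u32)>, new: &str) {
    for (text, count) in instructions.iter_mut() {
        if word_overlap(text, new) > 0.8 {
            *count += 1; // increment instead of creating a duplicate
            return;
        }
    }
    instructions.push((new.to_string(), 1));
}

fn main() {
    let mut instructions = Vec::new();
    add_instruction(&mut instructions, "Keep sentences short and direct");
    add_instruction(&mut instructions, "Keep sentences short and direct");
    assert_eq!(instructions.len(), 1); // deduplicated
    assert_eq!(instructions[0].1, 2);  // reinforced twice
}
```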

Performance

| Operation | Time |
| --- | --- |
| `track_edit` (single) | ~85 ns |
| `track_edit` (100 edits) | ~9.3 µs |
| `detect_patterns` (100 edits) | ~426 µs |
| `detect_patterns` (500 edits) | ~2.2 ms |

Edit tracking is near-instant; pattern detection scales linearly with the number of edits.
Cortex is WASM-compatible (edit tracking + pattern detection) and available via Python bindings. LLM-powered instruction learning requires Ollama (native only).