Full Stack

[dependencies]
laminae = "0.3"
tokio = { version = "1", features = ["full"] }

With LLM Backends

# Claude (Anthropic)
laminae = { version = "0.3", features = ["anthropic"] }

# OpenAI / Groq / Together / DeepSeek / local
laminae = { version = "0.3", features = ["openai"] }

# All backends
laminae = { version = "0.3", features = ["all-backends"] }
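Cargo features are additive, so backends can be combined instead of pulling in everything via "all-backends". A sketch, using the feature names listed above (whether these two features are meant to be combined is an assumption):

```toml
[dependencies]
# Enable only the backends you actually call.
laminae = { version = "0.3", features = ["anthropic", "openai"] }
tokio = { version = "1", features = ["full"] }
```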

Individual Layers

Pick only what you need:

[dependencies]
laminae-psyche = "0.3"       # Cognitive pipeline
laminae-persona = "0.3"      # Voice extraction & enforcement
laminae-cortex = "0.3"       # Learning loop
laminae-shadow = "0.3"       # Red-teaming
laminae-glassbox = "0.3"     # I/O containment
laminae-ironclad = "0.3"     # Process sandbox
laminae-ollama = "0.3"       # Ollama client
laminae-anthropic = "0.3"    # Claude EgoBackend
laminae-openai = "0.3"       # OpenAI-compatible EgoBackend
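For example, a local-only setup might pull in just the cognitive pipeline and the Ollama client. This is a sketch using the crate names from the list above; whether these two crates are sufficient on their own is an assumption:

```toml
[dependencies]
laminae-psyche = "0.3"   # Cognitive pipeline
laminae-ollama = "0.3"   # Local LLM client
tokio = { version = "1", features = ["full"] }
```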

Python Bindings

PyPI package coming soon. For now, build from source with maturin, which installs into the currently active virtualenv (maturin develop errors if no virtualenv or conda environment is active):
cd crates/laminae-python
python -m venv .venv && source .venv/bin/activate
pip install maturin
maturin develop

Requirements

  • Rust 1.70+
  • Ollama (for Psyche, Persona, Cortex, and Shadow LLM features):
    brew install ollama && ollama serve
  • macOS, Linux, or Windows (for Ironclad’s process sandbox)
  Platform   Sandbox Mechanism
  macOS      Full Seatbelt sandbox (sandbox-exec)
  Linux      Kernel namespaces + rlimits
  Windows    Job Object resource limits + env scrubbing