
1. Set Up the Project

```shell
cargo new my-ai-app && cd my-ai-app
```

Then add the dependencies to `Cargo.toml`:

```toml
[dependencies]
laminae = "0.3"
tokio = { version = "1", features = ["full"] }
anyhow = "1"
```

2. Implement EgoBackend

The EgoBackend trait is how Laminae talks to your LLM:
```rust
use laminae::psyche::{PsycheEngine, EgoBackend, PsycheConfig};
use laminae::ollama::OllamaClient;

struct MyEgo;

impl EgoBackend for MyEgo {
    fn complete(
        &self,
        system: &str,
        user_msg: &str,
        context: &str,
    ) -> impl std::future::Future<Output = anyhow::Result<String>> + Send {
        // The compressed context signals are prepended to your system prompt.
        let full_system = format!("{context}\n\n{system}");
        async move {
            // Replace with your actual LLM call, sending `full_system` as the
            // system prompt and `user_msg` as the user message.
            let _ = full_system;
            Ok(format!("Response to: {user_msg}"))
        }
    }
}
```
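To see the trait's shape in isolation, here is a minimal, dependency-free sketch. The signature mirrors the snippet above, except that it uses a plain `Result<String, String>` instead of `anyhow::Result` so it compiles with the standard library alone; the `EchoEgo` backend and the polling harness in `demo` are purely illustrative and not part of Laminae:

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, Waker};

// Mirrors the `EgoBackend` trait from the guide (assumed signature,
// with `anyhow::Result` swapped for a std-only `Result`).
trait EgoBackend {
    fn complete(
        &self,
        system: &str,
        user_msg: &str,
        context: &str,
    ) -> impl Future<Output = Result<String, String>> + Send;
}

// A stub backend that echoes the assembled prompt instead of calling an LLM.
struct EchoEgo;

impl EgoBackend for EchoEgo {
    fn complete(
        &self,
        system: &str,
        user_msg: &str,
        context: &str,
    ) -> impl Future<Output = Result<String, String>> + Send {
        // Context signals are prepended to the system prompt, as in the guide.
        let full_system = format!("{context}\n\n{system}");
        async move { Ok(format!("[{full_system}] {user_msg}")) }
    }
}

fn demo() -> String {
    // The stub's future is immediately ready, so a single poll with a
    // no-op waker resolves it without pulling in an async runtime.
    let fut = EchoEgo.complete("be concise", "hello", "signals");
    let mut fut = pin!(fut);
    let mut cx = Context::from_waker(Waker::noop());
    match fut.as_mut().poll(&mut cx) {
        Poll::Ready(Ok(reply)) => reply,
        _ => unreachable!("stub future resolves immediately"),
    }
}

fn main() {
    println!("{}", demo());
}
```

In your real implementation, the body of the `async move` block is where you would await your HTTP client or SDK call; everything computed before the block (like `full_system`) is moved into the future.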

3. Run the Pipeline

```rust
#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let engine = PsycheEngine::new(OllamaClient::new(), MyEgo);
    let response = engine.reply("What is creativity?").await?;
    println!("{response}");
    Ok(())
}
```

4. Run It

```shell
# Make sure Ollama is running first
ollama serve &
cargo run
```
The Psyche pipeline will:

1. Shape with Id and Superego: send your message to the Id (creative agent) and the Superego (safety agent) on Ollama
2. Compress signals: compress their outputs into invisible context signals
3. Generate with Ego: forward everything to your Ego (your LLM) for the final response
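Conceptually, the three stages above compose like a simple function pipeline. The following dependency-free sketch shows the data flow only; the function names and string formats are illustrative and not the Laminae API:

```rust
// Stage 1: Id proposes creative directions; Superego flags constraints.
// (Hypothetical stand-ins for the two Ollama agent calls.)
fn shape(user_msg: &str) -> (String, String) {
    (
        format!("id: riff on '{user_msg}'"),
        format!("superego: keep '{user_msg}' safe"),
    )
}

// Stage 2: collapse both agents' outputs into a compact context string
// (the "invisible context signals" the guide describes).
fn compress(id: &str, superego: &str) -> String {
    format!("[signals] {id} | {superego}")
}

// Stage 3: the Ego (your LLM) sees the compressed context plus the message.
fn generate(context: &str, user_msg: &str) -> String {
    format!("ego({context}): answer to '{user_msg}'")
}

fn pipeline(user_msg: &str) -> String {
    let (id, superego) = shape(user_msg);
    let context = compress(&id, &superego);
    generate(&context, user_msg)
}

fn main() {
    println!("{}", pipeline("What is creativity?"));
}
```

The key design point is that the user only ever sees the output of the final stage; the Id and Superego outputs travel as context, not as visible text.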

What’s Next?