HALETHEIA OSS · Framework

LiiADA Augmented Intelligence Framework

A modular, plug-and-play Python framework for building production-grade AI agents. Hexagonal architecture, multi-LLM support, and native HCP context integration — without the lock-in.

Python 3.12+ · Hexagonal Architecture · HCP Native · AGPL-3.0 · Launching May 2026

Why another agent framework?

Most frameworks give you tools to call LLMs. LiiADA gives you a system to build agents that stay coherent across sessions, recover from errors deterministically, and audit every decision they make.

Built for teams that can't afford hallucinations in production. Every agent module is independently testable, swappable, and observable.

🔌 Plug-and-play modules

13 independent modules. Swap LLM providers, memory backends, or RAG engines without touching business logic.

🧠 Persistent context via HCP

Native integration with HCP protocol. Agents read and update structured context automatically — no more session amnesia.

🔍 Observable by default

Every agent decision is logged with full traceability. Built-in metrics and OpenTelemetry-compatible tracing.
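To make "every decision is logged with full traceability" concrete, here is a generic stdlib sketch of per-decision tracing — a decorator that tags each call with a trace id and duration. The names (`traced`, `choose_tool`) are illustrative, not liiada-observability's actual API:

```python
import functools
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent")

def traced(fn):
    """Tag each call with a short trace id and its wall-clock duration,
    the kind of per-decision record an observability layer emits."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        trace_id = uuid.uuid4().hex[:8]
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        log.info(
            "decision=%s trace=%s duration_ms=%.2f",
            fn.__name__, trace_id, (time.perf_counter() - start) * 1000,
        )
        return result
    return wrapper

@traced
def choose_tool(query: str) -> str:
    # Toy decision point: pick a tool based on the query.
    return "search" if "find" in query else "answer"

print(choose_tool("find docs"))  # search
```

A real deployment would swap the logger for OpenTelemetry spans, but the shape — one traced record per decision — is the same.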

13 Independent Modules

Each module is a ports & adapters unit — replaceable without changing the rest of the system.
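The ports & adapters split can be shown in plain Python: the core depends on an interface (the port), and concrete providers plug in behind it (the adapters). Names here are illustrative, not LiiADA's actual API:

```python
from typing import Protocol

class LLMPort(Protocol):
    """The port: the only LLM surface the core ever sees."""
    def complete(self, prompt: str) -> str: ...

class FakeLLMAdapter:
    """A stand-in adapter (e.g. for tests); a real adapter
    would call Ollama, OpenAI, etc. behind the same method."""
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

class Agent:
    """Core logic depends only on the port, never a concrete provider,
    so swapping adapters never touches business logic."""
    def __init__(self, llm: LLMPort) -> None:
        self._llm = llm

    def ask(self, prompt: str) -> str:
        return self._llm.complete(prompt)

agent = Agent(FakeLLMAdapter())
print(agent.ask("ping"))  # echo: ping
```

Replacing `FakeLLMAdapter` with any other class that satisfies `LLMPort` changes nothing in `Agent` — that is the replaceability claim above.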

liiada-core · Base framework and DI container
liiada-llm · LLM providers: Ollama, OpenAI, Anthropic, Gemini
liiada-memory · Episodic + semantic memory
liiada-rags · Multi-domain RAG system
liiada-prompt · Structured prompt engine
liiada-cache · Intelligent caching layer
liiada-observability · Metrics, tracing, logging
liiada-mcp · MCP protocol integration
liiada-tools · Tool registration and execution
liiada-bridge · External system connectors
liiada-session · Session lifecycle management
liiada-audit · Decision audit trail
liiada-hcp · HCP context integration

(Diagram: LiiADA modular architecture)

Minimal Agent in 10 Lines

Swap any module without touching the rest.

from liiada.core import AgentBuilder
from liiada.llm import OllamaProvider
from liiada.memory import EpisodicMemory
from liiada.hcp import HCPContextLoader

agent = (
    AgentBuilder()
    .with_llm(OllamaProvider(model="llama3"))
    .with_memory(EpisodicMemory())
    .with_context(HCPContextLoader(vault=".procontext/"))
    .build()
)

# Inside an async function (agent.run is a coroutine):
response = await agent.run("Implement the user auth endpoint")
# Agent reads .procontext/ → knows stack, decisions, constraints
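HCP's vault format isn't public yet, so here is a hedged sketch of what "reads .procontext/" could amount to: load every structured file in a context directory into a dict the agent can consult. `load_context` and the markdown layout are assumptions, not the real loader:

```python
import tempfile
from pathlib import Path

def load_context(vault: Path) -> dict[str, str]:
    """Illustrative loader: read each markdown file in a context
    vault, keyed by filename stem (e.g. stack.md -> 'stack')."""
    return {p.stem: p.read_text() for p in sorted(vault.glob("*.md"))}

# Build a throwaway vault to show the round trip.
vault = Path(tempfile.mkdtemp())
(vault / "stack.md").write_text("Python 3.12, FastAPI, Postgres")
(vault / "decisions.md").write_text("Use hexagonal architecture")

ctx = load_context(vault)
print(ctx["stack"])  # Python 3.12, FastAPI, Postgres
```

The point is persistence: the same vault read at the start of every session is what removes the "session amnesia" described above.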

Any LLM. Your Infrastructure.

No vendor lock-in. Run fully local or hybrid.

Ollama (local)

OpenAI

Anthropic

Google Gemini

Launching May 2026

Star the repo to get notified when LiiADA goes public.
