Modular AI Systems

Published February 1, 2025 · Updated February 1, 2025

When building AI systems, I’ve found it helpful to separate the interface from the intelligence, and both from the context and connectors. This makes iteration easier.

Separate the UI

The interface can be a chat window, dashboard, or embedded widget. Keeping it thin means you can update the UI without touching the AI backend, and vice versa.
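A minimal sketch of that thin boundary, assuming a hypothetical `AssistantBackend` protocol: the UI code depends only on the protocol, so the backend behind it can change without UI edits.

```python
from typing import Protocol

class AssistantBackend(Protocol):
    """Hypothetical boundary the UI depends on; any backend can satisfy it."""
    def respond(self, user_message: str) -> str: ...

class EchoBackend:
    """Stand-in backend for UI development; a real model slots in later."""
    def respond(self, user_message: str) -> str:
        return f"echo: {user_message}"

def render_chat_turn(backend: AssistantBackend, message: str) -> str:
    # The UI layer only knows the protocol, not the model behind it.
    return backend.respond(message)
```

Swapping `EchoBackend` for a real model-backed class requires no change to `render_chat_turn` or anything above it.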

Modular intelligence

LLMs work alongside retrieval systems and deterministic logic. When these are separate modules, you can swap models, test different reasoning approaches, or fall back to simple rules when needed.
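One way to sketch that fallback chain, with hypothetical module names: each module exposes the same `answer` method, and the caller tries them in order until one succeeds.

```python
class RuleBasedModule:
    """Deterministic fallback: simple keyword rules, no model call."""
    def answer(self, question: str) -> str:
        if "hours" in question.lower():
            return "We are open 9am to 5pm."
        return "Sorry, I can't help with that."

class ModelModule:
    """Wraps a model call (passed in as a callable); raises if the call fails."""
    def __init__(self, complete_fn):
        self.complete_fn = complete_fn

    def answer(self, question: str) -> str:
        return self.complete_fn(question)

def answer_with_fallback(modules, question):
    # Try each intelligence module in priority order; fall through on failure.
    for module in modules:
        try:
            return module.answer(question)
        except Exception:
            continue
    raise RuntimeError("no module could answer")
```

Because the modules share an interface, swapping in a different model, or reordering the chain, is a one-line change at the call site.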

Context management

AI agents need memory and knowledge bases. A separate context layer handles vector stores, relational data, and applies retention policies before data reaches the model. This makes compliance and debugging easier.
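As an illustration, here is a toy context layer with two assumed policies: keep only the most recent messages, and redact email addresses before anything reaches the model. Real retention rules would be more involved, but the shape is the same.

```python
import re
from dataclasses import dataclass

@dataclass
class ContextLayer:
    """Toy context layer: applies retention and redaction before model calls."""
    max_items: int = 5  # retention policy: only the newest items survive

    def build_prompt_context(self, history: list[str]) -> list[str]:
        recent = history[-self.max_items:]
        # Redact obvious email addresses so they never reach the model.
        return [re.sub(r"\S+@\S+", "[redacted-email]", m) for m in recent]
```

Keeping this logic in one layer means compliance reviews and debugging happen in one place, not scattered across every model call.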

Connectors for actions

Connectors translate AI decisions into API calls: sending emails, updating calendars, creating tasks. They handle permissions, rate limiting, and logging. Separating them makes the system auditable.
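A sketch of what such a connector might look like, with a hypothetical `EmailConnector`: every attempted action passes a permission check and a rate limit, and every attempt, allowed or not, lands in an audit log.

```python
import time

class EmailConnector:
    """Hypothetical connector: checks permissions, rate-limits, and logs each call."""
    def __init__(self, allowed_actions, max_per_minute=10):
        self.allowed_actions = allowed_actions
        self.max_per_minute = max_per_minute
        self.calls = []       # timestamps of allowed calls, for rate limiting
        self.audit_log = []   # every attempt, allowed or denied

    def send_email(self, to: str, body: str) -> bool:
        now = time.monotonic()
        # Drop call timestamps older than the one-minute window.
        self.calls = [t for t in self.calls if now - t < 60]
        allowed = ("send_email" in self.allowed_actions
                   and len(self.calls) < self.max_per_minute)
        self.audit_log.append(("send_email", to, allowed))
        if not allowed:
            return False
        self.calls.append(now)
        # A real implementation would call the email provider's API here.
        return True
```

Because the AI never calls the email API directly, the audit log is a complete record of what the system tried to do, which is the auditability the separation buys you.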

Independent iteration

When layers are separate, they can evolve independently. UI teams work on interface improvements, AI teams test new models, and ops teams audit data without blocking each other.