Architecture Overview

System Layers

HER-Ai is organized into five layers:

  1. Interface layer
     • Telegram bot interface (her-core/her_telegram/bot.py, her-core/her_telegram/handlers.py)
     • Admin dashboard (dashboard/app.py)

  2. Orchestration layer
     • Runtime composition and service boot (her-core/main.py)
     • CrewAI agents (her-core/agents/*.py)
     • Task scheduler (her-core/utils/scheduler.py, APScheduler + SQLAlchemy job store, no sleep polling)

  3. Intelligence and tools layer
     • LLM provider factory (her-core/utils/llm_factory.py)
     • MCP server manager (her-core/her_mcp/manager.py)
     • Curated tool wrappers (her-core/her_mcp/tools.py)
     • Sandbox execution adapters (her-core/her_mcp/sandbox_tools.py)

  4. Memory and state layer
     • Long-term memory wrapper (her-core/memory/mem0_client.py)
     • Short-term context cache (her-core/memory/redis_client.py)
     • DB initialization and schema (her-core/memory/db_init.py, her-core/memory/schemas.sql)
     • Runtime metrics/decision logs (her-core/utils/metrics.py, her-core/utils/decision_log.py)

  5. Infrastructure layer
     • Compose stack (docker-compose.yml)
     • Image definitions (her-core/Dockerfile, dashboard/Dockerfile, sandbox/Dockerfile)
     • CI/CD (.github/workflows/ci.yml)
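The task scheduler avoids sleep-polling loops: jobs fire when due rather than being checked on a timer. As a standard-library-only sketch of that event-driven pattern (illustrative only; the real scheduler uses APScheduler with a SQLAlchemy job store for persistence, none of which appears here):

```python
# Stdlib sketch of event-driven scheduling: register a job with a due time,
# then block until it fires -- no loop that wakes up to poll a timestamp.
import sched
import time

scheduler = sched.scheduler(time.time, time.sleep)
fired = []

# Job due 10 ms from now; run() sleeps exactly until the job is due.
scheduler.enter(0.01, 1, lambda: fired.append("tick"))
scheduler.run()
print(fired)  # -> ['tick']
```

APScheduler generalizes this idea with persistent job stores, so scheduled work survives process restarts.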

Runtime Flow

Telegram message
  -> her_telegram.handlers.MessageHandlers
  -> strict intent classifier (CHAT_MODE default, ACTION_MODE on explicit high-confidence action intent)
  -> autonomy profile update (engagement + initiative + mood persistence)
  -> internal debate (Planner -> Skeptic -> Verifier gate)
  -> context update (Redis)
  -> memory lookup (Mem0/pgvector)
  -> LLM response generation (with optional failover)
  -> validated tool/sandbox action execution (immediate by default; scheduler only on explicit scheduling intent)
  -> metrics + decision logs persisted
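The flow above can be sketched as a small pipeline. Every name below (classify_intent, handle_message, the action-verb list) is hypothetical and stands in for the real CrewAI/Redis/Mem0 wiring in her_telegram/handlers.py:

```python
# Hypothetical sketch of the message pipeline; stand-in logic only.

def classify_intent(text: str) -> str:
    # Strict classifier: default to CHAT_MODE, escalate to ACTION_MODE only
    # on an explicit action verb (illustrative verb list, not the real one).
    action_verbs = ("schedule", "run", "execute", "delete")
    if any(text.lower().startswith(v) for v in action_verbs):
        return "ACTION_MODE"
    return "CHAT_MODE"

def handle_message(text: str) -> dict:
    mode = classify_intent(text)
    trace = {"mode": mode, "steps": []}
    # Steps mirror the runtime flow; real handlers call agents/services here.
    trace["steps"] += ["autonomy_update", "debate", "context_update",
                       "memory_lookup", "llm_response"]
    if mode == "ACTION_MODE":
        trace["steps"].append("tool_execution")  # validated tool/sandbox action
    trace["steps"].append("metrics_logged")
    return trace
```

The key design point is that tool execution is gated on the classifier: a chat-mode message never reaches the tool/sandbox step.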

Major Modules and Responsibilities

| Module | Responsibility | Primary Files |
| --- | --- | --- |
| her-core | Main assistant runtime, agents, memory, Telegram handling, scheduling | her-core/main.py, her-core/her_telegram/handlers.py |
| her-core/agents | CrewAI roles for conversation, reflection, personality, tools | her-core/agents/conversation_agent.py, her-core/agents/crew_orchestrator.py |
| her-core/her_mcp | MCP server lifecycle, tool abstraction, sandbox utilities | her-core/her_mcp/manager.py, her-core/her_mcp/tools.py |
| her-core/memory | Mem0 integration, context cache, schema compatibility | her-core/memory/mem0_client.py, her-core/memory/schemas.sql |
| dashboard | Operational visibility, health and metrics UI | dashboard/app.py |
| tests | Runtime guardrails and smoke checks | tests/test_runtime_guards.py, tests/test_smoke.py |

Service Topology (Compose)

| Service | Purpose | Data |
| --- | --- | --- |
| her-bot | Core app runtime | Redis + PostgreSQL + MCP/sandbox access |
| postgres | Long-term memory and logs | Persistent volume |
| redis | Context and metrics cache | Persistent AOF volume |
| ollama + ollama-init | Local model serving and model pre-pull | Model volume |
| sandbox | Isolated execution tools | Ephemeral workspace volume |
| dashboard | Monitoring and operations UI | Reads Redis/Postgres |

Reference: docker-compose.yml.

Design Notes

  • Config resolution supports runtime volume and fallback defaults via HER_CONFIG_DIR (her-core/utils/config_paths.py).
  • MCP startup is resilient to per-server failures/timeouts (MCP_SERVER_START_TIMEOUT_SECONDS).
  • Memory reads/writes can degrade gracefully when the backend is unavailable if MEMORY_STRICT_MODE=false.
  • Scheduler tasks persist to YAML and fall back to a Redis override if the config path is not writable.
  • Proactive autonomy uses deterministic seeded randomness per user/day with DB-level daily slot caps.
  • Daily autonomy reflection adjusts initiative gradually and writes reflection events for dashboard observability.
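Deterministic per-user/day seeding can be sketched as below. The names (daily_rng, should_initiate, max_slots, the 0.3 probability) are hypothetical, and in HER-Ai the daily slot cap is enforced at the database level rather than via a passed-in counter:

```python
# Sketch: deterministic seeded randomness per user/day with a daily slot cap.
import hashlib
import random
from datetime import date

def daily_rng(user_id: str, day: date) -> random.Random:
    # Same user + same day -> same seed, so proactive decisions are
    # reproducible (and auditable) across process restarts.
    digest = hashlib.sha256(f"{user_id}:{day.isoformat()}".encode()).digest()
    return random.Random(int.from_bytes(digest[:8], "big"))

def should_initiate(user_id: str, day: date, used_slots: int,
                    max_slots: int = 3, probability: float = 0.3) -> bool:
    if used_slots >= max_slots:
        return False  # daily cap exhausted (DB-enforced in the real system)
    return daily_rng(user_id, day).random() < probability
```

Seeding from a hash of user + date means re-running the autonomy check at any point in the day yields the same decision, which keeps proactive behavior stable without storing per-decision state.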