Architecture & Project Structure¶
The MindFlight AI Server is organized to make it easy to understand, extend, and maintain. The project is divided into logical layers, each with its own responsibility. Let's break down the architecture and explore how everything fits together.
High-level architecture¶
At its core, the server follows a layered architecture:
- API Layer: Receives HTTP requests from Clients.
- Core Logic: Manages workflows, memory, and orchestration.
- Providers Layer: Delegates tasks to external systems.
- Job Manager & Notifications: Handles background jobs and event dispatching.
- Persistence Layer: Stores data (e.g., using PostgreSQL).
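The layering above can be sketched as Go interfaces. This is a hedged illustration: the type and method names (`Provider`, `Store`, `Core`, `Run`) are hypothetical and may not match the actual codebase, but they show how the API layer can hand work to core logic, which orchestrates Providers and the persistence layer.

```go
package main

import "fmt"

// Provider delegates a task to an external system (Providers Layer).
// Name and signature are illustrative, not taken from the codebase.
type Provider interface {
	Name() string
	Execute(task string) (string, error)
}

// Store abstracts the persistence layer (e.g., PostgreSQL).
type Store interface {
	Save(key, value string) error
}

// Core wires the layers together: the API layer calls into Core,
// which picks a Provider, runs the task, and persists the result.
type Core struct {
	providers map[string]Provider
	store     Store
}

func (c *Core) Run(providerName, task string) (string, error) {
	p, ok := c.providers[providerName]
	if !ok {
		return "", fmt.Errorf("unknown provider: %s", providerName)
	}
	result, err := p.Execute(task)
	if err != nil {
		return "", err
	}
	return result, c.store.Save(task, result)
}

// Minimal in-memory implementations, for illustration only.
type echoProvider struct{}

func (echoProvider) Name() string { return "echo" }
func (echoProvider) Execute(task string) (string, error) {
	return "echo: " + task, nil
}

type memStore struct{ data map[string]string }

func (m *memStore) Save(k, v string) error { m.data[k] = v; return nil }

func main() {
	c := &Core{
		providers: map[string]Provider{"echo": echoProvider{}},
		store:     &memStore{data: map[string]string{}},
	}
	out, _ := c.Run("echo", "hello")
	fmt.Println(out) // echo: hello
}
```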
Metaphor:
You can think of the server as a theater:
- The audience (Clients) sends requests (tickets).
- The front desk (API Layer) handles the tickets.
- The director (Core Logic) manages the show.
- The actors (Providers) perform specific tasks.
- Behind the scenes, the stage crew (Job Manager & Notifications) ensures everything runs smoothly.
Project structure (folders)¶
Here's a breakdown of the main folders and what they do:
| Folder | Purpose |
|---|---|
| /cmd | Entry points for different commands and services (server, CLI tools, inspectors). |
| /internal | The heart of the application: auth, core logic, job manager, providers, memory, notifications. |
| /pkg | Modular pieces, mostly Providers (Filesystem, Notion, Unipile, etc.). |
| /config | Configuration files and utilities. |
| /data | Example data or resources for testing. |
| /docker | Docker files for running the server easily. |
| /tests | Integration and unit tests, including mocks. |
| /docs | Documentation files and specs. |
Mermaid: Folder Structure Diagram¶
This diagram shows the logical organization of the main components:
```mermaid
flowchart TD
    A[cmd] --> A1[server]
    A --> A2[cli]
    A --> A3[inspectors]
    B[internal] --> B1[auth]
    B --> B2[core]
    B --> B3[providers]
    B --> B4[jobmanager]
    B --> B5[memory]
    B --> B6[notifications]
    B --> B7[server]
    C[pkg] --> C1[providers]
    C1 --> C1a[filesystem]
    C1 --> C1b[notion]
    C1 --> C1c[unipile]
    C1 --> C1d[email_draft_preparator]
    D[config]
    E[data]
    F[docker]
    G[tests]
    H[docs]
```
This structure makes it easy to find where to add new Providers, manage jobs, or customize server behavior.
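For example, adding a new Provider often boils down to registering a handler with some central registry at startup. The sketch below is an assumption about how such a registry could look, not the actual registration API of this codebase; the `Registry` type and its methods are hypothetical.

```go
package main

import (
	"fmt"
	"sync"
)

// Registry is a hypothetical provider registry. The real server
// may wire Providers differently; this only illustrates the pattern.
type Registry struct {
	mu        sync.RWMutex
	providers map[string]func(task string) (string, error)
}

func NewRegistry() *Registry {
	return &Registry{providers: map[string]func(string) (string, error){}}
}

// Register plugs a new Provider in by name.
func (r *Registry) Register(name string, fn func(string) (string, error)) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.providers[name] = fn
}

// Call dispatches a task to a registered Provider.
func (r *Registry) Call(name, task string) (string, error) {
	r.mu.RLock()
	fn, ok := r.providers[name]
	r.mu.RUnlock()
	if !ok {
		return "", fmt.Errorf("provider %q not registered", name)
	}
	return fn(task)
}

func main() {
	reg := NewRegistry()
	// A new Provider is one Register call away.
	reg.Register("filesystem", func(task string) (string, error) {
		return "fs handled: " + task, nil
	})
	out, _ := reg.Call("filesystem", "list")
	fmt.Println(out) // fs handled: list
}
```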
How the server starts¶
Here's a simplified view of what happens when you start the server:
- The configuration is loaded (YAML + environment variables).
- The database connection is established.
- All Providers are registered and initialized.
- The Job Manager starts listening for background tasks.
- The API server is launched (Fiber framework).
- The server is ready to accept requests.
Metaphor: Starting the server is like opening a restaurant:
- Unlock the doors (API).
- Set up the kitchen (configuration & database).
- Bring in the staff (Providers).
- Start cooking (Job Manager).
Connection to Clients and Providers¶
- Clients communicate via the API, using secure tokens.
- Providers are plugged in to offer extra capabilities (like tools in a toolbox).
- The server sits in the middle, ensuring smooth communication between both.
This separation keeps each part of the system modular: you can swap in new Providers, update the API, or add new workflows without breaking the rest of the system.
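As a hedged example of how a Client might attach its secure token, the snippet below builds an authenticated HTTP request. The endpoint path (`/api/v1/jobs`) and the bearer-token scheme are assumptions for illustration, not the server's documented API.

```go
package main

import (
	"fmt"
	"net/http"
)

// newAuthedRequest builds a request carrying a bearer token.
// Path and header scheme are hypothetical, not the actual API contract.
func newAuthedRequest(baseURL, token string) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodGet, baseURL+"/api/v1/jobs", nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Bearer "+token)
	return req, nil
}

func main() {
	req, err := newAuthedRequest("http://localhost:3000", "secret-token")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Header.Get("Authorization")) // Bearer secret-token
}
```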