Introduction
Mycelium API Gateway sits in front of your backend services and handles authentication, authorization, and routing — so your services don’t have to.
Who is this for?
This documentation is written for three types of readers:
Operator — You’re deploying Mycelium for an organization. You’ll configure tenants, users, and which backend services are reachable through the gateway. Start with Installation and Quick Start.
Backend developer — You’re building a service that sits behind Mycelium. The gateway will handle authentication and then inject the user’s identity into your requests via headers. Start with Downstream APIs.
End user — You’re using a product built on Mycelium. You’ll authenticate via email magic link, or through an alternative identity provider like Telegram. Your experience depends on how the operator has configured their instance.
What does Mycelium do?
Your users
↓
Mycelium API Gateway ← handles login, token validation, and routing decisions
↓
Your backend services ← receive authenticated requests with user identity in headers
When a request arrives, Mycelium checks:
- Who are you? (authentication — via magic link, OAuth2, Telegram, etc.)
- Are you allowed here? (coarse authorization — role checks at the route level)
- Where should this go? (routing — forwards to the right downstream service)
Your backend service receives the request with the user’s identity already resolved and
injected as an HTTP header (x-mycelium-profile). It can then make fine-grained decisions
without doing its own authentication.
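For a backend behind the gateway, consuming these headers can look like the sketch below. The header names come from this page; the assumption that x-mycelium-profile arrives as base64-encoded JSON is purely illustrative, so check your gateway version for the actual wire encoding.

```python
import base64
import json

def identity_from_headers(headers):
    """Extract the identity Mycelium injected into a downstream request.

    ASSUMPTION (illustrative only): x-mycelium-profile carries
    base64-encoded JSON; consult the gateway docs for the real encoding.
    """
    email = headers.get("x-mycelium-email")
    profile = None
    if "x-mycelium-profile" in headers:
        raw = base64.b64decode(headers["x-mycelium-profile"])
        profile = json.loads(raw)
    return {"email": email, "profile": profile}

# Simulate a request as it might arrive from the gateway.
payload = base64.b64encode(
    json.dumps({"accountId": "example-account", "roles": ["manager"]}).encode()
).decode()
identity = identity_from_headers({
    "x-mycelium-email": "user@example.com",
    "x-mycelium-profile": payload,
})
```

Because the gateway resolved the identity already, the handler never touches a token: it only reads headers.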
Key concepts
Tenant — A company or organization within your Mycelium installation. Users belong to tenants, and access controls are applied per tenant.
Account — The unit of identity in Mycelium. There are several account types: User
(human end users), Staff / Manager (platform administrators), Subscription
(tenant-scoped services or bots), TenantManager (delegated tenant admins), and others.
A User account joins a tenant by being guested into a Subscription account with a
specific role and permission level. See Account Types and Roles
for the full model.
Profile — A snapshot of the authenticated user’s identity at the time of the request: their account, tenant memberships, roles, and access levels. Injected into every downstream request.
Security group — A label on each route that tells Mycelium what level of authentication is
required. Options range from public (anyone) to protectedByRoles (specific roles only).
How authentication works
Mycelium ships with built-in email + magic-link login. No passwords required — the user enters their email, receives a one-time link, and gets a JWT token.
You can also connect external OAuth2 providers (Google, Microsoft, Auth0) or alternative identity providers like Telegram. See Alternative Identity Providers for details.
Next steps
- Installation — Install the gateway binary or Docker image
- Quick Start — Get a running instance in minutes
- Configuration — Full configuration reference
- Downstream APIs — Register your backend services
- Authorization Model — Understand how access decisions are made
Installation
Mycelium API Gateway is distributed as a single binary (myc-api). Pick the installation method
that fits your workflow.
Prerequisites
You need two services running before Mycelium can start, plus the Rust toolchain if you build from source:
| Service | Minimum version | Purpose |
|---|---|---|
| PostgreSQL | 14 | Stores users, tenants, roles |
| Redis | 6 | Caching layer |
| Rust toolchain | 1.70 (build from source only) | Compiles the binary |
Install Rust via rustup if you plan to build from source.
Linux system dependencies (Ubuntu/Debian):
sudo apt-get install -y build-essential pkg-config libssl-dev postgresql-client
macOS:
brew install openssl pkg-config postgresql
Option A — Docker (fastest)
docker pull sgelias/mycelium-api:latest
For a full local environment with PostgreSQL and Redis already wired up, see Deploy Locally.
Option B — Install via Cargo
cargo install mycelium-api
This installs the myc-api binary globally. Verify it:
myc-api --version
Option C — Build from source
git clone https://github.com/LepistaBioinformatics/mycelium.git
cd mycelium
cargo build --release
./target/release/myc-api --version
Database setup
Mycelium ships with a SQL script that creates the database, user, and schema:
psql postgres://postgres:postgres@localhost:5432/postgres \
-f postgres/sql/up.sql \
-v db_password='REPLACE_WITH_STRONG_PASSWORD'
This creates a database named mycelium-dev and a user named mycelium-user. To use a
different database name, add -v db_name='my-database'.
Next steps
- Quick Start — Start the gateway with a minimal config
- Deploy Locally — Full Docker Compose setup with all dependencies
Troubleshooting
cargo install fails with SSL errors — Install OpenSSL dev libraries:
sudo apt-get install libssl-dev (Ubuntu) or brew install openssl (macOS).
Database connection fails — Verify PostgreSQL is running: psql --version and
psql postgres://postgres:postgres@localhost:5432/postgres.
Redis not responding — Run redis-cli ping. Expect PONG.
Quick Start
This guide gets Mycelium running with a minimal configuration. By the end you’ll have a gateway that can route requests to a downstream service.
Before starting: complete the Installation guide — you need
PostgreSQL running, Redis running, and the myc-api binary installed.
Step 1 — Create a configuration file
Mycelium reads a single TOML file. Copy the example from the repository or create settings/config.toml with the content below.
Replace YOUR_DB_PASSWORD with the password you set during database setup. Replace both your-secret-* values with random strings (use openssl rand -hex 32).
[core.accountLifeCycle]
domainName = "My App"
domainUrl = "http://localhost:8080"
tokenExpiration = 3600
noreplyName = "No-Reply"
noreplyEmail = "noreply@example.com"
supportName = "Support"
supportEmail = "support@example.com"
locale = "en-US"
tokenSecret = "your-secret-key-change-me-in-production"
[core.webhook]
acceptInvalidCertificates = true
consumeIntervalInSecs = 10
consumeBatchSize = 10
maxAttempts = 3
[diesel]
databaseUrl = "postgres://mycelium-user:YOUR_DB_PASSWORD@localhost:5432/mycelium-dev"
[smtp]
host = "smtp.example.com:587"
username = "user@example.com"
password = "your-smtp-password"
[queue]
emailQueueName = "email-queue"
consumeIntervalInSecs = 5
[redis]
protocol = "redis"
hostname = "localhost:6379"
password = ""
[auth]
internal = "enabled"
jwtSecret = "your-jwt-secret-change-me-in-production"
jwtExpiresIn = 86400
tmpExpiresIn = 3600
[api]
serviceIp = "0.0.0.0"
servicePort = 8080
serviceWorkers = 4
gatewayTimeout = 30
healthCheckInterval = 120
maxRetryCount = 3
allowedOrigins = ["*"]
[api.cache]
jwksTtl = 3600
emailTtl = 120
profileTtl = 120
[api.logging]
level = "info"
format = "ansi"
target = "stdout"
Step 2 — Start the gateway
SETTINGS_PATH=settings/config.toml myc-api
With Docker:
docker run -d \
--name mycelium-api \
-p 8080:8080 \
-v $(pwd)/settings:/app/settings \
-e SETTINGS_PATH=settings/config.toml \
sgelias/mycelium-api:latest
Step 3 — Verify it’s running
curl http://localhost:8080/health
A successful response means the gateway is up and connected to the database and Redis.
Step 4 — Register your first downstream service (optional)
Add this block to your config.toml to proxy a backend service:
[api.services]
[[my-service]]
host = "localhost:3000"
protocol = "http"
[[my-service.path]]
group = "public"
path = "/api/*"
methods = ["GET", "POST", "PUT", "DELETE"]
Restart the gateway after any config change.
Next steps
- Configuration — Understand every config option
- Downstream APIs — Add authentication and role checks to your routes
- Deploy Locally — Full Docker Compose environment with all dependencies
- Alternative Identity Providers — Add Telegram or OAuth2 login
Troubleshooting
Gateway won’t start — Check TOML syntax, then verify database and Redis connectivity:
psql postgres://mycelium-user:YOUR_PASSWORD@localhost:5432/mycelium-dev -c "SELECT 1"
redis-cli ping
Port 8080 already in use — Change servicePort in config.toml.
Configuration
Mycelium reads a single TOML file at startup. Tell it where the file is:
SETTINGS_PATH=settings/config.toml myc-api
Three ways to set a value
Every setting can be defined directly in TOML, via an environment variable, or via Vault:
# Directly in the file (fine for development)
tokenSecret = "my-secret"
# From an environment variable
tokenSecret = { env = "MYC_TOKEN_SECRET" }
# From HashiCorp Vault (recommended for production)
tokenSecret = { vault = { path = "myc/core/accountLifeCycle", key = "tokenSecret" } }
Vault values are resolved at runtime — you don’t need to restart after changing a secret in Vault.
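The three forms above can be modelled as a small resolver. This is an illustrative sketch, not Mycelium's actual config loader, and FakeVault stands in for a real Vault client.

```python
import os

def resolve(value, vault_client=None):
    """Resolve a config value: plain literal, { env = ... }, or { vault = ... }.

    Illustrative model only; the real loader is internal to the gateway.
    """
    if isinstance(value, dict) and "env" in value:
        return os.environ[value["env"]]
    if isinstance(value, dict) and "vault" in value:
        spec = value["vault"]
        # Vault values are looked up at runtime, so rotated secrets
        # are picked up without a restart.
        return vault_client.read(spec["path"])[spec["key"]]
    return value  # plain literal, fine for development

class FakeVault:
    """Stand-in for a real Vault client, for demonstration."""
    def __init__(self, data):
        self.data = data
    def read(self, path):
        return self.data[path]

os.environ["MYC_TOKEN_SECRET"] = "from-env"
vault = FakeVault({"myc/core/accountLifeCycle": {"tokenSecret": "from-vault"}})

literal = resolve("my-secret")
from_env = resolve({"env": "MYC_TOKEN_SECRET"})
from_vault = resolve(
    {"vault": {"path": "myc/core/accountLifeCycle", "key": "tokenSecret"}}, vault
)
```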
What do I actually need to configure?
For a minimal working instance you need:
- [diesel].databaseUrl — PostgreSQL connection string.
- [redis] — Redis host and optional password.
- [auth].jwtSecret — Random string for signing JWT tokens.
- [core.accountLifeCycle].tokenSecret — Random string for email verification tokens.
- [smtp] — Email server (for magic-link login emails).
- [api].allowedOrigins — Allowed CORS origins for your frontend.
Everything else has sensible defaults. See the Quick Start for a copy-pasteable minimal config.
Section reference
[vault.define] — Secret management
Optional. Required only if you use Vault-sourced values anywhere.
[vault.define]
url = "http://localhost:8200"
versionWithNamespace = "v1/kv"
token = { env = "MYC_VAULT_TOKEN" }
| Field | Description |
|---|---|
| url | Vault server URL including port |
| versionWithNamespace | API version and KV path prefix (e.g. v1/kv) |
| token | Vault auth token. Use an env var or Vault for this value in production |
[core.accountLifeCycle] — Identity and email settings
[core.accountLifeCycle]
domainName = "Mycelium"
domainUrl = "https://mycelium.example.com"
tokenExpiration = 3600
noreplyName = "Mycelium No-Reply"
noreplyEmail = "noreply@example.com"
supportName = "Support"
supportEmail = "support@example.com"
locale = "en-US"
tokenSecret = "random-secret"
| Field | Description |
|---|---|
| domainName | Human-friendly name shown in emails |
| domainUrl | Your frontend URL — used in email links |
| tokenExpiration | Email verification token lifetime in seconds |
| noreplyEmail | From-address for system emails |
| supportEmail | Reply-to address for support |
| locale | Email language (e.g. en-US, pt-BR) |
| tokenSecret | Secret for signing email verification tokens |
[core.webhook] — Webhook dispatch
[core.webhook]
acceptInvalidCertificates = true
consumeIntervalInSecs = 10
consumeBatchSize = 10
maxAttempts = 3
| Field | Description |
|---|---|
| acceptInvalidCertificates | Allow self-signed TLS certs on webhook targets (use true in dev only) |
| consumeIntervalInSecs | How often to flush the webhook queue (seconds) |
| consumeBatchSize | Events per flush |
| maxAttempts | Retry limit per event |
[diesel] — Database
[diesel]
databaseUrl = "postgres://mycelium-user:password@localhost:5432/mycelium-dev"
Use Vault for the URL in production:
databaseUrl = { vault = { path = "myc/database", key = "url" } }
[smtp] and [queue] — Email
[smtp]
host = "smtp.gmail.com:587"
username = "user@gmail.com"
password = "your-password"
[queue]
emailQueueName = "email-queue"
consumeIntervalInSecs = 5
[redis] — Cache
[redis]
protocol = "redis" # "rediss" for TLS
hostname = "localhost:6379"
password = ""
[auth] — Authentication
Internal login (email + magic link)
[auth]
internal = "enabled"
jwtSecret = "random-secret"
jwtExpiresIn = 86400 # 24 hours
tmpExpiresIn = 3600 # temporary tokens (password reset, account creation)
External OAuth2 providers
Add one block per provider:
# Google
[[auth.external.define]]
issuer = "https://accounts.google.com"
jwksUri = "https://www.googleapis.com/oauth2/v3/certs"
userInfoUrl = "https://www.googleapis.com/oauth2/v3/userinfo"
audience = "your-google-client-id"
# Auth0
[[auth.external.define]]
issuer = "https://your-app.auth0.com/"
jwksUri = "https://your-app.auth0.com/.well-known/jwks.json"
userInfoUrl = "https://your-app.auth0.com/userinfo"
audience = "https://your-app.auth0.com/api/v2/"
| Field | Description |
|---|---|
| issuer | Provider’s identity URL |
| jwksUri | URL of the provider’s public keys (for JWT verification) |
| userInfoUrl | URL to fetch the user’s email and claims |
| audience | Client ID or API identifier registered with the provider |
[api] — Server and routing
[api]
serviceIp = "0.0.0.0"
servicePort = 8080
serviceWorkers = 4
gatewayTimeout = 30
healthCheckInterval = 120
maxRetryCount = 3
allowedOrigins = ["http://localhost:3000", "https://app.example.com"]
[api.cache]
jwksTtl = 3600 # cache OAuth2 public keys for 1 hour
emailTtl = 120 # cache resolved emails for 2 minutes
profileTtl = 120 # cache resolved profiles for 2 minutes
| Field | Description |
|---|---|
| serviceIp | Bind address. 0.0.0.0 listens on all interfaces |
| servicePort | HTTP port |
| serviceWorkers | Worker threads. Match to CPU count |
| gatewayTimeout | Request timeout in seconds |
| allowedOrigins | CORS whitelist. Use ["*"] in dev only |
| healthCheckInterval | How often to probe downstream health endpoints (seconds) |
[api.logging] — Log output
[api.logging]
level = "info"
format = "ansi" # "jsonl" for structured logs
target = "stdout"
File target:
target = { file = { path = "logs/api.log" } }
OpenTelemetry collector:
target = { collector = { name = "mycelium-api", host = "otel-collector", protocol = "grpc", port = 4317 } }
[api.tls.define] — TLS (optional)
[api.tls.define]
tlsCert = { vault = { path = "myc/api/tls", key = "tlsCert" } }
tlsKey = { vault = { path = "myc/api/tls", key = "tlsKey" } }
To disable TLS:
tls = "disabled"
[api.services] — Downstream services
Route configuration lives here. See Downstream APIs for the full guide.
Next steps
- Downstream APIs — Configure routes and security
- Deploy Locally — Full environment with Docker Compose
Deploy Locally with Docker Compose
The fastest way to get a complete development environment is Docker Compose — it starts PostgreSQL, Redis, Vault, and the gateway together with one command.
Prerequisites: Docker 20.10+ and Docker Compose 2.0+. (Docker Desktop includes both.)
Step 1 — Clone and configure
git clone https://github.com/LepistaBioinformatics/mycelium.git
cd mycelium
cp settings/config.example.toml settings/config.toml
Open settings/config.toml and update at minimum:
- Database credentials under [diesel]
- SMTP settings under [smtp] (if you need email)
- Secrets under [core.accountLifeCycle] and [auth]
Step 2 — Start everything
docker-compose up -d
This starts:
- postgres — database on port 5432
- redis — cache on port 6379
- vault — secret management on port 8200 (optional)
- mycelium-api — gateway on port 8080
Step 3 — Verify
docker-compose ps # all services should be "Up"
curl http://localhost:8080/health
Common operations
View logs:
docker-compose logs -f mycelium-api
Stop everything:
docker-compose down
Full reset (deletes all data):
docker-compose down -v
Access the database directly:
docker-compose exec postgres psql -U mycelium-user -d mycelium-dev
Using Vault for secrets (optional)
If you’re using Vault, initialize it after starting:
# Initialize and get unseal keys + root token (save these securely)
docker-compose exec vault vault operator init
# Unseal with 3 of the 5 keys
docker-compose exec vault vault operator unseal <KEY1>
docker-compose exec vault vault operator unseal <KEY2>
docker-compose exec vault vault operator unseal <KEY3>
# Store a secret
docker-compose exec vault vault login <ROOT_TOKEN>
docker-compose exec vault vault kv put secret/mycelium/database \
url="postgres://mycelium-user:password@postgres:5432/mycelium"
Then reference it in config.toml:
[vault.define]
url = "http://vault:8200"
versionWithNamespace = "v1/secret"
token = { env = "VAULT_TOKEN" }
[diesel]
databaseUrl = { vault = { path = "mycelium/database", key = "url" } }
Troubleshooting
Gateway can’t connect to Postgres:
docker-compose exec postgres pg_isready
docker-compose exec postgres psql -U postgres -l
Port conflict — change the external port in docker-compose.yaml:
services:
mycelium-api:
ports:
- "8081:8080"
Next steps
- Configuration — Full config reference
- Downstream APIs — Register your backend services
Authorization Model
This page explains how Mycelium decides whether a request is allowed to proceed.
The short version
Mycelium uses a two-layer approach:
- Gateway layer — coarse checks at the route level (“is this user logged in? do they have the right role?”). If the check fails, the request is rejected before reaching your service.
- Downstream layer — fine-grained checks inside your service, using the identity that Mycelium injects (“does this user have write access to this specific resource?”).
Think of it like a building: Mycelium is the security desk at the entrance (checks your ID and badge before letting you through the door). Your service is the room inside — it gets to decide what the person can do once they’re in.
The gateway layer
When a request arrives, Mycelium matches it against the configured route. Each route belongs to a security group that defines the minimum requirements:
| Security group | What Mycelium checks |
|---|---|
| public | Nothing — anyone can pass |
| authenticated | Valid JWT or connection string |
| protected | Valid token + resolved profile |
| protectedByRoles | Valid token + user has one of the listed roles |
If the check passes, Mycelium forwards the request to your service and injects the user’s identity as HTTP headers.
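The table above reads as a decision function. The sketch below mirrors it in Python; the boolean inputs stand in for the gateway's real token and profile resolution, which this example does not perform.

```python
def gateway_allows(group, has_valid_token=False, has_profile=False,
                   user_roles=(), required_roles=()):
    """Coarse, route-level decision mirroring the security-group table (sketch)."""
    if group == "public":
        return True                      # nothing checked
    if group == "authenticated":
        return has_valid_token           # valid JWT or connection string
    if group == "protected":
        return has_valid_token and has_profile
    if group == "protectedByRoles":
        # token must be valid AND the caller holds one of the listed roles
        return has_valid_token and any(r in user_roles for r in required_roles)
    raise ValueError(f"unknown security group: {group}")

# A role-guarded route: valid token plus a role from the route's list.
allowed = gateway_allows("protectedByRoles", has_valid_token=True,
                         user_roles=["manager"], required_roles=["manager", "admin"])
```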
What gets injected
Depending on the security group, your service receives:
| Header | When injected | Contains |
|---|---|---|
| x-mycelium-email | authenticated or higher | The authenticated user’s email |
| x-mycelium-profile | protected or higher | Full identity context (see below) |
The profile is a compressed JSON object carrying: account ID, tenant memberships, roles, and access scopes. Your service reads it and can make resource-level decisions without querying the gateway or doing its own authentication.
The downstream layer
Once a request is inside your service, you use the profile to decide what the user can do with a specific resource. The profile exposes a fluent API for narrowing the access context:
profile
.on_tenant(tenant_id) # focus on this tenant
.on_account(account_id) # focus on this account
.with_write_access() # must have write permission
.with_roles(["manager"]) # must have manager role
.get_related_account_or_error() # returns error if no match
Each step narrows — never expands — the set of permissions. If any step finds no match, the chain returns an error and you return 403 to your caller.
This design means:
- Access decisions are explicit and auditable.
- No implicit “superuser” paths that bypass checks.
- Your service never needs to call Mycelium again to validate permissions.
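The narrowing chain can be sketched in plain Python. The method names mirror the chain above; the class itself is an illustrative model, not Mycelium's actual profile type.

```python
class AccessError(Exception):
    """Raised when the narrowing chain finds no matching grant (maps to 403)."""

class ProfileFilter:
    """Sketch of the narrowing chain: each step only removes candidates."""

    def __init__(self, grants):
        # grants: dicts like {"tenant": ..., "account": ...,
        #                     "permission": "write", "roles": [...]}
        self.grants = list(grants)

    def _narrow(self, predicate):
        return ProfileFilter([g for g in self.grants if predicate(g)])

    def on_tenant(self, tenant_id):
        return self._narrow(lambda g: g["tenant"] == tenant_id)

    def with_write_access(self):
        return self._narrow(lambda g: g["permission"] == "write")

    def with_roles(self, roles):
        return self._narrow(lambda g: any(r in g["roles"] for r in roles))

    def get_related_account_or_error(self):
        if not self.grants:
            raise AccessError("no matching grant")  # caller returns 403
        return self.grants[0]["account"]

grants = [
    {"tenant": "t1", "account": "a1", "permission": "write", "roles": ["manager"]},
    {"tenant": "t2", "account": "a2", "permission": "read", "roles": ["viewer"]},
]
account = (ProfileFilter(grants)
           .on_tenant("t1")
           .with_write_access()
           .with_roles(["manager"])
           .get_related_account_or_error())
```

Because every step returns a filtered copy, no step can ever widen the permission set, which is the property the bullet points above rely on.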
Reference
Formal classification
Mycelium’s model spans three standard paradigms:
- RBAC (Role-Based Access Control) — used declaratively at the gateway level (security groups with role lists).
- ABAC (Attribute-Based Access Control) — the profile carries attributes (tenant, account, scope) used in downstream decisions.
- FBAC (Feature-Based Access Control) — the dominant model; access decisions are made close to the resource using the full contextual chain.
Design principles
- Authentication, identity enrichment, and authorization are strictly separated.
- Capabilities are progressively reduced — the chain never grants more than the token allows.
- No global policies that silently override explicit checks.
- Each authorization decision is a discrete, loggable event (resource, action, context, outcome).
Authentication Flows
This page covers the three ways users authenticate with Mycelium and how to create service-to-service tokens.
Magic link (email login)
Magic link is the default authentication method. No passwords required — the user enters their email and receives a one-time link.
How it works
User enters email
↓
POST /_adm/beginners/users/magic-link/request
↓
Mycelium sends a login email with a one-time link
↓
User clicks the link → GET /_adm/beginners/users/magic-link/display/{token}
↓
POST /_adm/beginners/users/magic-link/verify (with the token)
↓
Response: { "token": "jwt...", "type": "Bearer" }
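The flow above can be driven by a small client. The sketch below only constructs the requests rather than sending them; the endpoint paths come from this page, but the JSON field names and base URL are assumptions, so check your instance's API reference before relying on them.

```python
import json
import urllib.request

BASE = "http://localhost:8080"  # assumed gateway address

def magic_link_request(email):
    """Step 1: ask the gateway to email a one-time login link."""
    return urllib.request.Request(
        f"{BASE}/_adm/beginners/users/magic-link/request",
        data=json.dumps({"email": email}).encode(),  # field name assumed
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def magic_link_verify(token):
    """Step 2: exchange the token from the emailed link for a JWT."""
    return urllib.request.Request(
        f"{BASE}/_adm/beginners/users/magic-link/verify",
        data=json.dumps({"token": token}).encode(),  # field name assumed
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = magic_link_request("user@example.com")
```

Send each request with urllib.request.urlopen(req) against a running gateway; on success the verify step returns the { "token": ..., "type": "Bearer" } body shown above.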
Enabling it
Internal authentication must be enabled in config.toml:
[auth]
internal = "enabled"
jwtSecret = "your-secret"
jwtExpiresIn = 86400
SMTP must also be configured so Mycelium can send the email. See Configuration.
Using the JWT
After login, include the JWT in every request:
Authorization: Bearer <your-jwt-token>
The JWT is valid for jwtExpiresIn seconds (default: 24 hours).
Two-factor authentication (2FA / TOTP)
Users can add a second factor using any TOTP authenticator app (Google Authenticator, Authy, 1Password, etc.).
Enabling 2FA (user journey)
Step 1 — Start activation:
POST /_adm/beginners/users/totp/enable
Authorization: Bearer <jwt>
Response:
{
"totpUrl": "otpauth://totp/MyApp:user@example.com?secret=BASE32SECRET&issuer=MyApp"
}
The user scans the totpUrl in their authenticator app (or adds the secret manually).
Step 2 — Confirm activation:
POST /_adm/beginners/users/totp/validate-app
Authorization: Bearer <jwt>
Content-Type: application/json
{ "token": "123456" }
The token is the 6-digit code shown in the authenticator app. This confirms that the app
is correctly configured and activates 2FA on the account.
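The 6-digit code is standard TOTP, which is why any authenticator app works. The stdlib sketch below shows how such an app derives the code from the BASE32SECRET in the otpauth:// URL; it mirrors RFC 4226/6238, not Mycelium's internal implementation.

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(base32_secret, at=None, step=30):
    """RFC 6238 TOTP: HOTP with a time-based counter.

    base32_secret is the BASE32SECRET from the otpauth:// URL.
    """
    secret = base64.b32decode(base32_secret, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    return hotp(secret, counter)
```

Both sides compute the same code from the shared secret and the current 30-second window, which is what the validate-app call confirms.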
Logging in with 2FA
When 2FA is enabled, the magic link login response includes totp_required: true. The client
must then call:
POST /_adm/beginners/users/totp/check-token
Authorization: Bearer <jwt>
Content-Type: application/json
{ "token": "123456" }
Only after a successful TOTP check does the session have full access.
Disabling 2FA
POST /_adm/beginners/users/totp/disable
Authorization: Bearer <jwt>
Content-Type: application/json
{ "token": "123456" }
Requires a valid TOTP token to confirm the user’s intent.
Connection strings (service tokens)
A connection string is a long-lived API token tied to a specific account, tenant, and role. Use them for:
- Machine-to-machine calls — scripts, cron jobs, or services that don’t have a user session.
- Telegram Mini Apps — the gateway issues a connection string when a user logs in via Telegram (see Alternative Identity Providers).
- Long-running sessions — when the standard JWT expiry is too short.
Creating a connection string
POST /_adm/beginners/tokens
Authorization: Bearer <jwt>
Content-Type: application/json
{
"tenantId": "a3f1e2d0-1234-4abc-8def-000000000001",
"accountId": "b5e2f3a1-5678-4def-9abc-000000000002",
"role": "manager",
"expiresAt": "2027-01-01T00:00:00Z"
}
Response:
{
"connectionString": "acc=<uuid>;tid=<uuid>;r=manager;edt=2027-01-01T00:00:00Z;sig=<hmac>",
"expiresAt": "2027-01-01T00:00:00Z"
}
Listing your connection strings
GET /_adm/beginners/tokens
Authorization: Bearer <jwt>
Using a connection string
Instead of Authorization: Bearer, send the connection string in its own header:
x-mycelium-connection-string: acc=<uuid>;tid=<uuid>;r=manager;edt=...;sig=...
The gateway checks x-mycelium-connection-string first. If absent, it falls back to
Authorization: Bearer. Do not mix them — a connection string sent as Authorization: Bearer
will fail JWT validation.
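The connection-string format shown in the response above can be split into labelled fields, which is handy for logging or debugging. A parsing sketch; in production treat the string as opaque, since only the gateway can verify the sig field.

```python
def parse_connection_string(cs):
    """Split a connection string into its fields.

    Format (from the response example above):
    acc=<uuid>;tid=<uuid>;r=<role>;edt=<expiry>;sig=<hmac>
    """
    fields = dict(part.split("=", 1) for part in cs.split(";"))
    missing = {"acc", "tid", "r", "edt", "sig"} - fields.keys()
    if missing:
        raise ValueError(f"malformed connection string, missing: {sorted(missing)}")
    return fields

cs = ("acc=b5e2f3a1-5678-4def-9abc-000000000002;"
      "tid=a3f1e2d0-1234-4abc-8def-000000000001;"
      "r=manager;edt=2027-01-01T00:00:00Z;sig=deadbeef")
fields = parse_connection_string(cs)

# Always send it in its own header, never as Authorization: Bearer.
headers = {"x-mycelium-connection-string": cs}
```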
OAuth2 / external providers
If you configure external OAuth2 providers (Google, Microsoft, Auth0), users authenticate directly with the provider and present the provider’s JWT to Mycelium. Mycelium validates the token’s signature using the provider’s JWKS endpoint.
See Configuration → External Authentication for setup instructions.
Fetching your own profile
Any authenticated user can fetch their full profile:
GET /_adm/beginners/profile
Authorization: Bearer <jwt>
This is the same profile that Mycelium injects as x-mycelium-profile into downstream
requests. Useful for debugging or displaying account/tenant information in a frontend.
Account Types and Roles
Mycelium uses a layered identity model:
- An account is the unit of identity.
- An account type describes the purpose and scope of that account.
- A role (SystemActor) determines which administrative operations the account can perform.
- A guest relationship links a User account to a tenant-scoped account, granting it contextual access with a specific permission level.
Quick reference — which account type do I need?
| Scenario | Account type |
|---|---|
| Human end user logging in | User |
| Platform operator / superadmin | Staff |
| Delegated platform administrator | Manager |
| Service, bot, or non-human entity within a tenant | Subscription |
| Service account with a built-in administrative role inside a tenant | RoleAssociated |
| Delegated tenant administrator | TenantManager |
| Internal system-level actor | ActorAssociated |
Account types
Every account in Mycelium has an accountType field. Its value determines what the account
can do and which management operations apply to it.
User
"accountType": "user"
A personal account belonging to a human user. The default type for end users who log in via email/password, magic link, OAuth, or any configured external IdP (e.g. Telegram).
- Has no administrative privileges by default.
- Can belong to multiple tenants simultaneously by being guested into tenant-scoped Subscription accounts (see Tenant membership below).
- Managed by the UsersManager role (approval, activation, archival).
Staff
"accountType": "staff"
A platform-level administrative account for operators who control the entire Mycelium instance. Staff accounts can create tenants, manage platform-wide guest roles, and upgrade or downgrade other accounts. They are not tenant-scoped — they act across all tenants.
The first Staff account is created via the CLI (myc-cli accounts create-seed-account).
See CLI Reference.
Manager
"accountType": "manager"
Similar to Staff but intended for delegated platform management. Managers can create system
accounts and manage tenant membership at the platform level without holding the highest-privilege
Staff designation. Suitable for operations teams that need broad but not full superadmin access.
Subscription
"accountType": { "subscription": { "tenantId": "<uuid>" } }
A tenant-scoped account representing a service, bot, or non-human entity (e.g. an external
application, an automated pipeline, an integration). Created by a TenantManager within a
specific tenant.
User accounts join a tenant by being guested into a Subscription account with a
specific guest role and permission level (see below). The subscription account is the anchor
for all tenant-scoped permissions.
Managed by the SubscriptionsManager role (invite guests, update name and flags).
RoleAssociated
"accountType": {
"roleAssociated": {
"tenantId": "<uuid>",
"roleName": "subscriptions-manager",
"readRoleId": "<uuid>",
"writeRoleId": "<uuid>"
}
}
A Subscription-like account that is pinned to a specific named guest role. Used to create
service accounts that carry a built-in administrative role inside a tenant — the canonical
example is a Subscription Manager account, which is a RoleAssociated account bound to
the SubscriptionsManager system actor role.
This account type is created automatically when calling the
tenant-manager/create-subscription-manager-account endpoint. It is a system-managed type;
you rarely create it manually.
TenantManager
"accountType": { "tenantManager": { "tenantId": "<uuid>" } }
A management account scoped to a specific tenant. Created by tenant owners (TenantOwner role)
to delegate tenant-level administrative tasks. Tenant managers can create and delete subscription
accounts, manage tenant tags, and invite subscription managers within their tenant.
ActorAssociated
"accountType": { "actorAssociated": { "actor": "<SystemActor>" } }
An internal account bound to a specific SystemActor role. Used by Mycelium itself to
represent system-level actors that need a persistent identity (e.g. for audit trails).
You do not create these manually — the platform provisions them as needed.
Tenant membership (guest relationships)
An account type alone does not grant access to a tenant’s resources. Access is established through guest relationships:
User account
└── guested into → Subscription account (tenant-scoped)
└── with a GuestRole + Permission level
└── grants access to downstream routes
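The chain above can be modelled as plain data. An illustrative sketch with hypothetical field names, showing that tenant access falls out of the guest links, not out of the User account itself:

```python
from dataclasses import dataclass

@dataclass
class Subscription:
    tenant_id: str  # the tenant this subscription account is scoped to

@dataclass
class Guest:
    """A guest relationship: User -> Subscription, with role + permission."""
    user_email: str
    subscription: Subscription
    role: str        # GuestRole slug
    permission: str  # "read" or "write"

def tenant_permissions(email, guests):
    """Collect a user's tenant-scoped grants from their guest links."""
    return [
        {"tenant": g.subscription.tenant_id,
         "role": g.role,
         "permission": g.permission}
        for g in guests if g.user_email == email
    ]

sub = Subscription(tenant_id="t1")
guests = [Guest("user@example.com", sub, role="editor", permission="write")]
grants = tenant_permissions("user@example.com", guests)
```

A user with no guest links resolves to an empty grant list, i.e. no tenant access, regardless of account type.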
How a user joins a tenant
- A TenantManager or SubscriptionsManager creates a Subscription account for the tenant.
- The subscription manager invites a user by email (guest_user_to_subscription_account).
- The user receives an invitation email and accepts it.
- The user’s User account is now a guest of the subscription account, holding a GuestRole at a specific permission level (Read or Write).
- When the user sends a request with x-mycelium-tenant-id, Mycelium resolves their profile to include the tenant-scoped permissions from all subscription accounts they are guested into.
Guesting to child accounts
If a Subscription account has child accounts (set up via RoleAssociated accounts), an
AccountManager can further delegate access by inviting a user into a child account (guest_to_children_account), so the user operates only within the narrower scope of that child account rather than the full subscription account.
Permission levels within guest roles
| Permission | What it allows |
|---|---|
| Read | Read-only access within the scope of the role |
| Write | Read and write access within the scope of the role |
Downstream routes declare their required permission level in the route config:
[group]
protectedByRoles = [{ slug = "editor", permission = "write" }]
Users whose guest role carries only Read permission are rejected with 403 on write routes.
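The rejection rule above amounts to a small comparison: Write satisfies both read and write routes, Read satisfies only read routes. A sketch mirroring the protectedByRoles route config (the case-insensitive comparison is an assumption):

```python
def permission_satisfies(granted, required):
    """Write covers read and write routes; Read covers only read routes."""
    order = {"read": 0, "write": 1}
    return order[granted.lower()] >= order[required.lower()]

def route_allows(user_roles, route_rules):
    """Check guest roles against a route's protectedByRoles rules.

    user_roles: (slug, permission) pairs from the caller's guest roles.
    route_rules: dicts mirroring [{ slug = "editor", permission = "write" }].
    """
    return any(
        slug == rule["slug"] and permission_satisfies(perm, rule["permission"])
        for slug, perm in user_roles
        for rule in route_rules
    )

rules = [{"slug": "editor", "permission": "write"}]
```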
Administrative roles (SystemActor)
Every administrative route and JSON-RPC namespace is guarded by a role. In REST, the role
appears as the path segment immediately after /_adm/.
| Role (SystemActor) | URL path | Typical scope |
|---|---|---|
| Beginners | /_adm/beginners/ | Any authenticated user — own profile, tokens, invitations |
| SubscriptionsManager | /_adm/subscriptions-manager/ | Invite guests, manage subscription accounts within a tenant |
| UsersManager | /_adm/users-manager/ | Approve, activate, archive, suspend user accounts (platform) |
| AccountManager | /_adm/account-manager/ | Invite guests to child accounts |
| GuestsManager | /_adm/guests-manager/ | Create, update, delete guest roles |
| GatewayManager | /_adm/gateway-manager/ | Read-only inspection of routes and services |
| SystemManager | /_adm/system-manager/ | Error codes, outbound webhooks (platform-wide) |
| TenantOwner | /_adm/tenant-owner/ | Ownership-level operations on a specific tenant |
| TenantManager | /_adm/tenant-manager/ | Delegated management within a specific tenant |
Staff and Manager accounts additionally access the managers.* JSON-RPC namespace and
REST paths under /_adm/managers/, which sit above the per-tenant role hierarchy.
Beginners is not an administrative role — it is the namespace for self-service operations
any authenticated user may perform.
Role hierarchy
Staff / Manager (platform-wide)
└── create tenants, create system accounts, manage platform guest roles
│
├── TenantOwner (per tenant)
│ ├── create / delete TenantManager accounts
│ ├── manage tenant metadata, archiving, verification
│ └── configure external IdPs (e.g. Telegram bot token)
│
├── TenantManager (per tenant)
│ ├── create / delete Subscription accounts
│ ├── create SubscriptionManager (RoleAssociated) accounts
│ └── manage tenant tags
│
├── SubscriptionsManager (per tenant, via RoleAssociated account)
│ ├── invite guests to Subscription accounts
│ └── create RoleAssociated accounts
│
├── GuestsManager (platform or tenant)
│ └── define guest roles and their permissions
│
├── UsersManager (platform)
│ └── approve / activate / archive User accounts
│
├── AccountManager (per tenant)
│ └── invite guests to child accounts
│
└── GatewayManager (platform)
└── read-only inspection of routes and services
How roles are enforced
When a request arrives at an admin route (e.g. POST /_adm/tenant-owner/...), Mycelium:
- Validates the token (JWT or connection string).
- Resolves the caller’s profile, which includes their account type, tenant memberships, and guest roles.
- Checks whether the resolved profile satisfies the required SystemActor for that route.
- For tenant-scoped operations, also checks the x-mycelium-tenant-id header.
- Returns 403 if the role or permission is missing; forwards to the use-case layer on match.
Seed account
The first Staff account in a fresh installation is created from the CLI before any user
can log in:
myc-cli accounts create-seed-account \
--name "Platform Admin" \
--email admin@example.com
See CLI Reference for the full command reference.
Downstream APIs
This guide explains how to register a backend service with Mycelium and control who can access it.
All service configuration lives in settings/config.toml under [api.services]. Restart the
gateway after any change.
Registering a service
The simplest possible registration:
[api.services]
[[my-service]]
host = "localhost:3000"
protocol = "http"
[[my-service.path]]
group = "public"
path = "/api/*"
methods = ["GET", "POST"]
This tells Mycelium: “Any GET or POST to /api/* should be forwarded to localhost:3000.”
No authentication required — anyone can reach this route.
The service name (my-service) becomes part of how you identify the service internally. It
does not affect the URL path; the path comes from the path field.
Choosing a security group
The group field on each route controls what Mycelium requires before forwarding the request.
public — No authentication
Anyone can access the route. Use for health checks, public APIs, and Telegram webhooks.
[[my-service.path]]
group = "public"
path = "/health"
methods = ["GET"]
authenticated — Valid login token required
The user must be logged in. Mycelium injects their email in x-mycelium-email.
[[my-service.path]]
group = "authenticated"
path = "/profile"
methods = ["GET", "PUT"]
protected — Full identity required
The user must be logged in and have a resolved profile. Mycelium injects the full profile
in x-mycelium-profile. Use this when your service needs to make fine-grained decisions
(e.g. “can this user access this specific resource?”).
[[my-service.path]]
group = "protected"
path = "/dashboard/*"
methods = ["GET"]
protectedByRoles — Specific roles required
Only users with at least one of the listed roles can pass. All others get 403.
[[my-service.path]]
group = { protectedByRoles = [{ slug = "admin" }, { slug = "super-admin" }] }
path = "/admin/*"
methods = ["ALL"]
You can also require a specific permission level:
[[my-service.path]]
group = { protectedByRoles = [{ slug = "editor", permission = "write" }] }
path = "/content/edit/*"
methods = ["POST", "PUT", "DELETE"]
[[my-service.path]]
group = { protectedByRoles = [{ slug = "viewer", permission = "read" }] }
path = "/content/view/*"
methods = ["GET"]
Multiple routes on one service
Each [[service-name.path]] block adds a route. Mix security groups freely:
[[user-service]]
host = "users.internal:4000"
protocol = "http"
[[user-service.path]]
group = "authenticated"
path = "/users/me"
methods = ["GET", "PUT"]
[[user-service.path]]
group = "protected"
path = "/users/preferences"
methods = ["GET", "POST"]
[[user-service.path]]
group = { protectedByRoles = [{ slug = "admin" }] }
path = "/users/admin/*"
methods = ["ALL"]
Authenticating Mycelium to your service (secrets)
If your downstream service requires a token or API key from the caller, define a secret and reference it on the route.
Query parameter:
[[legacy-api]]
host = "legacy.internal:8080"
protocol = "http"
[[legacy-api.secret]]
name = "api-key"
queryParameter = { name = "token", token = { env = "LEGACY_API_KEY" } }
[[legacy-api.path]]
group = "public"
path = "/legacy/*"
methods = ["GET"]
secretName = "api-key"
Authorization header:
[[protected-api.secret]]
name = "bearer-token"
authorizationHeader = { name = "Authorization", prefix = "Bearer ", token = { vault = { path = "myc/services/api", key = "token" } } }
Load balancing across multiple hosts
[[api-service]]
hosts = ["api-01.example.com:8080", "api-02.example.com:8080"]
protocol = "https"
[[api-service.path]]
group = "protected"
path = "/api/*"
methods = ["ALL"]
Webhook routes — identity from request body
Some callers (like Telegram) don’t send a JWT. Instead, the user’s identity is in the request
body. Use identitySource to handle this.
Requires allowedSources — before parsing the body, Mycelium checks that the Host
header matches an allowed source. This prevents attackers from forging webhook calls.
[[telegram-bot]]
host = "bot-service:3000"
protocol = "http"
allowedSources = ["api.telegram.org"]
[[telegram-bot.path]]
group = "protected"
path = "/telegram/webhook"
methods = ["POST"]
identitySource = "telegram"
With this config, Mycelium extracts the Telegram user ID from the message body, looks up
the linked Mycelium account, and injects x-mycelium-profile before forwarding. If the
user hasn’t linked their account, Mycelium returns 401 and the message is not forwarded.
See Alternative Identity Providers for the full Telegram setup journey.
Service discovery (AI agents)
Set discoverable = true to make a service visible to AI agents and LLM-based tooling:
[[data-service]]
host = "data.internal:5000"
protocol = "http"
discoverable = true
description = "Customer data API"
openapiPath = "/api/openapi.json"
healthCheckPath = "/health"
capabilities = ["customer-search", "order-history"]
serviceType = "rest-api"
What headers does my service receive?
| Security group | x-mycelium-email | x-mycelium-profile |
|---|---|---|
| authenticated | Yes | No |
| protected | Yes | Yes |
| protectedByRoles | Yes | Yes |
The x-mycelium-profile value is a Base64-encoded, ZSTD-compressed JSON object. Use the
Python SDK or decode it manually
to read tenant memberships, roles, and access scopes.
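A minimal decoding sketch: the header is Base64-encoded and ZSTD-compressed, and real decompression needs the third-party zstandard package. The decompressor is parameterized here (with a zlib stand-in for the demo) so the sketch stays standard-library only.

```python
import base64
import json
import zlib  # stdlib stand-in so the demo runs; real headers need a ZSTD
             # decompressor, e.g. zstandard.ZstdDecompressor().decompress

def decode_profile(header_value: str, decompress=zlib.decompress) -> dict:
    """x-mycelium-profile: Base64 -> decompress -> JSON object."""
    return json.loads(decompress(base64.b64decode(header_value)))

# Round-trip demo using the zlib stand-in:
profile = {"email": "maria@acme.com", "roles": ["user"]}
header = base64.b64encode(zlib.compress(json.dumps(profile).encode())).decode()
assert decode_profile(header) == profile
```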
Complete example
[api.services]
# Public health check
[[health]]
host = "localhost:8080"
protocol = "http"
[[health.path]]
group = "public"
path = "/health"
methods = ["GET"]
# User service — mixed access levels
[[user-service]]
host = "users.internal:4000"
protocol = "http"
[[user-service.path]]
group = "authenticated"
path = "/users/me"
methods = ["GET", "PUT"]
[[user-service.path]]
group = { protectedByRoles = [{ slug = "admin" }] }
path = "/users/admin/*"
methods = ["ALL"]
# Telegram webhook
[[support-bot]]
host = "bot.internal:3000"
protocol = "http"
allowedSources = ["api.telegram.org"]
[[support-bot.path]]
group = "protected"
path = "/telegram/webhook"
methods = ["POST"]
identitySource = "telegram"
Troubleshooting
Route not matching — Check that the path has a wildcard (/api/*) if you want to match
subpaths. /api/ only matches that exact path.
401 on a protected route — The JWT or connection string is missing, expired, or invalid.
Check the Authorization: Bearer <token> or x-mycelium-connection-string header.
403 on a role-protected route — The user is authenticated but doesn’t have the required role. Check role assignment in the Mycelium admin panel.
Downstream unreachable — Verify host, protocol, and that the service is actually running.
Use acceptInsecureRouting = true on the route if the downstream uses a self-signed TLS cert.
Reference — service-level fields
| Field | Required | Description |
|---|---|---|
| host | Yes (or hosts) | Single downstream host with port |
| hosts | Yes (or host) | Multiple hosts for load balancing |
| protocol | Yes | "http" or "https" |
| allowedSources | Required when identitySource is set | Allowed Host headers (supports wildcards) |
| discoverable | No | Expose service to AI agents |
| description | No | Human-readable description |
| openapiPath | No | Path to OpenAPI spec |
| healthCheckPath | No | Health check endpoint |
| capabilities | No | Array of capability tags |
| serviceType | No | e.g. "rest-api" |
Reference — route-level fields
| Field | Required | Description |
|---|---|---|
| group | Yes | Security group (see above) |
| path | Yes | URL path pattern, supports wildcards |
| methods | Yes | HTTP methods, or ["ALL"] |
| secretName | No | Reference to a secret defined at service level |
| identitySource | No | Body-based IdP. Currently: "telegram" |
| acceptInsecureRouting | No | Allow self-signed TLS certs on downstream |
Alternative Identity Providers
By default, Mycelium authenticates users through email + magic link. Alternative IdPs let users prove who they are using an account they already have on another platform — for example, Telegram.
When authentication succeeds, the downstream service receives the same x-mycelium-profile
header it would receive from any other authentication method. From the downstream’s perspective,
the identity source is irrelevant: a user is a user.
Authentication tokens in Mycelium
Before diving into alternative IdPs, it is important to understand that Mycelium has two different token types, each sent in a different header.
JWT — Authorization: Bearer <jwt>
Issued by email+password login and magic-link verification. A standard JSON Web Token.
Clients send it as Authorization: Bearer <jwt>. The gateway verifies the JWT signature
to extract the caller’s email, then resolves the full profile.
Connection string — x-mycelium-connection-string: <string>
A Mycelium-native token with the format acc=<uuid>;tid=<uuid>;r=<role>;edt=<datetime>;sig=<hmac>.
Issued by Telegram login (and service token generation). Clients send it as the custom header
x-mycelium-connection-string: <string>. The gateway fetches the token from the database and
resolves the profile from the stored scope.
Both token types are accepted on all admin endpoints and downstream routing rules. The gateway
checks for x-mycelium-connection-string first; if absent, it falls back to Authorization: Bearer.
Do not mix them up: a connection string sent as Authorization: Bearer will fail JWT signature
validation and return 401.
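When debugging, it can help to split a connection string into its named fields. A sketch based on the format shown above — note this only parses; the sig is an HMAC that only the gateway can verify.

```python
def parse_connection_string(cs: str) -> dict:
    """Split acc=...;tid=...;r=...;edt=...;sig=... into its fields.

    Splits each part on the FIRST '=' only, so values containing '='
    (e.g. Base64 padding in sig) survive intact. Parsing only: the
    sig HMAC can only be verified server-side by the gateway.
    """
    return dict(part.split("=", 1) for part in cs.split(";"))

fields = parse_connection_string(
    "acc=9b2f;tid=a3f1;r=user;edt=2026-04-21T10:00:00-03:00;sig=deadbeef"
)
assert fields["r"] == "user"
assert fields["edt"] == "2026-04-21T10:00:00-03:00"
```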
How it works — the big picture
Before a user can authenticate via an alternative IdP, two things must be true:
- The tenant admin has configured the IdP — each IdP requires credentials specific to that tenant (e.g. a Telegram bot token). Without this, authentication is not possible.
- The user has linked their IdP identity to their Mycelium account — this is a one-time step where the user says “my Telegram account is me”. After linking, the user can authenticate via Telegram any time, in any tenant they belong to.
Think of it like adding a phone number to a bank account: the bank (Mycelium) holds your regular credentials, but you can also prove your identity by receiving an SMS to your registered number (Telegram). You register the number once; after that, it just works.
Concepts
Identity linking vs. authentication
An alternative IdP works in two stages:
- Linking — The user connects their IdP identity to their Mycelium account. This is done once. The link is stored on the personal account and is global across tenants.
- Authentication — The user presents their IdP credential. Mycelium verifies it and either issues a connection string or, for webhook routes, resolves the profile directly from the incoming request body.
A user who has not linked their IdP identity cannot authenticate via that IdP.
Personal vs. subscription accounts
IdP links are stored on personal accounts. A personal account belongs to a person, not to a specific tenant. When a user belongs to multiple tenants (e.g. a contractor who works for two companies on the same Mycelium installation), they link their Telegram once and it works for all tenants they are a member of.
Subscription accounts are tenant-scoped and cannot hold IdP links.
Per-tenant configuration
Some IdPs (Telegram) require credentials that are specific to a tenant — for example, a Telegram bot token. A bot is created by the company, not by the user. Each company (tenant) has its own bot, so each tenant stores its own credentials.
This configuration is required only for operations that verify Telegram credentials: linking a new identity and issuing login tokens. It is not required for body-based identity resolution in webhook routes — that lookup is global (see What works without tenant config below).
Telegram
Prerequisites
- A Telegram bot created via @BotFather. After creating the bot you receive a bot token (looks like 7123456789:AAF...). Keep it secret.
- A webhook secret — a random string you choose (16–256 characters). This is not a Telegram credential; it is a shared secret between Mycelium and Telegram so that Mycelium can verify that incoming webhook calls really come from Telegram and not from an attacker.
- The Mycelium gateway must be reachable from the internet for the webhook use case. For the login-only use case, it only needs to be reachable from the users’ devices.
Admin Journey: Provisioning the Tenant
Who does this: The tenant owner — the person or team responsible for the company’s Mycelium tenant. This is done once, before any user can link or log in.
Concrete example: Acme Corp runs a Mycelium installation. Their IT admin, Carlos, manages the
tenant. Carlos created a Telegram bot called @AcmeHRBot via BotFather and received the bot token.
He also generated a random webhook secret (openssl rand -hex 32). Now he needs to store these in
Mycelium so the gateway can use them.
Step 1 — Store bot credentials in Mycelium
Carlos calls this endpoint using his JWT (issued when he logged in via magic-link):
POST /_adm/tenant-owner/telegram/config
Authorization: Bearer <carlos-jwt>
x-mycelium-tenant-id: a3f1e2d0-1234-4abc-8def-000000000001
Content-Type: application/json
{
"botToken": "7123456789:AAFexampleBotTokenFromBotFather",
"webhookSecret": "4b9c2e1a8f3d7e0c5b2a9f6e3d1c8b5a"
}
- Response: 204 No Content.
- Both values are encrypted with AES-256-GCM before storage. They are never readable again through the API — if lost, they must be re-submitted.
- Only accounts with the tenant-owner role can call this endpoint.
After this step, users can link their Telegram to their Mycelium accounts and log in.
Step 2 — Register the webhook with Telegram (webhook use case only)
Skip this step if you only need the login flow (Use Case A below).
Carlos tells Telegram where to send bot updates. He uses the Telegram Bot API directly:
curl -X POST "https://api.telegram.org/bot7123456789:AAFexampleBotTokenFromBotFather/setWebhook" \
-H "Content-Type: application/json" \
-d '{
"url": "https://gateway.acme.com/auth/telegram/webhook/a3f1e2d0-1234-4abc-8def-000000000001",
"secret_token": "4b9c2e1a8f3d7e0c5b2a9f6e3d1c8b5a"
}'
From this point on, every time someone sends a message to @AcmeHRBot, Telegram will POST the
update to that URL and include the header X-Telegram-Bot-Api-Secret-Token: 4b9c2e1.... Mycelium
verifies that header before doing anything with the update.
User Journey: Linking a Telegram Identity
Who does this: Each end user, once, using a Telegram Mini App built by the company.
Concrete example: Maria is an Acme employee. She has a Mycelium account (registered via magic
link to maria@acme.com) and belongs to Acme’s tenant. Now she wants to use @AcmeHRBot to check
her vacation balance. Before she can do anything with the bot, she needs to link her Telegram
account to her Mycelium account.
Carlos built a Telegram Mini App (a small web page that opens inside Telegram). When Maria opens
it, the app reads the cryptographically signed initData that Telegram injects into every Mini App
session. This initData proves to Mycelium that the Telegram user opening the app is who they claim
to be, because it is signed with the bot token.
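The signature check follows Telegram's published initData scheme. A sketch of that documented algorithm (not Mycelium's internal code) for reference; note that a real check, like Mycelium's, should also enforce the auth_date expiry window, which this sketch omits.

```python
import hashlib
import hmac
from urllib.parse import parse_qsl

def verify_init_data(init_data: str, bot_token: str) -> bool:
    """Validate Mini App initData per Telegram's published HMAC scheme.

    secret_key = HMAC-SHA256(key="WebAppData", msg=bot_token)
    expected   = HMAC-SHA256(key=secret_key, msg=data_check_string)
    where data_check_string is every field except "hash", sorted by key
    and joined as "key=value" lines.
    """
    fields = dict(parse_qsl(init_data))
    received = fields.pop("hash", "")
    check_string = "\n".join(f"{k}={v}" for k, v in sorted(fields.items()))
    secret = hmac.new(b"WebAppData", bot_token.encode(), hashlib.sha256).digest()
    expected = hmac.new(secret, check_string.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received)
```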
The Mini App sends (using Maria’s JWT from her original magic-link login):
POST /auth/telegram/link
Authorization: Bearer <maria-jwt>
x-mycelium-tenant-id: a3f1e2d0-1234-4abc-8def-000000000001
Content-Type: application/json
{
"initData": "query_id=AAH...&user=%7B%22id%22%3A98765432%2C%22username%22%3A%22maria_acme%22...&hash=abc123..."
}
What Mycelium does:
- Verifies the HMAC in initData using Acme’s bot token — confirms this is a real Telegram user.
- Extracts Maria’s Telegram user ID (98765432) from initData.
- Stores { id: 98765432, username: "maria_acme" } in Maria’s personal account metadata.

Maria only does this once. The link is stored on her personal account and is valid across all tenants she belongs to.

- Returns 409 if Maria already has a Telegram link, or if Telegram ID 98765432 is already linked to another Mycelium account.
- Returns 422 if Carlos hasn’t completed admin Step 1.
To remove the link later:
DELETE /auth/telegram/link
Authorization: Bearer <maria-jwt>
Use Case A — Employee Mini App Calling Protected APIs
The problem: Acme built a web app inside Telegram (@AcmeHRBot) where employees can check
their remaining vacation days, submit expense reports, and view assigned tasks. The backend API
that serves this data is protected by Mycelium: it only responds to authenticated users and injects
their profile into every request. The Mini App cannot ask Maria to type her email and password —
she’s already inside Telegram. Telegram is the identity.
The solution: The Mini App exchanges Telegram’s initData for a Mycelium connection string.
That connection string is sent in the x-mycelium-connection-string header for all subsequent
API calls. The backend API sees a normal authenticated request and never knows the user logged
in via Telegram.
Full journey
Maria opens the Mini App inside @AcmeHRBot on her phone
│
│ Telegram injects window.Telegram.WebApp.initData into the Mini App
│ (this string is signed by Telegram and expires in ~24 hours)
│
▼
Mini App calls the login endpoint (no credentials needed — it's public):
POST /auth/telegram/login/a3f1e2d0-1234-4abc-8def-000000000001
{ "initData": "query_id=AAH...&user=...&hash=abc123..." }
│
│ Mycelium:
│ 1. Verifies the HMAC using Acme's bot token → confirms Maria is who she claims
│ 2. Extracts Telegram user ID 98765432
│ 3. Looks up which Mycelium account has this Telegram ID → finds maria@acme.com
│ 4. Issues a connection string scoped to Acme's tenant
│
▼
{ "connectionString": "acc=...;tid=...;sig=...", "expiresAt": "2026-04-21T10:00:00-03:00" }
│
│ Mini App stores the connection string in memory for this session
│
▼
Mini App calls the HR API:
GET /hr-api/vacation-balance
x-mycelium-connection-string: acc=...;tid=...;sig=...
│
│ Mycelium gateway:
│ - Validates the connection string
│ - Resolves Maria's full profile (account, tenant membership, roles)
│ - Injects x-mycelium-profile into the forwarded request
│
▼
HR API service receives:
GET /vacation-balance
x-mycelium-profile: <base64 compressed JSON with Maria's identity, roles, tenant>
│
│ HR API reads the profile, finds Maria's account ID, returns her vacation balance
│
▼
Mini App displays: "You have 12 vacation days remaining."
Login endpoint reference
POST /auth/telegram/login/{tenant_id}
Content-Type: application/json
{
"initData": "<Telegram Mini App initData string>"
}
Response on success:
{
"connectionString": "acc=uuid;tid=uuid;r=user;edt=2026-04-21T10:00:00-03:00;sig=...",
"expiresAt": "2026-04-21T10:00:00-03:00"
}
- Public endpoint — no Authorization header required.
- Returns 401 if initData is invalid or expired.
- Returns 404 if the Telegram user ID is not linked to any account in this tenant.
- Returns 422 if the tenant has not completed admin Step 1.
Gateway configuration
No special gateway config is needed for this use case. The HR API uses the same groups as any other service:
[[hr-api]]
host = "hr-service:3000"
protocol = "http"
[[hr-api.path]]
group = "protected" # requires a valid profile
path = "/hr-api/*"
methods = ["ALL"]
Use Case B — Customer Support Bot with Authenticated Messages
The problem: Acme runs a customer support bot (@AcmeSupportBot). When a customer writes
“my order hasn’t arrived”, the support handler needs to know who is writing — their account,
subscription tier, and open tickets. Without identity, the bot can only reply generically.
With identity, it can reply “Hi Maria, your order #4521 shipped yesterday and arrives tomorrow.”
The support handler is a downstream service behind Mycelium. It cannot issue JWTs or do its own authentication — it just wants to receive the message and know who sent it. Mycelium handles the identity resolution transparently.
The solution: Configure a gateway route with identitySource = "telegram". When Telegram
sends an update, Mycelium extracts the sender’s Telegram ID from the message body, looks up
their linked Mycelium account, and injects their profile before forwarding the update to the
support handler.
The support handler never sees unauthenticated messages on this route. If the sender hasn’t
linked their Telegram account, Mycelium returns 401 and the message is not forwarded.
Full journey
Customer (Maria) sends "my order hasn't arrived" to @AcmeSupportBot
│
│ Telegram servers POST the update to Mycelium:
│
▼
POST /auth/telegram/webhook/a3f1e2d0-1234-4abc-8def-000000000001
X-Telegram-Bot-Api-Secret-Token: 4b9c2e1a8f3d7e0c5b2a9f6e3d1c8b5a
{
"update_id": 100000001,
"message": {
"from": { "id": 98765432, "username": "maria_acme" },
"text": "my order hasn't arrived"
}
}
│
│ Mycelium verifies the webhook secret → confirms this really came from Telegram
│ Responds 200 OK immediately (Telegram requires this, or it will retry)
│
▼
Gateway route (identitySource = "telegram") takes over:
│
│ 1. Buffers the request body
│ 2. Extracts from.id = 98765432
│ 3. Looks up which Mycelium account has Telegram ID 98765432 → maria@acme.com
│ 4. Loads Maria's full profile (account, tenant, roles)
│ 5. Injects x-mycelium-profile into the forwarded request
│
▼
Support handler receives:
POST /telegram/webhook
x-mycelium-profile: <base64 compressed JSON>
{
"update_id": 100000001,
"message": { "from": { "id": 98765432, ... }, "text": "my order hasn't arrived" }
}
│
│ Support handler reads the profile → finds Maria's account → fetches her orders
│ Replies via Telegram Bot API: "Hi Maria, your order #4521 shipped yesterday."
Gateway configuration
[[acme-support-bot]]
host = "support-handler:3000"
protocol = "http"
allowedSources = ["api.telegram.org"] # only accept requests from Telegram's servers
[[acme-support-bot.path]]
group = "protected"
path = "/telegram/webhook"
methods = ["POST"]
identitySource = "telegram" # resolve identity from the message body
Two fields are required:
- allowedSources — before even parsing the body, Mycelium checks the Host header of the incoming request. Only requests whose Host matches this list are processed. This prevents an attacker from POSTing fake updates directly to your service. Supports wildcards: "*.telegram.org", "10.0.0.*".
- identitySource = "telegram" — tells Mycelium to extract the sender’s identity from the body (via message.from.id or the equivalent field in other update types) instead of looking for an Authorization or x-mycelium-connection-string header.
If allowedSources is missing and identitySource is set, the gateway rejects the request.
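The Host check can be pictured as simple wildcard matching. A sketch assuming glob-style semantics (the gateway's exact matching rules may differ):

```python
from fnmatch import fnmatch

def host_allowed(host: str, allowed_sources: list[str]) -> bool:
    """Does the request's Host header match any allowed pattern?

    Assumes glob-style wildcards; illustrative only.
    """
    return any(fnmatch(host, pattern) for pattern in allowed_sources)

assert host_allowed("api.telegram.org", ["api.telegram.org"])
assert host_allowed("sub.telegram.org", ["*.telegram.org"])
assert not host_allowed("evil.example.com", ["*.telegram.org", "10.0.0.*"])
```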
Important constraints
- The user must have previously linked their Telegram account. Messages from users who haven’t linked return 401. Consider having the bot reply with a link to the Mini App where the user can complete the linking step.
- The group field still applies. Use protected if all you need is the user’s profile. Use protectedByRoles if you want to further restrict which users can interact with the bot.
- Your support handler receives the full Telegram Update JSON unchanged in the request body. The profile is injected as a header, not embedded in the body.
What works without tenant config
The problem: Acme has two tenants — acme-hr and acme-operations. The HR tenant has
Telegram configured; the operations tenant does not. Maria belongs to both tenants and has
already linked her Telegram via the HR tenant’s Mini App.
What can Maria do in the operations tenant?
| Operation | Requires tenant config? | Maria in acme-operations |
|---|---|---|
| Link Telegram identity | Yes — verifies initData HMAC against the tenant’s bot token | Cannot link via this tenant |
| Login via Telegram | Yes — same HMAC verification | Cannot log in via Telegram to this tenant |
| Appear in webhook route identity resolution | No — global lookup by Telegram user ID | Works — her link from acme-hr is found globally |
The key insight: the link is stored on Maria’s personal account, not on the tenant. When
Mycelium receives a webhook update from operations’ bot and finds from.id = 98765432, it looks
up globally — finds Maria’s personal account — and injects her profile. The operations tenant
never needed to know about Telegram.
acme-hr (Telegram configured) acme-operations (no Telegram config)
│ │
│ Maria links via HR Mini App │ Operations has a support bot
│ → Telegram ID 98765432 │ → route: identitySource = "telegram"
│ stored on Maria's personal account │
│ │ Telegram sends update with from.id 98765432
│ │ → Mycelium finds Maria globally ✓
│ │ → x-mycelium-profile injected ✓
│ │
│ Maria cannot link via operations ✗ │ Maria cannot login via Telegram here ✗
│ Maria cannot login via Telegram here ✗ │ (no bot token to verify initData)
In practice: If you are building a webhook-only integration (Use Case B) for a tenant, you do not need to configure Telegram for that tenant — as long as users have linked in some other tenant. If you need login (Use Case A) for that tenant, the tenant must have its own bot and must complete the admin provisioning step.
Comparison: Use Case A vs. Use Case B
| | Use Case A — Login + API calls | Use Case B — Webhook identity resolution |
|---|---|---|
| Example | Mini App calling an HR API | Support bot knowing who sent a message |
| Who calls Mycelium | Your Mini App / AI agent | Telegram’s servers |
| Authentication mechanism | initData → connection string | Webhook secret + sender ID from body |
| Tenant Telegram config required | Yes — every tenant used for login | No — global identity lookup |
| Gateway route config | Standard authenticated/protected | allowedSources + identitySource = "telegram" |
| Webhook registration with Telegram | Not required | Required (setWebhook) |
| User must have linked identity | Yes — via any configured tenant | Yes — via any configured tenant |
| Connection string issued | Yes — reused for the session | No — identity re-resolved per request |
| What the downstream receives | x-mycelium-profile on any route | Full Telegram Update body + x-mycelium-profile |
Troubleshooting
422 telegram_not_configured_for_tenant — The tenant has not completed admin Step 1 (POST /_adm/tenant-owner/telegram/config). This error appears on link and login. It does not appear on webhook routes (identity resolution there is global and does not need tenant config).
User is a guest in a tenant with no Telegram config — Webhook routes with identitySource = "telegram" still work if the user previously linked their Telegram identity via any other tenant. Login and linking via the unconfigured tenant are not possible until the tenant owner completes admin Step 1.
401 invalid_telegram_init_data — The initData HMAC check failed. Causes: the wrong bot token stored in Mycelium, expired initData (Telegram signs initData with a short-lived timestamp), or the string was modified in transit. Re-read initData from window.Telegram.WebApp.initData and retry immediately.
404 on login — The Telegram user ID extracted from initData is not linked to any Mycelium account in this tenant. The user must complete the linking step first (open the Mini App that calls POST /auth/telegram/link).
401 on webhook route — message not forwarded — The sender has not linked their Telegram account. The gateway returns 401 and does not forward the update to your service. Handle this in your bot by detecting that the webhook call did not reach your service (or implement a fallback public route) and sending the user a message with a link to the linking Mini App.
401 invalid_webhook_secret — The X-Telegram-Bot-Api-Secret-Token header sent by Telegram does not match the secret stored in Mycelium. Verify that the webhookSecret you supplied in Step 1 exactly matches the secret_token you passed to setWebhook. They must be identical strings.
allowedSources not working as expected — allowedSources checks the Host header, not Origin. Telegram’s servers send requests with Host: api.telegram.org. Origin is a browser header sent by CORS preflight requests; Telegram never sends it. CORS (allowedOrigins) is completely independent and irrelevant for webhook routes. See Downstream APIs.
Response Callbacks
After Mycelium forwards a request to a downstream service and receives a response, it can optionally run callbacks — side effects triggered by the response. Use callbacks to log metrics, notify third-party services, or trigger async workflows without modifying the downstream service.
In the default fireAndForget mode, callbacks run after the response has been returned to the caller and never delay it (the parallel and sequential modes described below do wait). In no mode can a callback modify the response.
How it works
Client → Mycelium → Downstream service
↓ response
Mycelium returns response to client
↓ (in parallel or fire-and-forget)
Callbacks execute
Defining a callback
Callbacks are defined in config.toml under [api.callbacks], then referenced by name on
individual routes.
Rhai callback — inline script
Rhai is an embedded scripting language. Write the script directly in the config file. No external files or interpreters required.
[api.callbacks]
[[callback]]
name = "error-monitor"
type = "rhai"
script = """
if status_code >= 500 {
log_error("Server error: " + status_code);
}
if duration_ms > 1000 {
log_warn("Slow response: " + duration_ms + "ms");
}
"""
timeoutMs = 1000
Available variables inside the script: status_code, duration_ms, headers (map),
method, upstream_path. Logging functions: log_info, log_warn, log_error.
HTTP callback — POST the response context to a URL
[[callback]]
name = "audit-log"
type = "http"
url = "https://audit.internal/events"
method = "POST" # default
timeoutMs = 3000
retryCount = 3
retryIntervalMs = 1000
Python callback — run a script
[[callback]]
name = "metrics-push"
type = "python"
scriptPath = "/opt/mycelium/callbacks/push_metrics.py"
pythonPath = "/usr/bin/python3.12"
timeoutMs = 5000
JavaScript callback — run a Node.js script
[[callback]]
name = "slack-notify"
type = "javascript"
scriptPath = "/opt/mycelium/callbacks/notify_slack.js"
nodePath = "/usr/bin/node"
timeoutMs = 3000
Attaching a callback to a route
Reference the callback by name in the route’s callbacks field:
[api.services]
[[my-service]]
host = "localhost:3000"
protocol = "http"
[[my-service.path]]
group = "protected"
path = "/api/*"
methods = ["POST", "PUT"]
callbacks = ["audit-log", "metrics-push"]
Multiple callbacks can be attached to the same route.
Filtering which responses trigger the callback
By default, a callback runs for every response. Use filters to narrow this:
[[callback]]
name = "error-alert"
type = "http"
url = "https://alerts.internal/errors"
# Only trigger on 5xx responses
triggeringStatusCodes = { oneof = [500, 502, 503, 504] }
# Only trigger on POST and DELETE
triggeringMethods = { oneof = ["POST", "DELETE"] }
# Only trigger if the response has a specific header
triggeringHeaders = { oneof = { "X-Error-Code" = "PAYMENT_FAILED" } }
Filter statement types:
- oneof — at least one value must match
- allof — all values must match
- noneof — none of the values may match
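The three statement types can be pictured as a small evaluator. An illustrative sketch, not the gateway's implementation:

```python
def filter_passes(spec: dict, actual: set) -> bool:
    """Evaluate a oneof/allof/noneof filter against observed values (sketch)."""
    if "oneof" in spec:
        return any(v in actual for v in spec["oneof"])
    if "allof" in spec:
        return all(v in actual for v in spec["allof"])
    if "noneof" in spec:
        return all(v not in actual for v in spec["noneof"])
    return True  # no filter: callback runs for every response

# A 502 response matches the 5xx alert filter; a 200 does not.
assert filter_passes({"oneof": [500, 502, 503, 504]}, {502})
assert not filter_passes({"oneof": [500, 502, 503, 504]}, {200})
assert filter_passes({"noneof": ["GET"]}, {"POST"})
```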
Execution mode
Control how callbacks run globally:
[api]
callbackExecutionMode = "fireAndForget" # default
| Mode | Behavior |
|---|---|
| fireAndForget | Callbacks run in background tasks; gateway does not wait for them |
| parallel | All callbacks run concurrently; gateway waits for all to finish |
| sequential | Callbacks run one after another; gateway waits |
Use fireAndForget (default) when callback latency should not affect response time. Use
sequential when order matters (e.g., log before notify).
What the callback receives
Each callback receives a context object with information about the completed request:
| Field | Description |
|---|---|
| status_code | HTTP status code returned by the downstream service |
| response_headers | Response headers |
| duration_ms | Time from gateway forwarding to downstream response |
| upstream_path | The path the client called |
| downstream_url | The URL Mycelium forwarded to |
| method | HTTP method |
| timestamp | ISO 8601 timestamp |
| request_id | Value of x-mycelium-request-id if present |
| client_ip | Caller’s IP address |
| user_info | Authenticated user info (email, account ID) — present when route is authenticated or higher |
| security_group | The security group that was applied |
For HTTP callbacks, this context is sent as a JSON POST body. For Python / JavaScript callbacks, the context is passed as a JSON-serialized argument. For Rhai callbacks, these fields are available as global variables in the script.
Reference — callback fields
| Field | Type | Required | Description |
|---|---|---|---|
name | string | Yes | Unique name — used to reference the callback from routes |
type | rhai / http / python / javascript | Yes | Callback engine |
timeoutMs | integer | No | Max execution time in ms (default: 5000). Ignored in fireAndForget mode |
retryCount | integer | No | How many times to retry on failure (default: 3) |
retryIntervalMs | integer | No | Wait between retries in ms (default: 1000) |
script | string | Rhai only | Inline Rhai script source |
url | string | HTTP only | Target URL |
method | string | HTTP only | POST, PUT, PATCH, or DELETE (default: POST) |
scriptPath | path | Python / JavaScript only | Path to script file |
pythonPath | path | Python only | Interpreter path (default: system python3) |
nodePath | path | JavaScript only | Node.js path (default: system node) |
triggeringMethods | object | No | Filter by HTTP method (oneof, allof, noneof) |
triggeringStatusCodes | object | No | Filter by response status code |
triggeringHeaders | object | No | Filter by response header key/value |
Outbound Webhooks
Mycelium can push notifications to external systems when specific events occur inside the gateway. This is the outbound webhook system — distinct from the Telegram webhook described in Alternative Identity Providers, which handles inbound calls from Telegram’s servers.
How it works
When an event fires (e.g. a new user account is created), Mycelium delivers a POST request to all registered webhook URLs that are listening for that event. The delivery runs in the background and does not block the original operation.
Event fires inside Mycelium
│
▼
Webhook dispatcher looks up registered listeners for this event type
│
▼
POST <webhook_url>
Content-Type: application/json
{ "event": "userAccount.created", "payload": { ... } }
Delivery is retried up to a configured maximum number of attempts
(core.webhook.maxAttempts in config.toml).
Configuration
Global webhook delivery settings are in config.toml under [core.webhook]:
[core.webhook]
acceptInvalidCertificates = false # set true in dev with self-signed certs
consumeIntervalInSecs = 30 # how often the dispatcher polls for pending deliveries
consumeBatchSize = 25 # how many deliveries to process per poll cycle
maxAttempts = 5 # retry limit before marking a delivery as failed
Registering a webhook
Webhooks are managed through the systemManager.webhooks.* JSON-RPC methods or the equivalent
REST routes under /_adm/system-manager/webhooks/. Requires the system-manager role.
Via JSON-RPC
{
"jsonrpc": "2.0",
"method": "systemManager.webhooks.create",
"params": {
"url": "https://notify.internal/mycelium-events",
"trigger": "userAccount.created",
"isActive": true
},
"id": 1
}
Via REST
POST /_adm/system-manager/webhooks
Authorization: Bearer <jwt>
Content-Type: application/json
{
"url": "https://notify.internal/mycelium-events",
"trigger": "userAccount.created",
"isActive": true
}
Event types
| Event | Fires when |
|---|---|
subscriptionAccount.created | A new subscription account is created within a tenant |
subscriptionAccount.updated | A subscription account’s name or flags are changed |
subscriptionAccount.deleted | A subscription account is deleted |
userAccount.created | A new personal user account is registered |
userAccount.updated | A user account’s name is changed |
userAccount.deleted | A user account is deleted |
Managing webhooks
| JSON-RPC method | REST path | Description |
|---|---|---|
systemManager.webhooks.create | POST /_adm/system-manager/webhooks | Register a new webhook |
systemManager.webhooks.list | GET /_adm/system-manager/webhooks | List registered webhooks |
systemManager.webhooks.update | PATCH /_adm/system-manager/webhooks/{id} | Update URL, trigger, or active status |
systemManager.webhooks.delete | DELETE /_adm/system-manager/webhooks/{id} | Remove a webhook |
Delivery payload
Each POST to the registered URL includes a JSON body with the event type and a payload describing what changed. The exact payload shape depends on the event type.
Example — userAccount.created:
{
"event": "userAccount.created",
"occurredAt": "2026-04-20T14:30:00Z",
"payload": {
"accountId": "a1b2c3d4-...",
"email": "alice@example.com",
"name": "Alice"
}
}
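A receiving endpoint can branch on the event field before touching the payload. A minimal dispatch sketch (the handler actions are illustrative):

```python
import json

def handle_delivery(body: bytes) -> str:
    """Dispatch one outbound-webhook delivery by its event type."""
    delivery = json.loads(body)
    event, payload = delivery["event"], delivery["payload"]
    if event == "userAccount.created":
        return f"provision user {payload['email']}"
    if event == "userAccount.deleted":
        return f"deprovision account {payload['accountId']}"
    return f"ignored {event}"  # unknown or unhandled event types
```

Because delivery is retried up to maxAttempts, handlers should be idempotent: processing the same delivery twice must not, say, provision a user twice.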
Security
Webhook URLs should use HTTPS. Mycelium does not add a signature header to outbound webhook calls by default. If your endpoint needs to verify the source, place it behind a route that requires a secret (see Downstream APIs).
To accept self-signed certificates during development, set
acceptInvalidCertificates = true in [core.webhook].
Error Codes
Mycelium has a built-in error code registry. Error codes give structured names to domain-specific errors so that clients can handle them programmatically rather than parsing error message strings.
What error codes are
A Mycelium error code is a short, opaque identifier (e.g. MYC00023) paired with a human-readable
message and optional detail text. When a use-case returns a domain error with a code, Mycelium
includes that code in the HTTP response body:
{
"message": "Account not found.",
"code": "MYC00023",
"details": "No account with this ID exists in the requested tenant."
}
Clients that need to distinguish between, say, “account not found” and “account archived” can
switch on code rather than parsing the message string.
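In client code that becomes a small dispatch on the code field. A sketch (MYC00023 is the documented example above; the SVC prefix follows the custom-code example later on, and the fallback covers unknown codes):

```python
def classify_error(body: dict) -> str:
    """Branch on the structured code rather than the message text."""
    code = body.get("code", "")
    if code == "MYC00023":
        return "account-not-found"
    if code.startswith("SVC"):
        return "service-defined"
    return "unknown"
```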
Native error codes
The built-in (native) error codes are seeded into the database on first run using the CLI:
myc-cli native-errors init
This command is run once during installation. It populates the database with all error codes that the core domain layer uses internally. Without this step, error responses that carry domain codes will have no human-readable message attached.
See CLI Reference for full usage.
Custom error codes
You can define additional error codes for your own downstream services. This lets you publish a shared error vocabulary between the gateway and the services behind it.
Custom codes are managed through the systemManager.errorCodes.* JSON-RPC methods or the
equivalent REST routes. Requires the system-manager role.
Create a custom error code
{
"jsonrpc": "2.0",
"method": "systemManager.errorCodes.create",
"params": {
"code": "SVC00001",
"message": "User has exceeded their rate limit.",
"details": "The account has made more requests than allowed in the current window."
},
"id": 1
}
List all error codes
{
"jsonrpc": "2.0",
"method": "systemManager.errorCodes.list",
"params": {},
"id": 1
}
Error code API reference
| JSON-RPC method | Description |
|---|---|
systemManager.errorCodes.create | Register a new error code |
systemManager.errorCodes.list | List all error codes (native + custom) |
systemManager.errorCodes.get | Get a single error code by code string |
systemManager.errorCodes.updateMessageAndDetails | Update an existing code’s message and details |
systemManager.errorCodes.delete | Remove a custom error code |
Native error codes (prefixed MYC) cannot be deleted — only their message and details can be
updated if you want to localize them.
MCP — AI Agent Integration
Mycelium exposes a Model Context Protocol (MCP) endpoint that lets AI assistants (Claude, GPT, and any MCP-compatible agent) call your downstream APIs as tools — without needing direct access to your services.
What is MCP?
MCP is an open protocol for connecting AI assistants to external tools and data. With Mycelium’s MCP support, an AI agent can:
- Discover which operations your downstream services expose.
- Call those operations on behalf of an authenticated user.
- Receive the results and incorporate them into its reply to the user.
The AI never bypasses Mycelium’s authentication or authorization. Every tool call is subject to the same security checks as a direct API request from a client.
Prerequisites
For MCP to discover operations, your downstream services must:
- Expose an OpenAPI spec at a known path.
- Be registered in Mycelium’s config with discoverable = true and openapiPath set.
[[my-service]]
host = "api.internal:4000"
protocol = "http"
discoverable = true
openapiPath = "/api/openapi.json"
description = "Customer data API"
capabilities = ["customer-search", "order-history"]
[[my-service.path]]
group = "protected"
path = "/api/*"
methods = ["ALL"]
The MCP endpoint
POST /mcp
This single endpoint handles all MCP JSON-RPC requests. It supports:
| MCP method | What it does |
|---|---|
initialize | Returns Mycelium’s MCP server info and capabilities |
tools/list | Returns all operations discovered from discoverable services |
tools/call | Executes a specific operation through the gateway |
How tool calls work
When an AI calls tools/call, Mycelium:
- Resolves the operation to the registered downstream service.
- Validates the caller’s token (forwarded from the original MCP request header).
- Forwards the call through the gateway — including profile injection and all security checks.
- Returns the downstream response as a tool result.
The AI passes its Authorization: Bearer <jwt> or x-mycelium-connection-string header in
the MCP request, and Mycelium forwards it when calling the downstream service.
Connecting an AI assistant to Mycelium
Point the AI agent’s MCP server URL to your Mycelium instance:
http://your-gateway:8080/mcp
In Claude Desktop’s MCP configuration:
{
"mcpServers": {
"mycelium": {
"url": "http://your-gateway:8080/mcp"
}
}
}
The assistant will authenticate as the user whose token is configured, and will only be able to call operations that user is authorized to access.
Tool naming
Mycelium builds tool names deterministically from the service name and OpenAPI operation path:
{service_name}__{http_method}__{path_slug}.
For example, an operation GET /api/customers/{id} on service customer-api becomes the
tool name customer-api__get__api_customers_id.
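That construction can be reproduced deterministically. A sketch of the slug rule, inferred from this single example (the exact normalization the gateway applies may differ in edge cases):

```python
import re

def tool_name(service: str, method: str, path: str) -> str:
    """Build {service}__{method}__{path_slug}: drop path-parameter
    braces, lowercase, collapse non-alphanumerics to underscores."""
    bare = path.lower().replace("{", "").replace("}", "")
    slug = re.sub(r"[^a-z0-9]+", "_", bare).strip("_")
    return f"{service}__{method.lower()}__{slug}"
```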
Notes
- MCP support requires discoverable = true and a valid openapiPath on at least one service.
- Operations on non-discoverable services are never exposed to the MCP endpoint.
- The MCP endpoint itself does not require authentication to connect, but individual tools/call requests are forwarded with the caller’s token — so unauthorized calls will be rejected with 401/403 by the downstream route’s security group.
JSON-RPC Interface
Mycelium exposes all administrative operations through a JSON-RPC 2.0 interface at
POST /_adm/rpc. This endpoint accepts both single requests and batched arrays of requests.
The JSON-RPC interface mirrors the REST admin API — every operation available through the REST routes is also available as a JSON-RPC method. Prefer JSON-RPC when building programmatic clients, automation scripts, or AI agent integrations.
Transport
POST /_adm/rpc
Authorization: Bearer <jwt> # or x-mycelium-connection-string
Content-Type: application/json
Single request:
{
"jsonrpc": "2.0",
"method": "beginners.profile.get",
"params": {},
"id": 1
}
Batch request (array):
[
{ "jsonrpc": "2.0", "method": "beginners.profile.get", "params": {}, "id": 1 },
{ "jsonrpc": "2.0", "method": "gatewayManager.routes.list", "params": {}, "id": 2 }
]
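A small stdlib-only Python client for this transport might look like the following; the gateway URL and token are placeholders for your deployment:

```python
import json
import urllib.request

def build_batch(calls: list[tuple[str, dict]]) -> list[dict]:
    """Assign sequential ids so responses can be correlated with requests."""
    return [{"jsonrpc": "2.0", "method": m, "params": p, "id": i}
            for i, (m, p) in enumerate(calls, start=1)]

def rpc_post(url: str, token: str, payload) -> bytes:
    """POST a single request object or a batch array to /_adm/rpc."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Example call (placeholder URL, not executed here):
# rpc_post("http://your-gateway:8080/_adm/rpc", token,
#          build_batch([("beginners.profile.get", {}),
#                       ("gatewayManager.routes.list", {})]))
```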
Discovery
rpc.discover
Returns the OpenRPC specification for this server — the full list of methods, their parameter schemas, and result schemas.
{ "jsonrpc": "2.0", "method": "rpc.discover", "params": {}, "id": 1 }
Method Reference
Methods are organized by namespace. The namespace corresponds to the role required to call them (see Account Types and Roles).
managers — Platform-wide operations
Requires: staff or manager role.
| Method | Description |
|---|---|
managers.accounts.createSystemAccount | Create a system-level account |
managers.guestRoles.createSystemRoles | Create platform-wide guest roles |
managers.tenants.create | Create a new tenant |
managers.tenants.list | List all tenants |
managers.tenants.delete | Delete a tenant |
managers.tenants.includeTenantOwner | Assign a tenant owner to a tenant |
managers.tenants.excludeTenantOwner | Remove a tenant owner from a tenant |
accountManager — Account manager operations
Requires: accounts-manager role.
| Method | Description |
|---|---|
accountManager.guests.guestToChildrenAccount | Invite a user as a guest to a child account |
accountManager.guestRoles.listGuestRoles | List available guest roles |
accountManager.guestRoles.fetchGuestRoleDetails | Get details of a specific guest role |
gatewayManager — Gateway inspection
Requires: gateway-manager role.
| Method | Description |
|---|---|
gatewayManager.routes.list | List all registered routes |
gatewayManager.services.list | List all registered downstream services |
gatewayManager.tools.list | List all discoverable tools exposed to AI agents |
beginners — Self-service / user operations
Requires: authenticated (any logged-in user). No admin role needed.
Accounts
| Method | Description |
|---|---|
beginners.accounts.create | Create a personal account |
beginners.accounts.get | Get own account details |
beginners.accounts.updateName | Update own display name |
beginners.accounts.delete | Delete own account |
Profile
| Method | Description |
|---|---|
beginners.profile.get | Get own resolved profile (tenants, roles, access scopes) |
Tenants
| Method | Description |
|---|---|
beginners.tenants.getPublicInfo | Get public metadata for a tenant |
Guests
| Method | Description |
|---|---|
beginners.guests.acceptInvitation | Accept a guest invitation to a tenant |
Tokens
| Method | Description |
|---|---|
beginners.tokens.create | Create a connection string token |
beginners.tokens.list | List own active tokens |
beginners.tokens.revoke | Revoke a specific token |
beginners.tokens.delete | Delete a token |
Metadata
| Method | Description |
|---|---|
beginners.meta.create | Create account metadata entry |
beginners.meta.update | Update account metadata entry |
beginners.meta.delete | Delete account metadata entry |
Users and authentication
| Method | Description |
|---|---|
beginners.users.create | Register a new user credential |
beginners.users.checkTokenAndActivateUser | Activate user via email token |
beginners.users.startPasswordRedefinition | Start password reset flow |
beginners.users.checkTokenAndResetPassword | Complete password reset |
beginners.users.checkEmailPasswordValidity | Validate credentials |
beginners.users.totpStartActivation | Begin TOTP 2FA setup |
beginners.users.totpFinishActivation | Complete TOTP 2FA setup |
beginners.users.totpCheckToken | Verify a TOTP code |
beginners.users.totpDisable | Disable TOTP 2FA |
guestManager — Guest role management
Requires: guests-manager role.
| Method | Description |
|---|---|
guestManager.guestRoles.create | Create a guest role |
guestManager.guestRoles.list | List guest roles |
guestManager.guestRoles.delete | Delete a guest role |
guestManager.guestRoles.updateNameAndDescription | Update role name and description |
guestManager.guestRoles.updatePermission | Update role permission level |
guestManager.guestRoles.insertRoleChild | Add a child role to a parent |
guestManager.guestRoles.removeRoleChild | Remove a child role |
systemManager — System-level management
Requires: system-manager role.
Error codes
| Method | Description |
|---|---|
systemManager.errorCodes.create | Create a custom error code |
systemManager.errorCodes.list | List all error codes |
systemManager.errorCodes.get | Get a specific error code |
systemManager.errorCodes.updateMessageAndDetails | Update error code message and details |
systemManager.errorCodes.delete | Delete an error code |
Webhooks
| Method | Description |
|---|---|
systemManager.webhooks.create | Register an outbound webhook |
systemManager.webhooks.list | List registered webhooks |
systemManager.webhooks.update | Update a webhook |
systemManager.webhooks.delete | Delete a webhook |
subscriptionsManager — Subscription account management
Requires: subscriptions-manager role.
Accounts
| Method | Description |
|---|---|
subscriptionsManager.accounts.createSubscriptionAccount | Create a subscription account |
subscriptionsManager.accounts.createRoleAssociatedAccount | Create a role-associated account |
subscriptionsManager.accounts.list | List subscription accounts |
subscriptionsManager.accounts.get | Get a specific subscription account |
subscriptionsManager.accounts.updateNameAndFlags | Update account name and flags |
subscriptionsManager.accounts.propagateSubscriptionAccount | Propagate account across tenants |
Guests
| Method | Description |
|---|---|
subscriptionsManager.guests.listLicensedAccountsOfEmail | List licensed accounts for an email |
subscriptionsManager.guests.guestUserToSubscriptionAccount | Invite a user to a subscription account |
subscriptionsManager.guests.updateFlagsFromSubscriptionAccount | Update guest flags |
subscriptionsManager.guests.revokeUserGuestToSubscriptionAccount | Revoke a guest invitation |
subscriptionsManager.guests.listGuestOnSubscriptionAccount | List guests on a subscription account |
Guest roles
| Method | Description |
|---|---|
subscriptionsManager.guestRoles.list | List guest roles for a subscription |
subscriptionsManager.guestRoles.get | Get a specific guest role |
Tags
| Method | Description |
|---|---|
subscriptionsManager.tags.create | Create a tag |
subscriptionsManager.tags.update | Update a tag |
subscriptionsManager.tags.delete | Delete a tag |
tenantManager — Tenant internal management
Requires: tenant-manager role.
Accounts
| Method | Description |
|---|---|
tenantManager.accounts.createSubscriptionManagerAccount | Create a subscription manager account within the tenant |
tenantManager.accounts.deleteSubscriptionAccount | Delete a subscription account |
Guests
| Method | Description |
|---|---|
tenantManager.guests.guestUserToSubscriptionManagerAccount | Invite a user as subscription manager |
tenantManager.guests.revokeUserGuestToSubscriptionManagerAccount | Revoke subscription manager invitation |
Tags
| Method | Description |
|---|---|
tenantManager.tags.create | Create a tenant tag |
tenantManager.tags.update | Update a tenant tag |
tenantManager.tags.delete | Delete a tenant tag |
Tenant
| Method | Description |
|---|---|
tenantManager.tenant.get | Get tenant details |
tenantOwner — Tenant ownership operations
Requires: tenant-owner role.
Accounts
| Method | Description |
|---|---|
tenantOwner.accounts.createManagementAccount | Create a management account for the tenant |
tenantOwner.accounts.deleteTenantManagerAccount | Delete a tenant manager account |
Metadata
| Method | Description |
|---|---|
tenantOwner.meta.create | Create tenant metadata |
tenantOwner.meta.delete | Delete tenant metadata |
Ownership
| Method | Description |
|---|---|
tenantOwner.owner.guest | Add a co-owner to the tenant |
tenantOwner.owner.revoke | Remove a co-owner from the tenant |
Tenant
| Method | Description |
|---|---|
tenantOwner.tenant.updateNameAndDescription | Update tenant name and description |
tenantOwner.tenant.updateArchivingStatus | Archive or unarchive the tenant |
tenantOwner.tenant.updateTrashingStatus | Trash or restore the tenant |
tenantOwner.tenant.updateVerifyingStatus | Mark the tenant as verified or unverified |
userManager — User account lifecycle (platform admins)
Requires: users-manager role.
| Method | Description |
|---|---|
userManager.account.approve | Approve a user account registration |
userManager.account.disapprove | Disapprove a user account registration |
userManager.account.activate | Re-activate a suspended user account |
userManager.account.deactivate | Suspend a user account |
userManager.account.archive | Archive a user account |
userManager.account.unarchive | Restore an archived user account |
service — Service discovery
Requires: authenticated.
| Method | Description |
|---|---|
service.listDiscoverableServices | List all services marked as discoverable for AI agent use |
staff — Privilege escalation
Requires: staff role.
| Method | Description |
|---|---|
staff.accounts.upgradePrivileges | Upgrade an account to staff privileges |
staff.accounts.downgradePrivileges | Downgrade a staff account to standard privileges |
Error responses
JSON-RPC errors follow the standard format:
{
"jsonrpc": "2.0",
"error": {
"code": -32600,
"message": "Invalid request"
},
"id": null
}
Standard error codes:
| Code | Meaning |
|---|---|
-32700 | Parse error — invalid JSON |
-32600 | Invalid request — not a valid JSON-RPC 2.0 object |
-32601 | Method not found |
-32602 | Invalid params |
-32603 | Internal error |
Authentication failures return HTTP 401 before the JSON-RPC layer is reached. Authorization
failures (wrong role) return a JSON-RPC error with code -32603 and a domain-specific message.
SDK Integration Guide
Mycelium injects a compressed, encoded identity context into every downstream request via
the x-mycelium-profile header. This header carries the authenticated user’s full profile:
account ID, tenant memberships, roles, and access scopes.
The Python SDK (mycelium-http-tools) decodes this header and provides a fluent filtering
API for making access decisions inside your service.
Installation
pip install mycelium-http-tools
PyPI package: mycelium-http-tools
Source: github.com/LepistaBioinformatics/mycelium-sdk-py
What the header contains
When a request passes through a protected or protectedByRoles route, the gateway:
- Resolves the caller’s identity (from JWT or connection string).
- Constructs a Profile object: account ID, email, tenant memberships, roles, access scopes.
- ZSTD-compresses the JSON-serialized profile.
- Base64-encodes the result.
- Injects it as x-mycelium-profile in the forwarded request.
Your service never sees unauthenticated requests on protected routes. The SDK decodes the
header back into a typed Profile object.
Using the SDK with FastAPI
The SDK ships a FastAPI middleware and dependency injectors:
from fastapi import FastAPI, Depends
from myc_http_tools.fastapi import get_profile, MyceliumProfileMiddleware
from myc_http_tools.models.profile import Profile

app = FastAPI()
app.add_middleware(MyceliumProfileMiddleware)

@app.get("/dashboard")
async def dashboard(profile: Profile = Depends(get_profile)):
    # profile is already decoded and validated
    return {"account_id": str(profile.acc_id)}
Making access decisions
The Profile object provides a fluent filtering chain. Each step narrows — never expands —
the set of permissions:
from uuid import UUID

from fastapi import HTTPException
from myc_http_tools.exceptions import InsufficientPrivilegesError
from myc_http_tools.models.profile import Profile

def get_tenant_account(profile: Profile, tenant_id: UUID, account_id: UUID):
    try:
        account = (
            profile
            .on_tenant(tenant_id)           # focus on this tenant
            .on_account(account_id)         # focus on this account
            .with_write_access()            # must have write permission
            .with_roles(["manager"])        # must have manager role
            .get_related_account_or_error()
        )
        return account
    except InsufficientPrivilegesError:
        raise HTTPException(status_code=403)
If any step in the chain finds no match (wrong tenant, no write access, missing role), it raises
InsufficientPrivilegesError. Your handler catches it and returns 403.
Available filter methods
| Method | Effect |
|---|---|
.on_tenant(tenant_id) | Filter to a specific tenant membership |
.on_account(account_id) | Filter to a specific account within the tenant |
.with_read_access() | Require at least read permission |
.with_write_access() | Require write permission |
.with_roles(["role1", "role2"]) | Require at least one of the listed roles |
.get_related_account_or_error() | Return the matched account or raise |
Header constants
The SDK exports the same header key constants as the gateway. Use them instead of raw strings to prevent mismatches if header names change:
from myc_http_tools.settings import (
DEFAULT_PROFILE_KEY, # "x-mycelium-profile"
DEFAULT_EMAIL_KEY, # "x-mycelium-email"
DEFAULT_SCOPE_KEY, # "x-mycelium-scope"
DEFAULT_MYCELIUM_ROLE_KEY, # "x-mycelium-role"
DEFAULT_REQUEST_ID_KEY, # "x-mycelium-request-id"
DEFAULT_CONNECTION_STRING_KEY, # "x-mycelium-connection-string"
DEFAULT_TENANT_ID_KEY, # "x-mycelium-tenant-id"
)
Manual decoding (any language)
If you are not using Python, decode x-mycelium-profile manually:
Base64-decode → ZSTD-decompress → JSON-parse → access Profile fields
Example in shell (for debugging):
echo "<header-value>" | base64 -d | zstd -d | python3 -m json.tool
The resulting JSON has the same structure as the Rust Profile struct:
acc_id, email, tenants (array of tenant memberships with roles and accounts).
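The same pipeline in Python, sketched with the decompressor injected so the snippet stays dependency-free (in practice you would pass something like the third-party zstandard package's ZstdDecompressor().decompress):

```python
import base64
import json

def decode_profile(header_value: str, decompress) -> dict:
    """x-mycelium-profile pipeline: base64-decode, ZSTD-decompress,
    JSON-parse. `decompress` maps compressed bytes to raw bytes."""
    return json.loads(decompress(base64.b64decode(header_value)))
```

With zstandard installed: `decode_profile(value, zstandard.ZstdDecompressor().decompress)`.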
Keeping the SDK in sync with the gateway
The Profile Pydantic model in the SDK mirrors the Rust Profile struct in the gateway.
When the gateway changes the Profile struct (new fields, renamed fields), the SDK must be
updated to match. If your service receives a profile that does not match the SDK’s model,
deserialization will fail with a validation error.
Check the SDK changelog when upgrading the gateway.
CLI Reference
The myc-cli binary provides commands that must be run directly against the database. These
operations cannot be performed through the HTTP API because they are bootstrapping steps that
execute before any admin account exists.
Installation
myc-cli is built alongside the gateway. After building from source:
cargo build --release
# Binary at: target/release/myc-cli
Or install directly:
cargo install --path ports/cli
Database connection
All CLI commands connect directly to PostgreSQL. Provide the connection URL by setting the
DATABASE_URL environment variable:
export DATABASE_URL="postgres://user:pass@localhost:5432/mycelium"
If DATABASE_URL is not set, the CLI prompts you to enter it interactively (input is hidden).
Commands
accounts create-seed-account
Creates the first Staff account in a fresh installation. This account is used to log in and perform all subsequent provisioning (create tenants, invite admins, etc.).
myc-cli accounts create-seed-account <email> <account_name> <first_name> <last_name>
Arguments:
| Argument | Description |
|---|---|
email | Email address for the new account (used to log in) |
account_name | Display name for the account (e.g. the organization name) |
first_name | User’s first name |
last_name | User’s last name |
Interactive prompt: After the positional arguments, the CLI prompts for a password (hidden input).
Example:
myc-cli accounts create-seed-account \
admin@acme.com \
"ACME Platform" \
Alice \
Smith
# Password: (hidden)
Notes:
- If a seed staff account already exists, the command exits with an informational message and does not create a duplicate.
- The created account has the Staff type (platform-wide administrative privileges).
- After creation, use the magic-link or password login flow to authenticate and start provisioning tenants.
native-errors init
Seeds the database with all native Mycelium error codes. These are the error codes that the
core domain layer emits internally (prefixed MYC). Without this step, error responses that
carry domain codes will have no human-readable message.
myc-cli native-errors init
No arguments. The command reads DATABASE_URL (or prompts interactively) and inserts all
built-in error codes. Codes that already exist are skipped; only new codes are inserted.
When to run: Once, during initial installation, and again after upgrading to a new version of Mycelium that introduces new error codes.
Example:
DATABASE_URL="postgres://user:pass@localhost:5432/mycelium" myc-cli native-errors init
# INFO: 42 native error codes registered
Typical installation order
# 1. Apply the database schema
psql "$DATABASE_URL" -f postgres/sql/up.sql
# 2. Seed native error codes
myc-cli native-errors init
# 3. Create the first admin account
myc-cli accounts create-seed-account admin@example.com "My Platform" Admin User
# 4. Start the API server
SETTINGS_PATH=settings/config.toml myc-api
After step 4, log in with admin@example.com and the password you set in step 3.
Encryption Inventory
This page lists every field that Mycelium stores in an encrypted or hashed form, together with the mechanism used and its migration status relative to the envelope encryption rollout (Phases 1 and 2).
Fields encrypted with AES-256-GCM
These fields hold reversible ciphertexts. Before Phase 1 they were all
encrypted with the global KEK directly (v1 format). After Phase 1 they use
per-tenant DEKs wrapped by the KEK (v2 format). The two formats are
distinguished by a v2: prefix in the stored value.
| Field | Table / column | Mechanism before Phase 1 | DEK scope | Migration phase |
|---|---|---|---|---|
Totp::Enabled.secret | user.mfa (JSONB) | Totp::encrypt_me — KEK direct | system (UUID nil) | Phase 1 |
HttpSecret.token (webhook) | webhook.secret (JSONB) | WebHook::new_encrypted → HttpSecret::encrypt_me — KEK direct | system (UUID nil) | Phase 1 |
TelegramBotToken | tenant.meta (JSONB key) | encrypt_string — KEK direct | per-tenant | Phase 1 |
TelegramWebhookSecret | tenant.meta (JSONB key) | encrypt_string — KEK direct | per-tenant | Phase 1 |
phone_number, telegram_user | account.meta (JSONB) | plaintext | per-tenant | Phase 2 |
tenant.meta (general keys) | tenant.meta (JSONB) | plaintext | per-tenant | Phase 2 |
| Subscription / TenantManager metadata | account.meta (JSONB) | plaintext | per-tenant | Phase 2 |
TOTP secrets belong to the user identity (user, manager, staff) and are never
tenant-scoped; every call site passes tenant_id = None, so the secret is
encrypted under the system DEK.
DEK storage
Each tenant row in the tenant table now carries two additional columns:
| Column | Type | Description |
|---|---|---|
encrypted_dek | TEXT (nullable) | AES-256-GCM ciphertext of the 32-byte DEK, wrapped by the KEK. NULL means the DEK has not been provisioned yet (lazy on first use). |
kek_version | INTEGER NOT NULL DEFAULT 1 | Tracks which KEK generation was used to wrap the DEK. Used during KEK rotation. |
The system tenant row (id = 00000000-0000-0000-0000-000000000000) stores the
DEK used for system-level secrets (webhook HTTP secrets, all TOTP).
Fields hashed with Argon2 — outside encryption scope
These fields are one-way hashes. There is no plaintext to recover or re-encrypt. They are unaffected by envelope encryption migration.
| Field | Table / column | Note |
|---|---|---|
password_hash | identity_provider | Argon2id — verification only, no decryption |
| Email confirmation token | UserRelatedMeta.token (logical) | Argon2 one-way hash |
Ciphertext format versions
| Version | Format | When written | How detected |
|---|---|---|---|
| v1 (legacy) | base64(nonce₁₂ ‖ ciphertext ‖ tag₁₆) | Before Phase 1 | No prefix |
| v2 (envelope) | v2:base64(nonce₁₂ ‖ ciphertext ‖ tag₁₆) | After Phase 1 | Starts with v2: |
Decrypt functions detect the prefix automatically and route to the correct decryption path, so v1 and v2 data can coexist in the same deployment without downtime.
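The detection itself is a simple prefix check; a sketch:

```python
def split_version(stored: str) -> tuple[str, str]:
    """Classify a stored ciphertext by its format prefix:
    'v2:' means envelope (per-tenant DEK); no prefix means legacy
    v1 (KEK direct). Returns (version, base64 payload)."""
    if stored.startswith("v2:"):
        return "v2", stored[3:]
    return "v1", stored
```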
AAD (Additional Authenticated Data)
AAD prevents ciphertexts from being transplanted between tenants or between fields. The AAD scheme is:
aad = tenant_id.as_bytes() || field_name_bytes
| Field constant | Bytes |
|---|---|
AAD_FIELD_TOTP_SECRET | b"totp_secret" |
AAD_FIELD_TELEGRAM_BOT_TOKEN | b"telegram_bot_token" |
AAD_FIELD_TELEGRAM_WEBHOOK_SECRET | b"telegram_webhook_secret" |
AAD_FIELD_HTTP_SECRET | b"http_secret" |
DEK wrap/unwrap uses only tenant_id.as_bytes() as AAD (no field suffix).
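As a sketch of the scheme, with Python's UUID.bytes standing in for Rust's tenant_id.as_bytes():

```python
from uuid import UUID

AAD_FIELD_TOTP_SECRET = b"totp_secret"
SYSTEM_TENANT = UUID("00000000-0000-0000-0000-000000000000")

def field_aad(tenant_id: UUID, field: bytes) -> bytes:
    """AAD for field ciphertexts: tenant id bytes || field constant."""
    return tenant_id.bytes + field

def dek_aad(tenant_id: UUID) -> bytes:
    """AAD for DEK wrap/unwrap: tenant id bytes only, no field suffix."""
    return tenant_id.bytes
```

Because the field constant is part of the AAD, a TOTP ciphertext cannot be decrypted as, say, an HTTP secret even under the same tenant's DEK.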
token_secret is multi-purpose — rotation has side-effects
The token_secret configured in AccountLifeCycle is not only the KEK
source. Its bytes are also consumed directly by non-envelope code paths:
| Consumer | Role | Rotation impact |
|---|---|---|
| AccountLifeCycle::derive_kek_bytes | KEK for wrap/unwrap of all DEKs | Re-wrap DEKs via myc-cli rotate-kek (TODO). |
| encrypt_string::build_aes_key (v1 legacy path) | KEK for ciphertexts written before Phase 1 | Stays readable only while token_secret is unchanged; migrate to v2 before rotating. |
| HttpSecret::decrypt_me (v1 branch) | Indirect — routes through the legacy path | Same as above. |
| Totp::decrypt_me (v1 branch) | Indirect — routes through the legacy path | Same as above. |
| UserAccountScope::sign_token | HMAC-SHA512 key for connection-string signatures | No re-signing path. All currently-issued connection strings are invalidated on rotation — treat as revoked. |
Rotate token_secret only after:
- migrate-dek --dry-run reports zero v1 fields remaining, and
- The operational impact of invalidating every live connection-string signature is understood and accepted.
See Envelope Encryption Migration Guide for step-by-step operator instructions.
Envelope Encryption Migration Guide
This guide is for operators running Mycelium with the global token_secret key
who need to migrate to the envelope encryption scheme (KEK/DEK per tenant).
Overview
| Before | After |
|---|---|
| All secrets encrypted with a single global key (direct KEK) | Each tenant has a random DEK, encrypted by the KEK and stored in the database |
| Rotating the KEK invalidates all encrypted data | Rotating the KEK re-encrypts only the DEKs — O(number of tenants), not O(number of records) |
| No ciphertext versioning | The v2: prefix identifies data in the new scheme; the legacy format (v1) continues to be read |
The new version is fully backward-compatible: data encrypted in the old format continues to be decrypted correctly. Migration is optional at first and can be done incrementally.
Prerequisites
- Mycelium API Gateway updated to the version with envelope encryption support
- Access to the PostgreSQL database
- Access to the configured token_secret (via env, Vault, or config file) — do not change this value before completing the migration
- myc-cli available in PATH
Migration steps
1. Verify compatibility (no downtime required)
The new version supports reading both v1 (old format) and v2 (new format)
data simultaneously. It is safe to deploy the new version while the database is
still in v1 format.
# Confirm the installed version supports envelope encryption
myc-cli --version
2. Run the SQL migration
psql $DATABASE_URL < path/to/20260421_01_envelope_encryption.sql
This adds the encrypted_dek and kek_version columns to the tenant table.
Both are nullable — existing tenants will have NULL until the next step.
3. Simulate the migration (dry-run)
SETTINGS_PATH=settings/config.toml myc-cli migrate-dek --dry-run
Expected output: a list of tenants with the count of v1 fields to migrate.
No writes are performed.
4. Run the migration
SETTINGS_PATH=settings/config.toml myc-cli migrate-dek
The command is idempotent and resumable:
- Fields already in v2 format are skipped.
- It can be interrupted at any point and re-run without duplicating work.
- To migrate only a specific tenant: --tenant-id <uuid>
5. Validate completion
SETTINGS_PATH=settings/config.toml myc-cli migrate-dek --dry-run
Should report 0 v1 fields remaining across all tenants.
KEK rotation (optional, post-migration)
After completing the migration to v2, the KEK can be rotated without touching
encrypted data records:
# Increment kek_version in config and make the new key available
# Then run:
SETTINGS_PATH=settings/config.toml myc-cli rotate-kek \
--from-version 1 \
--to-version 2
This re-encrypts only the encrypted_dek of each tenant with the new KEK. The
data records (user.mfa, tenant.meta, webhook.secret) are not
modified.
After a successful rotation, the v1 KEK can be discarded.
Side-effect — connection strings are invalidated.
token_secret is also used as the HMAC key for connection-string signatures (UserAccountScope::sign_token). Rotating the KEK therefore invalidates every signature issued under the old secret. There is no re-signing path — treat all active connection strings as revoked and plan the rotation accordingly. See the Encryption Inventory for the full list of token_secret consumers.
Rollback
If you need to roll back before the migration is complete:
- Roll back the deployment to the previous gateway version.
- Any v2 data already written is unreadable by the old version (which does not know the v2 format).
Therefore: do not interrupt a migration mid-way in production. Use --dry-run to validate first, and run in a maintenance window if in doubt.
If the migration is complete and you need to roll back the SQL schema:
ALTER TABLE tenant DROP COLUMN IF EXISTS encrypted_dek;
ALTER TABLE tenant DROP COLUMN IF EXISTS kek_version;
This is only safe if no v2 data was written. If v2 writes have already
occurred, rolling back the schema will cause loss of access to those records.
Frequently asked questions
Do I need downtime to migrate?
No. The new version reads both v1 and v2. Deploy first, then run
migrate-dek with the service running.
Can I keep v1 data indefinitely?
Yes, as long as token_secret does not change. If the global key is rotated,
v1 data becomes unreadable. Migrating to v2 protects against this.
What about Argon2 hashes (passwords)?
Argon2 hashes in identity_provider.password_hash are one-way — there is no
plaintext to re-encrypt. They are unaffected by this migration and continue to
work normally.
What happens to new tenants created after the deploy?
Tenants created after the deploy receive a DEK automatically on first use. No manual action is required.
See Encryption Inventory for the complete field classification table.
Running Tests
Tests require PostgreSQL and Redis running. Use Docker Compose to start them:
docker-compose up -d postgres redis
Run all tests
From modules/mycelium-api-gateway/:
cargo test
With logs visible:
RUST_LOG=debug cargo test -- --nocapture
Filtering tests
cargo test auth # all tests with "auth" in the name
cargo test -p mycelium-base # specific workspace package
cargo test --workspace # every package
Pre-commit checks
These must all pass before merging:
cargo fmt --all -- --check
cargo build --workspace
cargo test --workspace
Coverage (optional)
cargo install cargo-tarpaulin
cargo tarpaulin --out Html
Test database setup
psql postgres://postgres:postgres@localhost:5432/postgres \
-c "CREATE DATABASE mycelium_test;"
Set the env var before running tests that need a separate database:
export TEST_DATABASE_URL="postgres://postgres:postgres@localhost:5432/mycelium_test"
Troubleshooting
Tests hang — serialize them with cargo test -- --test-threads=1.
Flaky tests — run the failing test in isolation: cargo test <test_name> -- --nocapture.
Port conflict — stop any running myc-api process or Docker containers on the same ports.
Release Process
This guide explains how to create and manage releases for Mycelium using cargo-release and git-cliff.
Overview
Mycelium follows Semantic Versioning and uses automated tools to manage releases:
- cargo-release: Manages version bumping, tagging, and publishing
- git-cliff: Generates changelogs from conventional commits
Prerequisites
Install the required tools:
cargo install cargo-release
cargo install git-cliff
Version Semantics
Mycelium follows Semantic Versioning (SemVer):
| Version Type | Format | When to Use | Example |
|---|---|---|---|
| MAJOR | X.0.0 | Breaking changes or incompatible API changes | 8.0.0 → 9.0.0 |
| MINOR | x.Y.0 | New features (backward-compatible) | 8.3.0 → 8.4.0 |
| PATCH | x.y.Z | Bug fixes (backward-compatible) | 8.3.0 → 8.3.1 |
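The ordering rules above (including the pre-release precedence the release stages below rely on, e.g. 8.3.0-alpha.2 < 8.3.0-beta.1 < 8.3.0) can be sketched as a sort key. This is a simplified illustration of SemVer precedence, not code from the project; build metadata and some edge cases are ignored:

```python
def semver_key(version: str):
    """Sort key implementing (simplified) SemVer precedence."""
    core, _, pre = version.partition("-")
    major, minor, patch = (int(p) for p in core.split("."))
    if not pre:
        # A stable release sorts after any of its pre-releases.
        return (major, minor, patch, 1, ())
    # Numeric identifiers compare numerically, others lexically (SemVer §11).
    ids = tuple((0, int(i)) if i.isdigit() else (1, i) for i in pre.split("."))
    return (major, minor, patch, 0, ids)
```

For example, `sorted(versions, key=semver_key)` puts `8.3.0-alpha.1` before `8.3.0-rc.2`, and both before `8.3.0`.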
Pre-release Workflow
Pre-releases follow a specific progression through stages:
1. Alpha Stage
Purpose: Early development and testing
Characteristics:
- Unstable, frequent changes
- Used for initial feature testing
- Not recommended for production
Creating an alpha release:
# First alpha
cargo release alpha --execute # Creates x.y.z-alpha.1
# Subsequent alphas
cargo release alpha --execute # Creates x.y.z-alpha.2, etc.
Example progression: 8.3.0-alpha.1 → 8.3.0-alpha.2 → 8.3.0-alpha.3
2. Beta Stage
Purpose: Feature-complete version ready for broader testing
Characteristics:
- Features are complete
- API should be relatively stable
- May still have bugs
- Used for wider testing and feedback
Moving to beta:
# First beta
cargo release beta --execute # Creates x.y.z-beta.1
# Subsequent betas
cargo release beta --execute # Creates x.y.z-beta.2, etc.
Example progression: 8.3.0-beta.1 → 8.3.0-beta.2 → 8.3.0-beta.3
3. Release Candidate (RC) Stage
Purpose: Production-ready candidate for final validation
Characteristics:
- Final testing before stable release
- Only critical bug fixes allowed
- Ready for production testing
- Last chance to catch issues
Creating release candidates:
# First RC
cargo release rc --execute # Creates x.y.z-rc.1
# Subsequent RCs (if needed)
cargo release rc --execute # Creates x.y.z-rc.2, etc.
Example progression: 8.3.0-rc.1 → 8.3.0-rc.2
4. Stable Release
Purpose: Production-ready version
Creating the stable release:
cargo release release --execute # Creates x.y.z
Example: 8.3.0-rc.2 → 8.3.0
Version Increment Commands
Patch Release
For bug fixes on existing stable releases:
cargo release patch --execute
Example: 8.3.0 → 8.3.1
Minor Release
For new features (backward-compatible):
cargo release minor --execute
Example: 8.3.1 → 8.4.0
Major Release
For breaking changes:
cargo release major --execute
Example: 8.4.0 → 9.0.0
Complete Release Cycle Example
Here’s a complete example of releasing version 8.3.0:
# Alpha stage - initial testing
cargo release alpha --execute # 8.3.0-alpha.1
# ... make changes, test ...
cargo release alpha --execute # 8.3.0-alpha.2
# ... more changes, testing ...
cargo release alpha --execute # 8.3.0-alpha.3
# Beta stage - features complete
cargo release beta --execute # 8.3.0-beta.1
# ... wider testing, bug fixes ...
cargo release beta --execute # 8.3.0-beta.2
# Release candidate - final validation
cargo release rc --execute # 8.3.0-rc.1
# ... production testing ...
cargo release rc --execute # 8.3.0-rc.2
# Stable release
cargo release release --execute # 8.3.0
# Later patch releases
cargo release patch --execute # 8.3.1
cargo release patch --execute # 8.3.2
# Next minor release
cargo release minor --execute # 8.4.0
Changelog Management
Mycelium uses git-cliff to automatically generate changelogs from conventional commits.
Conventional Commit Format
All commits should follow the Conventional Commits specification:
<type>(<scope>): <description>
[optional body]
[optional footer(s)]
Supported types:
| Type | Description | Changelog Section |
|---|---|---|
| feat | New feature | 🚀 Features |
| fix | Bug fix | 🐛 Bug Fixes |
| docs | Documentation | 📚 Documentation |
| perf | Performance improvement | ⚡ Performance |
| refactor | Code refactoring | 🚜 Refactor |
| style | Code style changes | 🎨 Styling |
| test | Test changes | 🧪 Testing |
| chore | Maintenance tasks | ⚙️ Miscellaneous Tasks |
Examples:
# Feature commit
git commit -m "feat(auth): add passwordless authentication
Implements magic link authentication flow for users.
Users can now sign in by clicking a link sent to their email.
Fixes #110"
# Bug fix commit
git commit -m "fix(api): resolve null pointer in user endpoint
Fixes #123"
# Breaking change
git commit -m "feat(core): redesign authentication API
BREAKING CHANGE: The authentication API has been completely redesigned.
See migration guide for details.
Fixes #150"
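The header grammar the examples above follow can be sketched as a small parser. This is a hedged illustration of how a tool like git-cliff groups commits by type and scope, not git-cliff's actual parsing code; the regex is an assumption based on the specification:

```python
import re

# <type>(<scope>)!: <description> — scope and "!" (breaking marker) optional.
HEADER_RE = re.compile(r"^(?P<type>\w+)(?:\((?P<scope>[^)]*)\))?(?P<bang>!)?: (?P<desc>.+)$")

def parse_commit_header(header: str):
    """Return the parsed fields of a conventional-commit header, or None."""
    m = HEADER_RE.match(header)
    if not m:
        return None
    return {
        "type": m.group("type"),
        "scope": m.group("scope"),
        "breaking": m.group("bang") is not None,
        "description": m.group("desc"),
    }
```

Note that a `BREAKING CHANGE:` footer (as in the example above) also marks a breaking change even without the `!` in the header; this sketch only inspects the header line.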
Generating Changelogs
Before creating a release, update the changelog:
# Preview unreleased changes
git-cliff --unreleased
# Update CHANGELOG.md with unreleased changes
git-cliff --unreleased --prepend CHANGELOG.md
# Generate changelog for a specific version
git-cliff --tag v8.3.0 --prepend CHANGELOG.md
Changelog Configuration
The changelog format is configured in cliff.toml at the repository root. This file defines:
- Commit parsing rules
- Grouping and sorting
- Output format
- Template customization
Dry Run (Recommended)
Always preview release changes before executing:
# Dry run (default - no --execute flag)
cargo release alpha
# Review the output carefully:
# - Version changes
# - Files that will be modified
# - Git commands that will run
# - Tags that will be created
# If everything looks correct, execute
cargo release alpha --execute
Release Checklist
Use this checklist before creating a stable release:
- All tests pass: cargo test
- Code is properly formatted: cargo fmt
- No security vulnerabilities: cargo audit
- Documentation is up-to-date
- All commits follow conventional commit format
- Changelog is updated: git-cliff --unreleased --prepend CHANGELOG.md
- All CI checks pass
- Team review is complete (for major/minor releases)
- Release notes are prepared
- Dry run reviewed: cargo release <level>
Release Configuration
The project’s release behavior is configured in release.toml at the repository root.
Key configurations include:
- Pre-release hooks: Run tests and builds before releasing
- Version bumping: Control how versions are incremented
- Git operations: Tag format, commit messages
- Changelog integration: Automatic changelog generation with git-cliff
- Publishing: Control what gets published and where
Best Practices
- Test thoroughly: Run full test suite before any release
- Use dry runs: Always preview changes before executing
- Follow the progression: Don’t skip stages (alpha → beta → rc → release)
- Write good commits: Use conventional commits for automatic changelog generation
- Update changelog: Generate changelog before each release
- Coordinate releases: Communicate with team for major/minor releases
- Tag properly: Let cargo-release handle tagging automatically
- Document changes: Include migration guides for breaking changes
Common Workflows
Hotfix Release
For urgent bug fixes on a stable release:
# On main branch with stable release 8.3.0
git checkout -b hotfix/critical-bug
# ... fix the bug ...
git commit -m "fix: resolve critical security issue"
# Merge to main
git checkout main
git merge hotfix/critical-bug
# Create patch release
git-cliff --unreleased --prepend CHANGELOG.md
git add CHANGELOG.md
git commit -m "docs: update changelog for 8.3.1"
cargo release patch --execute # 8.3.0 → 8.3.1
Feature Release
For a new feature release:
# On develop branch
git checkout -b feature/new-capability
# ... implement feature ...
git commit -m "feat: add new capability"
# Merge to develop
git checkout develop
git merge feature/new-capability
# Start pre-release cycle
cargo release alpha --execute # 8.4.0-alpha.1
# ... test, fix, repeat ...
cargo release beta --execute # 8.4.0-beta.1
# ... wider testing ...
cargo release rc --execute # 8.4.0-rc.1
# ... final validation ...
# Merge to main and release
git checkout main
git merge develop
git-cliff --unreleased --prepend CHANGELOG.md
git add CHANGELOG.md
git commit -m "docs: update changelog for 8.4.0"
cargo release release --execute # 8.4.0
Troubleshooting
Release fails due to uncommitted changes
# Ensure working directory is clean
git status
# Commit or stash changes
git add .
git commit -m "chore: prepare for release"
Changelog not generating correctly
# Verify conventional commit format
git log --oneline -n 10
# Test cliff configuration
git-cliff --unreleased
# Check cliff.toml configuration
cat cliff.toml
Wrong version incremented
# Use dry run first to verify
cargo release <level>
# If wrong level used, manually fix:
# Edit Cargo.toml files
# Delete incorrect git tag: git tag -d vX.Y.Z
# Try again with correct level
Additional Resources
- Semantic Versioning Specification
- Conventional Commits Specification
- cargo-release Documentation
- git-cliff Documentation
- Contributing Guide
Log Tests for Cache and Identity Flow
This guide explains how to run log-based validation for the security group, identity, and profile resolution flow—including cache behaviour (JWKS, email, and profile caches). The validation script checks that the expected sequence of stages appears in the logs and helps you verify cache hits and misses.
Overview
The API emits structured log events for:
- identity.jwks: Resolution of JSON Web Key Sets (from cache or via URI)
- identity.email: Resolution of user email (from cache, from token, or via userinfo)
- identity.profile: Resolution of user profile (from cache or via datastore)
- router.*: Security group check and identity/profile injection
Each event includes a stage and, when applicable, an outcome (e.g. from_cache, resolved, from_token) and cache_hit (true/false). The validation script parses these events and checks that every “start” stage is followed by a matching “outcome” event.
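Extracting those fields from a log line can be sketched as below. The exact line layout is an assumption; only the field names (stage, outcome, cache_hit) come from the description above, and this is not the validation script itself:

```python
import re

# key=value pairs for the fields the validator cares about.
FIELD_RE = re.compile(r"\b(stage|outcome|cache_hit)=([\w.\-]+)")

def extract_event_fields(line: str) -> dict:
    """Return the stage/outcome/cache_hit fields present in one log line."""
    return {key: value for key, value in FIELD_RE.findall(line)}
```

A line without a `stage=` field yields no stage entry, which is how non-event log lines are ignored.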
Prerequisites
- Python 3 (3.9 or higher recommended)
- Log output from a running Mycelium API (tracing format with timestamps and levels)
No extra Python packages are required; the script uses only the standard library.
Step 1: Save Logs to a File
To validate logs, you need to capture the API’s stdout (and optionally stderr) into a file while the API is running.
Option A: Redirect stdout when starting the API
If you start the API from a shell, redirect stdout to a file:
# Run the API and append all stdout to a log file
cargo run -p mycelium-api --bin myc_api -- --config settings/config.dev.for-docker.toml >> api.log 2>&1
To overwrite the file instead of appending, use a single >:
cargo run -p mycelium-api --bin myc_api -- --config settings/config.dev.for-docker.toml > api.log 2>&1
Option B: Redirect stdout of an already running process
If the API is already running (e.g. in Docker or another terminal), you can capture logs by attaching to the process or by configuring your runtime to write logs to a file. For example, with Docker:
docker compose logs -f mycelium-api > api.log 2>&1
Stop capturing when you have enough requests (e.g. after a few authenticated/protected calls).
Option C: Use a log level that includes INFO
The validation script relies on INFO-level events for stage and outcome. Make sure the API's log level does not filter them out; for example, run with RUST_LOG=info or RUST_LOG=myc_api=info. The default or info level is usually sufficient.
Example with explicit log level:
RUST_LOG=info cargo run -p mycelium-api --bin myc_api -- --config settings/config.dev.for-docker.toml >> api.log 2>&1
After reproducing the flows you care about (authenticated and/or protected requests), stop the capture. The resulting file (e.g. api.log) is the input for the validation script.
Step 2: Run the Validation Script
The script lives in the repository at scripts/python/evaluate_security_group_logs.py.
Basic usage
Pass the log file path as the first argument:
python3 scripts/python/evaluate_security_group_logs.py api.log
Or using an absolute path:
python3 /path/to/mycelium/scripts/python/evaluate_security_group_logs.py /path/to/api.log
Verbose output (recommended for interpretation)
To see the full sequence of stages and how they are grouped into cycles, use --verbose or -v:
python3 scripts/python/evaluate_security_group_logs.py api.log --verbose
Verbose mode prints:
- The total number of stage events found
- For each cycle, a header like
--- Cycle N (M events) ---and the list of events in that cycle (timestamp, stage, outcome, cache_hit) - A blank line between cycles for readability
Exit codes
- 0: All sequences validated successfully (OK).
- 1: One or more sequence violations (e.g. a stage started but no matching outcome found).
- 2: Usage error (e.g. missing log file path).
You can use the exit code in scripts or CI:
python3 scripts/python/evaluate_security_group_logs.py api.log
if [ $? -eq 0 ]; then
echo "Log validation passed"
else
echo "Log validation failed"
exit 1
fi
Step 3: How to Interpret the Output
Summary block
At the top, the script prints:
- Total stage events found: Number of log lines that contained a stage= field. This is the number of events used for validation and (in verbose mode) for the cycle listing.
Result
- Result: OK – Every “start” stage (e.g. identity.jwks, identity.external, identity.profile) had a matching “outcome” event in the expected order. No violations were reported.
- Result: FAIL – The script lists violations. Each violation message includes the timestamp and a short description (e.g. “identity.jwks started but no identity.jwks outcome (from_cache/resolved) found”). Fix by ensuring the API actually completes that step and emits the corresponding log.
Verbose output: cycles
When you use --verbose, events are grouped into cycles. A new cycle starts at each identity.external (start of identity resolution for a request). So:
- One cycle corresponds to one logical “identity + profile resolution” flow (e.g. one authenticated or protected request).
- Within a cycle you see the order of stages, for example:
  - identity.external (start)
  - identity.jwks → then identity.jwks outcome=from_cache or outcome=resolved
  - identity.email → then optional identity.email.cache cache_hit=true/false → then identity.email outcome=from_cache or outcome=resolved
  - identity.external outcome=ok
  - identity.profile → optional identity.profile.cache cache_hit=true/false → identity.profile outcome=from_cache or outcome=resolved
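The cycle grouping can be sketched as follows. This is not the validation script's actual code; in particular, treating an identity.external event without an outcome field as the cycle start is an assumption:

```python
def group_into_cycles(events):
    """Group event dicts into cycles, one per identity-resolution flow.

    A new cycle begins at each identity.external start event
    (assumed here to be an identity.external event with no outcome).
    """
    cycles, current = [], []
    for event in events:
        if event.get("stage") == "identity.external" and "outcome" not in event:
            if current:
                cycles.append(current)
            current = []
        current.append(event)
    if current:
        cycles.append(current)
    return cycles
```

Comparing `cycles[0]` against later cycles is how you check that the first request resolves and later requests hit the cache.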
Interpreting cache behaviour
- JWKS
  - identity.jwks outcome=from_cache: Keys were loaded from the JWKS cache.
  - identity.jwks outcome=resolved: Keys were fetched from the provider URI and then cached.
- Email
  - identity.email.cache cache_hit=true then identity.email outcome=from_cache: Email was taken from cache.
  - identity.email.cache cache_hit=false then identity.email outcome=resolved: Email was fetched via userinfo and then cached.
- Profile
  - identity.profile.cache cache_hit=true then identity.profile outcome=from_cache: Profile was taken from cache.
  - identity.profile.cache cache_hit=false then identity.profile outcome=resolved: Profile was loaded from the datastore and then cached.
Comparing cycles (e.g. first request vs later requests) shows whether later requests use the cache (more from_cache and cache_hit=true), which is expected after the first successful resolution.
Example Workflow
1. Start the API with logs redirected to a file:
   RUST_LOG=info cargo run -p mycelium-api --bin myc_api -- --config settings/config.dev.for-docker.toml >> api.log 2>&1
2. Send a few authenticated or protected requests (e.g. with a bearer token) so that identity and profile resolution (and cache) are exercised.
3. Stop the API (or stop redirecting logs).
4. Run the validator:
   python3 scripts/python/evaluate_security_group_logs.py api.log --verbose
5. Check the result:
   - If OK, the log sequence is valid; use the verbose listing to confirm cache behaviour per cycle.
   - If FAIL, read the violation messages and fix the flow or logging so that every started stage has a matching outcome in the logs.
Troubleshooting
- No stage events found: Ensure the log file contains lines with stage= (INFO level). Check that you are capturing stdout and that the log level is not too restrictive.
- “identity.external started but no identity.external outcome=ok”: The request may have failed before completing identity resolution (e.g. invalid token, network error). Check API and network logs.
- “identity.jwks started but no identity.jwks outcome”: JWKS fetch or parse may have failed. Look for ERROR/WARN lines around that timestamp in the raw log.
- Multiple cycles: Expected when you send multiple requests. Each cycle is one request’s identity/profile flow; use them to compare the first request (often more resolved) vs later requests (often more from_cache).