
Introduction

Mycelium API Gateway sits in front of your backend services and handles authentication, authorization, and routing — so your services don’t have to.


Who is this for?

This documentation is written for three types of readers:

Operator — You’re deploying Mycelium for an organization. You’ll configure tenants, users, and which backend services are reachable through the gateway. Start with Installation and Quick Start.

Backend developer — You’re building a service that sits behind Mycelium. The gateway will handle authentication and then inject the user’s identity into your requests via headers. Start with Downstream APIs.

End user — You’re using a product built on Mycelium. You’ll authenticate via email magic link, or through an alternative identity provider like Telegram. Your experience depends on how the operator has configured their instance.


What does Mycelium do?

Your users
    ↓
Mycelium API Gateway   ← handles login, token validation, and routing decisions
    ↓
Your backend services  ← receive authenticated requests with user identity in headers

When a request arrives, Mycelium checks:

  1. Who are you? (authentication — via magic link, OAuth2, Telegram, etc.)
  2. Are you allowed here? (coarse authorization — role checks at the route level)
  3. Where should this go? (routing — forwards to the right downstream service)

Your backend service receives the request with the user’s identity already resolved and injected as an HTTP header (x-mycelium-profile). It can then make fine-grained decisions without doing its own authentication.
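For illustration, a minimal downstream handler might consume that header like this. This is a sketch, not the official SDK: it assumes the profile arrives as plain JSON for simplicity (the real payload is compressed), and the field names `accountId` and `roles` are hypothetical — check the Profile schema for your Mycelium version.

```python
import json

# Sketch of a downstream handler reading the identity Mycelium injects.
# Assumes plain JSON for illustration; field names are hypothetical.
def handle_request(headers: dict) -> str:
    raw = headers.get("x-mycelium-profile")
    if raw is None:
        # The header is only injected on routes in the "protected" group or higher.
        raise PermissionError("no profile header; route may not be protected")
    profile = json.loads(raw)
    # Fine-grained decisions happen here, without re-authenticating.
    return f"account={profile['accountId']} roles={','.join(profile['roles'])}"

headers = {"x-mycelium-profile": json.dumps(
    {"accountId": "b5e2f3a1", "roles": ["manager"]})}
print(handle_request(headers))  # → account=b5e2f3a1 roles=manager
```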


Key concepts

Tenant — A company or organization within your Mycelium installation. Users belong to tenants, and access controls are applied per tenant.

Account — The unit of identity in Mycelium. There are several account types: User (human end users), Staff / Manager (platform administrators), Subscription (tenant-scoped services or bots), TenantManager (delegated tenant admins), and others. A User account joins a tenant by being guested into a Subscription account with a specific role and permission level. See Account Types and Roles for the full model.

Profile — A snapshot of the authenticated user’s identity at the time of the request: their account, tenant memberships, roles, and access levels. Injected into every downstream request.

Security group — A label on each route that tells Mycelium what level of authentication is required. Options range from public (anyone) to protectedByRoles (specific roles only).


How authentication works

Mycelium ships with built-in email + magic-link login. No passwords required — the user enters their email, receives a one-time link, and is issued a JWT.

You can also connect external OAuth2 providers (Google, Microsoft, Auth0) or alternative identity providers like Telegram. See Alternative Identity Providers for details.


Installation

Mycelium API Gateway is distributed as a single binary (myc-api). Pick the installation method that fits your workflow.


Prerequisites

You need three services running before Mycelium can start:

| Service | Minimum version | Purpose |
|---|---|---|
| PostgreSQL | 14 | Stores users, tenants, roles |
| Redis | 6 | Caching layer |
| Rust toolchain | 1.70 | Compiles the binary (build from source only) |

Install Rust via rustup if you plan to build from source.

Linux system dependencies (Ubuntu/Debian):

sudo apt-get install -y build-essential pkg-config libssl-dev postgresql-client

macOS:

brew install openssl pkg-config postgresql

Option A — Docker (fastest)

docker pull sgelias/mycelium-api:latest

For a full local environment with PostgreSQL and Redis already wired up, see Deploy Locally.


Option B — Install via Cargo

cargo install mycelium-api

This installs the myc-api binary globally. Verify it:

myc-api --version

Option C — Build from source

git clone https://github.com/LepistaBioinformatics/mycelium.git
cd mycelium
cargo build --release
./target/release/myc-api --version

Database setup

Mycelium ships with a SQL script that creates the database, user, and schema:

psql postgres://postgres:postgres@localhost:5432/postgres \
  -f postgres/sql/up.sql \
  -v db_password='REPLACE_WITH_STRONG_PASSWORD'

This creates a database named mycelium-dev and a user named mycelium-user. To use a different database name, add -v db_name='my-database'.



Troubleshooting

cargo install fails with SSL errors — Install OpenSSL dev libraries: sudo apt-get install libssl-dev (Ubuntu) or brew install openssl (macOS).

Database connection fails — Verify the client is installed (psql --version) and that the server accepts connections: psql postgres://postgres:postgres@localhost:5432/postgres.

Redis not responding — Run redis-cli ping. Expect PONG.

Quick Start

This guide gets Mycelium running with a minimal configuration. By the end you’ll have a gateway that can route requests to a downstream service.

Before starting: complete the Installation guide — you need PostgreSQL running, Redis running, and the myc-api binary installed.


Step 1 — Create a configuration file

Mycelium reads a single TOML file. Copy the example from the repository or create settings/config.toml with the content below.

Replace YOUR_DB_PASSWORD with the password you set during database setup. Replace both your-secret-* values with random strings (use openssl rand -hex 32).

[core.accountLifeCycle]
domainName = "My App"
domainUrl = "http://localhost:8080"
tokenExpiration = 3600
noreplyName = "No-Reply"
noreplyEmail = "noreply@example.com"
supportName = "Support"
supportEmail = "support@example.com"
locale = "en-US"
tokenSecret = "your-secret-key-change-me-in-production"

[core.webhook]
acceptInvalidCertificates = true
consumeIntervalInSecs = 10
consumeBatchSize = 10
maxAttempts = 3

[diesel]
databaseUrl = "postgres://mycelium-user:YOUR_DB_PASSWORD@localhost:5432/mycelium-dev"

[smtp]
host = "smtp.example.com:587"
username = "user@example.com"
password = "your-smtp-password"

[queue]
emailQueueName = "email-queue"
consumeIntervalInSecs = 5

[redis]
protocol = "redis"
hostname = "localhost:6379"
password = ""

[auth]
internal = "enabled"
jwtSecret = "your-jwt-secret-change-me-in-production"
jwtExpiresIn = 86400
tmpExpiresIn = 3600

[api]
serviceIp = "0.0.0.0"
servicePort = 8080
serviceWorkers = 4
gatewayTimeout = 30
healthCheckInterval = 120
maxRetryCount = 3
allowedOrigins = ["*"]

[api.cache]
jwksTtl = 3600
emailTtl = 120
profileTtl = 120

[api.logging]
level = "info"
format = "ansi"
target = "stdout"

Step 2 — Start the gateway

SETTINGS_PATH=settings/config.toml myc-api

With Docker:

docker run -d \
  --name mycelium-api \
  -p 8080:8080 \
  -v $(pwd)/settings:/app/settings \
  -e SETTINGS_PATH=settings/config.toml \
  sgelias/mycelium-api:latest

Step 3 — Verify it’s running

curl http://localhost:8080/health

A successful response means the gateway is up and connected to the database and Redis.


Step 4 — Register your first downstream service (optional)

Add this block to your config.toml to proxy a backend service:

[api.services]

[[my-service]]
host = "localhost:3000"
protocol = "http"

[[my-service.path]]
group = "public"
path = "/api/*"
methods = ["GET", "POST", "PUT", "DELETE"]

Restart the gateway after any config change.



Troubleshooting

Gateway won’t start — Check TOML syntax, then verify database and Redis connectivity:

psql postgres://mycelium-user:YOUR_PASSWORD@localhost:5432/mycelium-dev -c "SELECT 1"
redis-cli ping

Port 8080 already in use — Change servicePort in config.toml.

Configuration

Mycelium reads a single TOML file at startup. Tell it where the file is:

SETTINGS_PATH=settings/config.toml myc-api

Three ways to set a value

Every setting can be defined directly in TOML, via an environment variable, or via Vault:

# Directly in the file (fine for development)
tokenSecret = "my-secret"

# From an environment variable
tokenSecret = { env = "MYC_TOKEN_SECRET" }

# From HashiCorp Vault (recommended for production)
tokenSecret = { vault = { path = "myc/core/accountLifeCycle", key = "tokenSecret" } }

Vault values are resolved at runtime — you don’t need to restart after changing a secret in Vault.


What do I actually need to configure?

For a minimal working instance you need:

  1. [diesel].databaseUrl — PostgreSQL connection string.
  2. [redis] — Redis host and optional password.
  3. [auth].jwtSecret — Random string for signing JWT tokens.
  4. [core.accountLifeCycle].tokenSecret — Random string for email verification tokens.
  5. [smtp] — Email server (for magic-link login emails).
  6. [api].allowedOrigins — Allowed CORS origins for your frontend.

Everything else has sensible defaults. See the Quick Start for a copy-pasteable minimal config.


Section reference

[vault.define] — Secret management

Optional. Required only if you use Vault-sourced values anywhere.

[vault.define]
url = "http://localhost:8200"
versionWithNamespace = "v1/kv"
token = { env = "MYC_VAULT_TOKEN" }
| Field | Description |
|---|---|
| url | Vault server URL including port |
| versionWithNamespace | API version and KV path prefix (e.g. v1/kv) |
| token | Vault auth token. Use an env var or Vault for this value in production |

[core.accountLifeCycle] — Identity and email settings

[core.accountLifeCycle]
domainName = "Mycelium"
domainUrl = "https://mycelium.example.com"
tokenExpiration = 3600
noreplyName = "Mycelium No-Reply"
noreplyEmail = "noreply@example.com"
supportName = "Support"
supportEmail = "support@example.com"
locale = "en-US"
tokenSecret = "random-secret"
| Field | Description |
|---|---|
| domainName | Human-friendly name shown in emails |
| domainUrl | Your frontend URL — used in email links |
| tokenExpiration | Email verification token lifetime in seconds |
| noreplyEmail | From-address for system emails |
| supportEmail | Reply-to address for support |
| locale | Email language (e.g. en-US, pt-BR) |
| tokenSecret | Secret for signing email verification tokens |

[core.webhook] — Webhook dispatch

[core.webhook]
acceptInvalidCertificates = true
consumeIntervalInSecs = 10
consumeBatchSize = 10
maxAttempts = 3
| Field | Description |
|---|---|
| acceptInvalidCertificates | Allow self-signed TLS certs on webhook targets (use true in dev only) |
| consumeIntervalInSecs | How often to flush the webhook queue |
| consumeBatchSize | Events per flush |
| maxAttempts | Retry limit per event |

[diesel] — Database

[diesel]
databaseUrl = "postgres://mycelium-user:password@localhost:5432/mycelium-dev"

Use Vault for the URL in production:

databaseUrl = { vault = { path = "myc/database", key = "url" } }

[smtp] and [queue] — Email

[smtp]
host = "smtp.gmail.com:587"
username = "user@gmail.com"
password = "your-password"

[queue]
emailQueueName = "email-queue"
consumeIntervalInSecs = 5

[redis] — Cache

[redis]
protocol = "redis"       # "rediss" for TLS
hostname = "localhost:6379"
password = ""

[auth] — Authentication

[auth]
internal = "enabled"
jwtSecret = "random-secret"
jwtExpiresIn = 86400     # 24 hours
tmpExpiresIn = 3600      # temporary tokens (password reset, account creation)

External OAuth2 providers

Add one block per provider:

# Google
[[auth.external.define]]
issuer = "https://accounts.google.com"
jwksUri = "https://www.googleapis.com/oauth2/v3/certs"
userInfoUrl = "https://www.googleapis.com/oauth2/v3/userinfo"
audience = "your-google-client-id"

# Auth0
[[auth.external.define]]
issuer = "https://your-app.auth0.com/"
jwksUri = "https://your-app.auth0.com/.well-known/jwks.json"
userInfoUrl = "https://your-app.auth0.com/userinfo"
audience = "https://your-app.auth0.com/api/v2/"
| Field | Description |
|---|---|
| issuer | Provider’s identity URL |
| jwksUri | URL of the provider’s public keys (for JWT verification) |
| userInfoUrl | URL to fetch the user’s email and claims |
| audience | Client ID or API identifier registered with the provider |

[api] — Server and routing

[api]
serviceIp = "0.0.0.0"
servicePort = 8080
serviceWorkers = 4
gatewayTimeout = 30
healthCheckInterval = 120
maxRetryCount = 3
allowedOrigins = ["http://localhost:3000", "https://app.example.com"]

[api.cache]
jwksTtl = 3600     # cache OAuth2 public keys for 1 hour
emailTtl = 120     # cache resolved emails for 2 minutes
profileTtl = 120   # cache resolved profiles for 2 minutes
| Field | Description |
|---|---|
| serviceIp | Bind address. 0.0.0.0 listens on all interfaces |
| servicePort | HTTP port |
| serviceWorkers | Worker threads. Match to CPU count |
| gatewayTimeout | Request timeout in seconds |
| allowedOrigins | CORS whitelist. Use ["*"] in dev only |
| healthCheckInterval | How often to probe downstream health endpoints (seconds) |

[api.logging] — Log output

[api.logging]
level = "info"
format = "ansi"    # "jsonl" for structured logs
target = "stdout"

File target:

target = { file = { path = "logs/api.log" } }

OpenTelemetry collector:

target = { collector = { name = "mycelium-api", host = "otel-collector", protocol = "grpc", port = 4317 } }

[api.tls.define] — TLS (optional)

[api.tls.define]
tlsCert = { vault = { path = "myc/api/tls", key = "tlsCert" } }
tlsKey = { vault = { path = "myc/api/tls", key = "tlsKey" } }

To disable TLS:

tls = "disabled"

[api.services] — Downstream services

Route configuration lives here. See Downstream APIs for the full guide.


Deploy Locally with Docker Compose

The fastest way to get a complete development environment is Docker Compose — it starts PostgreSQL, Redis, Vault, and the gateway together with one command.

Prerequisites: Docker 20.10+ and Docker Compose 2.0+. (Docker Desktop includes both.)


Step 1 — Clone and configure

git clone https://github.com/LepistaBioinformatics/mycelium.git
cd mycelium
cp settings/config.example.toml settings/config.toml

Open settings/config.toml and update at minimum:

  • Database credentials under [diesel]
  • SMTP settings under [smtp] (if you need email)
  • Secrets under [core.accountLifeCycle] and [auth]

Step 2 — Start everything

docker-compose up -d

This starts:

  • postgres — database on port 5432
  • redis — cache on port 6379
  • vault — secret management on port 8200 (optional)
  • mycelium-api — gateway on port 8080

Step 3 — Verify

docker-compose ps        # all services should be "Up"
curl http://localhost:8080/health

Common operations

View logs:

docker-compose logs -f mycelium-api

Stop everything:

docker-compose down

Full reset (deletes all data):

docker-compose down -v

Access the database directly:

docker-compose exec postgres psql -U mycelium-user -d mycelium-dev

Using Vault for secrets (optional)

If you’re using Vault, initialize it after starting:

# Initialize and get unseal keys + root token (save these securely)
docker-compose exec vault vault operator init

# Unseal with 3 of the 5 keys
docker-compose exec vault vault operator unseal <KEY1>
docker-compose exec vault vault operator unseal <KEY2>
docker-compose exec vault vault operator unseal <KEY3>

# Store a secret
docker-compose exec vault vault login <ROOT_TOKEN>
docker-compose exec vault vault kv put secret/mycelium/database \
  url="postgres://mycelium-user:password@postgres:5432/mycelium"

Then reference it in config.toml:

[vault.define]
url = "http://vault:8200"
versionWithNamespace = "v1/secret"
token = { env = "VAULT_TOKEN" }

[diesel]
databaseUrl = { vault = { path = "mycelium/database", key = "url" } }

Troubleshooting

Gateway can’t connect to Postgres:

docker-compose exec postgres pg_isready
docker-compose exec postgres psql -U postgres -l

Port conflict — change the external port in docker-compose.yaml:

services:
  mycelium-api:
    ports:
      - "8081:8080"

Authorization Model

This page explains how Mycelium decides whether a request is allowed to proceed.


The short version

Mycelium uses a two-layer approach:

  1. Gateway layer — coarse checks at the route level (“is this user logged in? do they have the right role?”). If the check fails, the request is rejected before reaching your service.

  2. Downstream layer — fine-grained checks inside your service, using the identity that Mycelium injects (“does this user have write access to this specific resource?”).

Think of it like a building: Mycelium is the security desk at the entrance (checks your ID and badge before letting you through the door). Your service is the room inside — it gets to decide what the person can do once they’re in.


The gateway layer

When a request arrives, Mycelium matches it against the configured route. Each route belongs to a security group that defines the minimum requirements:

| Security group | What Mycelium checks |
|---|---|
| public | Nothing — anyone can pass |
| authenticated | Valid JWT or connection string |
| protected | Valid token + resolved profile |
| protectedByRoles | Valid token + user has one of the listed roles |

If the check passes, Mycelium forwards the request to your service and injects the user’s identity as HTTP headers.


What gets injected

Depending on the security group, your service receives:

| Header | When injected | Contains |
|---|---|---|
| x-mycelium-email | authenticated or higher | The authenticated user’s email |
| x-mycelium-profile | protected or higher | Full identity context (see below) |

The profile is a compressed JSON object carrying: account ID, tenant memberships, roles, and access scopes. Your service reads it and can make resource-level decisions without querying the gateway or doing its own authentication.


The downstream layer

Once a request is inside your service, you use the profile to decide what the user can do with a specific resource. The profile exposes a fluent API for narrowing the access context:

profile
  .on_tenant(tenant_id)     # focus on this tenant
  .on_account(account_id)   # focus on this account
  .with_write_access()      # must have write permission
  .with_roles(["manager"])  # must have manager role
  .get_related_account_or_error()  # returns error if no match

Each step narrows — never expands — the set of permissions. If any step finds no match, the chain returns an error and you return 403 to your caller.

This design means:

  • Access decisions are explicit and auditable.
  • No implicit “superuser” paths that bypass checks.
  • Your service never needs to call Mycelium again to validate permissions.
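To picture the narrowing behavior, here is an illustrative model in Python. The `Grant` and `ProfileView` names are hypothetical — this is not the actual Mycelium profile API, just a sketch of the filter-pipeline pattern it follows:

```python
from dataclasses import dataclass

# Hypothetical model: a profile is a set of grants, and each chained
# filter returns a new, strictly narrower view of that set.
@dataclass(frozen=True)
class Grant:
    tenant_id: str
    account_id: str
    role: str
    can_write: bool

class ProfileView:
    def __init__(self, grants):
        self.grants = list(grants)

    def _narrow(self, pred):
        return ProfileView(g for g in self.grants if pred(g))

    def on_tenant(self, tenant_id):
        return self._narrow(lambda g: g.tenant_id == tenant_id)

    def with_write_access(self):
        return self._narrow(lambda g: g.can_write)

    def with_roles(self, roles):
        return self._narrow(lambda g: g.role in roles)

    def get_related_account_or_error(self):
        if not self.grants:
            # No grant survived the chain: the caller returns 403.
            raise PermissionError("no grant matches; respond with 403")
        return self.grants[0].account_id

view = ProfileView([Grant("t1", "a1", "manager", True)])
assert view.on_tenant("t1").with_write_access().get_related_account_or_error() == "a1"
```

Because every step can only remove grants, a chain can never grant more access than the original profile carried.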

Reference

Formal classification

Mycelium’s model spans three standard paradigms:

  • RBAC (Role-Based Access Control) — used declaratively at the gateway level (security groups with role lists).
  • ABAC (Attribute-Based Access Control) — the profile carries attributes (tenant, account, scope) used in downstream decisions.
  • FBAC (Feature-Based Access Control) — the dominant model; access decisions are made close to the resource using the full contextual chain.

Design principles

  • Authentication, identity enrichment, and authorization are strictly separated.
  • Capabilities are progressively reduced — the chain never grants more than the token allows.
  • No global policies that silently override explicit checks.
  • Each authorization decision is a discrete, loggable event (resource, action, context, outcome).

Authentication Flows

This page covers the three ways users authenticate with Mycelium and how to create service-to-service tokens.


Email magic link

Magic link is the default authentication method. No passwords required — the user enters their email and receives a one-time link.

How it works

User enters email
    ↓
POST /_adm/beginners/users/magic-link/request
    ↓
Mycelium sends a login email with a one-time link
    ↓
User clicks the link → GET /_adm/beginners/users/magic-link/display/{token}
    ↓
POST /_adm/beginners/users/magic-link/verify  (with the token)
    ↓
Response: { "token": "jwt...", "type": "Bearer" }

Enabling it

Internal authentication must be enabled in config.toml:

[auth]
internal = "enabled"
jwtSecret = "your-secret"
jwtExpiresIn = 86400

SMTP must also be configured so Mycelium can send the email. See Configuration.

Using the JWT

After login, include the JWT in every request:

Authorization: Bearer <your-jwt-token>

The JWT is valid for jwtExpiresIn seconds (default: 24 hours).


Two-factor authentication (2FA / TOTP)

Users can add a second factor using any TOTP authenticator app (Google Authenticator, Authy, 1Password, etc.).

Enabling 2FA (user journey)

Step 1 — Start activation:

POST /_adm/beginners/users/totp/enable
Authorization: Bearer <jwt>

Response:

{
  "totpUrl": "otpauth://totp/MyApp:user@example.com?secret=BASE32SECRET&issuer=MyApp"
}

The user scans the totpUrl in their authenticator app (or adds the secret manually).

Step 2 — Confirm activation:

POST /_adm/beginners/users/totp/validate-app
Authorization: Bearer <jwt>
Content-Type: application/json

{ "token": "123456" }

The token is the 6-digit code shown in the authenticator app. This confirms that the app is correctly configured and activates 2FA on the account.

Logging in with 2FA

When 2FA is enabled, the magic link login response includes totp_required: true. The client must then call:

POST /_adm/beginners/users/totp/check-token
Authorization: Bearer <jwt>
Content-Type: application/json

{ "token": "123456" }

Only after a successful TOTP check does the session have full access.

Disabling 2FA

POST /_adm/beginners/users/totp/disable
Authorization: Bearer <jwt>
Content-Type: application/json

{ "token": "123456" }

Requires a valid TOTP token to confirm the user’s intent.


Connection strings (service tokens)

A connection string is a long-lived API token tied to a specific account, tenant, and role. Use them for:

  • Machine-to-machine calls — scripts, cron jobs, or services that don’t have a user session.
  • Telegram Mini Apps — the gateway issues a connection string when a user logs in via Telegram (see Alternative Identity Providers).
  • Long-running sessions — when the standard JWT expiry is too short.

Creating a connection string

POST /_adm/beginners/tokens
Authorization: Bearer <jwt>
Content-Type: application/json

{
  "tenantId": "a3f1e2d0-1234-4abc-8def-000000000001",
  "accountId": "b5e2f3a1-5678-4def-9abc-000000000002",
  "role": "manager",
  "expiresAt": "2027-01-01T00:00:00Z"
}

Response:

{
  "connectionString": "acc=<uuid>;tid=<uuid>;r=manager;edt=2027-01-01T00:00:00Z;sig=<hmac>",
  "expiresAt": "2027-01-01T00:00:00Z"
}
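The returned connection string is a semicolon-separated list of key=value pairs. A client-side sketch of parsing it (field meanings inferred from the example above: acc = account id, tid = tenant id, r = role, edt = expiry, sig = HMAC signature; verifying the signature is the gateway's job and is not shown):

```python
def parse_connection_string(cs: str) -> dict[str, str]:
    # Split each part on the FIRST "=" only: a base64-encoded signature
    # may itself end with "=" padding.
    fields = dict(part.split("=", 1) for part in cs.split(";"))
    for key in ("acc", "tid", "r", "edt", "sig"):
        if key not in fields:
            raise ValueError(f"malformed connection string: missing {key!r}")
    return fields

cs = "acc=b5e2f3a1;tid=a3f1e2d0;r=manager;edt=2027-01-01T00:00:00Z;sig=abc123"
print(parse_connection_string(cs)["r"])  # → manager
```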

Listing your connection strings

GET /_adm/beginners/tokens
Authorization: Bearer <jwt>

Using a connection string

Instead of Authorization: Bearer, send the connection string in its own header:

x-mycelium-connection-string: acc=<uuid>;tid=<uuid>;r=manager;edt=...;sig=...

The gateway checks x-mycelium-connection-string first. If absent, it falls back to Authorization: Bearer. Do not mix them — a connection string sent as Authorization: Bearer will fail JWT validation.


OAuth2 / external providers

If you configure external OAuth2 providers (Google, Microsoft, Auth0), users authenticate directly with the provider and present the provider’s JWT to Mycelium. Mycelium validates the token’s signature using the provider’s JWKS endpoint.

See Configuration → External Authentication for setup instructions.


Fetching your own profile

Any authenticated user can fetch their full profile:

GET /_adm/beginners/profile
Authorization: Bearer <jwt>

This is the same profile that Mycelium injects as x-mycelium-profile into downstream requests. Useful for debugging or displaying account/tenant information in a frontend.

Account Types and Roles

Mycelium uses a layered identity model:

  • An account is the unit of identity.
  • An account type describes the purpose and scope of that account.
  • A role (SystemActor) determines which administrative operations the account can perform.
  • A guest relationship links a User account to a tenant-scoped account, granting it contextual access with a specific permission level.

Quick reference — which account type do I need?

| Scenario | Account type |
|---|---|
| Human end user logging in | User |
| Platform operator / superadmin | Staff |
| Delegated platform administrator | Manager |
| Service, bot, or non-human entity within a tenant | Subscription |
| Service account with a built-in administrative role inside a tenant | RoleAssociated |
| Delegated tenant administrator | TenantManager |
| Internal system-level actor | ActorAssociated |

Account types

Every account in Mycelium has an accountType field. Its value determines what the account can do and which management operations apply to it.

User

"accountType": "user"

A personal account belonging to a human user. The default type for end users who log in via email/password, magic link, OAuth, or any configured external IdP (e.g. Telegram).

  • Has no administrative privileges by default.
  • Can belong to multiple tenants simultaneously by being guested into tenant-scoped Subscription accounts (see Tenant membership below).
  • Managed by the UsersManager role (approval, activation, archival).

Staff

"accountType": "staff"

A platform-level administrative account for operators who control the entire Mycelium instance. Staff accounts can create tenants, manage platform-wide guest roles, and upgrade or downgrade other accounts. They are not tenant-scoped — they act across all tenants.

The first Staff account is created via the CLI (myc-cli accounts create-seed-account). See CLI Reference.

Manager

"accountType": "manager"

Similar to Staff but intended for delegated platform management. Managers can create system accounts and manage tenant membership at the platform level without holding the highest-privilege Staff designation. Suitable for operations teams that need broad but not full superadmin access.

Subscription

"accountType": { "subscription": { "tenantId": "<uuid>" } }

A tenant-scoped account representing a service, bot, or non-human entity (e.g. an external application, an automated pipeline, an integration). Created by a TenantManager within a specific tenant.

User accounts join a tenant by being guested into a Subscription account with a specific guest role and permission level (see below). The subscription account is the anchor for all tenant-scoped permissions.

Managed by the SubscriptionsManager role (invite guests, update name and flags).

RoleAssociated

"accountType": {
  "roleAssociated": {
    "tenantId": "<uuid>",
    "roleName": "subscriptions-manager",
    "readRoleId": "<uuid>",
    "writeRoleId": "<uuid>"
  }
}

A Subscription-like account that is pinned to a specific named guest role. Used to create service accounts that carry a built-in administrative role inside a tenant — the canonical example is a Subscription Manager account, which is a RoleAssociated account bound to the SubscriptionsManager system actor role.

This account type is created automatically when calling the tenant-manager/create-subscription-manager-account endpoint. It is a system-managed type; you rarely create it manually.

TenantManager

"accountType": { "tenantManager": { "tenantId": "<uuid>" } }

A management account scoped to a specific tenant. Created by tenant owners (TenantOwner role) to delegate tenant-level administrative tasks. Tenant managers can create and delete subscription accounts, manage tenant tags, and invite subscription managers within their tenant.

ActorAssociated

"accountType": { "actorAssociated": { "actor": "<SystemActor>" } }

An internal account bound to a specific SystemActor role. Used by Mycelium itself to represent system-level actors that need a persistent identity (e.g. for audit trails). You do not create these manually — the platform provisions them as needed.


Tenant membership (guest relationships)

An account type alone does not grant access to a tenant’s resources. Access is established through guest relationships:

User account
  └── guested into → Subscription account (tenant-scoped)
                          └── with a GuestRole + Permission level
                                  └── grants access to downstream routes

How a user joins a tenant

  1. A TenantManager or SubscriptionsManager creates a Subscription account for the tenant.
  2. The subscription manager invites a user by email (guest_user_to_subscription_account).
  3. The user receives an invitation email and accepts it.
  4. The user’s User account is now a guest of the subscription account, holding a GuestRole at a specific permission level (Read or Write).
  5. When the user sends a request with x-mycelium-tenant-id, Mycelium resolves their profile to include the tenant-scoped permissions from all subscription accounts they are guested into.

Guesting to child accounts

If a Subscription account has child accounts (set up via RoleAssociated accounts), an AccountManager can delegate access further by inviting a user into a child account (guest_to_children_account), so the user operates only within that child account's narrower scope rather than the full subscription account.

Permission levels within guest roles

| Permission | What it allows |
|---|---|
| Read | Read-only access within the scope of the role |
| Write | Read and write access within the scope of the role |

Downstream routes declare their required permission level in the route config:

[group]
protectedByRoles = [{ slug = "editor", permission = "write" }]

Users whose guest role carries only Read permission are rejected with 403 on write routes.
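Assuming Write subsumes Read as the table above states, the check reduces to a simple ordering. A sketch of that rule (illustrative, not the gateway's actual code):

```python
from enum import IntEnum

# Permission ordering per the table above: Write subsumes Read.
class Permission(IntEnum):
    READ = 0
    WRITE = 1

def route_allows(granted: Permission, required: Permission) -> bool:
    # A guest holding only Read is rejected (403) on a write route.
    return granted >= required

assert route_allows(Permission.WRITE, Permission.READ)
assert not route_allows(Permission.READ, Permission.WRITE)
```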


Administrative roles (SystemActor)

Every administrative route and JSON-RPC namespace is guarded by a role. In REST, the role appears as the path segment immediately after /_adm/.

| Role (SystemActor) | URL path | Typical scope |
|---|---|---|
| Beginners | /_adm/beginners/ | Any authenticated user — own profile, tokens, invitations |
| SubscriptionsManager | /_adm/subscriptions-manager/ | Invite guests, manage subscription accounts within a tenant |
| UsersManager | /_adm/users-manager/ | Approve, activate, archive, suspend user accounts (platform) |
| AccountManager | /_adm/account-manager/ | Invite guests to child accounts |
| GuestsManager | /_adm/guests-manager/ | Create, update, delete guest roles |
| GatewayManager | /_adm/gateway-manager/ | Read-only inspection of routes and services |
| SystemManager | /_adm/system-manager/ | Error codes, outbound webhooks (platform-wide) |
| TenantOwner | /_adm/tenant-owner/ | Ownership-level operations on a specific tenant |
| TenantManager | /_adm/tenant-manager/ | Delegated management within a specific tenant |

Staff and Manager accounts additionally access the managers.* JSON-RPC namespace and REST paths under /_adm/managers/, which sit above the per-tenant role hierarchy.

Beginners is not an administrative role — it is the namespace for self-service operations any authenticated user may perform.


Role hierarchy

Staff / Manager  (platform-wide)
  └── create tenants, create system accounts, manage platform guest roles
        │
        ├── TenantOwner  (per tenant)
        │     ├── create / delete TenantManager accounts
        │     ├── manage tenant metadata, archiving, verification
        │     └── configure external IdPs (e.g. Telegram bot token)
        │
        ├── TenantManager  (per tenant)
        │     ├── create / delete Subscription accounts
        │     ├── create SubscriptionManager (RoleAssociated) accounts
        │     └── manage tenant tags
        │
        ├── SubscriptionsManager  (per tenant, via RoleAssociated account)
        │     ├── invite guests to Subscription accounts
        │     └── create RoleAssociated accounts
        │
        ├── GuestsManager  (platform or tenant)
        │     └── define guest roles and their permissions
        │
        ├── UsersManager  (platform)
        │     └── approve / activate / archive User accounts
        │
        ├── AccountManager  (per tenant)
        │     └── invite guests to child accounts
        │
        └── GatewayManager  (platform)
              └── read-only inspection of routes and services

How roles are enforced

When a request arrives at an admin route (e.g. POST /_adm/tenant-owner/...), Mycelium:

  1. Validates the token (JWT or connection string).
  2. Resolves the caller’s profile, which includes their account type, tenant memberships, and guest roles.
  3. Checks whether the resolved profile satisfies the required SystemActor for that route.
  4. For tenant-scoped operations, also checks the x-mycelium-tenant-id header.
  5. Returns 403 if the role or permission is missing; forwards to the use-case layer on match.

Seed account

The first Staff account in a fresh installation is created from the CLI before any user can log in:

myc-cli accounts create-seed-account \
  --name "Platform Admin" \
  --email admin@example.com

See the CLI Reference for all available commands.

Downstream APIs

This guide explains how to register a backend service with Mycelium and control who can access it.

All service configuration lives in settings/config.toml under [api.services]. Restart the gateway after any change.


Registering a service

The simplest possible registration:

[api.services]

[[my-service]]
host = "localhost:3000"
protocol = "http"

[[my-service.path]]
group = "public"
path = "/api/*"
methods = ["GET", "POST"]

This tells Mycelium: “Any GET or POST to /api/* should be forwarded to localhost:3000.” No authentication required — anyone can reach this route.

The service name (my-service) becomes part of how you identify the service internally. It does not affect the URL path; the path comes from the path field.
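The matching rule is prefix-based only when the pattern ends in a wildcard; otherwise it is exact. A rough sketch of these semantics (edge cases, such as whether /api/* also matches bare /api, are assumptions here):

```python
def route_matches(pattern, request_path):
    """'/api/*' matches '/api/users' and deeper subpaths; a pattern
    without a trailing '/*' must match the path exactly.
    (Whether '/api/*' also matches bare '/api' is an assumption.)"""
    if pattern.endswith("/*"):
        prefix = pattern[:-2]
        return request_path == prefix or request_path.startswith(prefix + "/")
    return request_path == pattern

assert route_matches("/api/*", "/api/users/42")
assert not route_matches("/api/*", "/apiary")      # prefix must end at a '/'
assert route_matches("/health", "/health")
assert not route_matches("/health", "/health/db")  # no wildcard, no subpaths
```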


Choosing a security group

The group field on each route controls what Mycelium requires before forwarding the request.

public — No authentication

Anyone can access the route. Use for health checks, public APIs, and Telegram webhooks.

[[my-service.path]]
group = "public"
path = "/health"
methods = ["GET"]

authenticated — Valid login token required

The user must be logged in. Mycelium injects their email address in the x-mycelium-email header.

[[my-service.path]]
group = "authenticated"
path = "/profile"
methods = ["GET", "PUT"]

protected — Full identity required

The user must be logged in and have a resolved profile. Mycelium injects the full profile in x-mycelium-profile. Use this when your service needs to make fine-grained decisions (e.g. “can this user access this specific resource?”).

[[my-service.path]]
group = "protected"
path = "/dashboard/*"
methods = ["GET"]

protectedByRoles — Specific roles required

Only users with at least one of the listed roles can pass. All others get 403.

[[my-service.path]]
group = { protectedByRoles = [{ slug = "admin" }, { slug = "super-admin" }] }
path = "/admin/*"
methods = ["ALL"]

You can also require a specific permission level:

[[my-service.path]]
group = { protectedByRoles = [{ slug = "editor", permission = "write" }] }
path = "/content/edit/*"
methods = ["POST", "PUT", "DELETE"]

[[my-service.path]]
group = { protectedByRoles = [{ slug = "viewer", permission = "read" }] }
path = "/content/view/*"
methods = ["GET"]

Multiple routes on one service

Each [[service-name.path]] block adds a route. Mix security groups freely:

[[user-service]]
host = "users.internal:4000"
protocol = "http"

[[user-service.path]]
group = "authenticated"
path = "/users/me"
methods = ["GET", "PUT"]

[[user-service.path]]
group = "protected"
path = "/users/preferences"
methods = ["GET", "POST"]

[[user-service.path]]
group = { protectedByRoles = [{ slug = "admin" }] }
path = "/users/admin/*"
methods = ["ALL"]

Authenticating Mycelium to your service (secrets)

If your downstream service requires a token or API key from the caller, define a secret and reference it on the route.

Query parameter:

[[legacy-api]]
host = "legacy.internal:8080"
protocol = "http"

[[legacy-api.secret]]
name = "api-key"
queryParameter = { name = "token", token = { env = "LEGACY_API_KEY" } }

[[legacy-api.path]]
group = "public"
path = "/legacy/*"
methods = ["GET"]
secretName = "api-key"

Authorization header:

[[protected-api.secret]]
name = "bearer-token"
authorizationHeader = { name = "Authorization", prefix = "Bearer ", token = { vault = { path = "myc/services/api", key = "token" } } }

Load balancing across multiple hosts

[[api-service]]
hosts = ["api-01.example.com:8080", "api-02.example.com:8080"]
protocol = "https"

[[api-service.path]]
group = "protected"
path = "/api/*"
methods = ["ALL"]

Webhook routes — identity from request body

Some callers (like Telegram) don’t send a JWT. Instead, the user’s identity is in the request body. Use identitySource to handle this.

Webhook routes require allowedSources: before parsing the body, Mycelium checks that the Host header matches an allowed source. This prevents attackers from forging webhook calls.

[[telegram-bot]]
host = "bot-service:3000"
protocol = "http"
allowedSources = ["api.telegram.org"]

[[telegram-bot.path]]
group = "protected"
path = "/telegram/webhook"
methods = ["POST"]
identitySource = "telegram"

With this config, Mycelium extracts the Telegram user ID from the message body, looks up the linked Mycelium account, and injects x-mycelium-profile before forwarding. If the user hasn’t linked their account, Mycelium returns 401 and the message is not forwarded.

See Alternative Identity Providers for the full Telegram setup journey.


Service discovery (AI agents)

Set discoverable = true to make a service visible to AI agents and LLM-based tooling:

[[data-service]]
host = "data.internal:5000"
protocol = "http"
discoverable = true
description = "Customer data API"
openapiPath = "/api/openapi.json"
healthCheckPath = "/health"
capabilities = ["customer-search", "order-history"]
serviceType = "rest-api"

What headers does my service receive?

| Security group | x-mycelium-email | x-mycelium-profile |
|---|---|---|
| authenticated | Yes | No |
| protected | Yes | Yes |
| protectedByRoles | Yes | Yes |

The x-mycelium-profile value is a Base64-encoded, ZSTD-compressed JSON object. Use the Python SDK or decode it manually to read tenant memberships, roles, and access scopes.
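A minimal manual decoder looks roughly like this. To keep the sketch free of third-party dependencies it round-trips with zlib; the real header is ZSTD-compressed, so against Mycelium you would decompress with a zstd binding (for example the zstandard package) instead:

```python
import base64
import json
import zlib

def decode_profile(header_value):
    """Base64-decode, decompress, parse JSON.
    NOTE: zlib is a stand-in here -- swap zlib.decompress for a real
    ZSTD decompressor (e.g. zstandard.ZstdDecompressor().decompress)
    when reading the actual x-mycelium-profile header."""
    raw = base64.b64decode(header_value)
    return json.loads(zlib.decompress(raw))

# Round trip with a made-up profile payload:
profile = {"email": "maria@acme.com", "roles": ["admin"]}
encoded = base64.b64encode(zlib.compress(json.dumps(profile).encode())).decode()
assert decode_profile(encoded) == profile
```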


Complete example

[api.services]

# Public health check
[[health]]
host = "localhost:8080"
protocol = "http"

[[health.path]]
group = "public"
path = "/health"
methods = ["GET"]

# User service — mixed access levels
[[user-service]]
host = "users.internal:4000"
protocol = "http"

[[user-service.path]]
group = "authenticated"
path = "/users/me"
methods = ["GET", "PUT"]

[[user-service.path]]
group = { protectedByRoles = [{ slug = "admin" }] }
path = "/users/admin/*"
methods = ["ALL"]

# Telegram webhook
[[support-bot]]
host = "bot.internal:3000"
protocol = "http"
allowedSources = ["api.telegram.org"]

[[support-bot.path]]
group = "protected"
path = "/telegram/webhook"
methods = ["POST"]
identitySource = "telegram"

Troubleshooting

Route not matching — Check that the path has a wildcard (/api/*) if you want to match subpaths. /api/ only matches that exact path.

401 on a protected route — The JWT or connection string is missing, expired, or invalid. Check the Authorization: Bearer <token> or x-mycelium-connection-string header.

403 on a role-protected route — The user is authenticated but doesn’t have the required role. Check role assignment in the Mycelium admin panel.

Downstream unreachable — Verify host, protocol, and that the service is actually running. Use acceptInsecureRouting = true on the route if the downstream uses a self-signed TLS cert.


Reference — service-level fields

| Field | Required | Description |
|---|---|---|
| host | Yes (or hosts) | Single downstream host with port |
| hosts | Yes (or host) | Multiple hosts for load balancing |
| protocol | Yes | "http" or "https" |
| allowedSources | Required when identitySource is set | Allowed Host headers (supports wildcards) |
| discoverable | No | Expose service to AI agents |
| description | No | Human-readable description |
| openapiPath | No | Path to OpenAPI spec |
| healthCheckPath | No | Health check endpoint |
| capabilities | No | Array of capability tags |
| serviceType | No | e.g. "rest-api" |

Reference — route-level fields

| Field | Required | Description |
|---|---|---|
| group | Yes | Security group (see above) |
| path | Yes | URL path pattern, supports wildcards |
| methods | Yes | HTTP methods, or ["ALL"] |
| secretName | No | Reference to a secret defined at service level |
| identitySource | No | Body-based IdP. Currently: "telegram" |
| acceptInsecureRouting | No | Allow self-signed TLS certs on downstream |

Alternative Identity Providers

By default, Mycelium authenticates users through email + magic link. Alternative IdPs let users prove who they are using an account they already have on another platform — for example, Telegram.

When authentication succeeds, the downstream service receives the same x-mycelium-profile header it would receive from any other authentication method. From the downstream’s perspective, the identity source is irrelevant: a user is a user.


Authentication tokens in Mycelium

Before diving into alternative IdPs, it is important to understand that Mycelium has two different token types, each sent in a different header.

JWT — Authorization: Bearer <jwt>

Issued by email+password login and magic-link verification. A standard JSON Web Token. Clients send it as Authorization: Bearer <jwt>. The gateway verifies the JWT signature to extract the caller’s email, then resolves the full profile.

Connection string — x-mycelium-connection-string: <string>

A Mycelium-native token with the format acc=<uuid>;tid=<uuid>;r=<role>;edt=<datetime>;sig=<hmac>. Issued by Telegram login (and service token generation). Clients send it as the custom header x-mycelium-connection-string: <string>. The gateway fetches the token from the database and resolves the profile from the stored scope.

Both token types are accepted on all admin endpoints and on all downstream routes. The gateway checks for x-mycelium-connection-string first; if absent, it falls back to Authorization: Bearer.

Do not mix them up: a connection string sent as Authorization: Bearer will fail JWT signature validation and return 401.
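The precedence rule and the connection-string layout above can be sketched as follows; the helper names are illustrative, not part of any Mycelium SDK:

```python
def extract_token(headers):
    """Connection string takes precedence; otherwise fall back to Bearer JWT."""
    cs = headers.get("x-mycelium-connection-string")
    if cs:
        return ("connection-string", cs)
    auth = headers.get("authorization", "")
    if auth.startswith("Bearer "):
        return ("jwt", auth[len("Bearer "):])
    return None  # unauthenticated

def parse_connection_string(cs):
    """Split 'acc=...;tid=...;r=...;edt=...;sig=...' into a field map."""
    return dict(part.split("=", 1) for part in cs.split(";"))

fields = parse_connection_string(
    "acc=2f1c0000;tid=a3f10000;r=user;edt=2026-04-21T10:00:00-03:00;sig=deadbeef")
assert fields["r"] == "user"
assert extract_token({"x-mycelium-connection-string": "acc=x;sig=y"})[0] == "connection-string"
assert extract_token({"authorization": "Bearer eyJhbGci"})[0] == "jwt"
```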


How it works — the big picture

Before a user can authenticate via an alternative IdP, two things must be true:

  1. The tenant admin has configured the IdP — each IdP requires credentials specific to that tenant (e.g. a Telegram bot token). Without this, authentication is not possible.

  2. The user has linked their IdP identity to their Mycelium account — this is a one-time step where the user says “my Telegram account is me”. After linking, the user can authenticate via Telegram any time, in any tenant they belong to.

Think of it like adding a phone number to a bank account: the bank (Mycelium) holds your regular credentials, but you can also prove your identity by receiving an SMS to your registered number (Telegram). You register the number once; after that, it just works.


Concepts

Identity linking vs. authentication

An alternative IdP works in two stages:

  1. Linking — The user connects their IdP identity to their Mycelium account. This is done once. The link is stored on the personal account and is global across tenants.

  2. Authentication — The user presents their IdP credential. Mycelium verifies it and either issues a connection string or, for webhook routes, resolves the profile directly from the incoming request body.

A user who has not linked their IdP identity cannot authenticate via that IdP.

Personal vs. subscription accounts

IdP links are stored on personal accounts. A personal account belongs to a person, not to a specific tenant. When a user belongs to multiple tenants (e.g. a contractor who works for two companies on the same Mycelium installation), they link their Telegram once and it works for all tenants they are a member of.

Subscription accounts are tenant-scoped and cannot hold IdP links.

Per-tenant configuration

Some IdPs (Telegram) require credentials that are specific to a tenant — for example, a Telegram bot token. A bot is created by the company, not by the user. Each company (tenant) has its own bot, so each tenant stores its own credentials.

This configuration is required only for operations that verify Telegram credentials: linking a new identity and issuing login tokens. It is not required for body-based identity resolution in webhook routes — that lookup is global (see What works without tenant config below).


Telegram

Prerequisites

  • A Telegram bot created via @BotFather. After creating the bot you receive a bot token (looks like 7123456789:AAF...). Keep it secret.
  • A webhook secret — a random string you choose (16–256 characters). This is not a Telegram credential; it is a shared secret between Mycelium and Telegram so that Mycelium can verify that incoming webhook calls really come from Telegram and not from an attacker.
  • The Mycelium gateway must be reachable from the internet for the webhook use case. For the login-only use case, it only needs to be reachable from the users’ devices.

Admin Journey: Provisioning the Tenant

Who does this: The tenant owner — the person or team responsible for the company’s Mycelium tenant. This is done once, before any user can link or log in.

Concrete example: Acme Corp runs a Mycelium installation. Their IT admin, Carlos, manages the tenant. Carlos created a Telegram bot called @AcmeHRBot via BotFather and received the bot token. He also generated a random webhook secret (openssl rand -hex 32). Now he needs to store these in Mycelium so the gateway can use them.

Step 1 — Store bot credentials in Mycelium

Carlos calls this endpoint using his JWT (issued when he logged in via magic-link):

POST /_adm/tenant-owner/telegram/config
Authorization: Bearer <carlos-jwt>
x-mycelium-tenant-id: a3f1e2d0-1234-4abc-8def-000000000001
Content-Type: application/json

{
  "botToken": "7123456789:AAFexampleBotTokenFromBotFather",
  "webhookSecret": "4b9c2e1a8f3d7e0c5b2a9f6e3d1c8b5a"
}
  • Response: 204 No Content.
  • Both values are encrypted with AES-256-GCM before storage. They are never readable again through the API — if lost, they must be re-submitted.
  • Only accounts with the tenant-owner role can call this endpoint.

After this step, users can link their Telegram to their Mycelium accounts and log in.

Step 2 — Register the webhook with Telegram (webhook use case only)

Skip this step if you only need the login flow (Use Case A below).

Carlos tells Telegram where to send bot updates. He uses the Telegram Bot API directly:

curl -X POST "https://api.telegram.org/bot7123456789:AAFexampleBotTokenFromBotFather/setWebhook" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://gateway.acme.com/auth/telegram/webhook/a3f1e2d0-1234-4abc-8def-000000000001",
    "secret_token": "4b9c2e1a8f3d7e0c5b2a9f6e3d1c8b5a"
  }'

From this point on, every time someone sends a message to @AcmeHRBot, Telegram will POST the update to that URL and include the header X-Telegram-Bot-Api-Secret-Token: 4b9c2e1.... Mycelium verifies that header before doing anything with the update.
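Mycelium performs this check for you, but if a service of yours ever receives Telegram traffic directly, the verification is a constant-time comparison of the header against the shared secret:

```python
import hmac

def is_from_telegram(request_headers, expected_secret):
    """Reject any update whose X-Telegram-Bot-Api-Secret-Token header does
    not match the webhook secret. hmac.compare_digest avoids leaking the
    match position through timing."""
    received = request_headers.get("X-Telegram-Bot-Api-Secret-Token", "")
    return hmac.compare_digest(received, expected_secret)

secret = "4b9c2e1a8f3d7e0c5b2a9f6e3d1c8b5a"
assert is_from_telegram({"X-Telegram-Bot-Api-Secret-Token": secret}, secret)
assert not is_from_telegram({}, secret)   # missing header -> reject
```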


User Journey: Linking a Telegram Identity

Who does this: Each end user, once, using a Telegram Mini App built by the company.

Concrete example: Maria is an Acme employee. She has a Mycelium account (registered via magic link to maria@acme.com) and belongs to Acme’s tenant. Now she wants to use @AcmeHRBot to check her vacation balance. Before she can do anything with the bot, she needs to link her Telegram account to her Mycelium account.

Carlos built a Telegram Mini App (a small web page that opens inside Telegram). When Maria opens it, the app reads the cryptographically signed initData that Telegram injects into every Mini App session. This initData proves to Mycelium that the Telegram user opening the app is who they claim to be, because it is signed with the bot token.

The Mini App sends (using Maria’s JWT from her original magic-link login):

POST /auth/telegram/link
Authorization: Bearer <maria-jwt>
x-mycelium-tenant-id: a3f1e2d0-1234-4abc-8def-000000000001
Content-Type: application/json

{
  "initData": "query_id=AAH...&user=%7B%22id%22%3A98765432%2C%22username%22%3A%22maria_acme%22...&hash=abc123..."
}

What Mycelium does:

  1. Verifies the HMAC in initData using Acme’s bot token — confirms this is a real Telegram user.
  2. Extracts Maria’s Telegram user ID (98765432) from initData.
  3. Stores { id: 98765432, username: "maria_acme" } in Maria’s personal account metadata.
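Step 1 follows Telegram's published Mini App verification scheme: derive a secret key by HMAC-SHA256 of the bot token keyed with the literal string "WebAppData", then HMAC the sorted field list and compare against the hash field. A sketch:

```python
import hashlib
import hmac
from urllib.parse import parse_qsl

def verify_init_data(init_data, bot_token):
    """Validate Telegram Mini App initData.
    The data-check string is every decoded 'key=value' pair except
    'hash', sorted by key and joined with newlines."""
    fields = dict(parse_qsl(init_data))
    received_hash = fields.pop("hash", "")
    check_string = "\n".join(f"{k}={v}" for k, v in sorted(fields.items()))
    secret_key = hmac.new(b"WebAppData", bot_token.encode(), hashlib.sha256).digest()
    computed = hmac.new(secret_key, check_string.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(computed, received_hash)
```

A failed check surfaces to the caller as a 401, as described in this chapter's troubleshooting section.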

Maria only does this once. The link is stored on her personal account and is valid across all tenants she belongs to.

  • Returns 409 if Maria already has a Telegram link, or if Telegram ID 98765432 is already linked to another Mycelium account.
  • Returns 422 if Carlos hasn’t completed admin Step 1.

To remove the link later:

DELETE /auth/telegram/link
Authorization: Bearer <maria-jwt>

Use Case A — Employee Mini App Calling Protected APIs

The problem: Acme built a web app inside Telegram (@AcmeHRBot) where employees can check their remaining vacation days, submit expense reports, and view assigned tasks. The backend API that serves this data is protected by Mycelium: it only responds to authenticated users and injects their profile into every request. The Mini App cannot ask Maria to type her email and password — she’s already inside Telegram. Telegram is the identity.

The solution: The Mini App exchanges Telegram’s initData for a Mycelium connection string. That connection string is sent in the x-mycelium-connection-string header for all subsequent API calls. The backend API sees a normal authenticated request and never knows the user logged in via Telegram.

Full journey

Maria opens the Mini App inside @AcmeHRBot on her phone
  │
  │  Telegram injects window.Telegram.WebApp.initData into the Mini App
  │  (this string is signed by Telegram and expires in ~24 hours)
  │
  ▼
Mini App calls the login endpoint (no credentials needed — it's public):

  POST /auth/telegram/login/a3f1e2d0-1234-4abc-8def-000000000001
  { "initData": "query_id=AAH...&user=...&hash=abc123..." }
  │
  │  Mycelium:
  │    1. Verifies the HMAC using Acme's bot token → confirms Maria is who she claims
  │    2. Extracts Telegram user ID 98765432
  │    3. Looks up which Mycelium account has this Telegram ID → finds maria@acme.com
  │    4. Issues a connection string scoped to Acme's tenant
  │
  ▼
  { "connectionString": "acc=...;tid=...;sig=...", "expiresAt": "2026-04-21T10:00:00-03:00" }
  │
  │  Mini App stores the connection string in memory for this session
  │
  ▼
Mini App calls the HR API:

  GET /hr-api/vacation-balance
  x-mycelium-connection-string: acc=...;tid=...;sig=...
  │
  │  Mycelium gateway:
  │    - Validates the connection string
  │    - Resolves Maria's full profile (account, tenant membership, roles)
  │    - Injects x-mycelium-profile into the forwarded request
  │
  ▼
HR API service receives:
  GET /vacation-balance
  x-mycelium-profile: <base64 compressed JSON with Maria's identity, roles, tenant>
  │
  │  HR API reads the profile, finds Maria's account ID, returns her vacation balance
  │
  ▼
Mini App displays: "You have 12 vacation days remaining."

Login endpoint reference

POST /auth/telegram/login/{tenant_id}
Content-Type: application/json

{
  "initData": "<Telegram Mini App initData string>"
}

Response on success:

{
  "connectionString": "acc=uuid;tid=uuid;r=user;edt=2026-04-21T10:00:00-03:00;sig=...",
  "expiresAt": "2026-04-21T10:00:00-03:00"
}
  • Public endpoint — no Authorization header required.
  • Returns 401 if initData is invalid or expired.
  • Returns 404 if Telegram user ID is not linked to any account in this tenant.
  • Returns 422 if the tenant has not completed admin Step 1.

Gateway configuration

No special gateway config is needed for this use case. The HR API uses the same groups as any other service:

[[hr-api]]
host = "hr-service:3000"
protocol = "http"

[[hr-api.path]]
group = "protected"            # requires a valid profile
path = "/hr-api/*"
methods = ["ALL"]

Use Case B — Customer Support Bot with Authenticated Messages

The problem: Acme runs a customer support bot (@AcmeSupportBot). When a customer writes “my order hasn’t arrived”, the support handler needs to know who is writing — their account, subscription tier, and open tickets. Without identity, the bot can only reply generically. With identity, it can reply “Hi Maria, your order #4521 shipped yesterday and arrives tomorrow.”

The support handler is a downstream service behind Mycelium. It cannot issue JWTs or do its own authentication — it just wants to receive the message and know who sent it. Mycelium handles the identity resolution transparently.

The solution: Configure a gateway route with identitySource = "telegram". When Telegram sends an update, Mycelium extracts the sender’s Telegram ID from the message body, looks up their linked Mycelium account, and injects their profile before forwarding the update to the support handler.

The support handler never sees unauthenticated messages on this route. If the sender hasn’t linked their Telegram account, Mycelium returns 401 and the message is not forwarded.

Full journey

Customer (Maria) sends "my order hasn't arrived" to @AcmeSupportBot
  │
  │  Telegram servers POST the update to Mycelium:
  │
  ▼
POST /auth/telegram/webhook/a3f1e2d0-1234-4abc-8def-000000000001
X-Telegram-Bot-Api-Secret-Token: 4b9c2e1a8f3d7e0c5b2a9f6e3d1c8b5a
{
  "update_id": 100000001,
  "message": {
    "from": { "id": 98765432, "username": "maria_acme" },
    "text": "my order hasn't arrived"
  }
}
  │
  │  Mycelium verifies the webhook secret → confirms this really came from Telegram
  │  Responds 200 OK immediately (Telegram requires this, or it will retry)
  │
  ▼
Gateway route (identitySource = "telegram") takes over:
  │
  │  1. Buffers the request body
  │  2. Extracts from.id = 98765432
  │  3. Looks up which Mycelium account has Telegram ID 98765432 → maria@acme.com
  │  4. Loads Maria's full profile (account, tenant, roles)
  │  5. Injects x-mycelium-profile into the forwarded request
  │
  ▼
Support handler receives:
  POST /telegram/webhook
  x-mycelium-profile: <base64 compressed JSON>
  {
    "update_id": 100000001,
    "message": { "from": { "id": 98765432, ... }, "text": "my order hasn't arrived" }
  }
  │
  │  Support handler reads the profile → finds Maria's account → fetches her orders
  │  Replies via Telegram Bot API: "Hi Maria, your order #4521 shipped yesterday."

Gateway configuration

[[acme-support-bot]]
host = "support-handler:3000"
protocol = "http"
allowedSources = ["api.telegram.org"]    # only accept requests from Telegram's servers

[[acme-support-bot.path]]
group = "protected"
path = "/telegram/webhook"
methods = ["POST"]
identitySource = "telegram"             # resolve identity from the message body

Two fields are required:

  • allowedSources — before even parsing the body, Mycelium checks the Host header of the incoming request. Only requests whose Host matches this list are processed. This prevents an attacker from POSTing fake updates directly to your service. Supports wildcards: "*.telegram.org", "10.0.0.*".

  • identitySource = "telegram" — tells Mycelium to extract the sender’s identity from the body (via message.from.id or the equivalent field in other update types) instead of looking for an Authorization or x-mycelium-connection-string header.

If allowedSources is missing and identitySource is set, the gateway rejects the request.

Important constraints

  • The user must have previously linked their Telegram account. Messages from users who haven’t linked return 401. Consider having the bot reply with a link to the Mini App where the user can complete the linking step.
  • The group field still applies. Use protected if all you need is the user’s profile. Use protectedByRoles if you want to further restrict which users can interact with the bot.
  • Your support handler receives the full Telegram Update JSON unchanged in the request body. The profile is injected as a header, not embedded in the body.

What works without tenant config

The problem: Acme has two tenants — acme-hr and acme-operations. The HR tenant has Telegram configured; the operations tenant does not. Maria belongs to both tenants and has already linked her Telegram via the HR tenant’s Mini App.

What can Maria do in the operations tenant?

| Operation | Requires tenant config? | Maria in acme-operations |
|---|---|---|
| Link Telegram identity | Yes — verifies initData HMAC against the tenant’s bot token | Cannot link via this tenant |
| Login via Telegram | Yes — same HMAC verification | Cannot log in via Telegram to this tenant |
| Appear in webhook route identity resolution | No — global lookup by Telegram user ID | Works — her link from acme-hr is found globally |

The key insight: the link is stored on Maria’s personal account, not on the tenant. When Mycelium receives a webhook update from operations’ bot and finds from.id = 98765432, it looks up globally — finds Maria’s personal account — and injects her profile. The operations tenant never needed to know about Telegram.

acme-hr  (Telegram configured)          acme-operations  (no Telegram config)
   │                                            │
   │  Maria links via HR Mini App              │  Operations has a support bot
   │  → Telegram ID 98765432                   │  → route: identitySource = "telegram"
   │    stored on Maria's personal account     │
   │                                            │  Telegram sends update with from.id 98765432
   │                                            │  → Mycelium finds Maria globally ✓
   │                                            │  → x-mycelium-profile injected ✓
   │                                            │
   │  Maria cannot link via operations ✗       │  Maria cannot login via Telegram here ✗
   │  Maria cannot login via Telegram here ✗   │  (no bot token to verify initData)

In practice: If you are building a webhook-only integration (Use Case B) for a tenant, you do not need to configure Telegram for that tenant — as long as users have linked in some other tenant. If you need login (Use Case A) for that tenant, the tenant must have its own bot and must complete the admin provisioning step.


Comparison: Use Case A vs. Use Case B

| | Use Case A — Login + API calls | Use Case B — Webhook identity resolution |
|---|---|---|
| Example | Mini App calling an HR API | Support bot knowing who sent a message |
| Who calls Mycelium | Your Mini App / AI agent | Telegram’s servers |
| Authentication mechanism | initData → connection string | Webhook secret + sender ID from body |
| Tenant Telegram config required | Yes — every tenant used for login | No — global identity lookup |
| Gateway route config | Standard authenticated/protected | allowedSources + identitySource = "telegram" |
| Webhook registration with Telegram | Not required | Required (setWebhook) |
| User must have linked identity | Yes — via any configured tenant | Yes — via any configured tenant |
| Connection string issued | Yes — reused for the session | No — identity re-resolved per request |
| What the downstream receives | x-mycelium-profile on any route | Full Telegram Update body + x-mycelium-profile |

Troubleshooting

422 telegram_not_configured_for_tenant
The tenant has not completed admin Step 1 (POST /_adm/tenant-owner/telegram/config). This error appears on link and login. It does not appear on webhook routes (identity resolution there is global and does not need tenant config).
User is a guest in a tenant with no Telegram config — what still works?
Webhook routes with identitySource = "telegram" still work if the user previously linked their Telegram identity via any other tenant. Login and linking via the unconfigured tenant are not possible until the tenant owner completes admin Step 1.
401 invalid_telegram_init_data
The initData HMAC check failed. Causes: wrong bot token stored in Mycelium, initData expired (Telegram signs initData with a short-lived timestamp), or the string was modified in transit. Re-read initData from window.Telegram.WebApp.initData and retry immediately.
404 on login
The Telegram user ID extracted from initData is not linked to any Mycelium account in this tenant. The user must complete the linking step first (open the Mini App that calls POST /auth/telegram/link).
401 on webhook route — message not forwarded
The sender has not linked their Telegram account. The gateway returns 401 and does not forward the update to your service. Handle this in your bot by detecting that the webhook call did not reach your service (or implement a fallback public route) and sending the user a message with a link to the linking Mini App.
Webhook 401 invalid_webhook_secret
The X-Telegram-Bot-Api-Secret-Token header sent by Telegram does not match the secret stored in Mycelium. Verify that the webhookSecret you supplied in Step 1 exactly matches the secret_token you passed to setWebhook. They must be identical strings.
allowedSources not working as expected
allowedSources checks the Host header, not Origin. Telegram’s servers send requests with Host: api.telegram.org. Origin is a browser header sent by CORS preflight requests — Telegram never sends it. CORS (allowedOrigins) is completely independent and irrelevant for webhook routes. See Downstream APIs.

Response Callbacks

After Mycelium forwards a request to a downstream service and receives a response, it can optionally run callbacks — side effects triggered by the response. Use callbacks to log metrics, notify third-party services, or trigger async workflows without modifying the downstream service.

Callbacks never modify the response. In the default fire-and-forget mode they run after the response has already been returned to the caller; the parallel and sequential modes make the gateway wait for them to finish (see Execution mode below).


How it works

Client → Mycelium → Downstream service
                         ↓ response
Mycelium returns response to client
       ↓ (in parallel or fire-and-forget)
Callbacks execute

Defining a callback

Callbacks are defined in config.toml under [api.callbacks], then referenced by name on individual routes.

Rhai callback — inline script

Rhai is an embedded scripting language. Write the script directly in the config file. No external files or interpreters required.

[api.callbacks]

[[callback]]
name = "error-monitor"
type = "rhai"
script = """
if status_code >= 500 {
    log_error("Server error: " + status_code);
}
if duration_ms > 1000 {
    log_warn("Slow response: " + duration_ms + "ms");
}
"""
timeoutMs = 1000

Available variables inside the script: status_code, duration_ms, headers (map), method, upstream_path. Logging functions: log_info, log_warn, log_error.

HTTP callback — POST the response context to a URL

[[callback]]
name = "audit-log"
type = "http"
url = "https://audit.internal/events"
method = "POST"          # default
timeoutMs = 3000
retryCount = 3
retryIntervalMs = 1000

Python callback — run a script

[[callback]]
name = "metrics-push"
type = "python"
scriptPath = "/opt/mycelium/callbacks/push_metrics.py"
pythonPath = "/usr/bin/python3.12"
timeoutMs = 5000

JavaScript callback — run a Node.js script

[[callback]]
name = "slack-notify"
type = "javascript"
scriptPath = "/opt/mycelium/callbacks/notify_slack.js"
nodePath = "/usr/bin/node"
timeoutMs = 3000

Attaching a callback to a route

Reference the callback by name in the route’s callbacks field:

[api.services]

[[my-service]]
host = "localhost:3000"
protocol = "http"

[[my-service.path]]
group = "protected"
path = "/api/*"
methods = ["POST", "PUT"]
callbacks = ["audit-log", "metrics-push"]

Multiple callbacks can be attached to the same route.


Filtering which responses trigger the callback

By default, a callback runs for every response. Use filters to narrow this:

[[callback]]
name = "error-alert"
type = "http"
url = "https://alerts.internal/errors"

# Only trigger on 5xx responses
triggeringStatusCodes = { oneof = [500, 502, 503, 504] }

# Only trigger on POST and DELETE
triggeringMethods = { oneof = ["POST", "DELETE"] }

# Only trigger if the response has a specific header
triggeringHeaders = { oneof = { "X-Error-Code" = "PAYMENT_FAILED" } }

Filter statement types:

  • oneof — at least one value must match
  • allof — all values must match
  • noneof — none of the values may match
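The three statement types amount to set operations between the configured values and what the response actually carried. A sketch of the semantics, assuming this reading of the filter rules:

```python
def passes_filter(statement, observed):
    """oneof: any overlap; allof: all values present; noneof: no overlap.
    'observed' is the set of values seen on the response (e.g. its
    status code, method, or header key/value pairs)."""
    if "oneof" in statement:
        return bool(observed & set(statement["oneof"]))
    if "allof" in statement:
        return set(statement["allof"]) <= observed
    if "noneof" in statement:
        return not (observed & set(statement["noneof"]))
    return True  # no filter configured -> callback always triggers

assert passes_filter({"oneof": [500, 502, 503, 504]}, {502})
assert not passes_filter({"oneof": [500, 502]}, {200})
assert not passes_filter({"noneof": ["POST", "DELETE"]}, {"POST"})
```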

Execution mode

Control how callbacks run globally:

[api]
callbackExecutionMode = "fireAndForget"  # default

| Mode | Behavior |
|---|---|
| fireAndForget | Callbacks run in background tasks; the gateway does not wait for them |
| parallel | All callbacks run concurrently; the gateway waits for all to finish |
| sequential | Callbacks run one after another; the gateway waits |

Use fireAndForget (default) when callback latency should not affect response time. Use sequential when order matters (e.g., log before notify).


What the callback receives

Each callback receives a context object with information about the completed request:

| Field | Description |
|---|---|
| status_code | HTTP status code returned by the downstream service |
| response_headers | Response headers |
| duration_ms | Time from gateway forwarding to downstream response |
| upstream_path | The path the client called |
| downstream_url | The URL Mycelium forwarded to |
| method | HTTP method |
| timestamp | ISO 8601 timestamp |
| request_id | Value of x-mycelium-request-id if present |
| client_ip | Caller’s IP address |
| user_info | Authenticated user info (email, account ID) — present when route is authenticated or higher |
| security_group | The security group that was applied |

For HTTP callbacks, this context is sent as a JSON POST body. For Python / JavaScript callbacks, the context is passed as a JSON-serialized argument. For Rhai callbacks, these fields are available as global variables in the script.


Reference — callback fields

| Field | Type | Required | Description |
|---|---|---|---|
| name | string | Yes | Unique name, used to reference the callback from routes |
| type | rhai / http / python / javascript | Yes | Callback engine |
| timeoutMs | integer | No | Max execution time in ms (default: 5000). Ignored in fireAndForget mode |
| retryCount | integer | No | How many times to retry on failure (default: 3) |
| retryIntervalMs | integer | No | Wait between retries in ms (default: 1000) |
| script | string | Rhai only | Inline Rhai script source |
| url | string | HTTP only | Target URL |
| method | string | HTTP only | POST, PUT, PATCH, or DELETE (default: POST) |
| scriptPath | path | Python / JavaScript only | Path to script file |
| pythonPath | path | Python only | Interpreter path (default: system python3) |
| nodePath | path | JavaScript only | Node.js path (default: system node) |
| triggeringMethods | object | No | Filter by HTTP method (oneof, allof, noneof) |
| triggeringStatusCodes | object | No | Filter by response status code |
| triggeringHeaders | object | No | Filter by response header key/value |

Outbound Webhooks

Mycelium can push notifications to external systems when specific events occur inside the gateway. This is the outbound webhook system — distinct from the Telegram webhook described in Alternative Identity Providers, which handles inbound calls from Telegram’s servers.


How it works

When an event fires (e.g. a new user account is created), Mycelium delivers a POST request to all registered webhook URLs that are listening for that event. The delivery runs in the background and does not block the original operation.

Event fires inside Mycelium
  │
  ▼
Webhook dispatcher looks up registered listeners for this event type
  │
  ▼
POST <webhook_url>
Content-Type: application/json
{ "event": "userAccount.created", "payload": { ... } }

Delivery is retried up to a configured maximum number of attempts (core.webhook.maxAttempts in config.toml).


Configuration

Global webhook delivery settings are in config.toml under [core.webhook]:

[core.webhook]
acceptInvalidCertificates = false    # set true in dev with self-signed certs
consumeIntervalInSecs = 30           # how often the dispatcher polls for pending deliveries
consumeBatchSize = 25                # how many deliveries to process per poll cycle
maxAttempts = 5                      # retry limit before marking a delivery as failed

Registering a webhook

Webhooks are managed through the systemManager.webhooks.* JSON-RPC methods or the equivalent REST routes under /_adm/system-manager/webhooks/. Requires the system-manager role.

Via JSON-RPC

{
  "jsonrpc": "2.0",
  "method": "systemManager.webhooks.create",
  "params": {
    "url": "https://notify.internal/mycelium-events",
    "trigger": "userAccount.created",
    "isActive": true
  },
  "id": 1
}

Via REST

POST /_adm/system-manager/webhooks
Authorization: Bearer <jwt>
Content-Type: application/json

{
  "url": "https://notify.internal/mycelium-events",
  "trigger": "userAccount.created",
  "isActive": true
}

Event types

| Event | Fires when |
|---|---|
| subscriptionAccount.created | A new subscription account is created within a tenant |
| subscriptionAccount.updated | A subscription account’s name or flags are changed |
| subscriptionAccount.deleted | A subscription account is deleted |
| userAccount.created | A new personal user account is registered |
| userAccount.updated | A user account’s name is changed |
| userAccount.deleted | A user account is deleted |

Managing webhooks

| JSON-RPC method | REST path | Description |
|---|---|---|
| systemManager.webhooks.create | POST /_adm/system-manager/webhooks | Register a new webhook |
| systemManager.webhooks.list | GET /_adm/system-manager/webhooks | List registered webhooks |
| systemManager.webhooks.update | PATCH /_adm/system-manager/webhooks/{id} | Update URL, trigger, or active status |
| systemManager.webhooks.delete | DELETE /_adm/system-manager/webhooks/{id} | Remove a webhook |

Delivery payload

Each POST to the registered URL includes a JSON body with the event type and a payload describing what changed. The exact payload shape depends on the event type.

Example — userAccount.created:

{
  "event": "userAccount.created",
  "occurredAt": "2026-04-20T14:30:00Z",
  "payload": {
    "accountId": "a1b2c3d4-...",
    "email": "alice@example.com",
    "name": "Alice"
  }
}
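
On the receiving side, an endpoint only has to parse the body and branch on the event field. A minimal stdlib sketch (the handler mapping, responses, and port are illustrative, not part of Mycelium):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def dispatch(body: dict) -> str:
    """Route a webhook delivery to a handler based on its event type."""
    handlers = {
        "userAccount.created": lambda p: f"welcome {p.get('email')}",
        "userAccount.deleted": lambda p: f"cleanup {p.get('accountId')}",
    }
    handler = handlers.get(body.get("event"))
    return handler(body.get("payload", {})) if handler else "ignored"

class WebhookReceiver(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        result = dispatch(json.loads(self.rfile.read(length)))
        self.send_response(200)  # a non-2xx response would leave the delivery pending for retry
        self.end_headers()
        self.wfile.write(result.encode())

# To run standalone:
# HTTPServer(("0.0.0.0", 9000), WebhookReceiver).serve_forever()
```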

Security

Webhook URLs should use HTTPS. Mycelium does not add a signature header to outbound webhook calls by default. If your endpoint needs to verify the source, place it behind a route that requires a secret (see Downstream APIs).

To accept self-signed certificates during development, set acceptInvalidCertificates = true in [core.webhook].

Error Codes

Mycelium has a built-in error code registry. Error codes give structured names to domain-specific errors so that clients can handle them programmatically rather than parsing error message strings.


What error codes are

A Mycelium error code is a short, opaque identifier (e.g. MYC00023) paired with a human-readable message and optional detail text. When a use-case returns a domain error with a code, Mycelium includes that code in the HTTP response body:

{
  "message": "Account not found.",
  "code": "MYC00023",
  "details": "No account with this ID exists in the requested tenant."
}

Clients that need to distinguish between, say, “account not found” and “account archived” can switch on code rather than parsing the message string.
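
In client code that means branching on code and keeping message for logging only. A sketch (MYC00023 is taken from the example above; all other codes and action names are hypothetical):

```python
def classify_error(body: dict) -> str:
    """Map a Mycelium error response body to a client-side action name."""
    actions = {
        "MYC00023": "prompt_account_selection",  # account not found (example above)
        "SVC00001": "backoff_and_retry",         # hypothetical custom code
    }
    return actions.get(body.get("code"), "show_generic_error")
```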


Native error codes

The built-in (native) error codes are seeded into the database on first run using the CLI:

myc-cli native-errors init

This command is run once during installation. It populates the database with all error codes that the core domain layer uses internally. Without this step, error responses that carry domain codes will have no human-readable message attached.

See CLI Reference for full usage.


Custom error codes

You can define additional error codes for your own downstream services. This lets you publish a shared error vocabulary between the gateway and the services behind it.

Custom codes are managed through the systemManager.errorCodes.* JSON-RPC methods or the equivalent REST routes. Requires the system-manager role.

Create a custom error code

{
  "jsonrpc": "2.0",
  "method": "systemManager.errorCodes.create",
  "params": {
    "code": "SVC00001",
    "message": "User has exceeded their rate limit.",
    "details": "The account has made more requests than allowed in the current window."
  },
  "id": 1
}

List all error codes

{
  "jsonrpc": "2.0",
  "method": "systemManager.errorCodes.list",
  "params": {},
  "id": 1
}

Error code API reference

| JSON-RPC method | Description |
|---|---|
| systemManager.errorCodes.create | Register a new error code |
| systemManager.errorCodes.list | List all error codes (native + custom) |
| systemManager.errorCodes.get | Get a single error code by code string |
| systemManager.errorCodes.updateMessageAndDetails | Update an existing code’s message and details |
| systemManager.errorCodes.delete | Remove a custom error code |

Native error codes (prefixed MYC) cannot be deleted — only their message and details can be updated if you want to localize them.

MCP — AI Agent Integration

Mycelium exposes a Model Context Protocol (MCP) endpoint that lets AI assistants (Claude, GPT, and any MCP-compatible agent) call your downstream APIs as tools — without needing direct access to your services.


What is MCP?

MCP is an open protocol for connecting AI assistants to external tools and data. With Mycelium’s MCP support, an AI agent can:

  • Discover which operations your downstream services expose.
  • Call those operations on behalf of an authenticated user.
  • Receive the responses and incorporate them into its reply to the user.

The AI never bypasses Mycelium’s authentication or authorization. Every tool call is subject to the same security checks as a direct API request from a client.


Prerequisites

For MCP to discover operations, your downstream services must:

  1. Expose an OpenAPI spec at a known path.
  2. Be registered in Mycelium’s config with discoverable = true and openapiPath set.
[[my-service]]
host = "api.internal:4000"
protocol = "http"
discoverable = true
openapiPath = "/api/openapi.json"
description = "Customer data API"
capabilities = ["customer-search", "order-history"]

[[my-service.path]]
group = "protected"
path = "/api/*"
methods = ["ALL"]

The MCP endpoint

POST /mcp

This single endpoint handles all MCP JSON-RPC requests. It supports:

| MCP method | What it does |
|---|---|
| initialize | Returns Mycelium’s MCP server info and capabilities |
| tools/list | Returns all operations discovered from discoverable services |
| tools/call | Executes a specific operation through the gateway |

How tool calls work

When an AI calls tools/call, Mycelium:

  1. Resolves the operation to the registered downstream service.
  2. Validates the caller’s token (forwarded from the original MCP request header).
  3. Forwards the call through the gateway — including profile injection and all security checks.
  4. Returns the downstream response as a tool result.

The AI passes its Authorization: Bearer <jwt> or x-mycelium-connection-string header in the MCP request, and Mycelium forwards it when calling the downstream service.


Connecting an AI assistant to Mycelium

Point the AI agent’s MCP server URL to your Mycelium instance:

http://your-gateway:8080/mcp

In Claude Desktop’s MCP configuration:

{
  "mcpServers": {
    "mycelium": {
      "url": "http://your-gateway:8080/mcp"
    }
  }
}

The assistant will authenticate as the user whose token is configured, and will only be able to call operations that user is authorized to access.


Tool naming

Mycelium builds tool names deterministically from the service name and OpenAPI operation path: {service_name}__{http_method}__{path_slug}.

For example, an operation GET /api/customers/{id} on service customer-api becomes the tool name customer-api__get__api_customers_id.
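
The naming rule can be reproduced when you need to predict a tool name ahead of time. A sketch of the slug rule as described (drop the path-parameter braces, then join segments with underscores; the real implementation may differ in edge cases):

```python
def tool_name(service: str, method: str, path: str) -> str:
    """Build {service_name}__{http_method}__{path_slug} from an OpenAPI operation."""
    slug = path.strip("/").replace("{", "").replace("}", "").replace("/", "_")
    return f"{service}__{method.lower()}__{slug}"
```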


Notes

  • MCP support requires discoverable = true and a valid openapiPath on at least one service.
  • Operations on non-discoverable services are never exposed to the MCP endpoint.
  • The MCP endpoint itself does not require authentication to connect, but individual tools/call requests are forwarded with the caller’s token — so unauthorized calls will be rejected with 401/403 by the downstream route’s security group.

JSON-RPC Interface

Mycelium exposes all administrative operations through a JSON-RPC 2.0 interface at POST /_adm/rpc. This endpoint accepts both single requests and batched arrays of requests.

The JSON-RPC interface mirrors the REST admin API — every operation available through the REST routes is also available as a JSON-RPC method. Prefer JSON-RPC when building programmatic clients, automation scripts, or AI agent integrations.


Transport

POST /_adm/rpc
Authorization: Bearer <jwt>          # or x-mycelium-connection-string
Content-Type: application/json

Single request:

{
  "jsonrpc": "2.0",
  "method": "beginners.profile.get",
  "params": {},
  "id": 1
}

Batch request (array):

[
  { "jsonrpc": "2.0", "method": "beginners.profile.get", "params": {}, "id": 1 },
  { "jsonrpc": "2.0", "method": "gatewayManager.routes.list", "params": {}, "id": 2 }
]
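
Building such payloads programmatically is mostly bookkeeping around id values. A minimal client sketch (the endpoint and headers are as documented above; calls is a list of method/params pairs):

```python
import json
import urllib.request

def build_batch(calls: list[tuple[str, dict]]) -> list[dict]:
    """Turn (method, params) pairs into a JSON-RPC 2.0 batch with sequential ids."""
    return [
        {"jsonrpc": "2.0", "method": method, "params": params, "id": i}
        for i, (method, params) in enumerate(calls, start=1)
    ]

def send(url: str, token: str, calls: list[tuple[str, dict]]) -> list[dict]:
    """POST a batch to /_adm/rpc and return the array of responses."""
    req = urllib.request.Request(
        url,
        data=json.dumps(build_batch(calls)).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # network call; not exercised here
        return json.loads(resp.read())
```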

Discovery

rpc.discover

Returns the OpenRPC specification for this server — the full list of methods, their parameter schemas, and result schemas.

{ "jsonrpc": "2.0", "method": "rpc.discover", "params": {}, "id": 1 }

Method Reference

Methods are organized by namespace. The namespace corresponds to the role required to call them (see Account Types and Roles).

managers — Platform-wide operations

Requires: staff or manager role.

| Method | Description |
|---|---|
| managers.accounts.createSystemAccount | Create a system-level account |
| managers.guestRoles.createSystemRoles | Create platform-wide guest roles |
| managers.tenants.create | Create a new tenant |
| managers.tenants.list | List all tenants |
| managers.tenants.delete | Delete a tenant |
| managers.tenants.includeTenantOwner | Assign a tenant owner to a tenant |
| managers.tenants.excludeTenantOwner | Remove a tenant owner from a tenant |

accountManager — Account manager operations

Requires: accounts-manager role.

| Method | Description |
|---|---|
| accountManager.guests.guestToChildrenAccount | Invite a user as a guest to a child account |
| accountManager.guestRoles.listGuestRoles | List available guest roles |
| accountManager.guestRoles.fetchGuestRoleDetails | Get details of a specific guest role |

gatewayManager — Gateway inspection

Requires: gateway-manager role.

| Method | Description |
|---|---|
| gatewayManager.routes.list | List all registered routes |
| gatewayManager.services.list | List all registered downstream services |
| gatewayManager.tools.list | List all discoverable tools exposed to AI agents |

beginners — Self-service / user operations

Requires: authenticated (any logged-in user). No admin role needed.

Accounts

| Method | Description |
|---|---|
| beginners.accounts.create | Create a personal account |
| beginners.accounts.get | Get own account details |
| beginners.accounts.updateName | Update own display name |
| beginners.accounts.delete | Delete own account |

Profile

| Method | Description |
|---|---|
| beginners.profile.get | Get own resolved profile (tenants, roles, access scopes) |

Tenants

| Method | Description |
|---|---|
| beginners.tenants.getPublicInfo | Get public metadata for a tenant |

Guests

| Method | Description |
|---|---|
| beginners.guests.acceptInvitation | Accept a guest invitation to a tenant |

Tokens

| Method | Description |
|---|---|
| beginners.tokens.create | Create a connection string token |
| beginners.tokens.list | List own active tokens |
| beginners.tokens.revoke | Revoke a specific token |
| beginners.tokens.delete | Delete a token |

Metadata

| Method | Description |
|---|---|
| beginners.meta.create | Create account metadata entry |
| beginners.meta.update | Update account metadata entry |
| beginners.meta.delete | Delete account metadata entry |

Users and authentication

| Method | Description |
|---|---|
| beginners.users.create | Register a new user credential |
| beginners.users.checkTokenAndActivateUser | Activate user via email token |
| beginners.users.startPasswordRedefinition | Start password reset flow |
| beginners.users.checkTokenAndResetPassword | Complete password reset |
| beginners.users.checkEmailPasswordValidity | Validate credentials |
| beginners.users.totpStartActivation | Begin TOTP 2FA setup |
| beginners.users.totpFinishActivation | Complete TOTP 2FA setup |
| beginners.users.totpCheckToken | Verify a TOTP code |
| beginners.users.totpDisable | Disable TOTP 2FA |

guestManager — Guest role management

Requires: guests-manager role.

| Method | Description |
|---|---|
| guestManager.guestRoles.create | Create a guest role |
| guestManager.guestRoles.list | List guest roles |
| guestManager.guestRoles.delete | Delete a guest role |
| guestManager.guestRoles.updateNameAndDescription | Update role name and description |
| guestManager.guestRoles.updatePermission | Update role permission level |
| guestManager.guestRoles.insertRoleChild | Add a child role to a parent |
| guestManager.guestRoles.removeRoleChild | Remove a child role |

systemManager — System-level management

Requires: system-manager role.

Error codes

| Method | Description |
|---|---|
| systemManager.errorCodes.create | Create a custom error code |
| systemManager.errorCodes.list | List all error codes |
| systemManager.errorCodes.get | Get a specific error code |
| systemManager.errorCodes.updateMessageAndDetails | Update error code message and details |
| systemManager.errorCodes.delete | Delete an error code |

Webhooks

| Method | Description |
|---|---|
| systemManager.webhooks.create | Register an outbound webhook |
| systemManager.webhooks.list | List registered webhooks |
| systemManager.webhooks.update | Update a webhook |
| systemManager.webhooks.delete | Delete a webhook |

subscriptionsManager — Subscription account management

Requires: subscriptions-manager role.

Accounts

| Method | Description |
|---|---|
| subscriptionsManager.accounts.createSubscriptionAccount | Create a subscription account |
| subscriptionsManager.accounts.createRoleAssociatedAccount | Create a role-associated account |
| subscriptionsManager.accounts.list | List subscription accounts |
| subscriptionsManager.accounts.get | Get a specific subscription account |
| subscriptionsManager.accounts.updateNameAndFlags | Update account name and flags |
| subscriptionsManager.accounts.propagateSubscriptionAccount | Propagate account across tenants |

Guests

| Method | Description |
|---|---|
| subscriptionsManager.guests.listLicensedAccountsOfEmail | List licensed accounts for an email |
| subscriptionsManager.guests.guestUserToSubscriptionAccount | Invite a user to a subscription account |
| subscriptionsManager.guests.updateFlagsFromSubscriptionAccount | Update guest flags |
| subscriptionsManager.guests.revokeUserGuestToSubscriptionAccount | Revoke a guest invitation |
| subscriptionsManager.guests.listGuestOnSubscriptionAccount | List guests on a subscription account |

Guest roles

| Method | Description |
|---|---|
| subscriptionsManager.guestRoles.list | List guest roles for a subscription |
| subscriptionsManager.guestRoles.get | Get a specific guest role |

Tags

| Method | Description |
|---|---|
| subscriptionsManager.tags.create | Create a tag |
| subscriptionsManager.tags.update | Update a tag |
| subscriptionsManager.tags.delete | Delete a tag |

tenantManager — Tenant internal management

Requires: tenant-manager role.

Accounts

| Method | Description |
|---|---|
| tenantManager.accounts.createSubscriptionManagerAccount | Create a subscription manager account within the tenant |
| tenantManager.accounts.deleteSubscriptionAccount | Delete a subscription account |

Guests

| Method | Description |
|---|---|
| tenantManager.guests.guestUserToSubscriptionManagerAccount | Invite a user as subscription manager |
| tenantManager.guests.revokeUserGuestToSubscriptionManagerAccount | Revoke subscription manager invitation |

Tags

| Method | Description |
|---|---|
| tenantManager.tags.create | Create a tenant tag |
| tenantManager.tags.update | Update a tenant tag |
| tenantManager.tags.delete | Delete a tenant tag |

Tenant

| Method | Description |
|---|---|
| tenantManager.tenant.get | Get tenant details |

tenantOwner — Tenant ownership operations

Requires: tenant-owner role.

Accounts

| Method | Description |
|---|---|
| tenantOwner.accounts.createManagementAccount | Create a management account for the tenant |
| tenantOwner.accounts.deleteTenantManagerAccount | Delete a tenant manager account |

Metadata

| Method | Description |
|---|---|
| tenantOwner.meta.create | Create tenant metadata |
| tenantOwner.meta.delete | Delete tenant metadata |

Ownership

| Method | Description |
|---|---|
| tenantOwner.owner.guest | Add a co-owner to the tenant |
| tenantOwner.owner.revoke | Remove a co-owner from the tenant |

Tenant

| Method | Description |
|---|---|
| tenantOwner.tenant.updateNameAndDescription | Update tenant name and description |
| tenantOwner.tenant.updateArchivingStatus | Archive or unarchive the tenant |
| tenantOwner.tenant.updateTrashingStatus | Trash or restore the tenant |
| tenantOwner.tenant.updateVerifyingStatus | Mark the tenant as verified or unverified |

userManager — User account lifecycle (platform admins)

Requires: users-manager role.

| Method | Description |
|---|---|
| userManager.account.approve | Approve a user account registration |
| userManager.account.disapprove | Disapprove a user account registration |
| userManager.account.activate | Re-activate a suspended user account |
| userManager.account.deactivate | Suspend a user account |
| userManager.account.archive | Archive a user account |
| userManager.account.unarchive | Restore an archived user account |

service — Service discovery

Requires: authenticated.

| Method | Description |
|---|---|
| service.listDiscoverableServices | List all services marked as discoverable for AI agent use |

staff — Privilege escalation

Requires: staff role.

| Method | Description |
|---|---|
| staff.accounts.upgradePrivileges | Upgrade an account to staff privileges |
| staff.accounts.downgradePrivileges | Downgrade a staff account to standard privileges |

Error responses

JSON-RPC errors follow the standard format:

{
  "jsonrpc": "2.0",
  "error": {
    "code": -32600,
    "message": "Invalid request"
  },
  "id": null
}

Standard error codes:

| Code | Meaning |
|---|---|
| -32700 | Parse error: invalid JSON |
| -32600 | Invalid request: not a valid JSON-RPC 2.0 object |
| -32601 | Method not found |
| -32602 | Invalid params |
| -32603 | Internal error |

Authentication failures return HTTP 401 before the JSON-RPC layer is reached. Authorization failures (wrong role) return a JSON-RPC error with code -32603 and a domain-specific message.

SDK Integration Guide

Mycelium injects a compressed, encoded identity context into every downstream request via the x-mycelium-profile header. This header carries the authenticated user’s full profile: account ID, tenant memberships, roles, and access scopes.

The Python SDK (mycelium-http-tools) decodes this header and provides a fluent filtering API for making access decisions inside your service.


Installation

pip install mycelium-http-tools

PyPI package: mycelium-http-tools

Source: github.com/LepistaBioinformatics/mycelium-sdk-py


What the header contains

When a request passes through a protected or protectedByRoles route, the gateway:

  1. Resolves the caller’s identity (from JWT or connection string).
  2. Constructs a Profile object: account ID, email, tenant memberships, roles, access scopes.
  3. ZSTD-compresses the JSON-serialized profile.
  4. Base64-encodes the result.
  5. Injects it as x-mycelium-profile in the forwarded request.

Your service never sees unauthenticated requests on protected routes. The SDK decodes the header back into a typed Profile object.


Using the SDK with FastAPI

The SDK ships a FastAPI middleware and dependency injectors:

from fastapi import FastAPI, Depends
from myc_http_tools.fastapi import get_profile, MyceliumProfileMiddleware
from myc_http_tools.models.profile import Profile

app = FastAPI()
app.add_middleware(MyceliumProfileMiddleware)

@app.get("/dashboard")
async def dashboard(profile: Profile = Depends(get_profile)):
    # profile is already decoded and validated
    return {"account_id": str(profile.acc_id)}

Making access decisions

The Profile object provides a fluent filtering chain. Each step narrows — never expands — the set of permissions:

from uuid import UUID

from fastapi import HTTPException
from myc_http_tools.models.profile import Profile
from myc_http_tools.exceptions import InsufficientPrivilegesError

def get_tenant_account(profile: Profile, tenant_id: UUID, account_id: UUID):
    try:
        account = (
            profile
            .on_tenant(tenant_id)       # focus on this tenant
            .on_account(account_id)     # focus on this account
            .with_write_access()        # must have write permission
            .with_roles(["manager"])    # must have manager role
            .get_related_account_or_error()
        )
        return account
    except InsufficientPrivilegesError:
        raise HTTPException(status_code=403)

If any step in the chain finds no match (wrong tenant, no write access, missing role), it raises InsufficientPrivilegesError. Your handler catches it and returns 403.


Available filter methods

| Method | Effect |
|---|---|
| .on_tenant(tenant_id) | Filter to a specific tenant membership |
| .on_account(account_id) | Filter to a specific account within the tenant |
| .with_read_access() | Require at least read permission |
| .with_write_access() | Require write permission |
| .with_roles(["role1", "role2"]) | Require at least one of the listed roles |
| .get_related_account_or_error() | Return the matched account or raise |

Header constants

The SDK exports the same header key constants as the gateway. Use them instead of raw strings to prevent mismatches if header names change:

from myc_http_tools.settings import (
    DEFAULT_PROFILE_KEY,        # "x-mycelium-profile"
    DEFAULT_EMAIL_KEY,          # "x-mycelium-email"
    DEFAULT_SCOPE_KEY,          # "x-mycelium-scope"
    DEFAULT_MYCELIUM_ROLE_KEY,  # "x-mycelium-role"
    DEFAULT_REQUEST_ID_KEY,     # "x-mycelium-request-id"
    DEFAULT_CONNECTION_STRING_KEY,  # "x-mycelium-connection-string"
    DEFAULT_TENANT_ID_KEY,      # "x-mycelium-tenant-id"
)

Manual decoding (any language)

If you are not using Python, decode x-mycelium-profile manually:

Base64-decode → ZSTD-decompress → JSON-parse → access Profile fields

Example in shell (for debugging):

echo "<header-value>" | base64 -d | zstd -d | python3 -m json.tool

The resulting JSON has the same structure as the Rust Profile struct: acc_id, email, tenants (array of tenant memberships with roles and accounts).
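
The same pipeline in Python, sketched with an injectable decompress step so the ZSTD dependency stays explicit (the zstandard package named in the comment is a third-party assumption; any ZSTD binding works):

```python
import base64
import json
from typing import Callable

def decode_profile(header_value: str,
                   decompress: Callable[[bytes], bytes]) -> dict:
    """Base64-decode, ZSTD-decompress, then JSON-parse an x-mycelium-profile value."""
    return json.loads(decompress(base64.b64decode(header_value)))

# Typical wiring (assumes `pip install zstandard`):
# import zstandard
# profile = decode_profile(value, zstandard.ZstdDecompressor().decompress)
```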


Keeping the SDK in sync with the gateway

The Profile Pydantic model in the SDK mirrors the Rust Profile struct in the gateway. When the gateway changes the Profile struct (new fields, renamed fields), the SDK must be updated to match. If your service receives a profile that does not match the SDK’s model, deserialization will fail with a validation error.

Check the SDK changelog when upgrading the gateway.

CLI Reference

The myc-cli binary provides commands that must be run directly against the database. These operations cannot be performed through the HTTP API because they are bootstrapping steps that execute before any admin account exists.


Installation

myc-cli is built alongside the gateway. After building from source:

cargo build --release
# Binary at: target/release/myc-cli

Or install directly:

cargo install --path ports/cli

Database connection

All CLI commands connect directly to PostgreSQL. Provide the connection URL by setting the DATABASE_URL environment variable:

export DATABASE_URL="postgres://user:pass@localhost:5432/mycelium"

If DATABASE_URL is not set, the CLI prompts you to enter it interactively (input is hidden).


Commands

accounts create-seed-account

Creates the first Staff account in a fresh installation. This account is used to log in and perform all subsequent provisioning (create tenants, invite admins, etc.).

myc-cli accounts create-seed-account <email> <account_name> <first_name> <last_name>

Arguments:

| Argument | Description |
|---|---|
| email | Email address for the new account (used to log in) |
| account_name | Display name for the account (e.g. the organization name) |
| first_name | User’s first name |
| last_name | User’s last name |

Interactive prompt: After the positional arguments, the CLI prompts for a password (hidden input).

Example:

myc-cli accounts create-seed-account \
  admin@acme.com \
  "ACME Platform" \
  Alice \
  Smith
# Password: (hidden)

Notes:

  • If a seed staff account already exists, the command exits with an informational message and does not create a duplicate.
  • The created account has the Staff type (platform-wide administrative privileges).
  • After creation, use the magic-link or password login flow to authenticate and start provisioning tenants.

native-errors init

Seeds the database with all native Mycelium error codes. These are the error codes that the core domain layer emits internally (prefixed MYC). Without this step, error responses that carry domain codes will have no human-readable message.

myc-cli native-errors init

No arguments. The command reads DATABASE_URL (or prompts interactively) and inserts all built-in error codes. Codes that already exist are skipped; only new codes are inserted.

When to run: Once, during initial installation, and again after upgrading to a new version of Mycelium that introduces new error codes.

Example:

DATABASE_URL="postgres://user:pass@localhost:5432/mycelium" myc-cli native-errors init
# INFO: 42 native error codes registered

Typical installation order

# 1. Apply the database schema
psql "$DATABASE_URL" -f postgres/sql/up.sql

# 2. Seed native error codes
myc-cli native-errors init

# 3. Create the first admin account
myc-cli accounts create-seed-account admin@example.com "My Platform" Admin User

# 4. Start the API server
SETTINGS_PATH=settings/config.toml myc-api

After step 4, log in with admin@example.com and the password you set in step 3.

Encryption Inventory

This page lists every field that Mycelium stores in an encrypted or hashed form, together with the mechanism used and its migration status relative to the envelope encryption rollout (Phases 1 and 2).


Fields encrypted with AES-256-GCM

These fields hold reversible ciphertexts. Before Phase 1 they were all encrypted with the global KEK directly (v1 format). After Phase 1 they use per-tenant DEKs wrapped by the KEK (v2 format). The two formats are distinguished by a v2: prefix in the stored value.

| Field | Table / column | Mechanism before Phase 1 | DEK scope | Migration phase |
|---|---|---|---|---|
| Totp::Enabled.secret | user.mfa (JSONB) | Totp::encrypt_me — KEK direct | system (UUID nil) | Phase 1 |
| HttpSecret.token (webhook) | webhook.secret (JSONB) | WebHook::new_encrypted → HttpSecret::encrypt_me — KEK direct | system (UUID nil) | Phase 1 |
| TelegramBotToken | tenant.meta (JSONB key) | encrypt_string — KEK direct | per-tenant | Phase 1 |
| TelegramWebhookSecret | tenant.meta (JSONB key) | encrypt_string — KEK direct | per-tenant | Phase 1 |
| phone_number, telegram_user | account.meta (JSONB) | plaintext | per-tenant | Phase 2 |
| tenant.meta (general keys) | tenant.meta (JSONB) | plaintext | per-tenant | Phase 2 |
| Subscription / TenantManager metadata | account.meta (JSONB) | plaintext | per-tenant | Phase 2 |

TOTP secrets belong to user identity (user, manager, and staff accounts alike) and are never tenant-scoped; every call site passes tenant_id = None, so the secret is encrypted under the system DEK.

DEK storage

Each tenant row in the tenant table now carries two additional columns:

| Column | Type | Description |
|---|---|---|
| encrypted_dek | TEXT (nullable) | AES-256-GCM ciphertext of the 32-byte DEK, wrapped by the KEK. NULL means the DEK has not been provisioned yet (lazy on first use). |
| kek_version | INTEGER NOT NULL DEFAULT 1 | Tracks which KEK generation was used to wrap the DEK. Used during KEK rotation. |

The system tenant row (id = 00000000-0000-0000-0000-000000000000) stores the DEK used for system-level secrets (webhook HTTP secrets, all TOTP).


Fields hashed with Argon2 — outside encryption scope

These fields are one-way hashes. There is no plaintext to recover or re-encrypt. They are unaffected by envelope encryption migration.

| Field | Table / column | Note |
|---|---|---|
| password_hash | identity_provider | Argon2id — verification only, no decryption |
| Email confirmation token | UserRelatedMeta.token (logical) | Argon2 one-way hash |

Ciphertext format versions

| Version | Format | When written | How detected |
|---|---|---|---|
| v1 (legacy) | base64(nonce₁₂ ‖ ciphertext ‖ tag₁₆) | Before Phase 1 | No prefix |
| v2 (envelope) | v2:base64(nonce₁₂ ‖ ciphertext ‖ tag₁₆) | After Phase 1 | Starts with v2: |

Decrypt functions detect the prefix automatically and route to the correct decryption path, so v1 and v2 data can coexist in the same deployment without downtime.
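
The dispatch shape is a one-line prefix check; a sketch (function names are illustrative, not Mycelium internals):

```python
def decrypt(value: str, decrypt_v1, decrypt_v2) -> bytes:
    """Route a stored ciphertext to the envelope (v2) or legacy (v1) path."""
    if value.startswith("v2:"):
        return decrypt_v2(value[len("v2:"):])  # per-tenant DEK path
    return decrypt_v1(value)                   # legacy direct-KEK path
```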


AAD (Authenticated Additional Data)

AAD prevents ciphertexts from being transplanted between tenants or between fields. The AAD scheme is:

aad = tenant_id.as_bytes() || field_name_bytes
| Field constant | Bytes |
|---|---|
| AAD_FIELD_TOTP_SECRET | b"totp_secret" |
| AAD_FIELD_TELEGRAM_BOT_TOKEN | b"telegram_bot_token" |
| AAD_FIELD_TELEGRAM_WEBHOOK_SECRET | b"telegram_webhook_secret" |
| AAD_FIELD_HTTP_SECRET | b"http_secret" |

DEK wrap/unwrap uses only tenant_id.as_bytes() as AAD (no field suffix).
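
A sketch of the AAD construction as specified, using the UUID's raw bytes as the counterpart of Rust's tenant_id.as_bytes() (constant value from the table above):

```python
from uuid import UUID

AAD_FIELD_TOTP_SECRET = b"totp_secret"

def field_aad(tenant_id: UUID, field: bytes) -> bytes:
    """AAD for field ciphertexts: tenant id bytes || field name bytes."""
    return tenant_id.bytes + field

def dek_aad(tenant_id: UUID) -> bytes:
    """AAD for DEK wrap/unwrap: tenant id bytes only, no field suffix."""
    return tenant_id.bytes
```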


token_secret is multi-purpose — rotation has side-effects

The token_secret configured in AccountLifeCycle is not only the KEK source. Its bytes are also consumed directly by non-envelope code paths:

| Consumer | Role | Rotation impact |
|---|---|---|
| AccountLifeCycle::derive_kek_bytes | KEK for wrap/unwrap of all DEKs | Re-wrap DEKs via myc-cli rotate-kek (TODO). |
| encrypt_string::build_aes_key (v1 legacy path) | KEK for ciphertexts written before Phase 1 | Stays readable only while token_secret is unchanged; migrate to v2 before rotating. |
| HttpSecret::decrypt_me (v1 branch) | Indirect — routes through the legacy path | Same as above. |
| Totp::decrypt_me (v1 branch) | Indirect — routes through the legacy path | Same as above. |
| UserAccountScope::sign_token | HMAC-SHA512 key for connection-string signatures | No re-signing path. All currently-issued connection strings are invalidated on rotation — treat as revoked. |

Rotate token_secret only after:

  1. migrate-dek --dry-run reports zero v1 fields remaining, and
  2. The operational impact of invalidating every live connection-string signature is understood and accepted.

See Envelope Encryption Migration Guide for step-by-step operator instructions.

Envelope Encryption Migration Guide

This guide is for operators running Mycelium with the global token_secret key who need to migrate to the envelope encryption scheme (KEK/DEK per tenant).


Overview

| Before | After |
|---|---|
| All secrets encrypted with a single global key (direct KEK) | Each tenant has a random DEK, encrypted by the KEK and stored in the database |
| Rotating the KEK invalidates all encrypted data | Rotating the KEK re-encrypts only the DEKs — O(number of tenants), not O(number of records) |
| No ciphertext versioning | The v2: prefix identifies data in the new scheme; the legacy format (v1) continues to be read |

The new version is fully backward-compatible: data encrypted in the old format continues to be decrypted correctly. Migration is optional at first and can be done incrementally.


Prerequisites

  • Mycelium API Gateway updated to the version with envelope encryption support
  • Access to the PostgreSQL database
  • Access to the configured token_secret (via env, Vault, or config file) — do not change this value before completing the migration
  • myc-cli available in PATH

Migration steps

1. Verify compatibility (no downtime required)

The new version supports reading both v1 (old format) and v2 (new format) data simultaneously. It is safe to deploy the new version while the database is still in v1 format.

# Confirm the installed version supports envelope encryption
myc-cli --version

2. Run the SQL migration

psql $DATABASE_URL < path/to/20260421_01_envelope_encryption.sql

This adds the encrypted_dek and kek_version columns to the tenant table. Both are nullable — existing tenants will have NULL until the next step.

3. Simulate the migration (dry-run)

SETTINGS_PATH=settings/config.toml myc-cli migrate-dek --dry-run

Expected output: a list of tenants with the count of v1 fields to migrate. No writes are performed.

4. Run the migration

SETTINGS_PATH=settings/config.toml myc-cli migrate-dek

The command is idempotent and resumable:

  • Fields already in v2 format are skipped.
  • It can be interrupted at any point and re-run without duplicating work.
  • To migrate only a specific tenant: --tenant-id <uuid>

5. Validate completion

SETTINGS_PATH=settings/config.toml myc-cli migrate-dek --dry-run

Should report 0 v1 fields remaining across all tenants.


KEK rotation (optional, post-migration)

After completing the migration to v2, the KEK can be rotated without touching encrypted data records:

# Increment kek_version in config and make the new key available
# Then run:
SETTINGS_PATH=settings/config.toml myc-cli rotate-kek \
  --from-version 1 \
  --to-version 2

This re-encrypts only the encrypted_dek of each tenant with the new KEK. The data records (user.mfa, tenant.meta, webhook.secret) are not modified.

After a successful rotation, the v1 KEK can be discarded.

Side-effect — connection strings are invalidated. token_secret is also used as the HMAC key for connection-string signatures (UserAccountScope::sign_token). Rotating the KEK therefore invalidates every signature issued under the old secret. There is no re-signing path — treat all active connection strings as revoked and plan the rotation accordingly. See the Encryption Inventory for the full list of token_secret consumers.
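Why rotation revokes every connection string can be seen with a minimal HMAC sketch (illustrative Python using the standard library; the function names are hypothetical, not Mycelium's API):

```python
import hashlib
import hmac

def sign(secret: bytes, connection_string: bytes) -> bytes:
    # HMAC-SHA512 over the connection string, keyed by token_secret.
    return hmac.new(secret, connection_string, hashlib.sha512).digest()

def verify(secret: bytes, connection_string: bytes, signature: bytes) -> bool:
    # Constant-time comparison, as any signature check should use.
    return hmac.compare_digest(sign(secret, connection_string), signature)

old_secret, new_secret = b"old-token-secret", b"new-token-secret"
conn = b"myc://tenant/abc?scope=read"  # hypothetical connection string

sig = sign(old_secret, conn)              # issued before rotation
assert verify(old_secret, conn, sig)      # valid under the old secret
assert not verify(new_secret, conn, sig)  # invalid after rotation: effectively revoked
```

Because the signature depends on the secret itself, there is no way to "migrate" an existing signature; it can only be re-issued under the new secret.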


Rollback

If you need to roll back before the migration is complete:

  1. Roll back the deployment to the previous gateway version.
  2. Any v2 data already written is unreadable by the old version (which does not know the v2 format).

Therefore: do not interrupt a migration mid-way in production. Use --dry-run to validate first, and run in a maintenance window if in doubt.

If the migration is complete and you need to roll back the SQL schema:

ALTER TABLE tenant DROP COLUMN IF EXISTS encrypted_dek;
ALTER TABLE tenant DROP COLUMN IF EXISTS kek_version;

This is only safe if no v2 data was written. If v2 writes have already occurred, rolling back the schema will cause loss of access to those records.


Frequently asked questions

Do I need downtime to migrate? No. The new version reads both v1 and v2. Deploy first, then run migrate-dek with the service running.

Can I keep v1 data indefinitely? Yes, as long as token_secret does not change. If the global key is rotated, v1 data becomes unreadable. Migrating to v2 protects against this.

What about Argon2 hashes (passwords)? Argon2 hashes in identity_provider.password_hash are one-way — there is no plaintext to re-encrypt. They are unaffected by this migration and continue to work normally.

What happens to new tenants created after the deploy? Tenants created after the deploy receive a DEK automatically on first use. No manual action is required.


See Encryption Inventory for the complete field classification table.

Running Tests

Tests require PostgreSQL and Redis to be running. Use Docker Compose to start them:

docker-compose up -d postgres redis

Run all tests

From modules/mycelium-api-gateway/:

cargo test

With logs visible:

RUST_LOG=debug cargo test -- --nocapture

Filtering tests

cargo test auth              # all tests with "auth" in the name
cargo test -p mycelium-base  # specific workspace package
cargo test --workspace       # every package

Pre-commit checks

These must all pass before merging:

cargo fmt --all -- --check
cargo build --workspace
cargo test --workspace

Coverage (optional)

cargo install cargo-tarpaulin
cargo tarpaulin --out Html

Test database setup

psql postgres://postgres:postgres@localhost:5432/postgres \
  -c "CREATE DATABASE mycelium_test;"

Set the env var before running tests that need a separate database:

export TEST_DATABASE_URL="postgres://postgres:postgres@localhost:5432/mycelium_test"

Troubleshooting

Tests hang — run with --test-threads=1 to serialize them.

Flaky tests — run the failing test in isolation: cargo test <test_name> -- --nocapture.

Port conflict — stop any running myc-api process or Docker containers on the same ports.

Release Process

This guide explains how to create and manage releases for Mycelium using cargo-release and git-cliff.

Overview

Mycelium follows Semantic Versioning and uses automated tools to manage releases:

  • cargo-release: Manages version bumping, tagging, and publishing
  • git-cliff: Generates changelogs from conventional commits

Prerequisites

Install the required tools:

cargo install cargo-release
cargo install git-cliff

Version Semantics

Mycelium follows Semantic Versioning (SemVer):

| Version Type | Format | When to Use | Example |
|---|---|---|---|
| MAJOR | X.0.0 | Breaking changes or incompatible API changes | 8.0.0 → 9.0.0 |
| MINOR | x.Y.0 | New features (backward-compatible) | 8.3.0 → 8.4.0 |
| PATCH | x.y.Z | Bug fixes (backward-compatible) | 8.3.0 → 8.3.1 |

Pre-release Workflow

Pre-releases follow a specific progression through stages:

1. Alpha Stage

Purpose: Early development and testing

Characteristics:

  • Unstable, frequent changes
  • Used for initial feature testing
  • Not recommended for production

Creating an alpha release:

# First alpha
cargo release alpha --execute  # Creates x.y.z-alpha.1

# Subsequent alphas
cargo release alpha --execute  # Creates x.y.z-alpha.2, etc.

Example progression: 8.3.0-alpha.1 → 8.3.0-alpha.2 → 8.3.0-alpha.3

2. Beta Stage

Purpose: Feature-complete version ready for broader testing

Characteristics:

  • Features are complete
  • API should be relatively stable
  • May still have bugs
  • Used for wider testing and feedback

Moving to beta:

# First beta
cargo release beta --execute   # Creates x.y.z-beta.1

# Subsequent betas
cargo release beta --execute   # Creates x.y.z-beta.2, etc.

Example progression: 8.3.0-beta.1 → 8.3.0-beta.2 → 8.3.0-beta.3

3. Release Candidate (RC) Stage

Purpose: Production-ready candidate for final validation

Characteristics:

  • Final testing before stable release
  • Only critical bug fixes allowed
  • Ready for production testing
  • Last chance to catch issues

Creating release candidates:

# First RC
cargo release rc --execute     # Creates x.y.z-rc.1

# Subsequent RCs (if needed)
cargo release rc --execute     # Creates x.y.z-rc.2, etc.

Example progression: 8.3.0-rc.1 → 8.3.0-rc.2

4. Stable Release

Purpose: Production-ready version

Creating the stable release:

cargo release release --execute  # Creates x.y.z

Example: 8.3.0-rc.2 → 8.3.0

Version Increment Commands

Patch Release

For bug fixes on existing stable releases:

cargo release patch --execute

Example: 8.3.0 → 8.3.1

Minor Release

For new features (backward-compatible):

cargo release minor --execute

Example: 8.3.1 → 8.4.0

Major Release

For breaking changes:

cargo release major --execute

Example: 8.4.0 → 9.0.0

Complete Release Cycle Example

Here’s a complete example of releasing version 8.3.0:

# Alpha stage - initial testing
cargo release alpha --execute        # 8.3.0-alpha.1
# ... make changes, test ...
cargo release alpha --execute        # 8.3.0-alpha.2
# ... more changes, testing ...
cargo release alpha --execute        # 8.3.0-alpha.3

# Beta stage - features complete
cargo release beta --execute         # 8.3.0-beta.1
# ... wider testing, bug fixes ...
cargo release beta --execute         # 8.3.0-beta.2

# Release candidate - final validation
cargo release rc --execute           # 8.3.0-rc.1
# ... production testing ...
cargo release rc --execute           # 8.3.0-rc.2

# Stable release
cargo release release --execute      # 8.3.0

# Later patch releases
cargo release patch --execute        # 8.3.1
cargo release patch --execute        # 8.3.2

# Next minor release
cargo release minor --execute        # 8.4.0

Changelog Management

Mycelium uses git-cliff to automatically generate changelogs from conventional commits.

Conventional Commit Format

All commits should follow the Conventional Commits specification:

<type>(<scope>): <description>

[optional body]

[optional footer(s)]

Supported types:

| Type | Description | Changelog Section |
|---|---|---|
| feat | New feature | 🚀 Features |
| fix | Bug fix | 🐛 Bug Fixes |
| docs | Documentation | 📚 Documentation |
| perf | Performance improvement | ⚡ Performance |
| refactor | Code refactoring | 🚜 Refactor |
| style | Code style changes | 🎨 Styling |
| test | Test changes | 🧪 Testing |
| chore | Maintenance tasks | ⚙️ Miscellaneous Tasks |
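
As a quick sanity check, the commit header format can be validated with a small regex (illustrative Python — git-cliff's actual parsing rules are defined in cliff.toml):

```python
import re

# <type>(<scope>): <description> — scope is optional; "!" marks a breaking change.
HEADER = re.compile(
    r"^(?P<type>feat|fix|docs|perf|refactor|style|test|chore)"
    r"(\((?P<scope>[^)]+)\))?(?P<breaking>!)?: (?P<description>.+)$"
)

def parse_header(line: str):
    """Return the parsed header fields, or None if the line is not conventional."""
    m = HEADER.match(line)
    return m.groupdict() if m else None

assert parse_header("feat(auth): add passwordless authentication")["scope"] == "auth"
assert parse_header("fix: resolve null pointer in user endpoint")["type"] == "fix"
assert parse_header("update stuff") is None  # not a conventional commit
```

A check like this can run in a commit-msg hook so malformed headers are caught before they reach the changelog.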

Examples:

# Feature commit
git commit -m "feat(auth): add passwordless authentication

Implements magic link authentication flow for users.
Users can now sign in by clicking a link sent to their email.

Fixes #110"

# Bug fix commit
git commit -m "fix(api): resolve null pointer in user endpoint

Fixes #123"

# Breaking change
git commit -m "feat(core): redesign authentication API

BREAKING CHANGE: The authentication API has been completely redesigned.
See migration guide for details.

Fixes #150"

Generating Changelogs

Before creating a release, update the changelog:

# Preview unreleased changes
git-cliff --unreleased

# Update CHANGELOG.md with unreleased changes
git-cliff --unreleased --prepend CHANGELOG.md

# Generate changelog for a specific version
git-cliff --tag v8.3.0 --prepend CHANGELOG.md

Changelog Configuration

The changelog format is configured in cliff.toml at the repository root. This file defines:

  • Commit parsing rules
  • Grouping and sorting
  • Output format
  • Template customization

Always preview release changes before executing:

# Dry run (default - no --execute flag)
cargo release alpha

# Review the output carefully:
# - Version changes
# - Files that will be modified
# - Git commands that will run
# - Tags that will be created

# If everything looks correct, execute
cargo release alpha --execute

Release Checklist

Use this checklist before creating a stable release:

  • All tests pass: cargo test
  • Code is properly formatted: cargo fmt
  • No security vulnerabilities: cargo audit
  • Documentation is up-to-date
  • All commits follow conventional commit format
  • Changelog is updated: git-cliff --unreleased --prepend CHANGELOG.md
  • All CI checks pass
  • Team review is complete (for major/minor releases)
  • Release notes are prepared
  • Dry run reviewed: cargo release <level>

Release Configuration

The project’s release behavior is configured in release.toml at the repository root.

Key configurations include:

  • Pre-release hooks: Run tests and builds before releasing
  • Version bumping: Control how versions are incremented
  • Git operations: Tag format, commit messages
  • Changelog integration: Automatic changelog generation with git-cliff
  • Publishing: Control what gets published and where

Best Practices

  1. Test thoroughly: Run full test suite before any release
  2. Use dry runs: Always preview changes before executing
  3. Follow the progression: Don’t skip stages (alpha → beta → rc → release)
  4. Write good commits: Use conventional commits for automatic changelog generation
  5. Update changelog: Generate changelog before each release
  6. Coordinate releases: Communicate with team for major/minor releases
  7. Tag properly: Let cargo-release handle tagging automatically
  8. Document changes: Include migration guides for breaking changes

Common Workflows

Hotfix Release

For urgent bug fixes on a stable release:

# On main branch with stable release 8.3.0
git checkout -b hotfix/critical-bug
# ... fix the bug ...
git commit -m "fix: resolve critical security issue"

# Merge to main
git checkout main
git merge hotfix/critical-bug

# Create patch release
git-cliff --unreleased --prepend CHANGELOG.md
git add CHANGELOG.md
git commit -m "docs: update changelog for 8.3.1"
cargo release patch --execute  # 8.3.0 → 8.3.1

Feature Release

For a new feature release:

# On develop branch
git checkout -b feature/new-capability
# ... implement feature ...
git commit -m "feat: add new capability"

# Merge to develop
git checkout develop
git merge feature/new-capability

# Start pre-release cycle
cargo release alpha --execute    # 8.4.0-alpha.1
# ... test, fix, repeat ...
cargo release beta --execute     # 8.4.0-beta.1
# ... wider testing ...
cargo release rc --execute       # 8.4.0-rc.1
# ... final validation ...

# Merge to main and release
git checkout main
git merge develop
git-cliff --unreleased --prepend CHANGELOG.md
git add CHANGELOG.md
git commit -m "docs: update changelog for 8.4.0"
cargo release release --execute  # 8.4.0

Troubleshooting

Release fails due to uncommitted changes

# Ensure working directory is clean
git status

# Commit or stash changes
git add .
git commit -m "chore: prepare for release"

Changelog not generating correctly

# Verify conventional commit format
git log --oneline -n 10

# Test cliff configuration
git-cliff --unreleased

# Check cliff.toml configuration
cat cliff.toml

Wrong version incremented

# Use dry run first to verify
cargo release <level>

# If wrong level used, manually fix:
# Edit Cargo.toml files
# Delete incorrect git tag: git tag -d vX.Y.Z
# Try again with correct level

Log Tests for Cache and Identity Flow

This guide explains how to run log-based validation for the security group, identity, and profile resolution flow—including cache behaviour (JWKS, email, and profile caches). The validation script checks that the expected sequence of stages appears in the logs and helps you verify cache hits and misses.

Overview

The API emits structured log events for:

  • identity.jwks: Resolution of JSON Web Key Sets (from cache or via URI)
  • identity.email: Resolution of user email (from cache, from token, or via userinfo)
  • identity.profile: Resolution of user profile (from cache or via datastore)
  • router.*: Security group check and identity/profile injection

Each event includes a stage and, when applicable, an outcome (e.g. from_cache, resolved, from_token) and cache_hit (true/false). The validation script parses these events and checks that every “start” stage is followed by a matching “outcome” event.
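
The check the script performs can be approximated like this (a simplified Python sketch, not the real scripts/python/evaluate_security_group_logs.py — it ignores cache sub-events and other details the actual script handles):

```python
import re

# Pull stage=… / outcome=… / cache_hit=… key-value pairs out of a tracing log line.
EVENT = re.compile(r"(stage|outcome|cache_hit)=(\S+)")

def parse_event(line: str) -> dict:
    return {k: v for k, v in EVENT.findall(line)}

def unmatched_starts(lines) -> list:
    """Return stages that started but never produced an outcome event.
    Simplified: treats any stage line without outcome= as a start."""
    pending = []
    for line in lines:
        ev = parse_event(line)
        if "stage" not in ev:
            continue
        if "outcome" in ev:
            pending = [s for s in pending if s != ev["stage"]]
        else:
            pending.append(ev["stage"])
    return pending

log = [
    "INFO stage=identity.jwks",
    "INFO stage=identity.jwks outcome=from_cache",
    "INFO stage=identity.profile",
]
assert unmatched_starts(log) == ["identity.profile"]  # profile never completed
```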

Prerequisites

  • Python 3 (3.9 or higher recommended)
  • Log output from a running Mycelium API (tracing format with timestamps and levels)

No extra Python packages are required; the script uses only the standard library.

Step 1: Save Logs to a File

To validate logs, you need to capture the API’s stdout (and optionally stderr) into a file while the API is running.

Option A: Redirect stdout when starting the API

If you start the API from a shell, redirect stdout to a file:

# Run the API and append all stdout to a log file
cargo run -p mycelium-api --bin myc_api -- --config settings/config.dev.for-docker.toml >> api.log 2>&1

To overwrite the file instead of appending, use a single >:

cargo run -p mycelium-api --bin myc_api -- --config settings/config.dev.for-docker.toml > api.log 2>&1

Option B: Redirect stdout of an already running process

If the API is already running (e.g. in Docker or another terminal), you can capture logs by attaching to the process or by configuring your runtime to write logs to a file. For example, with Docker:

docker compose logs -f mycelium-api > api.log 2>&1

Stop capturing when you have enough requests (e.g. after a few authenticated/protected calls).

Option C: Use a log level that includes INFO

The validation script relies on INFO-level events for stage and outcome. Ensure the API is not running with a log level that filters them out; RUST_LOG=info (or a per-crate filter such as RUST_LOG=myc_api=info) is sufficient.

Example with explicit log level:

RUST_LOG=info cargo run -p mycelium-api --bin myc_api -- --config settings/config.dev.for-docker.toml >> api.log 2>&1

After reproducing the flows you care about (authenticated and/or protected requests), stop the capture. The resulting file (e.g. api.log) is the input for the validation script.

Step 2: Run the Validation Script

The script lives in the repository at scripts/python/evaluate_security_group_logs.py.

Basic usage

Pass the log file path as the first argument:

python3 scripts/python/evaluate_security_group_logs.py api.log

Or using an absolute path:

python3 /path/to/mycelium/scripts/python/evaluate_security_group_logs.py /path/to/api.log

To see the full sequence of stages and how they are grouped into cycles, use --verbose or -v:

python3 scripts/python/evaluate_security_group_logs.py api.log --verbose

Verbose mode prints:

  • The total number of stage events found
  • For each cycle, a header like --- Cycle N (M events) --- and the list of events in that cycle (timestamp, stage, outcome, cache_hit)
  • A blank line between cycles for readability

Exit codes

  • 0: All sequences validated successfully (OK).
  • 1: One or more sequence violations (e.g. a stage started but no matching outcome found).
  • 2: Usage error (e.g. missing log file path).

You can use the exit code in scripts or CI:

python3 scripts/python/evaluate_security_group_logs.py api.log
if [ $? -eq 0 ]; then
  echo "Log validation passed"
else
  echo "Log validation failed"
  exit 1
fi

Step 3: How to Interpret the Output

Summary block

At the top, the script prints:

  • Total stage events found: Number of log lines that contained a stage= field. This is the number of events used for validation and (in verbose mode) for the cycle listing.

Result

  • Result: OK – Every “start” stage (e.g. identity.jwks, identity.external, identity.profile) had a matching “outcome” event in the expected order. No violations were reported.
  • Result: FAIL – The script lists violations. Each violation message includes the timestamp and a short description (e.g. “identity.jwks started but no identity.jwks outcome (from_cache/resolved) found”). Fix by ensuring the API actually completes that step and emits the corresponding log.

Verbose output: cycles

When you use --verbose, events are grouped into cycles. A new cycle starts at each identity.external (start of identity resolution for a request). So:

  • One cycle corresponds to one logical “identity + profile resolution” flow (e.g. one authenticated or protected request).
  • Within a cycle you see the order of stages, for example:
    • identity.external (start)
    • identity.jwks → then identity.jwks outcome=from_cache or outcome=resolved
    • identity.email → then optional identity.email.cache cache_hit=true/false → then identity.email outcome=from_cache or outcome=resolved
    • identity.external outcome=ok
    • identity.profile → optional identity.profile.cache cache_hit=true/false → identity.profile outcome=from_cache or outcome=resolved

Interpreting cache behaviour

  • JWKS

    • identity.jwks outcome=from_cache: Keys were loaded from the JWKS cache.
    • identity.jwks outcome=resolved: Keys were fetched from the provider URI and then cached.
  • Email

    • identity.email.cache cache_hit=true then identity.email outcome=from_cache: Email was taken from cache.
    • identity.email.cache cache_hit=false then identity.email outcome=resolved: Email was fetched via userinfo and then cached.
  • Profile

    • identity.profile.cache cache_hit=true then identity.profile outcome=from_cache: Profile was taken from cache.
    • identity.profile.cache cache_hit=false then identity.profile outcome=resolved: Profile was loaded from the datastore and then cached.

Comparing cycles (e.g. first request vs later requests) shows whether later requests use the cache (more from_cache and cache_hit=true), which is expected after the first successful resolution.
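
That comparison can be sketched in a few lines (illustrative Python over already-parsed events; assumes, per the description above, that a cycle begins at an identity.external start event):

```python
def split_cycles(events):
    """Group stage events into cycles; a new cycle starts at each
    identity.external start event (one without an outcome yet).
    Events before the first cycle boundary are dropped."""
    cycles = []
    for ev in events:
        if ev.get("stage") == "identity.external" and "outcome" not in ev:
            cycles.append([])
        if cycles:
            cycles[-1].append(ev)
    return cycles

def cache_hits(cycle) -> int:
    return sum(1 for ev in cycle if ev.get("cache_hit") == "true")

events = [
    {"stage": "identity.external"},
    {"stage": "identity.jwks", "outcome": "resolved"},       # first request: fetched
    {"stage": "identity.external"},
    {"stage": "identity.jwks", "outcome": "from_cache"},     # second request: cached
    {"stage": "identity.profile.cache", "cache_hit": "true"},
]
first, second = split_cycles(events)
assert cache_hits(first) == 0 and cache_hits(second) == 1  # later requests hit the cache
```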

Example Workflow

  1. Start the API with logs redirected to a file:

    RUST_LOG=info cargo run -p mycelium-api --bin myc_api -- --config settings/config.dev.for-docker.toml >> api.log 2>&1
    
  2. Send a few authenticated or protected requests (e.g. with a bearer token) so that identity and profile resolution (and cache) are exercised.

  3. Stop the API (or stop redirecting logs).

  4. Run the validator:

    python3 scripts/python/evaluate_security_group_logs.py api.log --verbose
    
  5. Check the result:

    • If OK, the log sequence is valid; use the verbose listing to confirm cache behaviour per cycle.
    • If FAIL, read the violation messages and fix the flow or logging so that every started stage has a matching outcome in the logs.

Troubleshooting

  • No stage events found: Ensure the log file contains lines with stage= (INFO level). Check that you are capturing stdout and that the log level is not too restrictive.
  • “identity.external started but no identity.external outcome=ok”: The request may have failed before completing identity resolution (e.g. invalid token, network error). Check API and network logs.
  • “identity.jwks started but no identity.jwks outcome”: JWKS fetch or parse may have failed. Look for ERROR/WARN lines around that timestamp in the raw log.
  • Multiple cycles: Expected when you send multiple requests. Each cycle is one request’s identity/profile flow; use them to compare first request (often more resolved) vs later requests (often more from_cache).