
Daemon Guide

The mik daemon is a background process that manages WASM instances and provides embedded services (KV, SQL, Storage, Cron). It exposes a REST API for programmatic control.

mik supports three operating modes, each adding more capabilities:

| Mode | Command | Services | Use Case |
|---|---|---|---|
| Direct | mik run | None | Quick testing |
| Daemon | mik dev or mik run --detach | Embedded (SQLite, KV, Storage, Cron) | Dev, small deployments |
| Sidecar | Handler + mikcar | Production (Postgres, Redis, S3) | Scale, multi-node |

In direct mode, run a handler in the foreground with no additional services:

mik run # Serves modules/ on port 3000
mik run my-handler.wasm # Serves single component

In this mode, handlers can only make outbound HTTP calls - no database, no cron, no storage.

Daemon mode provides a mini-sidecar with embedded services:

Handlers call http://localhost:9919/kv/..., /sql/..., etc.

For production at scale, switch to sidecar mode and use mikcar instead:

Same HTTP API, different backends. Your handler code doesn’t change - just swap the sidecar.


The daemon is auto-managed - it starts automatically when needed and stops when the last instance exits.

# Development mode (watch + services, foreground)
mik dev
# Or run as background instance (auto-starts daemon)
mik run --detach --name myapp --port 3000
# Check running instances
mik ps
# View logs
mik logs myapp -f
# Stop the instance (daemon auto-exits when last instance stops)
mik stop myapp

API key authentication is optional. Set MIK_API_KEY before starting:

# Enable authentication (set before mik dev or mik run --detach)
export MIK_API_KEY="your-secret-key"
mik dev
# All requests must include the API key header
curl -H "X-API-Key: your-secret-key" http://localhost:9919/instances

Exempt endpoints (always accessible for monitoring):

  • GET /health
  • GET /metrics
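Exempt endpoints can be called without the X-API-Key header even when authentication is enabled; for example (response body taken from the /health example later in this guide):

# No API key required
curl http://localhost:9919/health
# Response: {"status": "healthy", "uptime": "running"}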

The daemon can be configured globally via ~/.mik/daemon.toml. This configuration is separate from per-project mik.toml files and applies to the daemon regardless of which project is running.

Create ~/.mik/daemon.toml:

[daemon]
port = 9919 # Daemon API port (default: 9919)
max_auto_restarts = 10 # Max auto-restarts before giving up (default: 10)
health_check_interval_secs = 10 # Health check interval (default: 10)
[services]
kv_enabled = true # Enable KV service (default: true)
sql_enabled = true # Enable SQL service (default: true)
storage_enabled = true # Enable Storage service (default: true)

If the file doesn’t exist, all defaults are used (all services enabled, port 9919).

| Field | Type | Default | Description |
|---|---|---|---|
| daemon.port | number | 9919 | Daemon API port |
| daemon.max_auto_restarts | number | 10 | Max auto-restart attempts before giving up |
| daemon.health_check_interval_secs | number | 10 | Interval between health checks (seconds) |
| services.kv_enabled | boolean | true | Enable KV service |
| services.sql_enabled | boolean | true | Enable SQL service |
| services.storage_enabled | boolean | true | Enable Storage service |

For security-conscious deployments, you can disable services you don’t need:

[daemon]
port = 9919
[services]
kv_enabled = true
sql_enabled = false # Disable SQL - reduces attack surface
storage_enabled = false # Disable Storage

When a service is disabled:

  • The service is not initialized at startup (saves resources)
  • Requests to disabled service endpoints return 503 Service Unavailable:
{"error": "SQL service is disabled. Enable it in ~/.mik/daemon.toml"}

When the daemon starts, it shows which services are enabled:

Starting mik daemon on port 9919...
API endpoint: http://127.0.0.1:9919
Endpoints:
Instances: /instances, /instances/:name, ...
Cron: /cron, /cron/:name, ...
Services: /kv, /storage
System: /health, /version, /metrics
Disabled services:
- SQL service
Edit ~/.mik/daemon.toml to enable them

The CLI --port flag overrides the config file port:

# Uses config port (or default 9919)
mik daemon
# Overrides config - uses port 8080
mik daemon --port 8080

Common configuration scenarios:

| Scenario | Configuration |
|---|---|
| Full development | All services enabled (default) |
| Stateless handlers | Disable SQL, Storage - KV only for sessions |
| Read-only deployment | Disable all services |
| Docker/container | Set consistent port across environments |
| High availability | Increase max_auto_restarts, decrease health_check_interval_secs (see sketch below) |
| Debug/testing | Disable auto-restart with max_auto_restarts = 0 |
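For example, the high-availability row might translate into a ~/.mik/daemon.toml like this (the values are illustrative, not recommendations):

[daemon]
port = 9919
max_auto_restarts = 50 # Keep retrying longer before giving up
health_check_interval_secs = 5 # Detect failures faster
[services]
kv_enabled = true
sql_enabled = true
storage_enabled = true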

Manage WASM instances via REST API or CLI.

# List all instances
GET /instances
# Start a new instance
POST /instances
Content-Type: application/json
{
"name": "myapp",
"port": 3000,
"config": "/path/to/mik.toml",
"auto_restart": true
}
# Get instance details
GET /instances/{name}
# Stop an instance
DELETE /instances/{name}
# Restart an instance
POST /instances/{name}/restart
# Get instance logs
GET /instances/{name}/logs?lines=100
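For example, starting an instance directly over the REST API (add the X-API-Key header if authentication is enabled):

curl -X POST http://localhost:9919/instances \
-H "Content-Type: application/json" \
-d '{"name": "myapp", "port": 3000, "config": "/path/to/mik.toml", "auto_restart": true}'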
CLI equivalents:
mik dev [--port PORT] [--no-services] # Development with watch + services
mik run --detach [--name NAME] [--port PORT] # Background instance
mik stop [NAME] # Stop instance
mik ps # List instances
mik logs [NAME] [-f] [-n LINES] # View logs
Example:
# Start as background instance
mik run --detach --name myapp --port 3000
# Check status
curl http://localhost:9919/instances/myapp
# Response:
{
"name": "myapp",
"port": 3000,
"status": "running",
"pid": 12345,
"uptime": "2h 15m",
"config": "/home/user/myapp/mik.toml"
}
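The other lifecycle endpoints follow the same pattern:

# Restart the instance
curl -X POST http://localhost:9919/instances/myapp/restart
# Fetch the last 50 log lines
curl "http://localhost:9919/instances/myapp/logs?lines=50"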

Redis-like key-value storage backed by redb. Supports TTL for automatic expiration.

# Set a value (with optional TTL in seconds)
PUT /kv/{key}
Content-Type: application/json
{"value": "hello world", "ttl": 3600}
# Get a value
GET /kv/{key}
# Response: {"key": "mykey", "value": "hello world"}
# Delete a key
DELETE /kv/{key}
# List keys (with optional prefix filter)
GET /kv?prefix=user:
# Response: {"keys": ["user:1", "user:2", "user:3"]}
Example:
# Store session with 1 hour TTL
curl -X PUT http://localhost:9919/kv/session:abc123 \
-H "Content-Type: application/json" \
-d '{"value": "{\"user_id\": 42, \"role\": \"admin\"}", "ttl": 3600}'
# Retrieve session
curl http://localhost:9919/kv/session:abc123
# List all sessions
curl "http://localhost:9919/kv?prefix=session:"

Embedded SQLite database with full SQL support. Foreign keys are enabled by default.

# Execute DDL/DML (CREATE, INSERT, UPDATE, DELETE)
POST /sql/execute
Content-Type: application/json
{"sql": "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)"}
# Query data (SELECT)
POST /sql/query
Content-Type: application/json
{"sql": "SELECT * FROM users WHERE id = ?", "params": [1]}
# Response:
{
"columns": ["id", "name"],
"rows": [{"id": 1, "name": "Alice"}],
"rows_affected": 0
}

The /sql/batch endpoint executes multiple statements in a single transaction. If any statement fails, all changes are rolled back.

POST /sql/batch
Content-Type: application/json
{
"statements": [
{"sql": "INSERT INTO users (name) VALUES (?)", "params": ["Alice"]},
{"sql": "INSERT INTO users (name) VALUES (?)", "params": ["Bob"]},
{"sql": "UPDATE users SET name = ? WHERE id = ?", "params": ["Charlie", 1]}
]
}
# Response:
{
"results": [
{"rows_affected": 1},
{"rows_affected": 1},
{"rows_affected": 1}
],
"total_affected": 3
}
Example:
# Create tables
curl -X POST http://localhost:9919/sql/execute \
-H "Content-Type: application/json" \
-d '{"sql": "CREATE TABLE IF NOT EXISTS posts (id INTEGER PRIMARY KEY, title TEXT, user_id INTEGER REFERENCES users(id))"}'
# Insert with foreign key
curl -X POST http://localhost:9919/sql/execute \
-H "Content-Type: application/json" \
-d '{"sql": "INSERT INTO posts (title, user_id) VALUES (?, ?)", "params": ["Hello World", 1]}'
# Query with join
curl -X POST http://localhost:9919/sql/query \
-H "Content-Type: application/json" \
-d '{"sql": "SELECT p.title, u.name FROM posts p JOIN users u ON p.user_id = u.id"}'

S3-like object storage backed by the filesystem. Supports any file type with automatic content-type detection.

# Upload a file
PUT /storage/{path}
Content-Type: application/octet-stream
<binary data>
# Download a file
GET /storage/{path}
# Delete a file
DELETE /storage/{path}
# Get metadata (size, content-type, timestamps)
HEAD /storage/{path}
# List objects (with optional prefix)
GET /storage?prefix=images/
Example:
# Upload an image
curl -X PUT http://localhost:9919/storage/images/logo.png \
-H "Content-Type: image/png" \
--data-binary @logo.png
# Get metadata
curl -I http://localhost:9919/storage/images/logo.png
# Headers: Content-Type, Content-Length, Last-Modified, ETag
# List all images
curl "http://localhost:9919/storage?prefix=images/"
# Response: {"objects": [{"path": "images/logo.png", "size": 12345, ...}]}
# Download
curl http://localhost:9919/storage/images/logo.png -o downloaded.png
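Automatic content-type detection can be exercised by uploading without an explicit Content-Type header and then checking the metadata; the detected type shown below is an assumption based on the file extension:

# Upload without setting Content-Type
curl -X PUT http://localhost:9919/storage/docs/notes.txt \
--data-binary @notes.txt
# Check what was detected (expected Content-Type: text/plain, but this is an assumption)
curl -I http://localhost:9919/storage/docs/notes.txt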

Schedule recurring jobs using cron expressions. Jobs call WASM module endpoints via HTTP.

# List all jobs
GET /cron
# Create a new job
POST /cron
Content-Type: application/json
{
"name": "cleanup",
"cron": "0 0 * * * * *",
"module": "cleanup.wasm",
"method": "POST",
"path": "/run",
"port": 3000,
"enabled": true
}
# Get job details
GET /cron/{name}
# Update job (pause/resume)
PATCH /cron/{name}
Content-Type: application/json
{"enabled": false}
# Delete a job
DELETE /cron/{name}
# Manually trigger a job
POST /cron/{name}/trigger
# Get execution history
GET /cron/{name}/history

Cron expressions use a 7-field format: sec min hour day month weekday year

0 0 0 * * * * # Daily at midnight
0 */5 * * * * * # Every 5 minutes
0 0 12 * * MON * # Every Monday at noon
0 30 9 1 * * * # 9:30 AM on the 1st of each month
Example:
# Create job that runs every hour
curl -X POST http://localhost:9919/cron \
-H "Content-Type: application/json" \
-d '{
"name": "hourly-cleanup",
"cron": "0 0 * * * * *",
"module": "cleanup.wasm",
"method": "POST",
"path": "/cleanup",
"port": 3000,
"enabled": true
}'
# Pause the job
curl -X PATCH http://localhost:9919/cron/hourly-cleanup \
-H "Content-Type: application/json" \
-d '{"enabled": false}'
# Trigger manually for testing
curl -X POST http://localhost:9919/cron/hourly-cleanup/trigger
# Check execution history
curl http://localhost:9919/cron/hourly-cleanup/history
# Response:
{
"job_name": "hourly-cleanup",
"executions": [
{
"id": "abc-123",
"started_at": "2025-01-15T10:00:00Z",
"completed_at": "2025-01-15T10:00:01Z",
"duration_ms": 1250,
"success": true,
"manual": true
}
]
}

Jobs defined in mik.toml are automatically registered when an instance starts:

[[schedules]]
name = "daily-report"
cron = "0 0 8 * * * *"
module = "reports.wasm"
method = "POST"
path = "/generate"

The daemon exposes metrics at:
GET /metrics

Available metrics:

| Metric | Description |
|---|---|
| mik_http_requests_total | Total HTTP requests by path and status |
| mik_http_request_duration_seconds | Request latency histogram |
| mik_instance_count | Running/stopped/crashed instances |
| mik_instance_uptime_seconds | Instance uptime |
| mik_kv_operations_total | KV operations by type |
| mik_sql_queries_total | SQL queries by type |
| mik_storage_operations_total | Storage operations by type |
| mik_cron_executions_total | Cron job executions |
| mik_cron_execution_duration_seconds | Cron job duration |
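Metrics can be scraped directly from the endpoint; the exact output format is not shown in this guide, and a Prometheus-style text format is an assumption based on the metric names:

curl http://localhost:9919/metrics
# Filter for a single metric (assumes plain-text output)
curl -s http://localhost:9919/metrics | grep mik_instance_count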

Health and version:
# Health check (always returns 200 if daemon is running)
GET /health
# Response: {"status": "healthy", "uptime": "running"}
# Version info
GET /version
# Response: {"version": "0.1.0", "build": "release"}


The daemon is a mini-sidecar - embedded services for development and small deployments. For production at scale, swap to mikcar with the same API but real infrastructure backends.

See Operating Modes for the full comparison.