Introduction
Mitosis is a Rust library and a command line tool to run distributed platforms for transport research.
This guide is an example of how to use Mitosis to run a simple distributed platform to parallelize your tasks. It is designed for transport-layer research, but it can be used for any other purpose.
Basic Workflow
The Mitosis CLI tool is a single binary that provides subcommands for starting the Coordinator, Worker and Client processes.
Users serve as the unit of access control, while groups serve as the unit of concrete resource control. Every user has an identically named group and can also create or join additional groups.
Users submit tasks to groups via the Client; the tasks are delivered to the Coordinator and then executed by a corresponding Worker. Each Worker can be configured to permit specific groups and can carry tags that describe its characteristics.
Once submitted, tasks are distributed to Workers based on their groups and tags. Every task is assigned a unique UUID, allowing users to track its status and results.
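For orientation, a minimal end-to-end run looks roughly like the sketch below (the config file paths and the task UUID are placeholders; each command is covered in detail in the following chapters):
# start the Coordinator (requires PostgreSQL and S3, see "Running a Coordinator")
mito coordinator --config coordinator.toml
# on each compute node, start a Worker that fetches tasks from the Coordinator
mito worker --config worker.toml
# from any machine, submit a task with the Client and check it later by its UUID
mito client tasks submit -- echo hello
mito client tasks get <task-uuid>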
Contributing
Mitosis is free and open source. You can find the source code on GitHub and issues and feature requests can be posted on the GitHub issue tracker. Mitosis relies on the community to fix bugs and add features: if you'd like to contribute, please read the CONTRIBUTING guide and consider opening a pull request.
Installation
The Mitosis project contains a CLI tool (named `mito`) that you can use to directly start a distributed platform, and an SDK library (named `netmito`) that you can use to create your own client.
We currently only support Rust for the SDK library; a Python SDK is coming soon.
There are multiple ways to install the Mitosis CLI tool. Choose whichever method below best suits your needs.
Pre-compiled binaries
Executable binaries are available for download on the GitHub Releases page.
Download the binary and extract the archive.
The archive contains a `mito` executable which you can run to start your distributed platform.
Automated Installation Script
We provide an installer script that automatically downloads and installs the latest version:
curl --proto '=https' --tlsv1.2 -LsSf https://github.com/stack-rs/mitosis/releases/latest/download/mito-installer.sh | sh
For a specific version (adjust the version number as needed; available versions are listed on the releases page):
curl --proto '=https' --tlsv1.2 -LsSf https://github.com/stack-rs/mitosis/releases/download/mito-v0.5.3/mito-installer.sh | sh
The installer script will:
- Detect your platform automatically
- Download the appropriate binary
- Install it to `$HOME/.cargo/bin`
- Add the directory to your PATH if needed
Manual Installation
You can also download the binary directly from the releases page and install it manually.
To make it easier to run, put the path to the binary into your PATH, or install it in a directory that is already in your PATH.
For example, do the following on Linux (glibc-based distributions):
wget https://github.com/stack-rs/mitosis/releases/latest/download/mito-x86_64-unknown-linux-gnu.tar.xz
tar xf mito-x86_64-unknown-linux-gnu.tar.xz
cd mito-x86_64-unknown-linux-gnu
sudo install -m 755 mito /usr/local/bin/mito
If you are using older Linux distributions (with older glibc), you may need to install the musl-compiled releases:
wget https://github.com/stack-rs/mitosis/releases/latest/download/mito-x86_64-unknown-linux-musl.tar.xz
tar xf mito-x86_64-unknown-linux-musl.tar.xz
cd mito-x86_64-unknown-linux-musl
sudo install -m 755 mito /usr/local/bin/mito
Verification
After installation, verify that Mitosis is working correctly:
mito --version
mito --help
Build from source using Rust
Dependencies
You have to install pkg-config and libssl-dev if you want to build the binary from source.
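On Debian or Ubuntu, for example, they can be installed with apt (package names may differ on other distributions):
sudo apt install -y pkg-config libssl-dev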
Installing with Cargo
To build the `mito` executable from source, you will first need to install Rust and Cargo.
Follow the instructions on the Rust installation page.
Once you have installed Rust, the following command can be used to build and install mito:
cargo install mito
This will automatically download mito from crates.io, build it, and install it in Cargo's global binary directory (`~/.cargo/bin/` by default).
You can run `cargo install mito` again whenever you want to update to a new version. That command will check if there is a newer version, and re-install mito if a newer version is found.
To uninstall, run the command `cargo uninstall mito`.
Installing the latest git version with Cargo
The version published to crates.io may lag slightly behind the version hosted on GitHub. If you need the latest version, you can build the git version of mito yourself. Cargo makes this easy:
cargo install --git https://github.com/stack-rs/mitosis.git mito
Again, make sure to add the Cargo bin directory to your PATH.
Building from source
If you want to build the binary from source, you can clone the repository and build it using Cargo.
git clone https://github.com/stack-rs/mitosis.git
cd mitosis
cargo build --release
Then you can find the binary in `target/release/mito` and install or run it as you like.
Common building errors
If you encounter compilation errors on rustls or aws-lc-sys on older Linux distributions, check your gcc version and consider updating it. For example, on Ubuntu/Debian:
sudo apt update -y
sudo apt upgrade -y
sudo apt install -y build-essential
sudo apt install -y gcc-10 g++-10 cpp-10
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-10 100 --slave /usr/bin/g++ g++ /usr/bin/g++-10 --slave /usr/bin/gcov gcov /usr/bin/gcov-10
CentOS/RHEL/Fedora:
sudo dnf install -y gcc gcc-c++ openssl-devel pkgconfig
# OR for older versions:
sudo yum install -y gcc gcc-c++ openssl-devel pkgconfig
Alpine Linux:
apk add --no-cache gcc musl-dev openssl-dev pkgconfig
Modifying and contributing
If you are interested in making modifications to Mitosis itself, check out the Contributing Guide for more information.
Running a Coordinator
A Coordinator is a process that manages the execution of a workflow. It is responsible for scheduling tasks, tracking their progress, and handling failures. The Coordinator is a long-running process that is typically deployed as a service.
External Requirements
The Coordinator requires access to several external services. It needs a PostgreSQL database to store data and an S3-compatible storage service to store task artifacts and group attachments. A Redis server is optional; it acts as a pub/sub provider, enabling clients to subscribe to and query more comprehensive details about the execution status of tasks.
For those services, you can use the docker-compose file provided in the repository.
First, copy `.env.example` to `.env` and set the variables in it.
You have the following variables to configure:
DB_USERNAME=
DB_PASSWORD=
S3_USERNAME=
S3_PASSWORD=
KV_PASSWORD=
And then, run the following command to start the services:
docker-compose up -d
The Coordinator also requires a private/public key pair to sign and verify access tokens. You can generate the keys using the following commands:
openssl genpkey -algorithm ed25519 -out private.pem
openssl pkey -in private.pem -pubout -out public.pem
Starting a Coordinator
To start a Coordinator, you need to provide a TOML file that configures the Coordinator. The TOML file specifies the Coordinator's configuration, such as the address it binds to, the URL of the postgres database, and token expiry settings. All configuration options are optional and have default values.
The Coordinator will merge the configuration from the file and the command-line arguments according to the following order (the latter overrides the former):
DEFAULT <- `$CONFIG_DIR`/mitosis/config.toml <- config file specified by `cli.config` or local `config.toml` <- env prefixed by `MITO_` <- cli arguments
`$CONFIG_DIR` will be different on different platforms:
- Linux: `$XDG_CONFIG_HOME` or `$HOME`/.config
- macOS: `$HOME`/Library/Application Support
- Windows: {FOLDERID_RoamingAppData}
Here is an example of a Coordinator configuration file (you can also refer to `config.example.toml` in the repository):
[coordinator]
bind = "127.0.0.1:5000"
db_url = "postgres://mitosis:mitosis@localhost/mitosis"
s3_url = "http://127.0.0.1:9000"
s3_access_key = "mitosis_access"
s3_secret_key = "mitosis_secret"
# redis_url is not set. It should be in format like "redis://:mitosis@localhost"
# redis_worker_password is not set by default and will be generated randomly
# redis_client_password is not set by default and will be generated randomly
# admin_user specifies the username of the admin user created on startup
admin_user = "mitosis_admin"
# admin_password specifies the password of the admin user created on startup
admin_password = "mitosis_admin"
access_token_private_path = "private.pem"
access_token_public_path = "public.pem"
access_token_expires_in = "7d"
heartbeat_timeout = "600s"
file_log = false
# log_path is not set. It will use the default rolling log file path if file_log is set to true
To start a Coordinator, run the following command:
mito coordinator --config /path/to/coordinator.toml
The Coordinator will start and listen for incoming requests on the specified address.
We can also override the configuration settings using command-line arguments. Note that the names of command-line arguments may not be the same as those in the configuration file. For example, to change the address the Coordinator binds to, you can run:
mito coordinator --config /path/to/coordinator.toml --bind 0.0.0.0:8000
The full list of command-line arguments can be found by running `mito coordinator --help`:
Run the mitosis coordinator
Usage: mito coordinator [OPTIONS]
Options:
-b, --bind <BIND>
The address to bind to
--config <CONFIG>
The path of the config file
--db <DB_URL>
The database URL
--s3 <S3_URL>
The S3 URL
--s3-access-key <S3_ACCESS_KEY>
The S3 access key
--s3-secret-key <S3_SECRET_KEY>
The S3 secret key
--redis <REDIS_URL>
The Redis URL
--redis-worker-password <REDIS_WORKER_PASSWORD>
The Redis worker password
--redis-client-password <REDIS_CLIENT_PASSWORD>
The Redis client password
--admin-user <ADMIN_USER>
The admin username
--admin-password <ADMIN_PASSWORD>
The admin password
--access-token-private-path <ACCESS_TOKEN_PRIVATE_PATH>
The path to the private key, default to `private.pem`
--access-token-public-path <ACCESS_TOKEN_PUBLIC_PATH>
The path to the public key, default to `public.pem`
--access-token-expires-in <ACCESS_TOKEN_EXPIRES_IN>
The access token expiration time, default to 7 days
--heartbeat-timeout <HEARTBEAT_TIMEOUT>
The heartbeat timeout, default to 600 seconds
--log-path <LOG_PATH>
The log file path. If not specified, then the default rolling log file path would be used. If specified, then the log file would be exactly at the path specified
--file-log
Enable logging to file
-h, --help
Print help
-V, --version
Print version
Running a Worker
A Worker is a process that executes tasks. It is responsible for fetching tasks from the Coordinator, executing them, and reporting the results back to the Coordinator. The Worker is a long-running process that is typically deployed as a service.
Starting a Worker
To start a Worker, you need to provide a TOML file that configures the Worker. The TOML file specifies the Worker's configuration, such as the polling (fetching) interval, the URL of the Coordinator, and the groups allowed to submit tasks to it. All configuration options are optional and have default values.
The Worker will merge the configuration from the file and the command-line arguments according to the following order (the latter overrides the former):
DEFAULT <- `$CONFIG_DIR`/mitosis/config.toml <- config file specified by `cli.config` or local `config.toml` <- env prefixed by `MITO_` <- cli arguments
`$CONFIG_DIR` will be different on different platforms:
- Linux: `$XDG_CONFIG_HOME` or `$HOME`/.config
- macOS: `$HOME`/Library/Application Support
- Windows: {FOLDERID_RoamingAppData}
Here is an example of a Worker configuration file (you can also refer to `config.example.toml` in the repository):
[worker]
coordinator_addr = "http://127.0.0.1:5000"
polling_interval = "3m"
heartbeat_interval = "5m"
lifetime = "7d"
# credential_path is not set
# user is not set
# password is not set
# groups are not set, default to the user's group
# tags are not set
file_log = false
# log_path is not set. It will use the default rolling log file path if file_log is set to true
# if lifetime is not set, it defaults to the coordinator's setting
To start a Worker, run the following command:
mito worker --config /path/to/worker.toml
The Worker will start and fetch tasks from the Coordinator at the specified interval.
We can also override the configuration settings using command-line arguments. Note that the names of command-line arguments may not be the same as those in the configuration file. For example, to change the polling interval, you can run:
mito worker --config /path/to/worker.toml --polling-interval 5m
You can also specify the groups allowed on this Worker, and their roles, using the `--groups` argument.
The default role for a group is `Write`, meaning the group can submit tasks to this Worker.
Groups with the `Read` role can query the Worker for its status and tasks.
Groups with the `Admin` role can manage the Worker, such as stopping it or changing its configuration.
mito worker --config /path/to/worker.toml --groups group1,group2:write,group3:read,group4:admin
This will grant group1 and group2 the `Write` role, group3 the `Read` role, and group4 the `Admin` role on the Worker.
The user who creates the Worker is automatically granted the `Admin` role on it.
Another important argument is `--tags`, the tags of the Worker.
Tags define the characteristics of the Worker, such as its capabilities or the type of tasks it can handle.
They are designed for tasks that have special requirements on Workers.
A Worker can fetch a task only when the Worker's tags are empty or form a subset of the task's tags.
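To illustrate the matching rule with a few hypothetical tag sets (not real commands):
# Worker tags      Task tags       Can the Worker fetch the task?
# (empty)          wireless,4g     yes - an untagged Worker matches any task
# wireless         wireless,4g     yes - {wireless} is a subset of the task's tags
# wireless,5g      wireless,4g     no  - 5g is not among the task's tags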
The full list of command-line arguments can be found by running `mito worker --help`:
Run a mitosis worker
Usage: mito worker [OPTIONS]
Options:
--config <CONFIG>
The path of the config file
-c, --coordinator <COORDINATOR_ADDR>
The address of the coordinator
--polling-interval <POLLING_INTERVAL>
The interval to poll tasks or resources
--heartbeat-interval <HEARTBEAT_INTERVAL>
The interval to send heartbeat
--credential-path <CREDENTIAL_PATH>
The path of the user credential file
-u, --user <USER>
The username of the user
-p, --password <PASSWORD>
The password of the user
-g, --groups [<GROUPS>...]
The groups allowed to submit tasks to this worker
-t, --tags [<TAGS>...]
The tags of this worker
--log-path <LOG_PATH>
The log file path. If not specified, then the default rolling log file path would be used. If specified, then the log file would be exactly at the path specified
--file-log
Enable logging to file
--lifetime <LIFETIME>
The lifetime of the worker to alive (e.g., 7d, 1year)
-h, --help
Print help
-V, --version
Print version
Running a Client
A Client is a process that interacts with the Coordinator. It is responsible for creating tasks, querying their results, and managing workers or groups. The Client is a short-lived process that is typically run on demand.
Starting a Client
While it's possible to provide a TOML configuration file to the client, it's often unnecessary given the limited number of configuration items, all of which pertain to login procedures. But it can be useful if you want to set some default values for the client.
The Client will merge the configuration from the file and the command-line arguments according to the following order (the latter overrides the former):
DEFAULT <- `$CONFIG_DIR`/mitosis/config.toml <- config file specified by `cli.config` or local `config.toml` <- env prefixed by `MITO_` <- cli arguments
`$CONFIG_DIR` will be different on different platforms:
- Linux: `$XDG_CONFIG_HOME` or `$HOME`/.config
- macOS: `$HOME`/Library/Application Support
- Windows: {FOLDERID_RoamingAppData}
Typically, to start a Client, we can simply run the following command to enter interactive mode:
mito client -i
If a user has never logged in, or their session has expired, the Client will prompt them to re-enter their username and password for authentication.
Alternatively, they can directly specify their username (`-u`) or password (`-p`) during execution.
Once authenticated, the Client will retain their credentials in a file for future use.
We recommend using the interactive mode for most operations, as it provides a more user-friendly experience. It will show you a prompt like this:
[mito::client]>
You can press `CTRL-D`, or type in `exit` or `quit`, to leave the interactive mode. `CTRL-C` will just clear the current line and prompt you again.
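A short interactive session might look like this (only the commands are shown; their output is omitted):
[mito::client]> help tasks
[mito::client]> tasks submit -- echo hello
[mito::client]> exit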
We can also directly run a command without entering interactive mode by specifying the command as an argument. For example, to create a new user, we can run:
mito client admin users create new_user_name new_password
The full list of command-line arguments can be found by running `mito client --help`:
Run a mitosis client
Usage: mito client [OPTIONS] [COMMAND]
Commands:
admin Admin operations, including shutdown the coordinator, changing user password, etc
auth Authenticate current user
login Login with username and password
users Manage users, including changing password, querying the accessible groups etc
groups Manage groups, including creating a group, querying groups, etc
tasks Manage tasks, including submitting a task, querying tasks, etc
workers Manage workers, including querying workers, cancel workers, etc
cmd Run an external command
quit Quit the client's interactive mode [aliases: exit]
help Print this message or the help of the given subcommand(s)
Options:
--config <CONFIG>
The path of the config file
-c, --coordinator <COORDINATOR_ADDR>
The address of the coordinator
--credential-path <CREDENTIAL_PATH>
The path of the user credential file
-u, --user <USER>
The username of the user
-p, --password <PASSWORD>
The password of the user
-i, --interactive
Enable interactive mode
--retain
Whether to retain the previous login state without refetching the credential
-h, --help
Print help
-V, --version
Print version
To know how each subcommand works, you can run `mito client <subcommand> --help`.
For example, to know how to create a new user, you can run `mito client admin users create --help`:
Create a new user
Usage: mito client admin users create [OPTIONS] [USERNAME] [PASSWORD]
Arguments:
[USERNAME] The username of the user
[PASSWORD] The password of the user
Options:
--admin Whether to grant the new user as an admin user
-h, --help Print help
-V, --version Print version
For the rest of this section, we will explain common use cases of the Client in different scenarios. For convenience, we will assume that the user is already in interactive mode. For direct execution mode, simply prepend `mito client` to each command.
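For example, the same query in the two modes:
# interactive mode
tasks query -l mobile
# direct execution mode
mito client tasks query -l mobile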
admin sub-commands
Input `help admin` to show the help message of the `admin` sub-commands:
Admin operations, including shutdown the coordinator, changing user password, etc
Usage: admin <COMMAND>
Commands:
users Manage users
shutdown Shutdown the coordinator
groups Manage groups
tasks Manage a task
workers Manage a worker
help Print this message or the help of the given subcommand(s)
Options:
-h, --help Print help
The admin operations are only available to admin users.
For example, we can create a new user by running the following command:
admin users create test_user_name test_user_password
We can change the password of a user by running:
admin users change-password test_user_name new_test_user_password
groups sub-commands
Input `help groups` to show the help message of the `groups` sub-commands:
Manage groups, including creating a group, querying groups, etc
Usage: groups <COMMAND>
Commands:
create Create a new group
get Get the information of a group
update-user Update the roles of users to a group
remove-user Remove the accessibility of users from a group
attachments Query, upload, download or delete an attachment
help Print this message or the help of the given subcommand(s)
Options:
-h, --help Print help
We can manage group-related operations with the `groups` sub-commands, such as creating a new group, querying a group, managing users' access to a group, and managing attachments of a group.
We can create a new group by running the following command:
groups create test_group
This will create a group called `test_group` containing the currently logged-in user.
This user will be granted the `Admin` role for this group in order to manage it.
We can get the information of a group by running:
groups get test_group
attachments sub-commands
Input `help groups attachments` or `groups attachments -h` to show the help message of the `attachments` sub-commands:
Query, upload, download or delete an attachment
Usage: groups attachments <COMMAND>
Commands:
delete Delete an attachment from a group
upload Upload an attachment to a group
get Get the metadata of an attachment
download Download an attachment of a group
query Query attachments subject to the filter
help Print this message or the help of the given subcommand(s)
Options:
-h, --help Print help
This is used to upload, download, delete or query attachments of a group.
For example, to upload an attachment to a group, we can run:
groups attachments upload -g test_group local.tar.gz attachment_key
You can also just run `groups attachments upload local.tar.gz`.
This will directly upload the file to the group you are currently in and use the file name as the attachment key.
You can also specify the attachment key as a directory-like string ending with a `/`; the local file will then be uploaded to `attachment_key/local_file_name`.
For example, to upload a file `local.tar.gz` to a directory `dir/` in the group, you can run:
groups attachments upload -g test_group local.tar.gz dir/
This will save the attachment with the key `dir/local.tar.gz`.
To download an attachment of a group, you can simply run:
groups attachments download -g test_group dir/local.tar.gz
We also offer a smart mode to make downloading easier.
If no group name is specified with `-g`, you can put it in the first segment of the attachment key, separated by a `/`; and if no output path is specified with `-o`, the last segment of the attachment key is used as the local file name. For example:
groups attachments download test_group/dir/local.tar.gz
This will download the attachment `dir/local.tar.gz` from the group `test_group` and save it as `local.tar.gz` in the current directory.
tasks sub-commands
Input `help tasks` to show the help message of the `tasks` sub-commands:
Manage tasks, including submitting a task, querying tasks, etc
Usage: tasks <COMMAND>
Commands:
submit Submit a task
get Get the info of a task
query Query tasks subject to the filter
cancel Cancel a task
update-labels Replace labels of a task
change Update the spec of a task
artifacts Query, upload, download or delete an artifact
help Print this message or the help of the given subcommand(s)
Options:
-h, --help Print help
Submitting a task to the Coordinator can be as simple as running the following command:
tasks submit -- echo hello
The content after `--` is the command to run on the Worker. Submitting returns a UUID that identifies the task.
You can also specify the group to submit the task to by using the `-g` option.
The `--labels` option marks the task for querying later; it does not affect how the task is fetched and executed.
The `--tags` option defines the characteristics of the task, such as its requirements on the Worker.
A Worker can fetch the task only when the Worker's tags are empty or form a subset of the task's tags.
You can also set environment variables for the task by using the `-e` option.
tasks submit -g test_group -t wireless,4g -l mobile,video -e TEST_KEY=1,TEST_VAL=2 -- echo hello
For the output of the task, we allow three types of output to be collected:
- Result: Files put under the directory specified by the environment variable `MITO_RESULT_DIR` will be packed into an artifact and uploaded to the Coordinator. If the directory is empty, no artifact will be created.
- Exec: Files put under the directory specified by the environment variable `MITO_EXEC_DIR` will be packed into an artifact and uploaded to the Coordinator. If the directory is empty, no artifact will be created.
- Terminal: If the `--terminal` option is specified, the standard output and error of the executed task will be collected and uploaded to the Coordinator. The terminal output will be stored in files named `stdout.log` and `stderr.log` respectively in an artifact.
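For example, a task whose command writes a file into the result directory will have that file packed into the result artifact. A minimal sketch (the use of sh -c and the quoting here are illustrative assumptions, not a required form):
tasks submit -- sh -c 'echo hello > "$MITO_RESULT_DIR/output.txt"'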
Now we can get a submitted task's information by providing its UUID:
tasks get e07a2bf2-166d-40b5-8bb6-a78104c072f9
Or we can just query a list of tasks with the label `mobile`:
tasks query -l mobile
More filter options can be found in the help message by executing `tasks query -h`.
Given its UUID, we can also cancel a task, update its labels, or change its specification. For example:
tasks cancel e07a2bf2-166d-40b5-8bb6-a78104c072f9
This will cancel the task if it is not started yet. It is not allowed to cancel a running or finished task.
To change how the task is executed (i.e., the spec of this task), we can run:
tasks change e07a2bf2-166d-40b5-8bb6-a78104c072f9 --terminal -- echo world
This will alter the task to collect its standard output and error when it finishes, and to execute `echo world` instead of `echo hello`.
We can download the results (a collection of files generated by a task as output) collected by the task as an artifact.
It is easy to download an artifact of a task by providing its UUID, but you also have to specify the output type you want.
There are three types of output: `result`, `exec-log`, and `std-log`. You can also specify the output path to download the artifact to with the `-o` argument.
tasks artifacts download e07a2bf2-166d-40b5-8bb6-a78104c072f9 result
workers sub-commands
Input `help workers` to show the help message of the `workers` sub-commands:
Manage workers, including querying workers, cancel workers, etc
Usage: workers <COMMAND>
Commands:
cancel Cancel a worker
update-tags Replace tags of a worker
update-roles Update the roles of groups to a worker
remove-roles Remove the accessibility of groups from a worker
get Get information about a worker
query Query workers subject to the filter
help Print this message or the help of the given subcommand(s)
Options:
-h, --help Print help
We can manage a worker, and get relevant information about it, with the `workers` sub-commands.
For example, we can stop a worker by running:
workers cancel b168dbe6-5c44-4529-a3b4-51940d6bb3c5
Or we can update the tags of a worker by running:
workers update-tags b168dbe6-5c44-4529-a3b4-51940d6bb3c5 wired file
And we can grant another group `Write` access to this worker (it means the group can submit tasks to this worker) by running:
workers update-roles b168dbe6-5c44-4529-a3b4-51940d6bb3c5 test_group:admin another_group:write
You can perform the opposite action to remove certain groups' access permissions to the Worker using the `remove-roles` subcommand.
cmd sub-commands
Input `help cmd` to show the help message of the `cmd` sub-commands:
Run an external command
Usage: cmd [OPTIONS] [-- <COMMAND>...]
Arguments:
[COMMAND]... The command to run
Options:
-s, --split Do not merge the command into one string
-h, --help Print help
We can use this sub-command to run an external command. For example, to list files in the current directory, we can run:
cmd -- ls -hal
Architecture Overview
This document provides a comprehensive overview of Mitosis's architecture, components, and data flow.
System Components
Coordinator
The Coordinator is the central management service that orchestrates the entire Mitosis system. It handles:
- Task Management: Receives, validates, and stores task submissions
- User Authentication: Manages user sessions and permissions using JWT tokens
- Group Authorization: Enforces group-based access controls
- Worker Registration: Tracks available workers and their capabilities
- Scheduling: Matches tasks with appropriate workers based on groups and tags
- State Management: Maintains task execution states and progress tracking
- Artifact Storage: Coordinates with S3-compatible storage for task outputs
Key Dependencies:
- PostgreSQL for persistent data storage
- S3-compatible storage for artifact management
- Redis (optional) for pub/sub notifications and caching
- Ed25519 key pair for JWT token signing
Worker
Workers are the execution nodes that run tasks assigned by the Coordinator. Each worker:
- Task Polling: Regularly checks for available tasks matching its configuration
- Environment Isolation: Provides clean execution environments for tasks
- Artifact Collection: Gathers task outputs from designated directories
- Heartbeat Reporting: Sends periodic status updates to maintain liveness
- Tag-based Matching: Only accepts tasks compatible with its configured tags
- Group Membership: Serves tasks from groups it has been granted access to
Execution Flow:
- Poll Coordinator for available tasks
- Validate task compatibility (groups, tags)
- Create isolated execution environment
- Execute task command with configured environment variables
- Collect artifacts from `MITO_RESULT_DIR` and `MITO_EXEC_DIR`
- Upload results and update task status
Client
The Client provides both interactive and programmatic interfaces for users to interact with the system:
- Interactive Mode: Shell-like interface for real-time system interaction
- Batch Mode: Direct command execution for scripting and automation
- Task Management: Submit, query, and manage task execution
- User Administration: Create and manage users (admin only)
- Group Management: Create groups and manage member permissions
- Worker Management: Monitor and control worker nodes
- Artifact Operations: Upload group attachments and download task results
Data Flow
Task Submission Flow
Client → Coordinator → Database
├─→ Validates user credentials and permissions
├─→ Stores task specification in database
└─→ Returns task UUID to client
Task Execution Flow
Worker → Coordinator → Database → S3 Storage
├─→ Polls for tasks based on groups/tags
├─→ Updates task status (pending → running → completed/failed)
└─→ Uploads artifacts and execution logs
Monitoring Flow (with Redis)
Coordinator → Redis → Client
├─→ Publishes task status updates
└─→ Client subscribes to real-time notifications
Access Control Model
Users and Groups
- Every user automatically gets a group with the same name
- Users can create additional groups and manage membership
- Group roles define access levels: `Read`, `Write`, `Admin`
Worker Permissions
Workers are configured with group access levels:
- Write: Group members can submit tasks to this worker
- Read: Group members can query worker status
- Admin: Group members can manage worker configuration
Task Routing
Tasks are routed to workers based on:
- Group Membership: Worker must have access to the task's target group
- Tag Compatibility: Worker tags must be empty or contain all task tags
- Availability: Worker must be active and not at capacity
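As a sketch of how these criteria map onto the client commands from the earlier chapters (reusing the example worker UUID from the workers sub-commands section):
# grant the group Write access so its members' tasks can be routed to this worker
workers update-roles b168dbe6-5c44-4529-a3b4-51940d6bb3c5 test_group:write
# describe the worker's capabilities with tags
workers update-tags b168dbe6-5c44-4529-a3b4-51940d6bb3c5 wired file
# this task can be routed to that worker: same group, and the worker's tags
# {wired, file} form a subset of the task's tags {wired, file, video}
tasks submit -g test_group -t wired,file,video -- echo hello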
Storage Architecture
Database Schema (PostgreSQL)
- Users: Authentication and profile information
- Groups: Group definitions and membership
- Tasks: Task specifications, state, and metadata
- Workers: Worker registration and configuration
- Artifacts / Attachments: File metadata and S3 object references
Object Storage (S3)
- Task Artifacts: Results, logs, and execution outputs
- Group Attachments: Shared files accessible to group members
- Bucket Structure: Organized by groups and artifact types
Cache Layer (Redis)
- Session Management: JWT token validation and user sessions
- Pub/Sub: Real-time notifications for task status changes
Security Model
Authentication
- JWT tokens signed with Ed25519 private key
- Configurable token expiration (default: 7 days)
- Credential caching for user convenience
Authorization
- Role-based access control at group level
- API endpoint protection based on user permissions
- Resource isolation between groups
Scalability Considerations
Horizontal Scaling
- Multiple Workers: Add workers to increase task execution capacity
- Load Balancing: Coordinator can handle multiple concurrent clients
- Database Partitioning: Tasks and artifacts can be partitioned by group
Performance Optimization
- Connection Pooling: Database connections are pooled and reused
- Batch Operations: Multiple tasks can be submitted in batches
- Async Processing: Non-blocking I/O throughout the system
Resource Management
- Worker Tagging: Allows targeting tasks to specific hardware capabilities
- Heartbeat Monitoring: Automatic worker health checking and cleanup
- Configurable Timeouts: Prevents resource leaks from stalled tasks
Deployment Patterns
Single-Node Development
- All components on one machine
- Docker Compose for external dependencies
- Suitable for testing and small workloads
Multi-Node Production
- Coordinator on dedicated server
- Workers distributed across compute nodes
- Shared database and storage infrastructure
- Load balancer for coordinator high availability
Troubleshooting Guide
This guide covers common issues you might encounter when setting up and running Mitosis, along with their solutions.
Installation Issues
Binary Not Found After Installation
Problem: `mito: command not found` after installation.
Solution:
- Verify the binary location:
  which mito
  find / -name "mito" 2>/dev/null
- Add to PATH if needed:
  export PATH="$HOME/.cargo/bin:$PATH"
  # Add to your shell profile (.bashrc, .zshrc, etc.)
  echo 'export PATH="$HOME/.cargo/bin:$PATH"' >> ~/.bashrc
Permission Denied
Problem: Permission errors when running the installer or binary.
Solution:
# Make binary executable
chmod +x mito
# Fix installer permissions
chmod +x mito-installer.sh
SSL/TLS Certificate Issues
Problem: Certificate verification errors during download.
Solution:
# Update certificates
sudo apt-get update && sudo apt-get install ca-certificates
# Or bypass for known-safe sources (not recommended for production)
curl -k --proto '=https' --tlsv1.2 -LsSf [URL]
Build Issues
Missing Dependencies
Problem: Compilation fails due to missing system libraries.
Solution:
# Ubuntu/Debian
sudo apt install build-essential pkg-config libssl-dev
# CentOS/RHEL
sudo yum install gcc gcc-c++ openssl-devel pkgconfig
Rust Version Issues
Problem: Compilation fails due to incompatible Rust version.
Solution:
# Update Rust
rustup update
# Check version (needs 1.76+)
rustc --version
# Set specific toolchain if needed
rustup default stable
Link Errors on Older Systems
Problem: Linking errors with glibc or other system libraries.
Solution: Use the musl build instead:
# Install musl target
rustup target add x86_64-unknown-linux-musl
# Build with musl
cargo build --target x86_64-unknown-linux-musl --release
Configuration Issues
Database Connection Failures
Problem: FATAL: database "mitosis" does not exist
Solution:
- Create the database:
  psql -U postgres -c "CREATE DATABASE mitosis;"
  psql -U postgres -c "CREATE USER username WITH PASSWORD 'password';"
  psql -U postgres -c "GRANT ALL PRIVILEGES ON DATABASE mitosis TO username;"
- Check connection string format:
  db_url = "postgres://username:password@host:port/mitosis"
S3 Storage Connection Issues
Problem: S3 authentication or connection failures.
Solution:
- Verify MinIO/S3 is running:
  # For MinIO
  docker ps | grep minio
  curl http://localhost:9000/minio/health/ready
- Check credentials and bucket access:
  # Using AWS CLI to test
  aws --endpoint-url=http://localhost:9000 s3 ls
Redis Connection Problems
Problem: Redis connection refused or authentication failures.
Solution:
- Check Redis status:
  redis-cli ping  # Should return PONG
- Verify ACL rules (Redis 6.0+):
  redis-cli ACL LIST
  redis-cli ACL DELUSER default  # If needed
SSL Key Generation Issues
Problem: Ed25519 key generation fails or keys not recognized.
Solution:
- Ensure OpenSSL version supports Ed25519 (1.1.1+):
  openssl version
- Generate keys correctly:
  openssl genpkey -algorithm ed25519 -out private.pem
  openssl pkey -in private.pem -pubout -out public.pem
- Verify key format:
  openssl pkey -in private.pem -text -noout
Network Issues
Port Already in Use
Problem: `Address already in use` when starting the coordinator.
Solution:
- Find what's using the port:
  lsof -i :5000
  netstat -tulpn | grep 5000
- Change the port:
  mito coordinator --bind 0.0.0.0:5001
Firewall Blocking Connections
Problem: Workers can't connect to coordinator.
Solution:
- Check firewall rules:
  # Ubuntu
  sudo ufw status
  sudo ufw allow 5000
  # CentOS/RHEL
  sudo firewall-cmd --list-ports
  sudo firewall-cmd --permanent --add-port=5000/tcp
  sudo firewall-cmd --reload
- Test connectivity:
  telnet coordinator_host 5000
  curl http://coordinator_host:5000/health
Performance Issues
Slow Task Execution
Problem: Tasks taking longer than expected to start or complete.
Solutions:
- Reduce the polling interval for workers:
  polling_interval = "30s"  # Faster polling
- Increase worker parallelism:
  # Run multiple workers on the same node
  mito worker &
  mito worker &
- Monitor database performance:
  SELECT * FROM pg_stat_activity;
  SELECT * FROM pg_stat_user_tables;
Database Lock Contention
Problem: High lock wait times or deadlocks.
Solution:
- Monitor locks:
  SELECT * FROM pg_locks WHERE NOT granted;
- Tune PostgreSQL settings:
  max_connections = 100
  shared_buffers = 256MB
  effective_cache_size = 1GB
Debugging Tips
Enable Debug Logging
RUST_LOG=debug mito coordinator
RUST_LOG=netmito=debug mito worker
RUST_LOG=debug mito client
Health Checks
# Check coordinator health
curl http://localhost:5000/health
# Check database connection
psql "postgres://mitosis:mitosis@localhost/mitosis" -c "SELECT version();"
# Check S3 connection
aws --endpoint-url=http://localhost:9000 s3 ls
Getting Help
If you continue to experience issues:
- Check the GitHub Issues for similar problems
- Run with debug logging and include logs in your issue report
- Provide system information:
  mito --version
  rustc --version
  uname -a
  docker --version  # if using Docker
- Include relevant configuration (sanitize sensitive data)
- Describe the expected vs actual behavior
- List steps to reproduce the issue
Development Setup
Client SDK
The Mitosis project contains an SDK library (named `netmito`) that you can use to create your own client programmatically.
To use the SDK, add the following to your `Cargo.toml`:
[dependencies]
netmito = "0.5"
Here is a simple example of how to create a new user using the SDK:
use netmito::client::MitoClient;
use netmito::config::client::{ClientConfig, AdminCreateUserArgs};
#[tokio::main]
async fn main() {
// Create a new client configuration
let config = ClientConfig::default();
// Setup the client
let mut client = MitoClient::new(config);
// Fill up arguments for creating a new user
let args = AdminCreateUserArgs {
username: Some("new_user".to_string()),
password: Some("new_password".to_string()),
admin: false,
};
// Create a new user
client.admin_create_user(args).await.unwrap();
}
For more details, please refer to the API documentation.
HTTP endpoints
We provide users and developers with an OpenAPI specification of our HTTP endpoints. You can find it in the root of our repository (openapi.yaml) or access it online in raw format.
You can read the specification file to understand how to interact with our HTTP endpoints, or use tools like Swagger UI or the online Swagger Editor to interactively explore and test the API.
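Independently of the specification, you can also probe a running Coordinator directly, for example via the health endpoint used in the troubleshooting guide:
curl http://localhost:5000/health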