# Rust Quick Start

## Installation

```toml
[dependencies]
sayiir-runtime = "0.4"      # core runtime + runners + macros

# Optional — pick your backend
sayiir-persistence = "0.4"  # InMemoryBackend (dev/testing)
sayiir-postgres = "0.4"     # PostgreSQL backend (production)
```

Import the prelude:
```rust
use sayiir_runtime::prelude::*;
// Re-exports: WorkflowBuilder, CheckpointingRunner, PooledWorker,
// WorkerHandle, InMemoryBackend, JsonCodec, TaskRegistry,
// task (macro), workflow (macro), etc.
```

## Define tasks with `#[task]`

```rust
use sayiir_runtime::prelude::*;
use sayiir_core::error::BoxError;
use std::sync::Arc;

#[task(id = "charge_card", timeout = "30s", retries = 3, backoff = "100ms")]
async fn charge(order: Order, #[inject] stripe: Arc<Stripe>) -> Result<Receipt, BoxError> {
    stripe.charge(&order).await
}

// Generated: struct ChargeTask with new(stripe), task_id() → "charge_card",
// metadata(), register(), CoreTask impl.
// The original `charge` function is preserved for direct use/testing.
```

The `#[task]` macro generates a struct named `{PascalCase}Task` (e.g., `fn charge` → `ChargeTask`) that implements `CoreTask`. The struct has `new()` (accepting injected deps), `task_id()`, `metadata()`, and `register()` methods. Your original function is kept intact so you can call it directly in tests.
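To make the generated shape concrete, here is a conceptual model of what such a struct looks like. This is illustrative only, not the macro's real expansion: it is simplified to synchronous code, the `Stripe`/`Order`/`Receipt` types are stand-ins, and a plain `run` method takes the place of the `CoreTask` impl. The point is the pattern: the struct owns the injected dependency and forwards to the original function's logic.

```rust
use std::sync::Arc;

// Stand-in types for the example (not from sayiir).
struct Stripe;
struct Order { id: u64 }
struct Receipt { id: u64 }

impl Stripe {
    fn charge(&self, order: &Order) -> Result<Receipt, String> {
        Ok(Receipt { id: order.id })
    }
}

// Conceptual model of the generated struct: captures the #[inject]
// dependency via new() and forwards to the original function body.
struct ChargeTask {
    stripe: Arc<Stripe>,
}

impl ChargeTask {
    fn new(stripe: Arc<Stripe>) -> Self {
        Self { stripe }
    }

    // Matches the id = "charge_card" attribute.
    fn task_id(&self) -> &'static str {
        "charge_card"
    }

    // Simplified stand-in for the CoreTask impl.
    fn run(&self, order: Order) -> Result<Receipt, String> {
        self.stripe.charge(&order)
    }
}

fn main() {
    let task = ChargeTask::new(Arc::new(Stripe));
    let receipt = task.run(Order { id: 42 }).unwrap();
    println!("{} -> receipt {}", task.task_id(), receipt.id);
}
```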
### Attribute options

| Option | Example | Description |
|---|---|---|
| `id` | `id = "charge_card"` | Recommended. Explicit task ID (default: fn name) |
| `timeout` | `timeout = "30s"` | Task timeout (`ms`, `s`, `m`, `h`) |
| `retries` | `retries = 3` | Max retry count |
| `backoff` | `backoff = "100ms"` | Initial retry delay |
| `backoff_multiplier` | `backoff_multiplier = 2.0` | Exponential multiplier (default: 2.0) |
| `tags` | `tags = "io"` | Categorization tags (repeatable) |
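To make the retry options concrete: with `retries = 3`, `backoff = "100ms"`, and the default `backoff_multiplier = 2.0`, successive retry delays grow as 100 ms, 200 ms, 400 ms. A plain-Rust sketch of that arithmetic (illustrative only; `backoff_schedule` is not a sayiir API):

```rust
use std::time::Duration;

/// Delay before each retry attempt, assuming exponential backoff:
/// delay_n = initial * multiplier^n for n = 0..retries.
/// Illustrative arithmetic only; not part of the sayiir API.
fn backoff_schedule(initial: Duration, multiplier: f64, retries: u32) -> Vec<Duration> {
    (0..retries)
        .map(|n| initial.mul_f64(multiplier.powi(n as i32)))
        .collect()
}

fn main() {
    // retries = 3, backoff = "100ms", backoff_multiplier = 2.0
    let delays = backoff_schedule(Duration::from_millis(100), 2.0, 3);
    assert_eq!(
        delays,
        vec![
            Duration::from_millis(100),
            Duration::from_millis(200),
            Duration::from_millis(400),
        ]
    );
    println!("{delays:?}");
}
```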
## Build workflows with `workflow!`

```rust
#[task]
async fn validate(order: Order) -> Result<Order, BoxError> { Ok(order) }

#[task]
async fn charge(order: Order) -> Result<Receipt, BoxError> { Ok(Receipt { id: order.id }) }

#[task]
async fn send_email(receipt: Receipt) -> Result<(), BoxError> { Ok(()) }

let workflow = workflow! {
    name: "order-process",
    steps: [validate, charge, send_email]
}.unwrap();
```

### Pipeline syntax
| Syntax | Meaning |
|---|---|
| `task_name` | Reference to a `#[task]`-generated struct |
| `name(param: Type) { expr }` | Inline task (must return `Result`) |
| `a \|\| b` | Parallel fork (branches) |
| `delay "5s"` | Durable delay |
| `signal "name"` | Wait for external signal |
| `,` | Sequential separator between steps |
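A sketch combining the forms above into one pipeline. This is illustrative only: the task names `check_fraud` and `reserve_stock` and the signal name `"payment_confirmed"` are invented for the example, and the exact composition rules for each form are covered on their dedicated pages under Next steps.

```rust
// Illustrative sketch of the pipeline syntax; names are hypothetical.
let workflow = workflow! {
    name: "order-with-fork",
    steps: [
        validate,                      // #[task]-generated struct
        check_fraud || reserve_stock,  // parallel fork
        delay "5s",                    // durable delay
        signal "payment_confirmed"     // wait for an external signal
    ]
}.unwrap();
```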
## Quick run with `run_once`

For scripts, tests, or quick experiments — run a workflow in one line with no backend or instance ID:

```rust
let status = workflow.run_once(input).await?;
```

This uses `InProcessRunner` internally. No durability, no persistence — just run and get the result. The `run_once` method is available on any `Workflow` via the `WorkflowRunExt` trait (included in the prelude).
## Run durably with `CheckpointingRunner`

For crash recovery and persistence, pair `CheckpointingRunner` with a durable backend:

```rust
use sayiir_postgres::PostgresBackend;

// Connects and runs migrations automatically
let backend = PostgresBackend::<JsonCodec>::connect("postgresql://localhost/sayiir").await?;
let runner = CheckpointingRunner::new(backend);

let status = runner.run(&workflow, "instance-001", input).await?;

// After a crash: pick up where it left off
let status = runner.resume(&workflow, "instance-001").await?;
```
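The resume behaviour can be pictured with a small model (conceptual only, not sayiir's implementation): the backend records each completed step per instance ID, and a later run of the same instance skips steps that are already checkpointed.

```rust
use std::collections::HashMap;

// Conceptual model of checkpoint + resume (not sayiir's implementation):
// the backend remembers which steps finished for each instance ID, and a
// later run of the same instance skips them.
#[derive(Default)]
struct ModelBackend {
    completed: HashMap<String, Vec<String>>,
}

impl ModelBackend {
    /// Run `steps` for `instance`, optionally "crashing" after `crash_after`
    /// newly executed steps. Returns the steps executed by THIS call.
    fn run(&mut self, instance: &str, steps: &[&str], crash_after: Option<usize>) -> Vec<String> {
        let done = self.completed.entry(instance.to_string()).or_default();
        let mut executed = Vec::new();
        for step in steps {
            if done.contains(&step.to_string()) {
                continue; // checkpoint hit: already ran in an earlier attempt
            }
            if crash_after == Some(executed.len()) {
                break; // simulated crash mid-workflow
            }
            done.push(step.to_string());
            executed.push(step.to_string());
        }
        executed
    }
}

fn main() {
    let mut backend = ModelBackend::default();
    let steps = ["validate", "charge", "send_email"];

    // First attempt crashes after two steps...
    let first = backend.run("instance-001", &steps, Some(2));
    assert_eq!(first, vec!["validate", "charge"]);

    // ...and the resumed run picks up where it left off.
    let resumed = backend.run("instance-001", &steps, None);
    assert_eq!(resumed, vec!["send_email"]);
}
```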
### With InMemoryBackend

For local development or tests that need checkpointing without a database:
```rust
let backend = InMemoryBackend::new();
let runner = CheckpointingRunner::new(backend);

let status = runner.run(&workflow, "instance-001", input).await?;
```

## Next steps
- Retries & Timeouts — exponential backoff, timeout behavior
- Signals & Events — wait for external events
- Parallel Workflows — fork/join parallelism
- Composing Workflows — build modular pipelines from reusable sub-workflows
- Distributed Workers — `PooledWorker`, `TaskRegistry`, scaling
- PostgreSQL in Production — connection pooling, operational tips