# Rust Quick Start
## Installation

```toml
[dependencies]
sayiir-runtime = "0.1"      # core runtime + runners + macros

# Optional — pick your backend
sayiir-persistence = "0.1"  # InMemoryBackend (dev/testing)
sayiir-postgres = "0.1"     # PostgreSQL backend (production)
```

Import the prelude:
```rust
use sayiir_runtime::prelude::*;
// Re-exports: WorkflowBuilder, CheckpointingRunner, PooledWorker,
// WorkerHandle, InMemoryBackend, JsonCodec, TaskRegistry,
// task (macro), workflow (macro), etc.
```
## Define tasks with `#[task]`

```rust
use sayiir_runtime::prelude::*;
use sayiir_core::error::BoxError;
use std::sync::Arc;

#[task(timeout = "30s", retries = 3, backoff = "100ms")]
async fn charge(order: Order, #[inject] stripe: Arc<Stripe>) -> Result<Receipt, BoxError> {
    stripe.charge(&order).await
}

// Generated: struct Charge with new(stripe), task_id(), metadata(),
// register(), CoreTask impl.
// The original `charge` function is preserved for direct use/testing.
```

The `#[task]` macro generates a struct (the PascalCase form of the function name) that implements `CoreTask`. The struct has `new()` (accepting injected dependencies), `task_id()`, `metadata()`, and `register()` methods. Your original function is kept intact, so you can call it directly in tests.
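Because the original function survives macro expansion, you can unit-test the task logic without any runner or backend. A minimal sketch, where `Stripe::test()` and the `Order` literal are hypothetical test fixtures:

```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[tokio::test]
    async fn charge_returns_receipt() {
        // Stripe::test() and the Order fields are hypothetical stand-ins for
        // whatever test double and payload shape you use.
        let stripe = Arc::new(Stripe::test());
        let order = Order { id: "ord-1".into() };

        // Call the original function directly; no runner or backend involved.
        let receipt = charge(order, stripe).await.expect("charge should succeed");
        assert_eq!(receipt.id, "ord-1");
    }
}
```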
### Attribute options
| Option | Example | Description |
|---|---|---|
| `id` | `id = "custom_name"` | Override task ID (default: fn name) |
| `timeout` | `timeout = "30s"` | Task timeout (ms, s, m, h) |
| `retries` | `retries = 3` | Max retry count |
| `backoff` | `backoff = "100ms"` | Initial retry delay |
| `backoff_multiplier` | `backoff_multiplier = 2.0` | Exponential multiplier (default: 2.0) |
| `tags` | `tags = "io"` | Categorization tags (repeatable) |
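The options compose on a single attribute. A sketch, with a hypothetical `sync_inventory` task, showing several of them together:

```rust
// Hypothetical task combining several attribute options; option names and
// value formats follow the table above.
#[task(
    id = "inventory_sync",
    timeout = "2m",
    retries = 5,
    backoff = "250ms",
    backoff_multiplier = 2.0,
    tags = "io"
)]
async fn sync_inventory(order: Order) -> Result<Order, BoxError> {
    // ...call the inventory service here...
    Ok(order)
}
```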
## Build workflows with `workflow!`
```rust
#[task]
async fn validate(order: Order) -> Result<Order, BoxError> { Ok(order) }

#[task]
async fn charge(order: Order) -> Result<Receipt, BoxError> { Ok(Receipt { id: order.id }) }

#[task]
async fn send_email(receipt: Receipt) -> Result<(), BoxError> { Ok(()) }

let workflow = workflow!(
    "order-process",
    JsonCodec,
    TaskRegistry::new(),
    validate => charge => send_email
).unwrap();
```
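The examples assume `Order` and `Receipt` types. A minimal sketch of what they could look like; the serde derives are an assumption based on the use of `JsonCodec` for checkpointing:

```rust
use serde::{Deserialize, Serialize};

// Hypothetical payload types; assumed serializable so JsonCodec can persist them.
#[derive(Debug, Clone, Serialize, Deserialize)]
struct Order {
    id: String,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
struct Receipt {
    id: String,
}
```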
### Pipeline syntax

| Syntax | Meaning |
|---|---|
| `task_name` | Reference to a `#[task]`-generated struct |
| `name(param: Type) { expr }` | Inline task (must return `Result`) |
| `a \|\| b` | Parallel fork (branches) |
| `delay "5s"` | Durable delay |
| `signal "name"` | Wait for external signal |
| `=>` | Sequential chain (or join after `\|\|`) |
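A sketch combining several of these forms in one pipeline; `reserve_stock` is a hypothetical extra task, and the fork/join details are covered under Parallel Workflows:

```rust
// Hypothetical pipeline exercising the syntax from the table above.
let workflow = workflow!(
    "order-fulfilment",
    JsonCodec,
    TaskRegistry::new(),
    validate
        => reserve_stock || charge        // parallel fork
        => delay "5s"                     // durable delay
        => signal "warehouse-confirmed"   // wait for an external signal
        => send_email
).unwrap();
```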
## Run durably with `CheckpointingRunner`
```rust
let backend = InMemoryBackend::new();
let runner = CheckpointingRunner::new(backend);

let status = runner.run(workflow.workflow(), "instance-001", input).await?;

// After a crash: pick up where it left off
let status = runner.resume(workflow.workflow(), "instance-001").await?;
```
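End to end, the pieces fit into a single binary. A minimal sketch, assuming an `Order` value as the workflow input and a `Debug`-printable status:

```rust
#[tokio::main]
async fn main() -> Result<(), BoxError> {
    let workflow = workflow!(
        "order-process",
        JsonCodec,
        TaskRegistry::new(),
        validate => charge => send_email
    ).unwrap();

    let backend = InMemoryBackend::new();
    let runner = CheckpointingRunner::new(backend);

    // Hypothetical input value for the first task in the chain.
    let input = Order { id: "ord-42".into() };
    let status = runner.run(workflow.workflow(), "instance-001", input).await?;
    println!("finished: {status:?}"); // assumes the status type implements Debug
    Ok(())
}
```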
## With PostgreSQL

Swap the backend — the workflow code is unchanged:
```rust
use sayiir_postgres::PostgresBackend;

// Connects and runs migrations automatically
let backend = PostgresBackend::<JsonCodec>::connect("postgresql://localhost/sayiir").await?;
let runner = CheckpointingRunner::new(backend);

let status = runner.run(workflow.workflow(), "instance-001", input).await?;
```
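In production you would normally not hard-code the connection string; a small sketch of the usual pattern (not sayiir-specific) of reading it from the environment:

```rust
// Read the connection string from DATABASE_URL, falling back to a local default.
let url = std::env::var("DATABASE_URL")
    .unwrap_or_else(|_| "postgresql://localhost/sayiir".to_string());
let backend = PostgresBackend::<JsonCodec>::connect(url.as_str()).await?;
```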
## Next steps

- Retries & Timeouts — exponential backoff, timeout behavior
- Signals & Events — wait for external events
- Parallel Workflows — fork/join parallelism
- Distributed Workers — PooledWorker, TaskRegistry, scaling
- PostgreSQL in Production — connection pooling, operational tips