Tutorial: AI Research Agent (Python)

Build a real-world AI agent that searches multiple sources in parallel, synthesizes findings with a local LLM, routes through a quality gate, and waits for human approval before saving the final report. Zero API keys required — everything runs locally with Ollama.

Sayiir is not an AI agent framework; it is a durable workflow engine. But many AI “agents” are really pipelines: fetch data, process it, call an LLM, evaluate quality, wait for a human, save the result. For pipelines like these, Sayiir is simpler than agent-specific frameworks:

```python
from datetime import timedelta

workflow = (
    Flow("ai-research-agent")
    .then(parse_query)
    .fork()
    .branch(search_web)
    .branch(search_wikipedia)
    .branch(search_arxiv)
    .join(merge_sources)
    .then(synthesize)
    .then(assess_quality)
    .route(extract_verdict, keys=["publish", "revise", "insufficient"])
    .branch("publish", pass_through)
    .branch("revise", revise_report)
    .branch("insufficient", flag_insufficient)
    .done()
    .then(save_draft)
    .wait_for_signal("human_approval", timeout=timedelta(hours=48))
    .then(save_report)
    .build()
)
```
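The `extract_verdict` step named in `.route()` above is the quality gate. As a sketch of what it might look like: the function name comes from the flow definition, but its body and calling contract are assumptions here — we assume Sayiir passes it the previous step's output (the LLM's self-assessment text) and routes on the string it returns.

```python
def extract_verdict(assessment: str) -> str:
    # Hypothetical sketch: scan the LLM's self-assessment for an explicit
    # verdict keyword matching one of the route keys.
    text = assessment.lower()
    for key in ("publish", "revise", "insufficient"):
        if f"verdict: {key}" in text:
            return key
    # Safe default: unclear model output goes back for revision.
    return "revise"
```

Prompting `assess_quality` to end its answer with a line like `VERDICT: publish` keeps this parser trivial and makes the routing decision auditable in the saved report.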

Fork/join for parallel search, conditional branching on the LLM's self-assessment, retries with backoff for flaky search APIs, human-in-the-loop approval via signals, and crash recovery: all with no extra infrastructure and no determinism constraints.
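To make the fork/join concrete, here is one way `merge_sources` could combine the three branch results. This is a sketch under assumptions: we assume the join step receives a list of branch outputs, and that each branch returns a dict shaped like `{"source": ..., "items": [...]}` — neither is dictated by Sayiir.

```python
def merge_sources(results: list[dict]) -> dict:
    # Assumed branch output shape: {"source": name, "items": [{"url": ..., ...}]}.
    # Deduplicate across branches by URL, preserving branch order.
    merged, seen = [], set()
    for branch in results:
        for item in branch["items"]:
            if item["url"] not in seen:
                seen.add(item["url"])
                merged.append(item)
    # Carry the original query forward (assumed to ride along on the first branch).
    return {"query": results[0].get("query"), "items": merged}
```

Because the join output is a single dict, `synthesize` downstream only ever sees one normalized shape regardless of which searches succeeded.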