Getting Started¶
This guide will help you install Yagra and build your first workflow.
Installation¶
Requirements¶
- Python 3.12 or later
- pip (or uv for faster installs)
Install from PyPI¶
```shell
pip install yagra
```
Or with uv (recommended):
```shell
uv add yagra
```
Install with MCP Support¶
To expose Yagra as a Model Context Protocol (MCP) server — enabling AI agents and editors (Claude Desktop, VS Code, Cursor, etc.) to call Yagra tools directly — install the optional mcp extra:
```shell
uv add 'yagra[mcp]'
```
Then start the server with:
```shell
yagra mcp
```
See the yagra mcp section in the CLI reference for integration examples.
Verify Installation¶
```shell
yagra --help
```
You should see available commands: init, schema, validate, analyze, golden, studio, handlers, explain, prompt, mcp.
Your First Workflow¶
Option 1: Quick Start with Templates¶
Yagra provides templates for common patterns. This is the fastest way to get started.
1. List Available Templates¶
```shell
yagra init --list
```
Output:
```text
Available templates:
- branch
- chat
- human-review
- loop
- multi-agent
- parallel
- rag
- subgraph
- tool-use
```
2. Initialize from Template¶
```shell
yagra init --template branch --output my-first-workflow
cd my-first-workflow
```
This generates:
- `workflow.yaml`: Workflow definition
- `prompts/branch_prompts.yaml`: Prompt definitions
3. Validate the Workflow¶
```shell
yagra validate --workflow workflow.yaml
```
If valid, you’ll see:
```text
✓ Workflow is valid.
```
4. Visualize the Workflow¶
Use yagra studio to see your workflow visually in the browser:
```shell
yagra studio --port 8787
```
Option 2: Build from Scratch¶
If you prefer to understand each component, follow this step-by-step guide.
1. Define Your State Schema¶
```python
# my_workflow.py
from typing import TypedDict

class AgentState(TypedDict, total=False):
    query: str
    intent: str
    answer: str
    __next__: str  # For conditional branching
```
2. Implement Handler Functions¶
```python
def classify_intent(state: AgentState, params: dict) -> dict:
    """Classify user intent based on query."""
    query = state.get("query", "")
    if "料金" in query or "price" in query.lower():
        intent = "faq"
    else:
        intent = "general"
    return {"intent": intent, "__next__": intent}

def answer_faq(state: AgentState, params: dict) -> dict:
    """Answer FAQ questions."""
    prompt = params.get("prompt", {})
    system_prompt = prompt.get("system", "You are a helpful assistant.")
    return {"answer": f"FAQ: {system_prompt}"}

def answer_general(state: AgentState, params: dict) -> dict:
    """Answer general questions."""
    model = params.get("model", {})
    model_name = model.get("name", "unknown")
    return {"answer": f"GENERAL via {model_name}"}

def finish(state: AgentState, params: dict) -> dict:
    """Finalize the answer."""
    return {"answer": state.get("answer", "No answer")}
```
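Because handlers are plain `(state, params) -> dict` functions, you can sanity-check the branching logic with ordinary dicts before wiring up any workflow. A quick sketch (the classifier is restated inline so the snippet runs on its own):

```python
# Restated inline so this snippet is self-contained; matches the
# classify_intent handler defined above.
def classify_intent(state: dict, params: dict) -> dict:
    query = state.get("query", "")
    intent = "faq" if "料金" in query or "price" in query.lower() else "general"
    return {"intent": intent, "__next__": intent}

# English and Japanese pricing queries both route to the faq branch.
result = classify_intent({"query": "What is the price?"}, {})
assert result == {"intent": "faq", "__next__": "faq"}

# Anything else falls through to the general branch.
result = classify_intent({"query": "How do I reset my password?"}, {})
assert result["__next__"] == "general"
```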
3. Create Workflow YAML¶
Create workflows/support.yaml:
```yaml
version: "1.0"
start_at: "classifier"
end_at:
  - "finish"
nodes:
  - id: "classifier"
    handler: "classify_intent"
  - id: "faq_bot"
    handler: "answer_faq"
    params:
      prompt_ref: "../prompts/support_prompts.yaml#faq"
  - id: "general_bot"
    handler: "answer_general"
    params:
      model:
        provider: "openai"
        name: "gpt-4.1-mini"
  - id: "finish"
    handler: "finish"
edges:
  - source: "classifier"
    target: "faq_bot"
    condition: "faq"
  - source: "classifier"
    target: "general_bot"
    condition: "general"
  - source: "faq_bot"
    target: "finish"
  - source: "general_bot"
    target: "finish"
```
4. Create Prompt YAML¶
Create prompts/support_prompts.yaml:
```yaml
faq:
  system: |
    You are a FAQ bot. Answer common questions about pricing, features, and policies.
  user: |
    Question: {query}
```
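The `{query}` placeholder is a Python-style template field that gets filled from workflow state. As an illustration only (Yagra's actual rendering pipeline may differ), `str.format` shows the substitution idea:

```python
# Illustration of {placeholder} substitution -- not Yagra's internal renderer.
user_template = "Question: {query}\n"
state = {"query": "How much does the Pro plan cost?"}

# Unpack state keys into the template's named fields.
rendered = user_template.format(**state)
print(rendered)  # -> Question: How much does the Pro plan cost?
```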
5. Build and Run¶
```python
from yagra import Yagra

# Register handlers
registry = {
    "classify_intent": classify_intent,
    "answer_faq": answer_faq,
    "answer_general": answer_general,
    "finish": finish,
}

# Build graph from workflow
app = Yagra.from_workflow(
    workflow_path="workflows/support.yaml",
    registry=registry,
    state_schema=AgentState,
)

# Execute ("料金を教えて" = "Tell me the pricing" -- triggers the faq branch)
result = app.invoke({"query": "料金を教えて"})
print(result["answer"])
```
Observability and get_last_trace()¶
Enable observability=True to capture node-level traces in memory for each invoke():
```python
app = Yagra.from_workflow(
    workflow_path="workflows/support.yaml",
    registry=registry,
    observability=True,
)

app.invoke({"query": "料金を教えて"}, trace=False)
last_trace = app.get_last_trace()  # WorkflowRunTrace | None
```

- `get_last_trace()` returns `None` when `observability=False` or before the first `invoke()`.
- `trace=True` only controls JSON file persistence under `.yagra/traces/` (or `trace_dir`); `get_last_trace()` remains available in memory either way.
Next Steps¶
Learn YAML Syntax: Workflow YAML Reference
Explore CLI Tools: CLI Reference
Try Visual Editor: Run `yagra studio --port 8787` to launch the WebUI
See Examples: Examples
Common Issues¶
ModuleNotFoundError: No module named 'yagra'¶
Make sure Yagra is installed in your active Python environment:
```shell
pip list | grep yagra
```
If not listed, reinstall:
```shell
pip install yagra
```
ValidationError on Workflow Load¶
Check your YAML syntax with:
```shell
yagra validate --workflow your_workflow.yaml --format json
```
This outputs structured error messages you can address.
Prompt Reference Not Resolved¶
Ensure:
- The prompt YAML file exists at the specified path
- The path is relative to the workflow YAML, or `--bundle-root` is set
- The key path (e.g., `#faq`) exists in the YAML file
Example:
```yaml
# ✅ Correct
prompt_ref: "../prompts/support_prompts.yaml#faq"

# ❌ Incorrect (missing file)
prompt_ref: "../prompts/missing.yaml#faq"
```
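The `file.yaml#key` reference format splits on the first `#` into a file path and a key path. The helper below is purely illustrative (not Yagra's internal resolver) and can be handy when debugging your own refs:

```python
def split_prompt_ref(ref: str) -> tuple[str, str]:
    # Split "path/to/file.yaml#key" into its file-path and key parts.
    path, sep, key = ref.partition("#")
    if not sep or not path or not key:
        raise ValueError(f"expected 'file.yaml#key', got {ref!r}")
    return path, key

assert split_prompt_ref("../prompts/support_prompts.yaml#faq") == (
    "../prompts/support_prompts.yaml",
    "faq",
)
```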
Using LLM Handlers¶
Yagra provides built-in handler utilities for common LLM patterns. Install the llm extra first:
```shell
uv add 'yagra[llm]'
```
Then register a handler in your Python code:
```python
from yagra import Yagra
from yagra.handlers import create_llm_handler

registry = {"llm": create_llm_handler()}
app = Yagra.from_workflow("workflow.yaml", registry)

result = app.invoke({"query": "Hello!"})
print(result["response"])
```
Three handler types are available:
| Handler | Factory | Output |
|---|---|---|
| Basic LLM | `create_llm_handler()` | |
| Structured Output | | Pydantic model instance |
| Streaming | | |
Auto-detecting prompt variables: Template variables ({variable_name}) are automatically extracted from the prompt template — no need to declare input_keys explicitly.
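One way such extraction can work (a stdlib sketch, not necessarily Yagra's implementation) is via `string.Formatter`, which parses `{field}` placeholders out of any format string:

```python
import string

def template_variables(template: str) -> list[str]:
    # Formatter().parse yields (literal, field_name, format_spec, conversion)
    # tuples; field_name is None for literal-only trailing segments.
    return [
        field
        for _, field, _, _ in string.Formatter().parse(template)
        if field is not None
    ]

assert template_variables("Question: {query}\nContext: {context}") == [
    "query",
    "context",
]
```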
Controlling the output key: By default the handler stores its result under the "output" key in state. Override it with output_key in params:
```yaml
nodes:
  - id: "summarizer"
    handler: "llm"
    params:
      output_key: "summary"  # stored as state["summary"]
      prompt_ref: "prompts/summarize.yaml#main"
      model:
        provider: "openai"
        name: "gpt-4.1-mini"
```
You can also set output_key visually from the Node Properties panel in Yagra Studio (Output Settings section).
For working examples, see: