Prompt & Model Configuration¶
This guide explains how to configure prompts and models in Yagra workflows.
Overview¶
Yagra separates configuration from code:
Prompts: Stored in external YAML files, referenced via prompt_ref
Models: Defined inline in workflow YAML under params.model
This separation enables:
Non-engineers to adjust prompts without touching code
Easy A/B testing of different models
Version control and review of prompt changes
Prompt Configuration¶
External Prompt Files¶
Prompts are stored in separate YAML files (typically under prompts/).
Example: prompts/support_prompts.yaml
faq:
  system: |
    You are a helpful FAQ assistant.
    Answer questions about pricing, features, and policies.
  user: |
    Question: {query}
general:
  system: |
    You are a general-purpose assistant.
  user: |
    User query: {query}
Referencing Prompts in Workflow¶
Use prompt_ref to link a node to a prompt:
nodes:
  - id: "faq_bot"
    handler: "answer_faq"
    params:
      prompt_ref: "../prompts/support_prompts.yaml#faq"
Syntax:
<path>: Path to the YAML file (relative to the workflow or bundle_root)
#<key>: Key path within the YAML (e.g., #faq, #nested.key)
Prompt Resolution¶
Yagra resolves prompt_ref before passing params to the handler:
def answer_faq(state: AgentState, params: dict) -> dict:
    prompt = params["prompt"]  # Resolved content
    system = prompt["system"]
    user = prompt["user"]
    # {variable} placeholders are automatically expanded from state
    # by the built-in LLM handlers — no manual .format() needed
    # ... call LLM with system and user prompts
Note: prompt_ref is removed from params after resolution—use params["prompt"] instead.
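The resolution step can be sketched roughly as follows. This is a hypothetical illustration of splitting a "<path>#<key.path>" reference and walking the loaded YAML data, not Yagra's actual implementation (the helper names split_prompt_ref and resolve_keys are invented):

```python
def split_prompt_ref(ref: str) -> tuple[str, list[str]]:
    """Split 'prompts.yaml#retrieval.search' into (path, key path)."""
    path, _, fragment = ref.partition("#")
    keys = fragment.split(".") if fragment else []
    return path, keys

def resolve_keys(data: dict, keys: list[str]) -> dict:
    """Walk nested keys, raising KeyError when the path is missing."""
    node = data
    for key in keys:
        if not isinstance(node, dict) or key not in node:
            raise KeyError(f"prompt_ref key path {'.'.join(keys)!r} not found")
        node = node[key]
    return node

# Resolving '#faq' against the loaded support_prompts.yaml content:
loaded = {"faq": {"system": "You are a helpful FAQ assistant.",
                  "user": "Question: {query}"}}
path, keys = split_prompt_ref("../prompts/support_prompts.yaml#faq")
prompt = resolve_keys(loaded, keys)
```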
Tip (built-in handlers): When using llm, streaming_llm, or structured_llm, you do not need to call .format() yourself. The handler automatically extracts every {variable} placeholder from both the system and user templates and substitutes the corresponding values from the graph state.
Path Resolution Rules¶
Explicit relative path (e.g., ../prompts/foo.yaml): Resolved relative to the workflow YAML file
Implicit path (e.g., support_prompts.yaml): First tries the workflow's parent directory, then searches up the directory tree
Custom bundle_root: Use --bundle-root to override the base directory
Example:
yagra validate --workflow workflows/support.yaml --bundle-root /path/to/project
Prompt YAML Structure¶
Prompts must be mappings with system and user keys:
my_prompt:
  system: "System message"
  user: "User message with {placeholder}"
You can nest prompts:
retrieval:
  search:
    system: "Search prompt"
    user: "Query: {query}"
  rerank:
    system: "Rerank prompt"
    user: "Candidates: {candidates}"
Reference nested prompts: prompt_ref: "prompts.yaml#retrieval.search"
State Variable Injection in Prompts¶
Built-in LLM handlers (llm, streaming_llm, structured_llm) automatically inject state values into both the system and user prompt templates.
How it works:
Every {variable} placeholder found in the system or user template is automatically extracted and resolved from the current graph state.
# system uses {persona}, user uses {query} — both are auto-detected
my_prompt:
  system: |
    You are {persona}. Answer concisely.
  user: |
    Question: {query}
Note: If a placeholder key is not present in the current state the handler substitutes an empty string and logs a warning—it does not raise an exception.
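This behavior can be sketched with str.format-style substitution. The sketch mirrors the documented semantics (extract placeholders, substitute from state, empty string plus warning on a miss) but is not Yagra's source:

```python
import logging
from string import Formatter

log = logging.getLogger(__name__)

def extract_placeholders(template: str) -> list[str]:
    """List every {variable} field name appearing in a template."""
    return [field for _, field, _, _ in Formatter().parse(template) if field]

def render_template(template: str, state: dict) -> str:
    """Substitute state values; missing keys become '' with a warning."""
    class _Defaulting(dict):
        def __missing__(self, key):
            log.warning("prompt variable %r not found in state", key)
            return ""
    return template.format_map(_Defaulting(state))
```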
Prompts in Custom Handler Nodes¶
You can associate a prompt with a custom handler node just like you would with a built-in LLM handler.
The resolved params["prompt"] dict is passed to your function, and you can apply the templates however you like.
Workflow YAML:
nodes:
  - id: "planner"
    handler: "planner_handler"  # custom Python function
    params:
      prompt_ref: "../prompts/branch_prompts.yaml#planner"
Handler code:
def planner_handler(state: AgentState, params: dict) -> dict:
    prompt = params.get("prompt", {})
    system = prompt.get("system", "")
    user = prompt.get("user", "").format(**state)  # manual substitution
    # ... call your LLM or custom logic to produce `result`
    return {"plan": result}
Studio: When a node’s handler type is set to custom, the Prompt Settings section is shown in the Node Properties panel so you can attach and edit a prompt without writing YAML by hand.
Prompt Variable Type-Safety Validation¶
Yagra validates that prompt template variables are consistent with your workflow’s state_schema and node outputs. These checks produce warning or info level issues (not errors), so they do not block workflow execution but help catch inconsistencies early.
What is checked¶
Variable exists in available keys (warning): Each {variable} in a prompt template should be declared in state_schema or produced as an output_key by another node in the workflow.
output_key declared in state_schema (warning): When a node specifies an explicit output_key, it should be declared in the workflow-level state_schema (only checked when state_schema is non-empty).
No state_schema defined (info): When state_schema is not defined but prompt variables are used, an informational message is emitted suggesting that adding a state_schema would enable type-safety checks.
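The first check can be sketched like this. It is a simplified illustration of the documented rule; the exact issue dict shape is an assumption:

```python
from string import Formatter

def check_prompt_variables(template: str, state_schema: dict,
                           upstream_output_keys: set) -> list[dict]:
    """Emit a warning-level issue per undeclared {variable}."""
    available = set(state_schema) | upstream_output_keys
    issues = []
    for _, field, _, _ in Formatter().parse(template):
        if field and field not in available:
            issues.append({
                "severity": "warning",
                "message": f"Prompt variable {field!r} not found in "
                           "state_schema or upstream output_keys",
            })
    return issues
```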
Example — variable not in state_schema (warning)¶
state_schema:
  answer:
    type: str
  # 'query' not declared here
nodes:
  - id: "respond"
    handler: "llm"
    params:
      prompt:
        user: "Question: {query}"  # 'query' not in state_schema → warning
      model: { provider: openai, name: gpt-4o-mini }
Fix: Declare query in state_schema:
state_schema:
  query:
    type: str
  answer:
    type: str
Example — output_key not in state_schema (warning)¶
state_schema:
  query:
    type: str
  # 'extracted_info' not declared here
nodes:
  - id: "extract"
    handler: "llm"
    params:
      output_key: "extracted_info"  # Not in state_schema → warning
      model: { provider: openai, name: gpt-4o-mini }
Fix: Add extracted_info to state_schema:
state_schema:
  query:
    type: str
  extracted_info:
    type: dict
Interaction with is_valid¶
These prompt-state validation issues are warning/info only and do not cause is_valid to be false. The is_valid flag is determined exclusively by error-severity issues (schema violations, missing references, etc.).
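Conceptually, the flag reduces to the following (a sketch of the documented rule, not Yagra's source):

```python
def compute_is_valid(issues: list) -> bool:
    """Only error-severity issues invalidate a workflow;
    warnings and infos are advisory."""
    return not any(issue["severity"] == "error" for issue in issues)
```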
Prompt Versioning¶
Prompt YAML files can include an optional _meta section to track version metadata:
_meta:
  version: "2.0"
  changelog:
    - "2.0: Added persona variable to system prompt"
    - "1.0: Initial version"
greeting:
  system: "You are {persona}."
  user: "Hello! My name is {user_name}."
Version Pinning with @version¶
You can pin a specific prompt version in prompt_ref using the @version suffix:
nodes:
  - id: "greet"
    handler: "llm"
    params:
      prompt_ref: "prompts.yaml#greeting@2.0"
      model:
        provider: "openai"
        name: "gpt-4o-mini"
Supported formats:
prompts.yaml#greeting@v2 — file + key + version
prompts.yaml@v2 — file + version (entire file)
prompts.yaml#greeting — file + key without version (existing syntax)
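Parsing these three forms could look like the following sketch (parse_prompt_ref is an invented helper name, not a Yagra API):

```python
def parse_prompt_ref(ref: str) -> tuple:
    """Split 'file.yaml#key@version' into (path, key, version).

    Key and version are both optional; '@version' may attach either
    to the key or directly to the file path.
    """
    path, _, rest = ref.partition("#")
    if rest:
        key, _, version = rest.partition("@")
    else:
        key = None
        path, _, version = ref.partition("@")
    return path, key or None, version or None
```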
Version Validation¶
Yagra validates version consistency during workflow validation:
| Scenario | Severity | Message |
|---|---|---|
| Pinned version matches the file's _meta.version | (no issue) | — |
| Pinned version differs from the file's _meta.version | warning | Version mismatch detected |
| Version pinned but the prompt file has no _meta.version | warning | Prompt file has no version metadata |
| File has _meta.version but prompt_ref is unpinned | info | Consider pinning with @version |
These version validation issues are warning/info only and do not affect is_valid.
Inspecting Prompt Metadata¶
Use the CLI to inspect prompt file metadata:
yagra prompt info --file prompts.yaml
yagra prompt info --file prompts.yaml --format json
See CLI Reference for details.
Model Configuration¶
Inline Model Definition¶
Models are defined directly in workflow YAML under params.model:
nodes:
  - id: "generator"
    handler: "generate_answer"
    params:
      model:
        provider: "openai"
        name: "gpt-4.1-mini"
        kwargs:
          temperature: 0.7
          max_tokens: 1000
Fields:
provider (str): Model provider (e.g., "openai", "anthropic")
name (str): Model name (e.g., "gpt-4.1-mini", "claude-3-sonnet")
kwargs (dict, optional): Additional arguments (temperature, max_tokens, etc.)
Accessing Model Config in Handlers¶
def generate_answer(state: AgentState, params: dict) -> dict:
    model_config = params.get("model", {})
    provider = model_config["provider"]
    name = model_config["name"]
    kwargs = model_config.get("kwargs", {})
    # Use with your LLM client
    # e.g., OpenAI(model=name, **kwargs)
    # ...
Why Inline Instead of model_ref?¶
Earlier versions of Yagra supported model_ref, but it was removed in favor of inline definitions for simplicity. Inline definitions:
Are easier to version control (single file)
Reduce indirection (no need to chase references)
Work well for most use cases
If you need to share model configs across multiple nodes, use YAML anchors:
_model_defaults: &default_model
  provider: "openai"
  name: "gpt-4.1-mini"
  kwargs:
    temperature: 0.7
nodes:
  - id: "node_1"
    handler: "handler_1"
    params:
      model: *default_model
  - id: "node_2"
    handler: "handler_2"
    params:
      model:
        <<: *default_model
        kwargs:
          temperature: 0.9  # Override temperature
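Note that the YAML merge key (<<:) performs a shallow merge: the overriding kwargs mapping replaces the default kwargs wholesale rather than merging key-by-key, so any other default kwargs (say, a hypothetical max_tokens) would be dropped. In Python terms:

```python
# Shallow merge, equivalent to what <<: does for node_2's model.
# The max_tokens default here is hypothetical, added to show the pitfall.
default_model = {
    "provider": "openai",
    "name": "gpt-4.1-mini",
    "kwargs": {"temperature": 0.7, "max_tokens": 1000},
}
node_2_model = {**default_model, "kwargs": {"temperature": 0.9}}
# kwargs is replaced entirely: the max_tokens default is gone.
```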
Studio Integration¶
Yagra Studio provides visual editing for prompts and models:
Prompt Editing (LLM handlers and custom handler nodes):
Select a prompt YAML from a dropdown (project YAML files, excluding tool and dot-directories)
Edit system and user fields in a form
Auto-create a prompt YAML if none is selected
The Prompt Settings section is visible for llm, structured_llm, streaming_llm, and custom handler types
Model Editing:
Fill in provider, name, and kwargs via a form
Changes are reflected in workflow YAML immediately
Output Key (LLM handler nodes only):
In the Output Settings section of the Node Properties panel, enter the state key where the handler result is stored
Leave blank to use the default ("output")
Writes params.output_key to workflow YAML on Apply
Diff Preview:
Review changes before saving
See exact YAML diff with validation results
Launch Studio:
yagra studio --workflow workflows/support.yaml --port 8787
Open http://127.0.0.1:8787/ and edit visually.
Best Practices¶
Prompt Management¶
One file per domain: Group related prompts (e.g., support_prompts.yaml, rag_prompts.yaml)
Use descriptive keys: faq_system is better than prompt1
Include placeholders: Use {variable} for dynamic content
Version control: Track prompt changes via Git for review and rollback
Model Selection¶
Match task complexity: Use smaller models (e.g., gpt-4.1-mini) for simple tasks
Tune temperature: Lower (0.1-0.3) for factual, higher (0.7-0.9) for creative
Set limits: Use max_tokens to control output length and cost
Test alternatives: Swap models easily to compare quality and cost
Example: Complete Configuration¶
Workflow: workflows/support.yaml
version: "1.0"
start_at: "classifier"
end_at:
  - "finish"
nodes:
  - id: "classifier"
    handler: "classify_intent"
  - id: "faq_bot"
    handler: "answer_faq"
    params:
      prompt_ref: "../prompts/support_prompts.yaml#faq"
      model:
        provider: "openai"
        name: "gpt-4.1-mini"
        kwargs:
          temperature: 0.3
  - id: "general_bot"
    handler: "answer_general"
    params:
      prompt_ref: "../prompts/support_prompts.yaml#general"
      model:
        provider: "anthropic"
        name: "claude-3-haiku"
        kwargs:
          temperature: 0.7
          max_tokens: 500
  - id: "finish"
    handler: "finish"
edges:
  - source: "classifier"
    target: "faq_bot"
    condition: "faq"
  - source: "classifier"
    target: "general_bot"
    condition: "general"
  - source: "faq_bot"
    target: "finish"
  - source: "general_bot"
    target: "finish"
Prompts: prompts/support_prompts.yaml
faq:
  system: |
    You are a FAQ assistant. Provide concise answers to common questions.
  user: |
    Question: {query}
general:
  system: |
    You are a helpful assistant. Provide detailed answers to user queries.
  user: |
    Query: {query}
Troubleshooting¶
prompt_ref Not Resolved¶
Error: ValidationError: prompt_ref path not found
Solutions:
Check the file path is correct (relative to the workflow or bundle_root)
Ensure the key exists in the YAML (e.g., #faq in support_prompts.yaml)
Use --bundle-root to override the base directory
Model Config Not Passed¶
Issue: model is None or empty in handler
Solution: Ensure params.model is defined in workflow YAML:
params:
  model:
    provider: "openai"
    name: "gpt-4.1-mini"
Prompt Variable Not Found (prompt_state_variable_not_found)¶
Warning: "prompt_state_variable_not_found": Prompt variable 'foo' not found in state_schema or upstream output_keys for node 'my_node'
Yagra validates that every {variable} in a prompt template exists in state_schema or is produced by an upstream node’s output_key. This is a warning-level check (does not affect is_valid).
Fix option 1 — declare in state_schema:
state_schema:
  query:
    type: str
Fix option 2 — produce it from an upstream node:
nodes:
  - id: "extract_query"
    handler: "llm"
    params:
      output_key: "query"  # Produces the 'query' key