Prompting
The llm action constructs a prompt from a list of text and variables, and sends it to the LLM.
Sweet and Simple
The simplest prompt contains a single string:
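As a minimal sketch (the action name `ask` and the `action`/`prompt` field names are illustrative assumptions; only the llm action itself is named on this page):

```yaml
ask:
  action: llm
  prompt:
    - Write a haiku about the sea.
```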
Each text in the prompt is a Jinja template.
A more complicated prompt includes a variable:
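For example, a variable can be interpolated with Jinja syntax (the surrounding field names are assumptions, not confirmed API):

```yaml
summarize:
  action: llm
  prompt:
    - text: |
        Summarize the following document:
        {{ document }}
```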
Prompts with Links
It’s also possible to reference other actions’ results in the template:
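A hypothetical sketch of linking one action's output into another's prompt (the `.result` accessor and field names are assumptions):

```yaml
draft:
  action: llm
  prompt:
    - Write a short product description for a solar lantern.
polish:
  action: llm
  prompt:
    - text: |
        Improve the following draft:
        {{ draft.result }}
```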
Prompts with Variables
Oftentimes, the prompt is more complex and includes multiple variables in a multi-line string.
AI JSON supports syntactic sugar for referring to variables/links with a heading.
The two prompts below are equivalent:
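A rough illustration of the two forms (the `heading`/`var` keys are a guess at the shape of the sugar, not confirmed syntax):

```yaml
# Explicit Jinja template with an inline heading
prompt:
  - text: |
      Meeting notes:
      {{ notes }}

      Summarize the notes above.

# Hypothetical sugar: the variable is rendered under its own heading
prompt:
  - heading: Meeting notes
    var: notes
  - text: Summarize the notes above.
```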
System and User Messages
Using roles (system and user messages) is easy. Simply append role: system or role: user to a text element, or use it as a standalone element.
The two prompts below are equivalent:
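A sketch of the two equivalent forms described above (field names assumed; only role: system and role: user come from this page):

```yaml
# Roles appended to text elements
prompt:
  - role: system
    text: You are a concise technical writer.
  - role: user
    text: Summarize the document.

# Roles as standalone elements
prompt:
  - role: system
  - text: You are a concise technical writer.
  - role: user
  - text: Summarize the document.
```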
Adjusting Quote Style
For generating well-formatted output, it is often useful to ask the language model to wrap its response in XML tags. Including XML tags in the prompt itself makes the model more likely to produce such a response.
The llm action can use the quote_style parameter to specify how variables are formatted in the prompt. Specifically, xml wraps each variable in XML tags instead of triple backticks.
The two prompts below are equivalent:
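One way the equivalence might look (the `var` element and the placement of `quote_style` are assumptions; only the parameter name and its xml value come from this page):

```yaml
# Explicitly wrapping the variable in XML tags
prompt:
  - text: |
      <document>
      {{ document }}
      </document>
      Summarize the document above.

# Equivalent, letting quote_style wrap the variable
quote_style: xml
prompt:
  - var: document
  - text: Summarize the document above.
```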
As the prompt's output includes an XML-tag-wrapped response, you should extract it with the extract_xml_tag action:
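For instance, a follow-up action might pull the tagged response out (the `text` and `tag` parameter names are guesses; only the extract_xml_tag action name comes from this page):

```yaml
summary:
  action: llm
  quote_style: xml
  prompt:
    - text: Respond with your summary wrapped in <summary> tags.
get_summary:
  action: extract_xml_tag
  text: '{{ summary.result }}'
  tag: summary
```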
Structured Data Generation
The llm action can constrain generation to JSON that adheres to a JSON Schema.
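A sketch of such a constraint (the `output_schema` key is an assumption about where the schema attaches; the schema itself is standard JSON Schema):

```yaml
meeting_plan:
  action: llm
  prompt:
    - Extract the action items from these meeting notes.
  output_schema:
    type: object
    properties:
      action_items:
        type: array
        items:
          type: string
    required:
      - action_items
```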
The above prompt will generate a JSON object with the action_items key, which is an array of strings.
Please note that not all language models support structured data generation. Check the model provider's documentation to see whether this feature is supported.