Prompting

The `llm` action constructs a prompt from a list of text and variables, and sends it to the LLM.
Sweet and Simple
The simplest prompt contains a single string:

```yaml
my_prompt:
  action: llm
  prompt: "Can you say hello world for me?"
```
Each text element in the prompt is a Jinja template.
A more complicated prompt includes a variable:

```yaml
my_prompt:
  action: llm
  prompt: "Can you say hello to {{ name }}?"
```
Prompts with Links
It’s also possible to reference other actions’ results in the template:
```yaml
name_prompt:
  action: llm
  prompt: "What's your name?"

my_prompt:
  action: llm
  prompt: "Can you say hello to {{ name_prompt }}?"
```
Prompts with Variables
Often, the prompt is more complex and includes multiple variables in a multi-line string.
AI JSON supports syntactic sugar for referring to variables/links with a heading.
The two prompts below are equivalent:
````yaml
my_prompt:
  action: llm
  prompt: |
    A writing sample:
    ```
    {{ sample_text }}
    ```

    Write a story about {{ subject }} in the style of the sample.
````
```yaml
my_prompt:
  action: llm
  prompt:
    - heading: A writing sample
      var: sample_text
    - text: Write a story about {{ subject }} in the style of the sample.
```
System and User Messages
Using roles (system and user messages) is easy. Simply add `role: system` or `role: user` to a text element, or use it as a standalone element.
The two prompts below are equivalent:
```yaml
my_prompt:
  action: llm
  prompt:
    - role: system
      text: You are a detective investigating a crime scene.
    - role: user
      text: What do you see?
```
```yaml
my_prompt:
  action: llm
  prompt:
    - role: system
    - text: You are a detective investigating a crime scene.
    - role: user
    - text: What do you see?
```
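Conceptually, both prompts above resolve to the same list of messages. As an illustration, assuming an OpenAI-style chat format (not necessarily the framework's exact internal representation), the messages sent to the model would look like:

```json
[
  {"role": "system", "content": "You are a detective investigating a crime scene."},
  {"role": "user", "content": "What do you see?"}
]
```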
Adjusting Quote Style
For generating well-formatted output, it is often useful to have the language model wrap its response in XML tags. Prompting with XML tags tends to improve such responses.
The `llm` action accepts a `quote_style` parameter that specifies how variables are formatted in the prompt. Specifically, `xml` wraps each variable in XML tags instead of triple backticks.
The two prompts below are equivalent:
```yaml
my_prompt:
  action: llm
  prompt: |
    <writing sample>
    {{ sample_text }}
    </writing sample>

    Write a story about {{ subject }} in the style of the sample, placing it between <story> and </story> tags.
```
```yaml
my_prompt:
  action: llm
  quote_style: xml
  prompt:
    - heading: writing sample
      var: sample_text
    - text: |
        Write a story about {{ subject }} in the style of the sample, placing it between <story> and </story> tags.
```
Since the prompt's output includes a response wrapped in XML tags, you should extract it with the `extract_xml_tag` action:
```yaml
extract_story:
  action: extract_xml_tag
  tag: story
  text:
    link: my_prompt.result
```
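For example, suppose `my_prompt.result` contains the following (hypothetical) response:

```
Here is the story you asked for:
<story>
The rain had washed away every footprint but one.
</story>
```

`extract_story` would then yield only the text between the `<story>` tags: `The rain had washed away every footprint but one.`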
Structured Data Generation
The `llm` action can constrain generation to JSON that adheres to a JSON Schema:
```yaml
my_prompt:
  action: llm
  prompt:
    - heading: Meeting notes
      var: meeting_notes
    - text: |
        Extract the action items from the meeting notes.
  output_schema:
    action_items:
      type: array
      items:
        type: string
```
The above prompt will generate a JSON object with an `action_items` key, which is an array of strings.
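For example, given meeting notes that mention two follow-ups, the generated object might look like this (the values are hypothetical):

```json
{
  "action_items": [
    "Send the budget summary to the finance team",
    "Schedule a follow-up meeting for Friday"
  ]
}
```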
Please note that not all language models support structured data generation. Check the model provider's documentation to see whether it supports this feature.