# Quickstart
These docs are a work in progress.
If you get stuck, please join our Discord and let us know.
## Python Setup

### Create a virtual environment

Install Python 3.12, then create an environment with either `venv` or `conda`.

With `venv`:

```shell
python3.12 -m venv venv
```

With `conda`:

```shell
conda create -n aijson python=3.12 -y
```

### Activate the virtual environment

With `venv`:

```shell
source venv/bin/activate
```

With `conda`:

```shell
conda activate aijson
```
### Install AI JSON

```shell
pip install aijson-meta
```

`aijson-meta` is a metapackage – a shortcut to installing `aijson-core` and some common actionpacks.
## IDE Setup

### Install Visual Studio Code

Follow the instructions on the download page.
### Install the AI JSON extension

Search for **AI JSON & AI YAML** in the VSCode marketplace.
### Select your virtual environment

- Open the command palette (`Cmd+Shift+P` / `Ctrl+Shift+P`);
- Type and select `Python: Select Interpreter`;
- Choose the virtual environment you created earlier.
## Running an Example

### Provide a language model

There's a guide on using any language model, but to get started you can simply do any one of the following.

**OpenAI:** create a `.env` file in your project root with the following contents:

```shell
OPENAI_API_KEY=your-api-key
```

**Anthropic:** create a `.env` file in your project root with the following contents:

```shell
ANTHROPIC_API_KEY=your-api-key
```

**Ollama:** run Ollama locally on port 11434.

**AWS:** create a `.env` file in your project root with the following contents:

```shell
AWS_ACCESS_KEY_ID=your-access-key
AWS_SECRET_ACCESS_KEY=your-secret-key
```
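If you're curious what loading a `.env` file amounts to, here is a minimal stdlib-only sketch (illustrative only; the run examples below use the `python-dotenv` package instead, and its behavior differs in details such as quoting and overrides):

```python
import os
import pathlib
import tempfile

def load_env_file(path):
    """Minimal .env loader: put KEY=value lines into os.environ (illustration only)."""
    for line in pathlib.Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # overwrite unconditionally for simplicity
        os.environ[key.strip()] = value.strip()

# demo against a throwaway .env file
with tempfile.TemporaryDirectory() as tmp:
    env_path = pathlib.Path(tmp) / ".env"
    env_path.write_text("OPENAI_API_KEY=your-api-key\n")
    load_env_file(env_path)

print(os.environ["OPENAI_API_KEY"])  # your-api-key
```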
### Grab an example

Copy one of the following example files into your project, or see the example files for more.
```yaml
version: "0.1"

flow:
  ask:
    action: llm
    prompt: List some examples of {{ thing }}
  extract:
    action: extract_list
    text:
      link: ask
```
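The `{{ thing }}` placeholder in the prompt is a template variable, filled in when you set `thing` on the flow. aijson uses a proper template engine for this; the sketch below is only to show the idea of the substitution:

```python
import re

def render(template: str, **variables) -> str:
    # replace each {{ name }} placeholder with the corresponding variable
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(variables[m.group(1)]),
        template,
    )

print(render("List some examples of {{ thing }}", thing="pizza"))
# List some examples of pizza
```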
```yaml
version: "0.1"

# De Bono's Six Thinking Hats is a powerful technique for creative problem-solving and decision-making.
flow:
  # The white hat focuses on the available information and facts about the problem.
  white_hat:
    action: llm
    prompt:
      - heading: Problem
        var: query
      - text: |
          List all the factual information you know about the problem.
          What data and numbers are available?
          Identify any gaps in your knowledge and consider how you might obtain this missing information.
  # The red hat explores emotions, feelings, and intuitions about the problem.
  red_hat:
    action: llm
    prompt:
      - heading: Problem
        var: query
      - text: |
          Express your feelings and intuitions about the problem without any need to justify them.
          What are your initial reactions?
          How do you and others feel about the situation?
  # The black hat considers the risks, obstacles, and potential downsides of the problem.
  black_hat:
    action: llm
    prompt:
      - heading: Problem
        var: query
      - text: |
          Consider the risks and challenges associated with the problem.
          What are the potential downsides?
          Try to think critically about the obstacles, and the worst-case scenarios.
  # The yellow hat focuses on the positive aspects, benefits, and opportunities of the problem.
  yellow_hat:
    action: llm
    prompt:
      - heading: Problem
        var: query
      - text: |
          Focus on the positives and the potential benefits of solving the problem.
          What are the best possible outcomes?
          How can this situation be an opportunity for growth or improvement?
  # The green hat generates creative ideas, alternatives, and innovative solutions to the problem.
  green_hat:
    action: llm
    prompt:
      - heading: Problem
        var: query
      - text: |
          Think creatively about the problem.
          Brainstorm new ideas and alternative solutions.
          How can you overcome the identified challenges in an innovative way?
  # The blue hat manages the thinking process, synthesizes insights, and outlines a plan of action.
  blue_hat:
    action: llm
    prompt:
      - heading: Problem
        var: query
      - heading: White Hat
        link: white_hat.response
      - heading: Red Hat
        link: red_hat.response
      - heading: Black Hat
        link: black_hat.response
      - heading: Yellow Hat
        link: yellow_hat.response
      - heading: Green Hat
        link: green_hat.response
      - text: |
          Review and synthesize the information and ideas generated from the other hats.
          Assess which ideas are most feasible and effective based on the facts (White Hat), emotions (Red Hat), risks (Black Hat), benefits (Yellow Hat), and creative solutions (Green Hat).
          How can these insights be integrated into a coherent strategy?
          Outline a plan with clear steps or actions, indicating responsibilities, deadlines, and milestones.
          Consider how you will monitor progress and what criteria you will use to evaluate success.
```
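Note how `blue_hat` links to the outputs of the five other hats: linked actions form a dependency graph, so the blue hat can only run after the others have finished. The ordering can be sketched with the stdlib's `graphlib` (illustrative only; this is not how aijson schedules actions internally):

```python
from graphlib import TopologicalSorter

# dependencies implied by the `link:` fields in the flow above
dependencies = {
    "white_hat": set(),
    "red_hat": set(),
    "black_hat": set(),
    "yellow_hat": set(),
    "green_hat": set(),
    "blue_hat": {"white_hat", "red_hat", "black_hat", "yellow_hat", "green_hat"},
}

order = list(TopologicalSorter(dependencies).static_order())
print(order)  # the five independent hats in some order, then blue_hat last
```

Because the first five hats have no dependencies on each other, a runtime is free to execute them concurrently before running `blue_hat`.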
```yaml
version: "0.1"

flow:
  judgement:
    action: llm
    quote_style: xml
    prompt:
      - role: system
      - text: |
          You are evaluating the answers given on an application to a start-up accelerator in San Francisco.
          This is a very prestigious and selective application.
          Criteria about the application is as follows:
      - heading: criteria
        var: application_criteria
      - role: user
      - text: |
          Critically evaluate the following application, determine if this is worth inclusion in your prestigious startup accelerator and the quality of the application.
          You only have the ability to fund 5 companies and will be presented with over 200 applications.
          Be careful, wasting your funding opportunities on the wrong companies could lead to bankruptcy and you have a family at home to take care of.
          Provide a detailed score based accounting of the strengths and weaknesses.
      - heading: application
        var: application
  suggestions:
    action: llm
    quote_style: xml
    prompt:
      - role: system
      - text: |
          You are a seasoned expert in the startup scene who truly believes in the startup who submitted their application.
          To ensure success in their application in a prestigious startup accelerator you sent the application to an experienced friend who passed judgement on the application.
          Now you are trying to figure out actionable methods for how to boost their application based on the scores received.
          You do have some criteria on what the startup accelerator is looking for:
      - heading: criteria
        var: application_criteria
      - role: user
      - text: |
          Provide ideas on how to improve this application based on the judgement it received and the criteria you have on the process.
      - heading: application
        var: application
      - heading: judgement
        link: judgement.result
default_output: judgement.result
```
### Preview the example

- Open the file in VSCode;
- Press `preview` in the top-right corner;
- Edit the file, save it, and see the preview update in real time.
### Run the example

Run the example in your code like:

```python
from aijson import Flow
import asyncio
from dotenv import load_dotenv

# load environment variables from .env
load_dotenv()

async def main():
    # load the flow
    flow = Flow.from_file('simple_list.ai.yaml')

    # set variables
    flow = flow.set_vars(thing='pizza')

    # run it
    result = await flow.run()
    print(result)

    # alternatively, INSTEAD of running it, stream it
    async for result in flow.stream():
        print(result)

if __name__ == '__main__':
    asyncio.run(main())
```
```python
from aijson import Flow
import asyncio
from dotenv import load_dotenv

# load environment variables from .env
load_dotenv()

async def main():
    # load the flow
    flow = Flow.from_file('debono.ai.yaml')

    # set variables
    flow = flow.set_vars(query='what should I do for my birthday')

    # run it
    result = await flow.run()
    print(result)

    # alternatively, INSTEAD of running it, stream it
    async for result in flow.stream():
        print(result)

if __name__ == '__main__':
    asyncio.run(main())
```
```python
from aijson import Flow
import asyncio
from dotenv import load_dotenv

# load environment variables from .env
load_dotenv()

async def main():
    # load the flow
    flow = Flow.from_file('application_judgement.ai.yaml')

    # set variables
    flow = flow.set_vars(
        application_criteria="Assess based on innovation, market potential, team, and feasibility",
        application="A startup that uses AI to predict the weather",
    )

    # run it
    result = await flow.run()
    print(result)

    # alternatively, INSTEAD of running it, stream it
    async for result in flow.stream():
        print(result)

if __name__ == '__main__':
    asyncio.run(main())
```
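On the difference between `run()` and `stream()` in the examples above: `run()` waits for the final result, while `stream()` yields results progressively as they are generated. A toy illustration with a plain async generator (not the aijson API, just the shape of the two consumption patterns):

```python
import asyncio

async def toy_stream():
    # yields progressively larger partial results, like streaming a flow
    partial = ""
    for token in ["1. pepperoni", ", 2. margherita", ", 3. hawaiian"]:
        partial += token
        yield partial

async def toy_run():
    # consume the whole stream and keep only the final result, like running a flow
    final = None
    async for result in toy_stream():
        final = result
    return final

result = asyncio.run(toy_run())
print(result)  # 1. pepperoni, 2. margherita, 3. hawaiian
```

Streaming is useful when you want to show partial output in a UI as the model generates it, rather than blocking until the flow completes.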