Large Language Models (LLMs) are an inflection point in computing.
hof chat is an experiment in combining LLMs with hof’s features
to find where and how they can be used together for the best effect.
hof chat
The hof chat command is an early preview for interacting with hof using natural language prompts.
You can already use this to:
Talk with ChatGPT from the command line or vim
Talk with Hof data models (full demo coming soon :)
note, you can also call any LLM APIs via hof/flow to build complex workflows (see the sketch below)
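For example, a hof/flow task can call the OpenAI chat API directly. The following is only a minimal sketch: the @flow()/@task() attributes and the req/resp field names are written from memory of the hof/flow task schema and may differ from the current release, so check the flow docs before relying on them; the endpoint and payload shape are the public OpenAI chat completions API.

import "encoding/json"

ask: {
	@flow() // run with: hof flow <file>.cue

	// pull the API key from the environment (os.Getenv task)
	env: {
		@task(os.Getenv)
		OPENAI_API_KEY: string
	}

	// POST to the OpenAI chat completions endpoint (api.Call task)
	call: {
		@task(api.Call)
		req: {
			method: "POST"
			host:   "https://api.openai.com"
			path:   "/v1/chat/completions"
			headers: {
				"Content-Type":  "application/json"
				"Authorization": "Bearer \(env.OPENAI_API_KEY)"
			}
			data: json.Marshal({
				model: "gpt-3.5-turbo"
				messages: [{role: "user", content: "why is the sky blue?"}]
			})
		}
		// response body, filled in by the task
		resp: body: _
	}
}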
$ hof help chat
Use chat to work with hof features or from modules you import.
Module authors can provide custom prompts for their schemas.
This is an alpha stage command, expect big changes next release.
Currently, only ChatGPT is supported. You can use any of the
gpt-3.5 or gpt-4 models. The flag should match OpenAI API options.
While we are using the chat models, we do not support interactive sessions yet.
Set OPENAI_API_KEY as an environment variable.
Examples:
#
# Talk to LLMs (ChatGPT or Bard)
#
# select the model with -m
# full model name supported, also several shorthands
hof chat -m "gpt3" "why is the sky blue?" (gpt-3.5-turbo)
hof chat -m "bard" "why is the sky blue?" (chat-bison@001)
# Ask of the LLM from strings, files, and/or stdin
# these will be concatenated to form the question
hof chat "Ask ChatGPT any question" # as a string
hof chat question.txt # from a file
cat question.txt | hof chat - # from stdin
hof chat context.txt "and a question" # mix all three
# Provide a system message, these are special to LLMs
# this is typically where the prompt engineering happens
hof chat -S prompt.txt "now answer me this..."
hof chat -S "... if short prompt ..." "now answer me this..."
# Provide examples to the LLM
# for Bard, these are an additional input
# for ChatGPT, these will be appended to the system message
# if examples are supplied as JSON, they should be [{ input: string, output: string }]
hof chat -E "<INPUT>: this is an input <OUTPUT>: this is an output" -E "..." "now answer me this..."
hof chat -E examples.json "now answer me this"
# Provide message history to the LLM
# if messages are supplied as JSON, they should be { role: string, content: string }
hof chat -M "user> asked some question" -M "assistant> had a reply" "now answer me this..."
hof chat -M messages.json "now answer me this"
Usage:
hof chat [args] [flags]
hof chat [command]
Available Commands:
info print details of a specific chat plugin
list print available chat plugins in the current module
with chat with a plugin in the current module
Flags:
-N, --choices int param: choices or N (openai) (default 1)
-e, --example stringArray string or path to an example pair for the LLM
-h, --help help for chat
--max-tokens int param: MaxTokens (default 256)
-m, --message stringArray string or path to a message for the LLM
-M, --model string LLM model to use [gpt-3.5-turbo,gpt-4,bard,chat-bison] (default "gpt-3.5-turbo")
-O, --outfile string path to write the output to
--stop stringArray param: Stop (openai)
-s, --system stringArray string or path to the system prompt for the LLM, concatenated
--temp float param: temperature (default 0.8)
--topk int param: TopK (google) (default 40)
--topp float param: TopP (default 0.42)
Global Flags:
-E, --all-errors print all available errors
-i, --ignore-errors turn off output and assume defaults at prompts
-D, --include-data auto include all data files found with cue files
-V, --inject-env inject all ENV VARs as default tag vars
-I, --input stringArray extra data to unify into the root value
-p, --package string the Cue package context to use during execution
-l, --path stringArray CUE expression for single path component when placing data files
-q, --quiet turn off output and assume defaults at prompts
-d, --schema stringArray expression to select schema to apply to data files
--stats print generator statistics
-0, --stdin-empty A flag that ensures stdin is zero and does not block
-t, --tags stringArray @tags() to be injected into CUE code
-U, --user-files stringArray file globs to embed into the root value (<cue-path>=<file-glob>), use % as slash to trim before
-v, --verbosity int set the verbosity of output
--with-context add extra context for data files, usable in the -l/path flag
Use "hof chat [command] --help" for more information about a command.
where we are going
We see Hof + LLMs as better than either on their own.
LLMs provide natural language interfaces to all things Hof.
We are building a future where an LLM-powered Hof is your coding assistant,
allowing you to use the best interface (LLM, IDE, low-code) for the task at hand.
Hof simplifies code gen with LLMs
Hof’s deterministic code gen means that the LLM only has to generate the
data models and extra configuration needed for generators. This has many benefits.
The task for the LLM is much easier, so it can do a much better job.
The code generation is backed by human-written code, so there are no hallucinations.
You get the same benefits as generating code at scale with Hof; the sketch below shows the kind of input the LLM produces.
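To make the division of labor concrete, the LLM's side of the work can be as small as a data model like the one below. The field names are illustrative only and are not the exact hof data model schema, which is defined by the generator being used.

// illustrative only: the kind of data model an LLM would produce,
// which a human-written hof generator then expands into full code
Datamodel: {
	Models: {
		User: {
			Fields: {
				ID:    {Type: "uuid"}
				Email: {Type: "string"}
				Name:  {Type: "string"}
			}
		}
	}
}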
Other places we see LLMs helping Hof
importing existing code to CUE & Hof
automatically transforming existing code to hof generators
filling in the details and gaps in generated code
in our premium user interfaces for low-code
(this relies more on multi-modal models, which come after LLMs; think Google Gemini)