LLM Chat



Large Language Models (LLMs) are an inflection point in computing. They represent a significant advance in automating tasks, code generation among them, and there are many interesting topics at the intersection of LLMs and Hof.

hof chat

The hof chat command is an early preview for interacting with hof using natural language prompts. You can already use it to:

  1. Talk with ChatGPT from the command line or vim
  2. Talk with Hof data models (full demo coming soon :)

$ hof help chat

Use chat to work with hof features or from modules you import.
Module authors can provide custom prompts for their schemas.

This is an alpha stage command, expect big changes next release.

Currently, only ChatGPT is supported. You can use any of the
gpt-3.5 or gpt-4 models; the --model flag should match the
OpenAI API model names. While we use the chat models, we do
not support interactive sessions yet.

Set OPENAI_API_KEY as an environment variable.

Examples:

#
# Talk to ChatGPT
#

# Ask ChatGPT a question from strings, files, and/or stdin
hof chat "Ask ChatGPT any question"    # as a string
hof chat question.txt                  # from a file
cat question.txt | hof chat -          # from stdin
hof chat context.txt "and a question"  # mix all three

# Provide a system message; these are special to ChatGPT
hof chat -P prompt.txt "now answer me this..."

# Get file embeddings
hof chat embed file1.txt file2.txt -O embeddings.json

#
# Talk to your data model; this uses a special system message
#

# hof will use dm.cue by default
hof chat dm "Create a data model called Interludes"
hof chat dm "Users should have a Profile with status and about fields."

# pass in a file to talk to a specific data model
hof chat dm my-dm.cue "Add a Post model and make it so Users have many."

Usage:
  hof chat [args] [flags]

Flags:
  -h, --help             help for chat
  -M, --model string     LLM model to use [gpt-3.5-turbo,gpt-4] (default "gpt-3.5-turbo")
  -O, --outfile string   path to write the output to
  -P, --prompt string    path to the system prompt, the first message in the chat

Global Flags:
      --inject-env       inject all ENV VARs as default tag vars
  -p, --package string   the Cue package context to use during execution
  -q, --quiet            turn off output and assume defaults at prompts
  -t, --tags strings     @tags() to be injected into CUE code
  -v, --verbosity int    set the verbosity of output
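
Putting the flags above together, here is a hedged example session; it uses only the flags documented above, and the file names, key value, and questions are illustrative:

# set your OpenAI credentials (required)
export OPENAI_API_KEY="sk-..."

# ask a one-off question with gpt-4 and save the reply
hof chat -M gpt-4 -O answer.txt "What is a hof generator?"

# combine a system prompt, a context file, and a question
hof chat -P prompt.txt context.txt "and a question"

# write embeddings for a file to a JSON file
hof chat embed notes.txt -O notes-embeddings.json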

Where we are going

We see Hof + LLMs as better than either on its own.

LLMs provide natural language interfaces to all things Hof

We are building a future where LLM-powered Hof is your coding assistant, letting you use the best interface (LLM, IDE, low-code) for the task at hand.

Hof simplifies code gen with LLMs

Hof’s deterministic code gen means the LLMs only have to generate the data models and the extra configuration that generators need. This has many benefits (see the sketch after this list):

  • The task for the LLM is much smaller, so it can do a much better job.
  • The generated code is backed by human-written code, so no hallucinations.
  • You keep the same benefits Hof provides for generating code at scale.
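
As a hedged sketch of that division of labor, reusing the commands from this page (this assumes a generator is already configured to pick up dm.cue; hof gen is hof's deterministic code generation command):

# the LLM only has to produce the data model (dm.cue by default)
hof chat dm "Create a data model called Interludes"
hof chat dm "Users should have a Profile with status and about fields."

# human-written generators then turn the data model into code
hof gen dm.cue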

Other places we see LLMs helping Hof

  • importing existing code into CUE & Hof
  • automatically transforming existing code into Hof generators
  • filling in the details and gaps in generated code
  • powering our premium low-code user interfaces (this leans on multi-modal models, which come after LLMs; think Google Gemini)