llm-shell is an extensible, developer-oriented command-line console that can interact with multiple Large Language Models (LLMs). It serves as both a demo of the llmrb/llm library and a tool to help improve the library through real-world usage and feedback. Jump to the Demos section to see it in action.
- 🌟 Unified interface for multiple Large Language Models (LLMs)
- 🤝 Supports Gemini, OpenAI, Anthropic, DeepSeek, LlamaCpp and Ollama
- 📤 Attach local files as conversation context
- 🔧 Extend with your own functions and tool calls
- 🚀 Extend with your own console commands
- 🤖 Builtin auto-complete powered by Readline
- 🎨 Builtin syntax highlighting powered by Coderay
- 📄 Deploys the less pager for long outputs
- 📝 Advanced Markdown formatting and output
For security and safety reasons, a user must confirm the execution of all function calls before they happen.
llm-shell can be extended with your own functions (also known as tool calls). This can be done by creating a Ruby file in the ~/.llm-shell/functions/ directory, with one file per function. The functions are loaded at boot time and shared with the LLM, which can then request their execution. The LLM is also made aware of a function's return value after it has been called. See the functions/ directory for more examples:
```ruby
LLM.function(:system) do |fn|
  fn.description "Run a shell command"
  fn.params do |schema|
    schema.object(command: schema.string.required)
  end
  fn.define do |command:|
    # Capture stdout and stderr through separate pipes,
    # then return both streams to the LLM.
    ro, wo = IO.pipe
    re, we = IO.pipe
    Process.wait Process.spawn(command, out: wo, err: we)
    [wo, we].each(&:close)
    {stderr: re.read, stdout: ro.read}
  end
end
```
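The function above is registered under the name system, and the sample configuration later in this document lists system under its tools: option, which presumably is how a function is enabled for a session.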
llm-shell can be extended with your own console commands. This can be done by creating a Ruby file in the ~/.llm-shell/commands/ directory, with one file per command. The commands are loaded at boot time. See the commands/ directory for more examples:
LLM.command "say-hello" do |cmd|
cmd.description "Say hello to somebody"
cmd.define do |name|
io.rewind.print "Hello #{name}!"
end
end
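In this example, io appears to be an IO-like object that llm-shell provides to commands for writing to the console, and name is whatever argument the user passes when invoking say-hello.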
It is recommended that custom prompts instruct the LLM to emit markdown; llm-shell assumes the LLM will emit markdown, and you might see unexpected results otherwise.
The first message in a conversation is sometimes known as a "system prompt", and it defines the expectations and rules to be followed by an LLM throughout a conversation. The default prompt used by llm-shell can be found at default.txt.
The prompt can be changed by adding a file to the ~/.llm-shell/prompts/ directory, and then choosing it at boot time with the -r PROMPT, --prompt PROMPT option. Generally you probably want to fork default.txt to preserve the original prompt rules around markdown and files, then modify it to suit your own needs and preferences.
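For example, assuming a forked prompt saved as mine.txt (the file name is arbitrary, and the PROMPT argument is assumed to be the file's name), the setup might look like this:

```bash
# Start from a copy of default.txt (assuming a local copy of the file).
mkdir -p ~/.llm-shell/prompts
cp default.txt ~/.llm-shell/prompts/mine.txt
# Edit the copy to taste, then choose it at boot time.
llm-shell --provider openai --prompt mine.txt
```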
The console client can be configured at the command line through option switches, or through a YAML file. The YAML file can contain the same options that could be specified at the command line. For cloud providers the key option is the only required parameter; everything else has defaults. The YAML file is read from the path ${HOME}/.llm-shell/config.yml and it has the following format:
```yaml
# ~/.llm-shell/config.yml
openai:
  key: YOURKEY
gemini:
  key: YOURKEY
anthropic:
  key: YOURKEY
deepseek:
  key: YOURKEY
ollama:
  host: localhost
  model: deepseek-coder:6.7b
llamacpp:
  host: localhost
  model: qwen3
tools:
  - system
```
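With keys stored in the configuration file, a session can be started with just the provider switch; the following invocation is a sketch under that assumption:

```bash
# The openai key is read from ~/.llm-shell/config.yml
llm-shell --provider openai
```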
```
Usage: llm-shell [OPTIONS]
  -p, --provider NAME    Required. Options: gemini, openai, anthropic, deepseek, ollama or llamacpp.
  -k, --key [KEY]        Optional. Required by gemini, openai, anthropic and deepseek.
  -m, --model [MODEL]    Optional. The name of a model.
  -h, --host [HOST]      Optional. Sometimes required by ollama.
  -o, --port [PORT]      Optional. Sometimes required by ollama.
  -f, --files [GLOB]     Optional. Glob pattern(s) separated by a comma.
  -r, --prompt [PROMPT]  Optional. The prompt to use.
  -v, --version          Optional. Print the version and exit.
```
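As a sketch of a fully switch-driven invocation (the model name and glob patterns below are illustrative, not defaults):

```bash
# Hypothetical example: pass the key directly and attach Ruby files as context.
llm-shell -p openai -k YOURKEY -m gpt-4o-mini -f "lib/*.rb,README.md"
```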
llm-shell can be installed via rubygems.org:

```
gem install llm-shell
```