
How to Install OpenJarvis: CLI and Python SDK Setup


OpenJarvis transforms your terminal into a fully autonomous AI agent workspace — but first you need to get it running. This guide walks you through every step from a bare Python environment to executing your first agent task, whether you want to run everything locally with Ollama or fall back to cloud APIs when needed.

Before diving in, make sure you have read what OpenJarvis is and how it works. Understanding its orchestrator-tool architecture will help you make better decisions during configuration.


Prerequisites

OpenJarvis has a lean dependency footprint, but a few requirements are non-negotiable.

Required:

  • Python 3.10 or later. OpenJarvis uses modern type hints and structural pattern matching introduced in Python 3.10. Older versions will fail at import.
  • pip 22+. Most Python 3.10+ installs ship a modern pip. Confirm with pip --version.
  • A Unix-like shell (Linux, macOS, or WSL2 on Windows). The CLI tool jarvis is designed for bash/zsh workflows. Native Windows CMD is not officially supported.

Optional but recommended:

  • Ollama — if you want 100% local inference with no API keys. Covered in detail below.
  • Git — only needed if you want to install from source or contribute to the project.
  • A virtual environment (venv or conda) — strongly recommended to avoid dependency conflicts.

Check your Python version:

python3 --version
# Python 3.11.9  ← anything 3.10+ is fine

Create and activate a virtual environment before installing:

python3 -m venv .venv
source .venv/bin/activate   # macOS/Linux
# On WSL2: same command

Installing OpenJarvis

The standard installation pulls the latest stable release from PyPI:

pip install openjarvis

This installs the openjarvis Python package and registers the jarvis CLI command in your PATH (inside the active virtual environment).

Verify the installation:

jarvis --version
# openjarvis 0.4.2

If jarvis is not found, your virtual environment’s bin/ directory may not be on PATH. Confirm the venv is active (which python should resolve to a path inside .venv/) and try again.

Upgrade an existing install

pip install --upgrade openjarvis

Installing from source

If you want the latest unreleased features or plan to contribute, install from the GitHub repository:

git clone https://github.com/openjarvis/openjarvis.git
cd openjarvis
pip install -e ".[dev]"

The -e flag installs in editable mode, meaning code changes in the cloned directory take effect immediately without reinstalling.


Setting Up Your Inference Engine

OpenJarvis is model-agnostic by design. It delegates all LLM inference through a configurable engine layer. You choose the engine; OpenJarvis handles the rest. Two paths are most common: Ollama for fully local inference, and cloud API providers for zero-setup access to frontier models.

Ollama runs open-weight models locally via a REST API on localhost:11434. It is the recommended engine for development because it requires no API keys, works offline, and is free.
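Because Ollama exposes a plain HTTP API, you can probe it from Python before wiring anything up. This sketch hits Ollama's /api/tags endpoint (the same listing ollama list shows) and reports whether the daemon is answering:

```python
import json
import urllib.error
import urllib.request

def ollama_reachable(host: str = "http://localhost:11434") -> bool:
    """Probe the Ollama REST API; /api/tags lists locally pulled models."""
    try:
        with urllib.request.urlopen(host + "/api/tags", timeout=2) as resp:
            return "models" in json.load(resp)
    except (urllib.error.URLError, OSError, json.JSONDecodeError):
        return False

if ollama_reachable():
    print("Ollama is up on localhost:11434")
else:
    print("Ollama is not reachable -- is `ollama serve` running?")
```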

Install Ollama:

curl -fsSL https://ollama.ai/install.sh | sh

On macOS, you can also use Homebrew:

brew install ollama

Start the Ollama daemon:

ollama serve
# Listening on 127.0.0.1:11434

Leave this terminal open, or run it as a background service. On Linux with systemd:

sudo systemctl enable --now ollama

Pull the recommended model:

OpenJarvis defaults to qwen3:8b, a capable 8-billion-parameter model that runs on most laptops with 16 GB RAM and no discrete GPU:

ollama pull qwen3:8b
# pulling manifest
# pulling 4.9 GB...

For machines with less RAM, try a smaller variant:

ollama pull qwen3:4b   # ~2.7 GB, lower quality but faster

Verify the model is available:

ollama list
# NAME            ID              SIZE    MODIFIED
# qwen3:8b        ...             4.9 GB  2 minutes ago

Cloud API Fallback (OpenAI / Anthropic)

If you prefer not to run a local model, or want to use GPT-4o or Claude for higher accuracy on complex tasks, set environment variables for your chosen provider.

OpenAI:

export OPENAI_API_KEY="sk-..."

Anthropic:

export ANTHROPIC_API_KEY="sk-ant-..."

Add these to your ~/.bashrc or ~/.zshrc to persist them across sessions. OpenJarvis picks them up automatically when the corresponding engine is set in config.toml.


Configuring config.toml

OpenJarvis reads its configuration from ~/.openjarvis/config.toml at startup. If this file does not exist, OpenJarvis creates a minimal default on first run — but you will almost certainly want to customize it.

Create the directory and file manually:

mkdir -p ~/.openjarvis
touch ~/.openjarvis/config.toml

Example 1 — Ollama (Local, No GPU)

This configuration runs entirely on your CPU with no external dependencies. It is the recommended starting point for laptops and development machines.

[engine]
default = "ollama"

[engine.ollama]
host = "http://localhost:11434"

[intelligence]
default_model = "qwen3:8b"
temperature   = 0.7
max_tokens    = 2048

[agent]
default_agent     = "orchestrator"
max_turns         = 15
context_from_memory = true

[storage]
backend = "sqlite"
path    = "~/.openjarvis/memory.db"

[telemetry]
enabled = false

Key settings to understand:

  • engine.default — which inference backend to use on startup
  • engine.ollama.host — the Ollama REST endpoint (change the port if Ollama runs elsewhere)
  • intelligence.default_model — the model name exactly as listed by ollama list
  • intelligence.temperature — 0.0 = deterministic, 1.0 = creative; 0.7 is a balanced default
  • agent.max_turns — how many tool-call rounds the orchestrator may take before giving up
  • agent.context_from_memory — whether to inject previous conversation context into new tasks
  • storage.backend — sqlite is the default; keeps memory local with zero config
  • telemetry.enabled — OpenJarvis collects anonymous usage metrics by default; set false to opt out

Example 2 — Cloud API Fallback (OpenAI)

Use this when you want to skip local model setup and hit OpenAI’s API directly:

[engine]
default = "openai"

[engine.openai]
api_key = "${OPENAI_API_KEY}"   # reads from environment variable

[intelligence]
default_model = "gpt-4o-mini"
temperature   = 0.7
max_tokens    = 4096

[agent]
default_agent     = "orchestrator"
max_turns         = 20
context_from_memory = true

[storage]
backend = "sqlite"
path    = "~/.openjarvis/memory.db"

[telemetry]
enabled = false

Note the "${OPENAI_API_KEY}" syntax — OpenJarvis expands environment variables in the config file, so you never need to hardcode secrets.
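The expansion itself is simple to reason about. The sketch below is not OpenJarvis's actual implementation, just the pattern such ${VAR} substitution typically follows — look up the variable, and leave the placeholder intact when it is unset:

```python
import os
import re

_ENV_PATTERN = re.compile(r"\$\{(\w+)\}")

def expand_env_vars(value: str) -> str:
    """Replace ${NAME} with the environment variable NAME,
    leaving the placeholder untouched if the variable is unset."""
    def repl(match: re.Match) -> str:
        return os.environ.get(match.group(1), match.group(0))
    return _ENV_PATTERN.sub(repl, value)

os.environ["DEMO_API_KEY"] = "sk-demo-123"
print(expand_env_vars("${DEMO_API_KEY}"))   # sk-demo-123
print(expand_env_vars("${UNSET_VAR_XYZ}"))  # ${UNSET_VAR_XYZ}
```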

Validate your configuration

After writing the file, run the built-in config check:

jarvis config validate
# ✓ engine: ollama reachable at http://localhost:11434
# ✓ model: qwen3:8b available
# ✓ storage: sqlite database initialized
# ✓ config.toml is valid

If any check fails, the output will tell you exactly what is missing.


Running Your First Agent Task

With OpenJarvis installed and configured, you are ready to issue your first command.

Simple query

jarvis ask "What are the key differences between RAG and fine-tuning?"

OpenJarvis routes this through the default orchestrator agent. The agent decides whether to answer directly from the model or invoke tools (such as web search) to gather fresh information. You will see a streaming response in your terminal.

Query with explicit agent and tools

Specify which agent and which tools to enable for more control:

jarvis ask --agent orchestrator --tools calculator,web_search 'What is 15% of $847 and who currently leads the AI agent framework market?'

The --tools flag takes comma-separated tool names. Note the single quotes around the prompt: they stop the shell from expanding $847 as a variable. The orchestrator will use the calculator for the arithmetic portion and web_search for the market question — automatically parallelizing where possible.
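To make "parallelizing where possible" concrete, here is a rough sketch of the pattern with stub functions standing in for the real calculator and web_search tools (this illustrates the idea, not OpenJarvis internals):

```python
from concurrent.futures import ThreadPoolExecutor

# Stub tools; the real ones do arithmetic and live web queries.
def calculator(expression: str) -> float:
    return eval(expression)  # acceptable for a trusted demo expression

def web_search(query: str) -> str:
    return f"search results for: {query!r}"

# The two tool calls have no data dependency on each other, so they
# can run concurrently; the orchestrator merges the results afterwards.
with ThreadPoolExecutor(max_workers=2) as pool:
    calc = pool.submit(calculator, "0.15 * 847")
    search = pool.submit(web_search, "AI agent framework market leader")
    print(round(calc.result(), 2))  # 127.05
    print(search.result())
```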

Multi-turn conversation

Start an interactive session to maintain context across multiple questions:

jarvis chat
# > You: Summarize the LangChain documentation for me.
# > Jarvis: [streaming response]
# > You: Now compare it to LlamaIndex from what you just read.
# > Jarvis: [context-aware follow-up]
# Type /exit to quit.

List available agents and tools

jarvis agents list
# orchestrator   General-purpose multi-step reasoning (default)
# researcher     Web search + summarization focused
# coder          Code generation + execution

jarvis tools list
# calculator     Arithmetic and unit conversion
# web_search     Live web queries via SerpAPI
# file_reader    Read local files and return content
# code_executor  Run Python snippets in a sandbox

Using the Python SDK

The Python SDK gives you programmatic control over OpenJarvis — useful for integrating agent tasks into your own scripts, pipelines, or applications.

Basic usage

from openjarvis import Jarvis
from openjarvis.core.config import (
    JarvisConfig,
    EngineConfig,
    OllamaEngineConfig,
    IntelligenceConfig,
    AgentConfig,
)

config = JarvisConfig(
    engine=EngineConfig(
        default="ollama",
        ollama=OllamaEngineConfig(host="http://localhost:11434"),
    ),
    intelligence=IntelligenceConfig(
        default_model="qwen3:8b",
        temperature=0.7,
        max_tokens=2048,
    ),
    agent=AgentConfig(
        default_agent="orchestrator",
        max_turns=15,
        context_from_memory=True,
    ),
)

j = Jarvis(config=config)

response = j.ask("Explain the concept of vector embeddings in three sentences.")
print(response.text)

j.close()

Always call j.close() when finished. It flushes the in-memory context to the SQLite storage backend and cleanly shuts down background threads.

Using a context manager

A cleaner pattern that handles cleanup automatically:

from openjarvis import Jarvis
from openjarvis.core.config import JarvisConfig, EngineConfig, OllamaEngineConfig, IntelligenceConfig, AgentConfig

config = JarvisConfig(
    engine=EngineConfig(
        default="ollama",
        ollama=OllamaEngineConfig(host="http://localhost:11434"),
    ),
    intelligence=IntelligenceConfig(default_model="qwen3:8b"),
    agent=AgentConfig(default_agent="orchestrator"),
)

with Jarvis(config=config) as j:
    result = j.ask("List five practical use cases for AI agents in DevOps.")
    print(result.text)
# j.close() is called automatically on exit

Loading config from file

Instead of constructing JarvisConfig in code, you can point the SDK at your config.toml:

from openjarvis import Jarvis
from openjarvis.core.config import JarvisConfig

config = JarvisConfig.from_toml("~/.openjarvis/config.toml")

with Jarvis(config=config) as j:
    response = j.ask("What Python libraries are best for building AI agents?")
    print(response.text)

This is the recommended pattern for production scripts — keep secrets and tuning parameters in the config file, not in code.

Accessing tool results

When the agent invokes tools, the response object exposes the individual tool calls:

with Jarvis(config=config) as j:
    response = j.ask(
        "Search the web for the latest LangChain release notes.",
        tools=["web_search"],
    )
    print(response.text)
    for call in response.tool_calls:
        print(f"Tool: {call.name}, Input: {call.input}, Output: {call.output[:200]}")

Common Issues

Ollama is not running

Symptom: jarvis config validate reports engine: ollama unreachable at http://localhost:11434.

Fix: Start the Ollama daemon in a separate terminal:

ollama serve

Or, on Linux, enable it as a system service so it starts automatically at boot:

sudo systemctl enable --now ollama
systemctl status ollama   # confirm it is active

Model not downloaded

Symptom: jarvis ask "..." fails with model 'qwen3:8b' not found.

Fix: Pull the model before using it:

ollama pull qwen3:8b

If the pull fails due to a slow connection, retry — Ollama supports resumable downloads. Confirm the model is present after pulling:

ollama list

If the model name in your config.toml does not exactly match what ollama list shows (including the tag like :8b), OpenJarvis cannot find it. Keep them in sync.
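The match really is string-exact, tag included. A trivial illustration of the rule (a hypothetical helper, not OpenJarvis code):

```python
def model_available(config_model: str, installed_models: list[str]) -> bool:
    """The configured name must match an installed model exactly,
    tag (e.g. ':8b') included -- 'qwen3' alone does not resolve."""
    return config_model in installed_models

installed = ["qwen3:8b", "qwen3:4b"]
print(model_available("qwen3:8b", installed))  # True
print(model_available("qwen3", installed))     # False -- tag missing
```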

Port conflict on 11434

Symptom: ollama serve exits immediately with address already in use.

Fix: Either another Ollama instance is running, or another process has claimed port 11434.

Find and stop the conflicting process:

lsof -i :11434          # Linux/macOS — shows the process using the port
kill <PID>              # replace <PID>; add -9 only if it refuses to exit
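If you prefer to check from Python (handy on systems without lsof), attempting to bind the port tells you whether something already holds it. A minimal sketch:

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Try to bind the port; an OSError (EADDRINUSE) means some
    other process already holds it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as probe:
        try:
            probe.bind((host, port))
            return False
        except OSError:
            return True

print("11434 in use:", port_in_use(11434))
```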

Alternatively, run Ollama on a different port and update config.toml to match:

OLLAMA_HOST=0.0.0.0:11435 ollama serve

Then, in ~/.openjarvis/config.toml:

[engine.ollama]
host = "http://localhost:11435"

Frequently Asked Questions

Does OpenJarvis require a GPU?

No. OpenJarvis itself is pure Python and has no GPU requirement. Whether inference uses a GPU depends entirely on the engine you configure. Ollama will automatically use your GPU (NVIDIA CUDA or Apple Metal) if one is available, and will fall back to CPU otherwise. For the qwen3:8b model on CPU-only hardware, expect response latency of 5–20 seconds per message depending on your machine. Smaller models like qwen3:4b are significantly faster on CPU. Cloud API engines (OpenAI, Anthropic) offload all compute to remote servers, so local hardware is irrelevant.

How do I switch inference engines after setup?

Edit ~/.openjarvis/config.toml and change the [engine] default value. No reinstallation is required. For example, to switch from Ollama to OpenAI:

[engine]
default = "openai"

[engine.openai]
api_key = "${OPENAI_API_KEY}"

Then run jarvis config validate to confirm the new engine is reachable. You can also override the engine at runtime without touching the config file:

jarvis ask --engine openai "Summarize this document for me."

This is useful when you want Ollama as the daily driver but occasionally reach for a frontier model for a complex task.

Can I use both local and cloud models in the same setup?

Yes. Define both engine blocks in config.toml and set a default. When you need a different engine, pass --engine at the CLI or override EngineConfig.default in the SDK. A common pattern is using Ollama for routine tasks (free, offline, fast enough) and switching to GPT-4o or Claude for tasks that require deep reasoning or up-to-date web knowledge. The agent’s tool calling, memory, and orchestration logic remain identical regardless of which inference engine is active — only the model and API change.


Next Steps

You now have a working OpenJarvis installation. Here is where to go from here:

  • Explore the CLI flags — run jarvis --help and jarvis ask --help to see every option including output format flags (--json, --markdown) and verbosity controls.
  • Customize agents and tools — create agent profiles in config.toml to define task-specific presets with different models, tool sets, and turn limits.
  • Integrate into a pipeline — use the Python SDK in automation scripts. OpenJarvis pairs well with cron jobs, GitHub Actions, or AI content pipelines.
  • Compare installation approaches — if you are evaluating similar tools, see the OpenDevin installation guide for a side-by-side sense of how setup complexity differs across open-source agent projects.
  • Build your first real task — pick a repetitive terminal workflow you do daily and write a jarvis ask command that replaces it. Practical use is the fastest way to learn OpenJarvis’s strengths and limits.
