The ability to call external functions — reading files, running calculations, querying the web — is what separates an AI agent from a chat window. OpenJarvis delivers this through its Tools module, one of the five core components in the framework’s modular architecture. Unlike monolithic frameworks where tool support is grafted on as an afterthought, the Tools module is a first-class citizen in OpenJarvis’s design: it has its own registry, its own configuration surface, its own lifecycle hooks, and a growing ecosystem of community-contributed extensions.
This article covers the Tools module in depth. You will learn what built-in tools ship with OpenJarvis, how to write and register your own tools in Python, how to drive tool selection from both the CLI and the Python SDK, and how to participate in the open-source ecosystem around the project. If you have not yet read What Is OpenJarvis?, start there — understanding the five-module architecture and how the Tools module interacts with the Agent and Intelligence modules will make everything in this guide much clearer.
The Tools Module
The Tools module is OpenJarvis’s interface between the agent’s reasoning loop and the real world. When the Agent module determines that a task requires external information or computation — fetching a web page, calculating a value, reading a local file — it does not call those capabilities directly. Instead, it issues a structured tool call to the Tools module, which looks up the registered handler, validates the arguments, executes the function, and returns the result back to the agent’s observation step.
This separation is deliberate and consequential. Because the Agent module never calls external capabilities directly, the Tools module can enforce argument validation, apply rate limiting, log all invocations for debugging, and surface consistent error messages regardless of which underlying function failed. The agent does not need to know whether web_search is implemented as an HTTP client calling a search API or a function scraping a local index — the abstraction is identical from the agent’s perspective.
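To make that mediation concrete, here is a minimal sketch of a dispatch layer in the spirit of the Tools module; the function names and result shape are illustrative, not OpenJarvis internals:

```python
import time

# Illustrative sketch only, not OpenJarvis internals: a dispatcher that
# looks up the handler, checks required arguments, executes the call, and
# returns a uniform result shape no matter which underlying function failed.
REGISTRY: dict = {}

def register(name, required_args, handler):
    REGISTRY[name] = {"required": required_args, "handler": handler}

def dispatch(name, args):
    entry = REGISTRY.get(name)
    if entry is None:
        return {"success": False, "output": f"Unknown tool: {name}"}
    missing = [a for a in entry["required"] if a not in args]
    if missing:
        return {"success": False, "output": f"Missing arguments: {missing}"}
    try:
        started = time.monotonic()
        result = entry["handler"](**args)
        return {"success": True, "output": result,
                "elapsed_s": time.monotonic() - started}
    except Exception as exc:
        # Consistent error surface regardless of which tool raised
        return {"success": False, "output": f"{type(exc).__name__}: {exc}"}

register("add", ["a", "b"], lambda a, b: a + b)
```

Whatever the real implementation looks like, the property that matters is the one described above: the agent only ever sees the uniform result shape, never the raw exception.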
Every tool in OpenJarvis conforms to a common interface defined by the ToolSpec class:
from openjarvis.tools import ToolSpec, ToolResult
class ToolSpec:
name: str # unique identifier used by the agent
description: str # shown to the LLM; must be clear and specific
parameters: dict # JSON Schema defining accepted arguments
returns: str # human-readable description of the return value
timeout_seconds: int = 30 # maximum execution time before the tool is killed
requires_confirmation: bool = False # prompt user before execution if True
The description field deserves particular attention. It is injected verbatim into the prompt that the LLM sees when deciding which tool to call. A vague description like "Search the internet" leads to poor tool selection. A specific description like "Search the web for current information using a keyword query. Returns a list of titles, URLs, and brief snippets from the top results." gives the model the information it needs to make a good decision. When writing custom tools, treat this field as prompt engineering, not documentation.
Tools are organized into the tool registry — a runtime dictionary maintained by the Tools module that maps tool names to their handlers. The registry is populated at startup from the [tools] block in config.toml and can be extended dynamically at runtime through the Python SDK.
Built-In Tools
OpenJarvis ships with a curated set of built-in tools that cover the most common agent use cases. These tools are maintained by the core project team and are available in every OpenJarvis installation without additional dependencies (except where noted).
| Tool | Description | CLI Flag | Primary Use Case |
|---|---|---|---|
| calculator | Evaluates mathematical expressions safely using a sandboxed Python evaluator | --tools calculator | Arithmetic, unit conversion, financial calculations |
| web_search | Queries a configurable search provider (DuckDuckGo by default; SerpAPI with key) and returns top results | --tools web_search | Current events, live data lookup, fact verification |
| file_reader | Reads text files from the local filesystem; supports .txt, .md, .py, .json, .csv, .pdf | --tools file_reader | Reading notes, source code, documents |
| file_writer | Writes or appends text content to a local file with optional confirmation prompt | --tools file_writer | Generating reports, saving outputs, updating config files |
| code_executor | Runs Python code snippets in a sandboxed subprocess and captures stdout/stderr | --tools code_executor | Data analysis, automation scripts, dynamic computation |
| shell_runner | Executes shell commands (opt-in; disabled by default for safety) | --tools shell_runner | System administration, build tasks, pipeline integration |
| http_client | Makes HTTP GET/POST requests to external APIs and returns the response body | --tools http_client | API integrations, webhook calls, REST data retrieval |
| memory_query | Queries the Learning module’s vector store directly and returns matching chunks | --tools memory_query | Explicit memory retrieval, knowledge base search |
| datetime | Returns current date, time, and timezone information | --tools datetime | Scheduling, time-sensitive calculations, logging |
| unit_converter | Converts between units of measurement (length, weight, temperature, currency) | --tools unit_converter | Engineering tasks, international data normalization |
The shell_runner tool is disabled by default because it grants the agent the ability to execute arbitrary shell commands, which poses obvious security risks when paired with a model that might misinterpret user intent. To enable it, you must explicitly add it to config.toml and acknowledge the security implications:
[tools]
enabled = ["calculator", "web_search", "file_reader", "file_writer", "code_executor", "shell_runner"]
[tools.shell_runner]
allowed_commands = ["git", "pip", "pytest", "make"] # whitelist of permitted executables
timeout_seconds = 60
requires_confirmation = true # always prompt user before running
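The allowlist check itself is simple to reason about. Here is a hedged sketch of the gate a shell_runner-style tool could apply, under the assumption that only the executable name (the first token, with any directory prefix stripped) is compared against `allowed_commands`:

```python
import shlex

# Sketch of an allowlist gate for a shell_runner-style tool. Assumption:
# only the first token (the executable) is compared against the allowlist,
# after stripping any directory prefix, and nothing is executed otherwise.
ALLOWED_COMMANDS = {"git", "pip", "pytest", "make"}

def is_command_allowed(command_line: str) -> bool:
    tokens = shlex.split(command_line)
    if not tokens:
        return False
    executable = tokens[0].rsplit("/", 1)[-1]  # "/usr/bin/git" -> "git"
    return executable in ALLOWED_COMMANDS
```

Note that a gate like this only restricts which executable runs; it does not sanitize arguments, which is why the confirmation prompt remains valuable.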
The web_search tool supports two backends. DuckDuckGo is used by default with no API key required. For higher-quality results and more reliable rate limits, configure SerpAPI:
[tools.web_search]
backend = "serpapi"
api_key = "${SERPAPI_API_KEY}" # read from environment variable
results_per_query = 5
include_snippets = true
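The `"${SERPAPI_API_KEY}"` value illustrates environment-variable substitution at config load time. A sketch of how that resolution step can work (the real loader's behavior may differ in details such as fallback handling):

```python
import os
import re

# Sketch of "${VAR}"-style placeholder resolution for config values.
# Assumption: a missing variable is an error rather than a silent blank.
_PLACEHOLDER = re.compile(r"\$\{([A-Z0-9_]+)\}")

def resolve_env(value: str) -> str:
    def _sub(match: re.Match) -> str:
        name = match.group(1)
        if name not in os.environ:
            raise KeyError(f"Environment variable not set: {name}")
        return os.environ[name]
    return _PLACEHOLDER.sub(_sub, value)
```

Failing loudly on a missing variable is usually preferable for credentials: a blank API key produces confusing downstream errors from the search backend.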
Building Custom Tools
The real power of the Tools module reveals itself when you start building tools tailored to your specific workflow. OpenJarvis supports two patterns for custom tool definition: the decorator pattern for simple function-based tools and the class pattern for tools that need state, configuration, or complex lifecycle management.
Decorator Pattern
For most custom tools, the decorator pattern is the right choice. You define a regular Python function and decorate it with @tool, which registers it automatically:
from openjarvis.tools import tool, ToolResult
from openjarvis import Jarvis
from openjarvis.core.config import JarvisConfig
import sqlite3
from pathlib import Path
@tool(
name="inventory_lookup",
description=(
"Look up a product by its SKU in the local inventory database. "
"Returns current stock count, warehouse location, and last restock date. "
"Use this when the user asks about product availability or stock levels."
),
parameters={
"type": "object",
"properties": {
"sku": {
"type": "string",
"description": "The product SKU code (e.g., 'WIDGET-42A')"
},
"warehouse": {
"type": "string",
"enum": ["US-WEST", "US-EAST", "EU-CENTRAL"],
"description": "Optional warehouse filter. Omit to search all warehouses."
}
},
"required": ["sku"]
},
timeout_seconds=10,
)
def inventory_lookup(sku: str, warehouse: str | None = None) -> ToolResult:
    db_path = Path.home() / ".openjarvis" / "inventory.db"
    if not db_path.exists():
        # sqlite3.connect would silently create an empty file here
        return ToolResult(success=False, output=f"Inventory database not found: {db_path}")
    conn = sqlite3.connect(db_path)
    cursor = conn.cursor()
if warehouse:
cursor.execute(
"SELECT sku, stock, location, last_restock FROM inventory "
"WHERE sku = ? AND warehouse = ?",
(sku, warehouse)
)
else:
cursor.execute(
"SELECT sku, stock, location, last_restock FROM inventory WHERE sku = ?",
(sku,)
)
rows = cursor.fetchall()
conn.close()
if not rows:
return ToolResult(
success=False,
output=f"No inventory record found for SKU: {sku}",
)
results = [
{"sku": r[0], "stock": r[1], "location": r[2], "last_restock": r[3]}
for r in rows
]
return ToolResult(success=True, output=results)
# Register and use the custom tool
config = JarvisConfig.from_toml("~/.openjarvis/config.toml")
with Jarvis(config=config) as j:
j.tools.register(inventory_lookup) # add to the registry for this session
response = j.ask(
"How many units of WIDGET-42A do we have in the US-WEST warehouse?",
tools=["inventory_lookup"],
)
print(response.answer)
The ToolResult return type is the contract between your function and the Tools module. Setting success=False signals to the agent that the tool call failed, which triggers the Agent module’s error recovery path — the agent will attempt to reformulate the query or fall back to alternative reasoning rather than accepting a failed result as ground truth.
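Because a tool is just a function that returns a ToolResult, both branches of the contract can be unit-tested without an agent in the loop. The sketch below uses a stand-in dataclass for ToolResult so it runs anywhere; the real class lives in openjarvis.tools:

```python
from dataclasses import dataclass
from pathlib import Path

# Stand-in for openjarvis.tools.ToolResult so this sketch runs anywhere.
# The contract is the point: failures come back as success=False, never
# as an unhandled exception.
@dataclass
class ToolResult:
    success: bool
    output: object

def read_text_file(path: str) -> ToolResult:
    p = Path(path)
    if not p.is_file():
        return ToolResult(success=False, output=f"File not found: {path}")
    return ToolResult(success=True, output=p.read_text())
```

Both branches are one `assert` away in a test, with no model or reasoning loop involved.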
Class Pattern
For tools that need to maintain state across calls, manage connections, or share resources, the class pattern provides better encapsulation:
from openjarvis.tools import BaseTool, ToolResult, ToolSpec
from openjarvis import Jarvis
from openjarvis.core.config import JarvisConfig
import httpx
from typing import Optional
class GitHubIssueTool(BaseTool):
"""
Fetches GitHub issue details and comments for a given repository.
Maintains an authenticated HTTP client across multiple calls.
"""
spec = ToolSpec(
name="github_issues",
description=(
"Fetch details about a GitHub issue, including its title, body, labels, "
"and all comments. Use when the user asks about a specific GitHub issue "
"by number, or when debugging requires context from an issue thread."
),
parameters={
"type": "object",
"properties": {
"owner": {"type": "string", "description": "Repository owner (username or org)"},
"repo": {"type": "string", "description": "Repository name"},
"issue_number": {"type": "integer", "description": "The issue number"},
"include_comments": {
"type": "boolean",
"description": "Whether to fetch comments (default: true)",
"default": True,
}
},
"required": ["owner", "repo", "issue_number"]
},
returns="Issue title, body, labels, state, and optionally all comments as a structured dict.",
timeout_seconds=15,
)
def __init__(self, github_token: Optional[str] = None):
self.token = github_token
self._client: Optional[httpx.Client] = None
def setup(self):
"""Called once when the tool is registered. Initialize the HTTP client."""
headers = {"Accept": "application/vnd.github.v3+json"}
if self.token:
headers["Authorization"] = f"Bearer {self.token}"
self._client = httpx.Client(headers=headers, timeout=10.0)
def teardown(self):
"""Called on session end. Clean up resources."""
if self._client:
self._client.close()
def run(self, owner: str, repo: str, issue_number: int, include_comments: bool = True) -> ToolResult:
base_url = f"https://api.github.com/repos/{owner}/{repo}/issues/{issue_number}"
try:
issue_resp = self._client.get(base_url)
issue_resp.raise_for_status()
issue_data = issue_resp.json()
result = {
"title": issue_data["title"],
"state": issue_data["state"],
"body": issue_data["body"],
"labels": [label["name"] for label in issue_data.get("labels", [])],
"created_at": issue_data["created_at"],
"comments_count": issue_data["comments"],
}
if include_comments and issue_data["comments"] > 0:
comments_resp = self._client.get(f"{base_url}/comments")
comments_resp.raise_for_status()
result["comments"] = [
{"author": c["user"]["login"], "body": c["body"]}
for c in comments_resp.json()
]
return ToolResult(success=True, output=result)
        except httpx.HTTPStatusError as e:
            return ToolResult(
                success=False,
                output=f"GitHub API error {e.response.status_code}: {e.response.text}",
            )
        except httpx.RequestError as e:
            # Network failures (DNS, timeout, connection refused) must also
            # surface as a failed ToolResult, never an unhandled exception
            return ToolResult(
                success=False,
                output=f"Network error contacting GitHub: {e}",
            )
# Use the class-based tool
import os
config = JarvisConfig.from_toml("~/.openjarvis/config.toml")
github_tool = GitHubIssueTool(github_token=os.environ.get("GITHUB_TOKEN"))
with Jarvis(config=config) as j:
j.tools.register(github_tool)
response = j.ask(
"Summarize the discussion in issue #142 of the openjarvis/openjarvis repo "
"and tell me what the current status is.",
tools=["github_issues"],
)
print(response.answer)
The setup() and teardown() lifecycle hooks are called automatically by the Tools module — setup() when the tool is registered, teardown() when the Jarvis context exits. This guarantees that HTTP connections, database handles, and other resources are always properly closed, even if the agent session ends unexpectedly.
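That guarantee can be pictured as a context-managed registry. The following is a sketch of the pattern, not the framework's actual code:

```python
# Sketch of the lifecycle guarantee: a context-managed session that calls
# setup() at registration time and teardown() for every registered tool
# when the context exits, even if the session body raised. Illustrative,
# not OpenJarvis internals.
class ToolSession:
    def __init__(self):
        self._tools = []

    def register(self, tool):
        setup = getattr(tool, "setup", None)
        if callable(setup):
            setup()
        self._tools.append(tool)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        for tool in reversed(self._tools):  # release in reverse order
            teardown = getattr(tool, "teardown", None)
            if callable(teardown):
                teardown()
        return False  # do not swallow exceptions from the session body
```

Releasing in reverse registration order mirrors how nested resources are usually unwound, so a tool that depends on an earlier-registered one is closed first.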
CLI Deep Dive
The jarvis CLI is the fastest way to run ad-hoc agent tasks without writing a Python script. It exposes the full functionality of the Tools module through a composable set of flags and subcommands. Understanding the CLI deeply makes the difference between using OpenJarvis as a toy and using it as a serious productivity tool.
Core Command Reference
| Command / Flag | Description | Example |
|---|---|---|
jarvis ask "query" | Send a single query to the agent | jarvis ask "What time is it in Tokyo?" |
--tools t1,t2 | Specify which tools the agent may use (comma-separated) | --tools calculator,web_search |
--agent NAME | Select the agent persona (orchestrator, researcher, coder) | --agent researcher |
--engine NAME | Override the inference engine for this call | --engine vllm |
--model NAME | Override the model for this call | --model codellama:34b-instruct |
--no-memory | Disable memory injection for this call | --no-memory |
--max-iter N | Set maximum agent reasoning iterations | --max-iter 15 |
--trace | Print the full think-act-observe trace after completion | --trace |
--json | Output the full response as JSON (includes trace, tool calls, metadata) | --json |
--pipe | Read query from stdin (enables pipeline use) | cat report.txt | jarvis ask --pipe "Summarize this" |
jarvis chat | Start an interactive multi-turn session | jarvis chat --tools file_reader,memory_query |
jarvis tools list | List all registered tools and their descriptions | jarvis tools list |
jarvis tools test NAME | Run a tool interactively with user-supplied arguments | jarvis tools test calculator |
jarvis memory ingest | Ingest documents into the Learning module’s vector store | jarvis memory ingest --file notes.md |
jarvis memory stats | Show storage statistics for the Learning module | jarvis memory stats --verbose |
jarvis config show | Print the resolved configuration (with env var substitutions) | jarvis config show |
jarvis config validate | Check config.toml for errors before running | jarvis config validate |
Advanced Flag Usage
Combining tools with stdin pipelines:
# Analyze a Python file for bugs, then search for known issues
cat src/my_module.py | jarvis ask --pipe \
--tools code_executor,web_search \
--agent coder \
--trace \
"Review this code for bugs, run any testable snippets, and search for known issues with any libraries used."
Batch processing with JSON output:
# Process a list of URLs and save structured results
while IFS= read -r url; do
jarvis ask \
--tools http_client,memory_query \
--json \
"Fetch the content at $url and summarize the key points." \
>> results.jsonl
done < urls.txt
Running a scheduled research task:
# In a cron job: daily research digest
jarvis ask \
--tools web_search,file_writer \
--no-memory \
--model mistral:7b-instruct \
"Search for the top 5 AI agent framework releases from the past 24 hours and write a brief digest to ~/research/daily-digest-$(date +%Y%m%d).md"
Using custom tool scripts:
You can load custom tool definitions from a Python file at runtime without modifying the installed package:
jarvis ask \
--tools-file ~/my-tools/inventory_tools.py \
--tools inventory_lookup,github_issues \
"How many units of WIDGET-42A are available, and is there a related open GitHub issue?"
The --tools-file flag tells the Tools module to import the specified Python file and register any functions or classes decorated with @tool or inheriting from BaseTool that it finds there. This is the primary mechanism for using custom tools from the CLI without publishing them as a package.
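Under stated assumptions (a @tool decorator that marks functions with an attribute, here invented as `__is_tool__`), the loading step could look like this importlib-based sketch; OpenJarvis's actual discovery logic may differ:

```python
import importlib.util
from pathlib import Path

# Hedged sketch of what a --tools-file style loader could do internally:
# import a module from an arbitrary path, then collect objects carrying a
# tool marker. The __is_tool__ attribute is an assumption for this sketch.
def load_tools_from_file(path: str) -> list:
    file_path = Path(path).expanduser()
    spec = importlib.util.spec_from_file_location(file_path.stem, file_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # runs the file's top-level code
    return [
        obj for obj in vars(module).values()
        if getattr(obj, "__is_tool__", False)
    ]
```

Because `exec_module` runs the file's top-level code, only load tool files you trust; this is the same trust boundary as installing a package.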
Python SDK Patterns
The Python SDK unlocks capabilities that are not accessible through the CLI: dynamic tool registration, multi-agent orchestration, streaming responses, tool chaining, and integration with larger Python applications. This section covers the patterns that come up most often in real-world OpenJarvis deployments.
Programmatic Agent Creation with Tool Chaining
The following example demonstrates a common real-world pattern: creating an agent that uses multiple tools in sequence, where the output of one tool informs the input to the next:
from openjarvis import Jarvis
from openjarvis.core.config import JarvisConfig
from openjarvis.tools import tool, ToolResult
from openjarvis.agent import AgentTask, TaskResult
import json
from pathlib import Path
# Define a custom tool for parsing structured log files
@tool(
name="log_parser",
description=(
"Parse a structured JSON log file and return error counts by severity level. "
"Use this to analyze application logs when the user asks about error rates or "
"system health. Returns a summary dict with counts per severity."
),
parameters={
"type": "object",
"properties": {
"log_path": {
"type": "string",
"description": "Absolute path to the JSON log file"
},
"since_hours": {
"type": "number",
"description": "Only count entries from the last N hours. Default: 24"
}
},
"required": ["log_path"]
},
timeout_seconds=20,
)
def log_parser(log_path: str, since_hours: float = 24.0) -> ToolResult:
from datetime import datetime, timedelta, timezone
path = Path(log_path)
if not path.exists():
return ToolResult(success=False, output=f"File not found: {log_path}")
cutoff = datetime.now(timezone.utc) - timedelta(hours=since_hours)
counts = {"ERROR": 0, "WARNING": 0, "INFO": 0, "DEBUG": 0, "UNKNOWN": 0}
total = 0
with path.open() as f:
for line in f:
line = line.strip()
if not line:
continue
try:
entry = json.loads(line)
ts = datetime.fromisoformat(entry.get("timestamp", "")).replace(
tzinfo=timezone.utc
)
if ts < cutoff:
continue
severity = entry.get("level", "UNKNOWN").upper()
counts[severity if severity in counts else "UNKNOWN"] += 1
total += 1
except (json.JSONDecodeError, ValueError):
counts["UNKNOWN"] += 1
total += 1
return ToolResult(
success=True,
output={"counts": counts, "total": total, "window_hours": since_hours},
)
def run_health_check_agent(log_file: str) -> TaskResult:
config = JarvisConfig.from_toml("~/.openjarvis/config.toml")
with Jarvis(config=config) as j:
# Register the custom tool
j.tools.register(log_parser)
# Define a multi-step task that chains tools
task = AgentTask(
query=(
f"Analyze the application logs at {log_file} for the past 24 hours. "
"Then search the web for any known issues related to high error rates "
"in our tech stack. Finally, write a concise health report to "
"~/reports/health-$(date).md with your findings and recommendations."
),
tools=["log_parser", "web_search", "file_writer", "datetime"],
agent="orchestrator",
max_iterations=12,
)
result = j.run_task(task)
# Inspect the tool call trace
for step in result.trace:
if step.tool_calls:
for call in step.tool_calls:
print(f"Tool: {call.tool_name} | Args: {call.arguments}")
print(f"Result: {call.result.output}\n")
return result
if __name__ == "__main__":
result = run_health_check_agent("/var/log/myapp/app.jsonl")
print(result.answer)
print(f"\nCompleted in {len(result.trace)} reasoning steps.")
Streaming Responses with Tool Events
For interactive applications, streaming lets you display partial responses and tool call events as they happen:
from openjarvis import Jarvis
from openjarvis.core.config import JarvisConfig
from openjarvis.streaming import StreamEvent, EventType
config = JarvisConfig.from_toml("~/.openjarvis/config.toml")
with Jarvis(config=config) as j:
stream = j.ask_stream(
"Research the latest LangChain releases and summarize any new agent features.",
tools=["web_search", "memory_query"],
)
for event in stream:
if event.type == EventType.TEXT_DELTA:
print(event.content, end="", flush=True)
elif event.type == EventType.TOOL_CALL_START:
print(f"\n[Calling tool: {event.tool_name}({event.arguments})]")
elif event.type == EventType.TOOL_CALL_RESULT:
print(f"[Tool result: {str(event.result)[:120]}...]")
elif event.type == EventType.ITERATION_END:
print(f"\n[Iteration {event.iteration} complete]")
print() # final newline after streaming
Registering Tools from a Configuration Dictionary
For applications that need to define tools dynamically — for example, generating tool definitions from a database schema at startup — the SDK supports dict-based registration:
from openjarvis import Jarvis
from openjarvis.core.config import JarvisConfig
from openjarvis.tools import ToolRegistry
config = JarvisConfig.from_toml("~/.openjarvis/config.toml")
# Define tools programmatically — useful when tool definitions come from config or a DB
dynamic_tools = [
{
"name": "product_search",
"description": "Search the product catalog by keyword or category.",
"parameters": {
"type": "object",
"properties": {
"query": {"type": "string"},
"category": {"type": "string", "enum": ["electronics", "books", "clothing"]},
"max_results": {"type": "integer", "default": 10}
},
"required": ["query"]
},
"handler": lambda query, category=None, max_results=10: {
"results": [], # replace with real catalog lookup
"query": query
}
}
]
registry = ToolRegistry()
for tool_def in dynamic_tools:
registry.register_from_dict(tool_def)
with Jarvis(config=config, tool_registry=registry) as j:
response = j.ask("Find me the top 5 books about machine learning.")
print(response.answer)
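Definitions generated from a schema or database are worth checking before registration fails at runtime. A hypothetical pre-flight validator over the dict shape used above (register_from_dict itself may enforce more than this):

```python
# Hedged helper sketch: verify a dict-based tool definition has the fields
# a registry would need before attempting registration. Field names follow
# the dynamic_tools example; this is not an OpenJarvis built-in.
REQUIRED_KEYS = {"name", "description", "parameters", "handler"}

def validate_tool_def(tool_def: dict) -> list[str]:
    errors = [f"missing key: {k}" for k in sorted(REQUIRED_KEYS - tool_def.keys())]
    if "handler" in tool_def and not callable(tool_def["handler"]):
        errors.append("handler is not callable")
    params = tool_def.get("parameters")
    if isinstance(params, dict) and params.get("type") != "object":
        errors.append('parameters schema should have "type": "object"')
    return errors  # empty list means the definition looks registrable
```

Running a check like this at startup turns a mid-conversation registration failure into an immediate, readable error list.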
For deep context on how tool-calling agents work under the hood — particularly the think-act-observe loop that drives OpenJarvis’s Agent module — the LangChain Agents and Tools guide is an excellent companion read. The architectural patterns are similar, and understanding how LangChain implements tool routing will deepen your mental model of what OpenJarvis is doing at each reasoning step.
Community and Contributing
OpenJarvis is an open-source project maintained on GitHub at open-jarvis/openjarvis. The community is organized around GitHub Issues, GitHub Discussions, and a Discord server linked in the project’s README. Understanding the contribution workflow helps you move from user to contributor, which is the most reliable way to influence the direction of a tool you depend on.
Contributing a Custom Tool to the Ecosystem
The easiest first contribution is submitting a custom tool to the community tools repository at open-jarvis/openjarvis-tools. This repository is separate from the core framework and maintained with a lower bar for contribution — it is explicitly designed as a space for community-maintained integrations that are useful but too specialized for the core package.
The contribution workflow for a new tool:
- Fork `open-jarvis/openjarvis-tools` on GitHub
- Create a new directory under `community/tools/your-tool-name/`
- Add `tool.py` (the tool implementation), `README.md` (usage documentation), and `test_tool.py` (at least one test)
- Submit a pull request with the `[community-tool]` prefix in the title
The review checklist for community tools is intentionally simple. Reviewers look for:
- A clear, specific `description` field that models can act on reliably
- Proper error handling (never raise unhandled exceptions; always return `ToolResult`)
- A test that covers both the success path and at least one failure path
- No hardcoded credentials or system-specific paths
Tools that pass review are listed in the official community tools index at docs.openjarvis.io/tools/community, which is how most users discover them.
Reporting Issues
When reporting a bug, the OpenJarvis maintainers ask for three things:
- A minimal reproduction script — the smallest Python snippet or CLI command that triggers the bug. Avoid pasting your entire application; isolate the failure.
- The full error output — run with `JARVIS_LOG_LEVEL=debug` set in your environment to capture verbose logs before filing the issue.
- Your configuration summary — run `jarvis config show --redact` (which masks API keys and tokens) and include the output.
Issues that include all three items are typically triaged within 48 hours. Issues that are vague (“the web search tool doesn’t work”) may wait weeks.
Feature Requests and Roadmap
Feature requests go through GitHub Discussions rather than Issues. The distinction matters: an Issue signals a defect that needs fixing, while a Discussion opens a conversation about whether and how a feature should be added. Maintainers evaluate feature requests against three criteria: alignment with the local-first philosophy, the size of the likely user base, and whether the feature can be implemented as a community tool rather than a core change.
The most impactful community contributions are typically not new tools but improvements to existing ones: better error messages, additional input validation, support for a new backend or API version, or performance optimizations to the tool invocation path.
Staying Current with the Ecosystem
The OpenJarvis ecosystem moves quickly. The three most reliable ways to stay current:
- Watch the GitHub repository (use “Releases only” notification level to avoid noise) — all breaking changes are announced in release notes
- Subscribe to the Discord #announcements channel — community tool releases and breaking API changes are posted there first
- Run `pip install --upgrade openjarvis` weekly in a test environment — the `CHANGELOG.md` in each release documents tool API changes
If you are building a production application on top of OpenJarvis, pin your dependency to a specific minor version (openjarvis>=0.9.0,<0.10.0) and treat each minor version upgrade as a planned migration event rather than an automatic update.
For a comparison of how community-driven tool ecosystems work in other agent frameworks — including a look at how Skills and Nodes function in a different local-first project — see the OpenClaw Skills and Nodes guide. The two projects take meaningfully different approaches to the same problem of making tool contributions sustainable at community scale.
Frequently Asked Questions
How is the OpenJarvis Tools module different from Skills in other frameworks?
The terminology differs across frameworks, but the underlying pattern is similar: a named, callable capability that the agent can invoke during its reasoning loop. The key difference is in how OpenJarvis structures the abstraction.
In many frameworks (including the Skills system in OpenClaw), “skill” covers both the tool definition and the agent persona that uses it — skills often bundle prompt templates, memory context, and callable functions together. OpenJarvis deliberately separates these concerns. A tool in OpenJarvis is a pure, stateless callable with a well-defined JSON Schema interface. The Agent module and Intelligence module handle persona and prompt construction separately. This makes tools more composable and easier to test in isolation, but it also means you need to think about these layers independently, which can feel like more overhead when you are just getting started.
The practical implication: an OpenJarvis tool is easier to unit-test (you can call tool.run() directly in a test without spinning up an agent), but assembling a full agent workflow requires wiring together more pieces than a framework that provides a higher-level “skill” abstraction.
Can I share my custom tools with the community?
Yes — and the project maintainers actively encourage it. The community tools repository (open-jarvis/openjarvis-tools) is the official channel for sharing tools with other OpenJarvis users. Once your tool is merged there, it appears in the community tools index and can be installed directly:
pip install openjarvis-tools[your-tool-name]
Or, if you want to install all community tools at once:
pip install openjarvis-tools[all]
For tools that depend on proprietary systems or internal company APIs, you can still publish them as standalone Python packages on PyPI following the naming convention openjarvis-tool-yourname. The OpenJarvis documentation links to a published list of third-party tool packages maintained by the community.
Before publishing, make sure your tool’s description field is written for a general audience — the model using the tool has no context about your internal systems. Write the description as if explaining the tool to someone who has never heard of your organization.
Are there security concerns when using custom tools with local models?
Yes, and they deserve serious attention. The security posture of a custom tool is determined by three factors: what the tool can access, what arguments it accepts from the model, and how thoroughly it validates those arguments.
The most important rule: never pass model-generated strings directly to a shell, database query, or file path without validation. A model that has been jailbroken or confused by an adversarial prompt could generate arguments designed to exploit an insufficiently validated tool. For example, a file_reader tool that accepts arbitrary paths could be manipulated into reading /etc/passwd or SSH private keys if the model receives a prompt engineered to request those paths.
Mitigations to apply to every custom tool:
- Allowlist paths and commands — if your tool operates on files or shell commands, maintain an explicit allowlist of permitted paths or executables rather than accepting arbitrary strings
- Validate arguments before execution — check types, ranges, and formats before calling any external system
- Set `requires_confirmation = True` for tools with destructive effects — this adds a human-in-the-loop checkpoint before execution
- Log all tool invocations — OpenJarvis logs tool calls when telemetry is enabled; ensure you can audit what the agent actually called and with what arguments
- Use `timeout_seconds` conservatively — a tool that hangs indefinitely can stall the agent session; set realistic timeouts and handle `TimeoutError` in your tool handler
Local models are sometimes said to be less susceptible to prompt injection than highly capable frontier models, on the theory that they follow nuanced instructions less reliably. That is not a dependable security property: smaller models can still be steered by adversarial input. Treat every custom tool as if it could be called with adversarial arguments, and build your validation logic accordingly.
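The path-allowlisting mitigation from the list above can be sketched with pathlib. `ALLOWED_ROOT` here is a hypothetical workspace directory, not an OpenJarvis setting:

```python
from pathlib import Path

# Sketch of the "allowlist paths" mitigation: resolve the model-supplied
# path and confirm it stays under an approved root, which also defeats
# ../ traversal and absolute-path tricks. ALLOWED_ROOT is a hypothetical
# workspace directory for this example.
ALLOWED_ROOT = Path.home() / "agent-workspace"

def safe_resolve(user_path: str) -> Path:
    candidate = (ALLOWED_ROOT / user_path).resolve()
    if not candidate.is_relative_to(ALLOWED_ROOT.resolve()):
        raise PermissionError(f"Path escapes allowed root: {user_path}")
    return candidate
```

Resolving before the containment check is the important detail: comparing unresolved strings lets `../` segments and symlinks slip past a naive prefix test.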
Next Steps
With a thorough understanding of the Tools module and the OpenJarvis ecosystem, you have the foundation to build genuinely capable local agents tailored to your specific workflow.
For deeper tool chaining patterns: The multi-step reasoning loop that drives OpenJarvis’s Agent module is architecturally similar to the ReAct pattern popularized by LangChain. Reading the LangChain Agents and Tools guide will give you transferable knowledge about how think-act-observe loops are structured, what makes a good tool description, and how to debug tool-calling failures — all of which apply directly to OpenJarvis.
For comparing tool ecosystems across frameworks: If you are evaluating whether OpenJarvis’s tool model fits your needs, or exploring alternatives, the OpenClaw Skills and Nodes guide covers a meaningfully different approach to the same problem. Comparing the two designs will clarify the trade-offs and help you choose the right framework for your use case.
For production hardening: Once your custom tools are working in development, the next challenge is making them reliable under real workloads. Focus on three areas: comprehensive error handling in every tool handler (test what happens when external services are down or slow), argument validation (use JSON Schema pattern and format constraints to catch bad inputs before your code runs), and telemetry (enable OpenJarvis’s built-in logging and monitor tool call latency and failure rates in production).
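As a sketch of that argument-validation point, here is a format check for the hypothetical SKU parameter used earlier in this guide, applied before any database call ever sees the value:

```python
import re

# Sketch of pre-execution argument validation. The SKU format (e.g.
# "WIDGET-42A") follows the hypothetical inventory example earlier in this
# guide; adjust the pattern to your own identifiers.
SKU_PATTERN = re.compile(r"[A-Z0-9]+-[A-Z0-9]+")

def validate_sku(value: object) -> str:
    if not isinstance(value, str):
        raise TypeError(f"sku must be a string, got {type(value).__name__}")
    if not SKU_PATTERN.fullmatch(value):
        raise ValueError(f"sku does not match expected format: {value!r}")
    return value
```

The same idea can be expressed declaratively with JSON Schema `pattern` constraints in the tool's parameters block, which lets the Tools module reject bad arguments before your handler runs at all.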
For the full OpenJarvis picture: This article focused on the Tools module. The four other modules — Intelligence, Agent, Engine, and Learning — each have their own depth. The OpenJarvis Engine and Learning guide covers the inference backend configuration and persistent memory system in the same level of detail, and is the recommended next read if you want to squeeze maximum performance out of your local hardware.