Your Home's Local AI Brain.
A local AI smart home assistant that manages your home, remembers your preferences and understands nuanced requests.
How It Works
You talk to it; it reasons, executes, and replies.
Listen
Audio capture and speech-to-text run locally on your Mac.
Audio & STT
Uses OpenAI Whisper locally to transcribe audio in near real-time.
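A minimal sketch of what local transcription can look like with the open-source `whisper` package. The model name, audio filename, and the normalization helper are illustrative assumptions, not Jarvis's actual pipeline:

```python
def normalize_transcript(text: str) -> str:
    """Tidy raw STT output (extra whitespace, casing) before intent handling."""
    return " ".join(text.strip().split()).lower()

def transcribe(path: str, model_name: str = "base") -> str:
    import whisper  # pip install openai-whisper
    model = whisper.load_model(model_name)  # downloaded once, then cached locally
    result = model.transcribe(path)         # runs fully on-device
    return normalize_transcript(result["text"])

if __name__ == "__main__":
    print(transcribe("command.wav"))  # hypothetical recording
```

Everything here runs on the local machine; no audio leaves the network.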
Think
Jarvis draws on context and memory to understand your intent.
Agent Routing
Intent Agent classifies the request. Action Agent produces strict JSON for Home Assistant. Response Agent explains outcomes. Memory provides retrieval (RAG) when needed.
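To make the "strict JSON" step concrete, here is a sketch of how an Action Agent's output might be validated before it reaches Home Assistant. The schema keys (`domain`, `service`, `entity_id`) are assumptions modeled on Home Assistant's service-call shape, not Jarvis's documented schema:

```python
import json

REQUIRED_KEYS = {"domain", "service", "entity_id"}  # hypothetical action schema

def parse_action(raw: str) -> dict:
    """Parse and validate the Action Agent's strict-JSON output."""
    action = json.loads(raw)  # raises ValueError on malformed JSON
    missing = REQUIRED_KEYS - action.keys()
    if missing:
        raise ValueError(f"action missing keys: {sorted(missing)}")
    return action

raw = '{"domain": "light", "service": "turn_on", "entity_id": "light.kitchen"}'
action = parse_action(raw)
```

Validating before executing means a malformed model response fails loudly instead of sending a broken call to your home.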
Act
Lights flip. Music plays. Scenes change. All via Home Assistant.
Home Assistant Control
Sends authenticated HTTP calls to the Home Assistant API to trigger services, update entities, and read states in real time.
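The authenticated calls above follow Home Assistant's REST API: a `POST` to `/api/services/<domain>/<service>` with a long-lived access token in the `Authorization` header. A sketch using only the standard library (the host, entity, and token are placeholder assumptions):

```python
import json
import urllib.request

BASE_URL = "http://homeassistant.local:8123"  # assumption: default HA host and port

def build_service_call(domain: str, service: str, entity_id: str, token: str):
    """Build an authenticated request against Home Assistant's REST API."""
    url = f"{BASE_URL}/api/services/{domain}/{service}"
    headers = {
        "Authorization": f"Bearer {token}",  # long-lived access token from HA
        "Content-Type": "application/json",
    }
    payload = json.dumps({"entity_id": entity_id}).encode()
    return urllib.request.Request(url, data=payload, headers=headers, method="POST")

if __name__ == "__main__":
    req = build_service_call("light", "turn_on", "light.kitchen", "YOUR_TOKEN")
    with urllib.request.urlopen(req) as resp:  # actually fires the service call
        print(resp.status)
```

Reading entity states works the same way with a `GET` to `/api/states/<entity_id>`.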
Private by Default
Jarvis is local by default. No telemetry. No background data collection. Cloud models are optional and only used when you explicitly enable a provider.
Local Mode
Runs fully on your Mac. Works offline. Your home context stays on your network.
- Ollama
- Open-source LLMs
- 100% Private
Ollama & Offline Models
Jarvis uses Ollama as the primary engine for local intelligence. You can download and run state-of-the-art open models like Llama, Mistral, or Qwen; once pulled, they run with no internet connection at all.
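Talking to a pulled model goes through Ollama's local HTTP API on its default port, `11434`. A non-streaming sketch with the standard library (the model name and prompt are illustrative; this is not Jarvis's internal client):

```python
import json
import urllib.request

def ollama_request(prompt: str, model: str = "llama3") -> urllib.request.Request:
    """Build a non-streaming generate request for the local Ollama server."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default local endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    req = ollama_request("Should the hallway light be on at night?")
    with urllib.request.urlopen(req) as resp:   # requires a running Ollama daemon
        print(json.loads(resp.read())["response"])
```

Because the endpoint is `localhost`, the prompt and the model's answer never leave the machine.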
Cloud Mode
Optionally use cloud models for heavy reasoning. When enabled, requests are sent to the selected provider under their data policy.
- OpenAI
- Gemini
- OpenCode
Cloud Providers
OpenCode offers a variety of powerful models that are free to use without any account creation. For enterprise-grade models from Gemini or OpenAI, you can easily connect your own API keys in the system settings.
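One simplified way to picture the local-first fallback: prefer a cloud provider only when its key is configured, otherwise stay on Ollama. The environment-variable names and the selection order are hypothetical, not Jarvis's actual settings logic:

```python
import os

def pick_provider(env: dict) -> str:
    """Hypothetical selection: use a cloud provider only if its key is set."""
    if env.get("OPENAI_API_KEY"):
        return "openai"
    if env.get("GEMINI_API_KEY"):
        return "gemini"
    return "ollama"  # local default: no account or key required

provider = pick_provider(dict(os.environ))
```

The point of the sketch is the ordering: cloud access is opt-in, and the absence of any key leaves everything local.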
Who this is for
Get Started
- Clone the repository from GitHub.
- Open your terminal and navigate to the project folder.
- Run the installer command:
This script configures the environment and launches Jarvis locally.