Shoal
Run aground on your own hardware.
Local AI coding platform. Multi-dock architecture connecting coding tools to on-device inference. No cloud. No API keys. No telemetry.
Architecture
Docks and berths.
Docks are what you do. Berths are what runs underneath. Any dock can moor at any berth. The dock handles the workflow, the berth handles the tokens.
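The dock/berth split can be sketched as a plain interface: any object that turns a prompt into tokens is a berth, and a dock is just a workflow that calls one. This is an illustrative sketch, not Shoal's actual code (the project is Go-based); the class and function names here are invented for the example.

```python
from typing import Protocol

class Berth(Protocol):
    """Anything that can turn a prompt into tokens (illustrative)."""
    def generate(self, prompt: str) -> str: ...

class MLXBerth:
    """Stand-in for the MLX berth; real berths call an inference engine."""
    def generate(self, prompt: str) -> str:
        return f"[mlx] {prompt}"

class LlamaCppBerth:
    """Stand-in for the llama.cpp berth."""
    def generate(self, prompt: str) -> str:
        return f"[llama.cpp] {prompt}"

def keel_dock(berth: Berth, task: str) -> str:
    # The dock owns the workflow; it never cares which berth answers.
    return berth.generate(f"Write code: {task}")

# Same dock, moored at two different berths.
keel_dock(MLXBerth(), "parse a CSV")
keel_dock(LlamaCppBerth(), "parse a CSV")
```

The point of the decoupling: adding a berth never touches a dock, and adding a dock never touches a berth.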
DOCKS                              BERTHS
Keel         (write code)          MLX        (Apple Silicon)
Crow's Nest  (review code)         llama.cpp  (NVIDIA/AMD/CPU)
Bosun        (maintain code)       Crucible   (Grimvane inference)
Claude       (Claude Code compat)
Pilot        (frontier mooring)

Docks
Where you moor.
Each dock is a self-contained coding workflow. Independent scope, independent iteration.
Keel
Write code
Go-based coding assistant with a Bubble Tea TUI, architecture indexing, autonomous refactoring, an input quality gate, and tool planning.
Crow's Nest
Review code
Code review and analysis dock. Point it at a repo, branch, or PR. Reads and reports. No edits.
Bosun
Maintain code
Dependency updates, migration generation, test scaffolding, CI config. Task-based, not conversational.
Claude
Compatibility
Runs Claude Code CLI against local models via an Anthropic-to-OpenAI translation proxy. Zero dependencies.
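The core of such a proxy is reshaping one request format into the other. Here is a minimal sketch of that translation, based on the publicly documented shapes of the Anthropic Messages API and the OpenAI chat-completions API; it is not Shoal's actual proxy code, and the helper name is invented.

```python
def anthropic_to_openai(body: dict) -> dict:
    """Translate an Anthropic Messages request body into an
    OpenAI chat-completions request body (illustrative mapping)."""
    messages = []
    # Anthropic carries the system prompt as a top-level field;
    # OpenAI expects it as the first message.
    if "system" in body:
        messages.append({"role": "system", "content": body["system"]})
    messages.extend(body.get("messages", []))
    return {
        "model": body["model"],
        "messages": messages,
        "max_tokens": body.get("max_tokens", 1024),
    }
```

A real proxy also has to translate the response (and streaming events) back the other way, but the request direction above is the shape of the trick.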
Pilot
Frontier mooring
Not a standalone dock. Connects any dock to frontier APIs for tasks that exceed local model capability. Reasoning only, no tools.
Berths
What runs underneath.
Auto-detected by platform. Override with --berth.
MLX
Apple Silicon
Metal GPU acceleration, unified memory. Default on macOS with Apple Silicon.
llama.cpp
NVIDIA / AMD / CPU
CUDA, ROCm, or CPU fallback. Widest hardware compatibility. Default on Linux and WSL.
Crucible
NVIDIA (CUDA)
Grimvane's own inference engine. Zero wrappers, zero third-party code. The full Grimvane stack.
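Platform auto-detection of this kind usually reduces to a short OS/architecture check. The sketch below shows one plausible version using Python's standard `platform` module; the actual selection logic in Shoal may differ, and is an assumption here.

```python
import platform

def detect_berth() -> str:
    """Pick a default berth from the host platform.
    Illustrative logic only; --berth overrides it either way."""
    if platform.system() == "Darwin" and platform.machine() == "arm64":
        return "mlx"        # Apple Silicon: Metal GPU via MLX
    return "llama.cpp"      # Linux/WSL/other: CUDA, ROCm, or CPU fallback
```

Crucible is deliberately left out of the default path in this sketch, matching the idea that an opt-in engine is selected explicitly rather than auto-detected.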
Get started
Three commands.
# Clone and install
git clone https://github.com/Chris-Whitworth/shoal.git
cd shoal
bash core/install.sh
# Launch Keel (default dock, auto-detected berth)
./shoal.sh
# Or pick your dock and berth
./shoal.sh --dock-keel --berth crucible