Beta · Open source · MIT License

Local AI for everyone

Run large language models entirely on your Mac. No cloud, no API keys, no telemetry. Download, open, chat.

Download for macOS View on GitHub

macOS 13+ · Apple Silicon required

Features

Everything you need, nothing you don't

Fllint ships everything in a single folder. No scattered configs, no background daemons, no hidden caches.

🔒

Completely private

Everything runs locally. No data ever leaves your machine — no cloud calls, no telemetry, no tracking.

📦

Single-folder install

Models, conversations, config — all in one folder. Delete it and Fllint is gone. Zero system footprint.

🚀

Works out of the box

Download a model from within the app and start chatting. No terminal, no configuration, no guesswork.

🖼️

Vision & documents

Send images to vision-capable models. Attach PDFs with optional OCR text extraction.

🔌

External providers

Connect to Ollama or any OpenAI-compatible server for access to even more models.
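Because these servers speak the standard OpenAI chat-completions protocol, any ordinary HTTP client can talk to them. A minimal sketch, assuming Ollama's default local endpoint (`http://localhost:11434/v1`) and a placeholder model name:

```python
import json
import urllib.request

# Ollama's default OpenAI-compatible endpoint (assumed; adjust the host
# and port for any other OpenAI-compatible server).
ENDPOINT = "http://localhost:11434/v1/chat/completions"

# Standard OpenAI-style chat-completions body. "llama3.2" is a
# placeholder; use whatever model your server actually serves.
payload = {
    "model": "llama3.2",
    "messages": [
        {"role": "user", "content": "Summarize this in one sentence."}
    ],
}

# Build the request; sending it requires the server to be running.
req = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# with urllib.request.urlopen(req) as resp:
#     reply = json.load(resp)["choices"][0]["message"]["content"]
```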

⚙️

Pro Mode

Full control over inference parameters, context size, GPU layers, and system prompts when you need it.
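These are the usual llama.cpp-family inference knobs. A hypothetical sketch of what a tuned configuration could look like (the key names below are illustrative assumptions, not Fllint's actual settings schema):

```python
# Illustrative Pro Mode settings; the key names are assumptions made
# for this sketch, not Fllint's real config format.
pro_settings = {
    "temperature": 0.7,     # sampling randomness (lower = more deterministic)
    "top_p": 0.9,           # nucleus-sampling cutoff
    "context_size": 8192,   # tokens of conversation the model can see
    "gpu_layers": -1,       # -1 = offload every layer to the GPU
    "system_prompt": "You are a concise assistant.",
}

# Sanity-check the values before applying them.
assert 0.0 <= pro_settings["temperature"] <= 2.0
assert pro_settings["context_size"] > 0
```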

How it works

Three steps to local AI

No terminal. No setup wizard. No 20-step guide.

1

Download Fllint

Grab the latest release from GitHub. Unzip and drop the Fllint folder wherever you want — Desktop, Applications, an external SSD.

2

Open the app

Double-click Fllint.app. It starts a local server and opens in your browser. A menu bar icon shows it's running.

3

Download a model and chat

Pick a model tier — Lite for quick answers, Standard for balanced quality, Pro for maximum capability. The download starts in-app.

Requirements

Pick your model tier

Bigger models are smarter but need more memory. Start with Lite and upgrade when you're ready.

Tier       Download size   RAM      Best for
Lite       ~2 GB           8 GB+    Quick answers, lighter hardware
Standard   ~9 GB           16 GB+   Balanced quality and speed
Pro        ~22 GB          32 GB+   Maximum capability
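The tier thresholds above reduce to a simple rule of thumb: pick the largest tier whose RAM floor your Mac meets. A sketch (on a Mac, `sysctl -n hw.memsize` in Terminal reports installed memory in bytes):

```python
# Tier requirements from the table above: (name, download GB, min RAM GB).
TIERS = [("Pro", 22, 32), ("Standard", 9, 16), ("Lite", 2, 8)]

def pick_tier(ram_gb: int) -> str:
    """Return the largest tier whose RAM requirement fits the given memory."""
    for name, _size_gb, min_ram_gb in TIERS:
        if ram_gb >= min_ram_gb:
            return name
    return "Lite"  # below 8 GB, fall back to the smallest tier anyway

# e.g. a 16 GB MacBook Air:
print(pick_tier(16))  # -> Standard
```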

Local AI, without the hassle

Open source. Free forever. Your data stays yours.

Download for macOS