🦉
Beta Testing

Your GPU earns money
while you sleep.

Owlrun turns your idle computer into an AI inference node. Go to bed, wake up to earnings. It's that simple.

How it works
Three steps. Zero effort.

Install Owlrun, let it detect your hardware, and start earning. No config files, no DevOps, no babysitting.

1

Install

Download and run the installer. Owlrun detects your GPU, RAM, and available models automatically.

2

Go idle

Owlrun watches your system. When your machine is idle, it connects to the network and starts serving AI inference requests.

3

Get paid

Every request your GPU handles earns you money. Payouts settle to your Solana wallet (Bitcoin Lightning coming soon). You keep 91%, up to 96% with volume tiers.
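The idle check in step 2 can be sketched with nothing more than the system load average. This is a minimal illustration of the idea, not Owlrun's actual detector (a real one would also weigh GPU activity and recent user input); the threshold is an arbitrary assumption:

```python
import os

IDLE_LOAD_THRESHOLD = 0.5  # assumed cutoff; tune per machine

def machine_is_idle() -> bool:
    """Treat the machine as idle when the 1-minute load average is low.

    Unix-only sketch; a production detector would also check GPU
    utilization and user input before joining the network.
    """
    one_minute_load, _, _ = os.getloadavg()
    return one_minute_load < IDLE_LOAD_THRESHOLD
```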

🤖

AI Buyer

Sends inference request
via OpenAI-compatible API

REQUEST →→→ ←←← RESPONSE
🦉

Owlrun Gateway

Routes to the best GPU · 4–9% fee
Blind forwarding · zero data collected

INFERENCE →→→ ←←← RESULT
🖥️

Your GPU

Runs the model locally
keeps 91–96% of revenue

PAYOUT →→→ PAYOUT
💰

Your Wallet

91–96% of every request
settled directly to you

Solana · Lightning (WIP)
Potential earnings
Your idle GPU is leaving money on the table.

Estimated monthly earnings based on GPU type and typical idle hours. Actual results vary by demand and uptime.

RTX 3060 12GB
$15–45
per month
RTX 3090 24GB
$45–110
per month
RTX 4090 24GB
$85–180
per month
Mac M-series 48GB+
$55–130
per month
Multi-GPU setup
$160–320+
per month

Estimates based on current market rates from providers like Vast.ai, Together AI, and RunPod at 30–50% utilization. You keep 91%, up to 96% with volume tiers; Owlrun's base fee is 9%. Actual earnings depend on demand, uptime, and model size, and are before electricity costs.
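As a back-of-envelope check on these ranges, the math is just rate × hours × utilization × revenue share. All inputs below are illustrative assumptions, not Owlrun pricing:

```python
# Illustrative numbers only: the hourly rate is a rough open-market
# figure for an RTX 4090-class card, not an Owlrun quote.
hourly_rate = 0.70        # $/hr comparable cards rent for
idle_hours_per_day = 12   # hours the machine sits unused
utilization = 0.40        # fraction of idle time actually serving
provider_share = 0.91     # base 91% split (volume tiers reach 96%)

monthly = hourly_rate * idle_hours_per_day * 30 * utilization * provider_share
print(f"~${monthly:.0f}/month before electricity")  # ~$92/month
```

With these assumptions the estimate lands inside the $85–180 range quoted above for the RTX 4090.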

Trust & transparency
Fully open source. Fully auditable.

Owlrun is built in the open. Every line of code is public. The gateway never sees your data. You don't have to trust us; you can verify.

Coming Soon

Open Source Code

The Owlrun node software is fully open source. Read the code, build from source, audit every dependency. No black boxes on your machine.

Privacy by Design

The gateway is a blind relay: it routes bytes without inspecting content. Zero logging of prompts or responses. Your users' data never touches our servers.
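In networking terms, a blind relay is a TCP splice: it copies bytes in both directions without ever parsing them. A minimal asyncio sketch of the idea (placeholder host and port; this is not Owlrun's gateway code):

```python
import asyncio

UPSTREAM_HOST = "127.0.0.1"  # placeholder: the GPU node chosen for this request
UPSTREAM_PORT = 9000

async def pipe(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    # Copy raw chunks; nothing here decodes, inspects, or logs the payload.
    while data := await reader.read(4096):
        writer.write(data)
        await writer.drain()
    if writer.can_write_eof():
        writer.write_eof()  # half-close so the reply can still flow back

async def handle(client_reader, client_writer):
    # Splice the client socket to the upstream node, both directions at once.
    up_reader, up_writer = await asyncio.open_connection(UPSTREAM_HOST, UPSTREAM_PORT)
    await asyncio.gather(
        pipe(client_reader, up_writer),
        pipe(up_reader, client_writer),
    )
```

Because the relay only moves opaque chunks, there is no point in the data path where a prompt or response could be logged.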

Transparent Operations

Clear revenue splits, on-chain payouts, public billing logic. You always know exactly how much you earned and why. No hidden fees, no surprises.

Code on GitHub
Zero content logging
On-chain payout verification
No telemetry or tracking
Build from source anytime
Get started
Join the Beta
1 Get your key


I want to earn money from my idle compute

Generate a provider key and get a ready-to-paste install command. Requires NVIDIA, AMD, or Apple Silicon GPU with 4GB+ VRAM.

I want AI inference 🤖

Cheap, OpenAI-compatible API; works with any client
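Any OpenAI-compatible client works. Here is a minimal sketch using only the standard library; the gateway URL, API key, and model name are placeholders (substitute the real values from your dashboard):

```python
import json
import urllib.request

# Placeholders, not real endpoints or credentials.
GATEWAY = "https://api.owlrun.example/v1/chat/completions"
API_KEY = "sk-..."

payload = {
    "model": "llama-3.1-8b-instruct",
    "messages": [{"role": "user", "content": "Hello from Owlrun!"}],
}
req = urllib.request.Request(
    GATEWAY,
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
)
# urllib.request.urlopen(req) would send it; the openai SDK, LangChain,
# or plain curl speak the same request shape.
```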

2 Join us

Join the Community

Get beta updates and support, share feedback, and connect with other node operators and inference buyers.

Join Telegram
Beta status
Tested platforms

We're validating Owlrun across every platform we can get our hands on. This table updates as we test.
From RTX 4090s to smart fridges: if it computes, it earns. Yes, the fridge thing is a joke. Probably.

Platform · Hardware · Install · GPU Detect · Inference · Dashboard (localhost:19131) · E2E
🐧 Linux (Ubuntu) · NVIDIA GPU (RTX 3090)
🐧 Linux (Ubuntu) · CPU only (no GPU)
🍎 macOS (Apple Silicon) · M-series, 48GB+
🪟 Windows (via Wine) · Tray + dashboard only
🪟 Windows (native) · Needs real hardware
🍓 Raspberry Pi · Pi 4/5, CPU only (arm64)
🧊 Smart fridge · Nano models, TBD · 🧊 🧊 🧊 🧊 🧊

✅ Pass   ❌ Fail   ➖ N/A   ⏳ Pending   🧊 Chill

Binary integrity
SHA-256 Checksums

Verify that your download matches the official build: run sha256sum owlrun-* (Linux) or shasum -a 256 owlrun-* (macOS) and compare the output against checksums.txt.
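The same check in Python, for anyone without those shell tools on hand (the binary filename is a placeholder):

```python
import hashlib

def sha256_file(path: str) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare the printed digest against the matching line in checksums.txt:
# print(sha256_file("owlrun-linux-x86_64"))  # placeholder filename
```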

Stable (main)

checksums.txt

Dev (bleeding edge)

checksums.txt

Coming Soon
Go all-in with OwlOS

Want to dedicate a full machine to earning? OwlOS is a lightweight Linux distribution purpose-built for AI inference nodes.

Coming Soon

OwlOS v0.1.1

A minimal, hardened OS image that boots straight into inference mode. Runs live from USB, so no hard drive is needed and nothing is written to disk. Ideal for repurposing old gaming rigs, rack servers, or any spare hardware into a dedicated earning machine. Auto-updates, zero maintenance, maximum uptime.

Download ISO (Coming Soon) · Get in touch
🦉
Get in touch
Contact & Connect

Questions about Owlrun, partnership opportunities, or just want to say hi? Reach out directly.

🌐

Personal Website

Learn more about the founder, the vision behind Owlrun, and other projects.

Visit fabio.owlrun.me
✉️

Contact Form

Interested in early access, partnerships, or running a GPU fleet? Send a message directly.

Send a Message