Tier 1: VPS / Tiny VM (1 CPU · 1 GB RAM)
Lightweight services that run on the smallest, cheapest cloud instances. These form the foundation of any self-hosting setup.
4 Experts · 5 Rounds · 40+ Ideas, Categorized by Hardware
Vaultwarden: Self-hosted password vault. Full Bitwarden protocol support, great mobile apps, browser extensions. ~100MB RAM. Set it up once and never look back.
AdGuard Home: DNS-level ad & tracker blocking for your entire network. Point your router's DNS at the VPS and every device blocks ads and trackers before they load. ~50MB RAM. One caveat: DNS blocking can't catch ads served from the same domains as the content itself (YouTube's in-stream ads, Spotify's audio ads), but it cleans up most of the web and most in-app tracking.
restic: Encrypted, deduplicated, incremental backups to object storage at ~$0.006/GB/month (Backblaze B2 pricing). One cron job backs up the whole VPS. Set it up once, forget it, restore when needed.
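What "one cron job" looks like in practice; a minimal sketch assuming an S3-compatible bucket (the repository URL, paths, and credentials below are all placeholders):

```shell
# /etc/cron.daily/restic-backup -- nightly backup + retention policy.
# Repository endpoint, bucket name, and credentials are placeholders.
export RESTIC_REPOSITORY="s3:s3.us-west-000.backblazeb2.com/my-vps-backups"
export RESTIC_PASSWORD_FILE="/root/.restic-password"
export AWS_ACCESS_KEY_ID="..."
export AWS_SECRET_ACCESS_KEY="..."

restic backup /etc /srv /var/lib/docker/volumes
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
```

The `forget --keep-* --prune` line is what keeps storage flat: old snapshots age out while dedup keeps each unique block stored once.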
Uptime Kuma: Self-hosted uptime monitoring with a slick web UI. Monitors HTTP/TCP/DNS/ping, sends Telegram/Slack/email alerts. Replaces Better Uptime, Pingdom, and similar paid tools. ~120MB RAM.
FreshRSS: Self-hosted RSS reader. Replace Feedly (or the long-gone Google Reader). Subscribe to blogs, news, changelogs; no algorithmic feeds, no tracking. ~80MB RAM.
Zero-knowledge pastebin. Pastes encrypted client-side, server never sees content. Share credentials, logs, config snippets. Auto-expires. ~30MB RAM.
Privacy-first web analytics. GDPR-compliant by design, no cookie banners, no data sent to Alphabet. Single Docker container with PostgreSQL. ~200MB RAM.
Receive emails from internal services (Uptime Kuma, monitoring) and relay to your real mail gateway. Lets services send alerts without running a full mail server. ~20MB RAM.
Lightweight Git service. GitHub alternative that runs on a $5 VPS. Supports repos, issues, pull requests, CI pipelines. ~120MB RAM.
Nginx + Certbot: Web server + automatic Let's Encrypt TLS. The non-negotiable foundation. Reverse proxy, SSL termination, load balancing; everything passes through this.
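A minimal sketch of the reverse-proxy side, assuming Uptime Kuma on its default port 3001 behind a placeholder hostname; running `certbot --nginx` afterwards edits this block to add the TLS listener:

```nginx
# Reverse-proxy one subdomain to a local service.
# status.example.com is a placeholder hostname.
server {
    listen 80;
    server_name status.example.com;

    location / {
        proxy_pass http://127.0.0.1:3001;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Uptime Kuma's live dashboard needs WebSocket upgrades:
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

Every later service gets the same treatment: one `server` block per subdomain, one `certbot` run per certificate.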
Local LLM on a VPS. A 1B-parameter model at Q4 quantization with a 4-bit KV cache runs in ~800MB-1GB RAM. Chat interface, reasoning backbone for automation. Not glamorous, but it gives you local AI with zero API costs.
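Where the ~800MB-1GB figure comes from; rough arithmetic assuming a Llama-3.2-1B-shaped model (16 layers, 8 KV heads, head dim 64; these shape numbers are assumptions, check your model card):

```python
# Back-of-envelope memory math for a 1B model at Q4 with a 4-bit KV cache.
PARAMS = 1.0e9
Q4_BYTES_PER_PARAM = 0.5           # 4 bits per weight
LAYERS, KV_HEADS, HEAD_DIM = 16, 8, 64   # assumed Llama-3.2-1B-like shape
CTX = 4096                         # context length in tokens
KV_BYTES_PER_ELEM = 0.5            # 4-bit KV cache

weights_gb = PARAMS * Q4_BYTES_PER_PARAM / 1e9
# Factor of 2 = one K and one V entry per layer/head/position:
kv_gb = 2 * LAYERS * KV_HEADS * HEAD_DIM * CTX * KV_BYTES_PER_ELEM / 1e9

print(f"weights ~{weights_gb:.2f} GB, KV cache ~{kv_gb:.3f} GB")
# Add a few hundred MB of runtime buffers and you land in the 0.8-1 GB range.
```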
All-in-one mail server (SMTP + IMAP) in a single Go binary. ~80MB RAM. Receives mail, stores locally, relays outbound. Simpler than Postfix + Dovecot.
frp: Access devices behind NAT from your VPS without paying for a static IP or relying on Tailscale. Client-server reverse proxy for home-lab access.
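The pairing is one server config on the VPS and one client config at home; a sketch in frp's classic ini format (newer releases use the same fields in TOML; hostnames, ports, and the token are placeholders):

```ini
# frps.ini -- server side, runs on the VPS
[common]
bind_port = 7000
token = change-me

# frpc.ini -- client side, runs on the home machine
[common]
server_addr = vps.example.com
server_port = 7000
token = change-me

[home-ssh]
type = tcp
local_ip = 127.0.0.1
local_port = 22
remote_port = 6022
```

With both running, `ssh -p 6022 user@vps.example.com` lands on the home machine, NAT and dynamic IP notwithstanding.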
Glances: Cross-platform system monitoring. ~50MB RAM. Web UI shows CPU, RAM, disk, network, processes; better than htop because it's accessible from any browser.
Tier 2: Mac Mini (M4 Pro · 32 GB RAM)
Ollama + Qwen2.5-Coder: Local LLM inference for chat, code assistance, summarization, drafting. Qwen2.5-Coder 7B for code: autocomplete, review, explain, all locally. 32GB of unified memory handles 7B with GPU offload. Zero API costs, full privacy.
Whisper.cpp: Local speech-to-text. Transcribe meetings, voice notes, podcasts in real time on the M4 Pro. Whisper Large v3 (~1.5B params) runs at ~2-3x realtime. Feed the output to your RAG pipeline for a voice-to-insight pipeline with zero SaaS.
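A transcription run is a single command; a sketch assuming a whisper.cpp checkout built with Metal (binary path varies by version; older builds ship `./main` instead of `whisper-cli`):

```shell
# Download the large-v3 GGML model once, then transcribe a recording.
./models/download-ggml-model.sh large-v3
./build/bin/whisper-cli -m models/ggml-large-v3.bin -f meeting.wav -otxt -of meeting-notes
```

`-otxt -of meeting-notes` writes a plain-text transcript (meeting-notes.txt) ready for the summarization step downstream.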
ComfyUI + SDXL: Local image generation. SDXL at 1024px, ~20-40 sec/image with Metal GPU acceleration. Train a LoRA on your product photos and generate consistent branded content forever. No Midjourney/DALL-E subscription.
Vector DB + local embeddings (mxbai-embed-large) over your notes, PDFs, code, emails. Ask natural language questions, get answers from your own data. Privacy-preserving β nothing leaves the machine.
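The retrieval step underneath is just nearest-neighbor search over embedding vectors. A toy sketch with hand-made 3-d vectors standing in for real mxbai-embed-large embeddings (the file names and numbers are made up to show the ranking logic):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Pretend these came from an embedding model run over your notes:
docs = {
    "backup-notes.md": [0.9, 0.1, 0.0],
    "recipes.md":      [0.0, 0.2, 0.9],
    "vps-setup.md":    [0.7, 0.4, 0.2],
}

query = [0.85, 0.2, 0.05]  # stand-in embedding of "how do I restore my VPS backup?"

ranked = sorted(docs, key=lambda name: cosine(docs[name], query), reverse=True)
print(ranked)  # backup-notes.md ranks first
```

A real pipeline swaps the toy vectors for model output and hands the top-k documents to the LLM as context; the ranking logic is unchanged.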
Open WebUI: Web UI for local models with RAG, tool use, multi-user support. Replaces ChatGPT for your team. Access local LLMs from any browser with your knowledge base baked in.
Text-to-speech synthesis in a custom voice clone; no cloud TTS API needed. For Thota's AI Twin project: voice cloning + local model = fully self-hosted voice pipeline.
Whisper (transcribe) → Llama 3.2 (summarize, extract action items) → push to Obsidian/Notion. Full meeting-to-notes pipeline running entirely on-device.
n8n: Workflow automation with AI baked in. Trigger on events, call local Ollama endpoints for decisions. 400+ integrations. Replaces Zapier with full data ownership.
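Calling a local Ollama endpoint from an automation is one JSON POST against Ollama's `/api/generate`. A sketch of the request shape (model name and prompt are placeholders; `stream: false` returns a single JSON object, the easiest shape to consume from a workflow node):

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

def build_request(model: str, prompt: str) -> dict:
    # stream=False -> one JSON object instead of a token stream.
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """POST a prompt to the local Ollama server and return its reply text."""
    body = json.dumps(build_request(model, prompt)).encode()
    req = request.Request(OLLAMA_URL, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:  # requires Ollama running locally
        return json.loads(resp.read())["response"]
```

In n8n the same payload goes into an HTTP Request node, so a workflow can branch on what the model decides.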
Run code (Python, etc.) locally with LLM reasoning over your files, terminal, and internet access. Personal coding agent that understands your entire codebase.
Nextcloud: Replace Google Drive, Docs, Calendar, Contacts, Photos, Talk (video calls). One install replaces most of Google's ecosystem. 8GB RAM recommended for a smooth experience. Pi-friendly for light use.
Obsidian: Personal knowledge graph. Graph view + Dataview turn notes into a queryable database. Block references + backlinks = personal CRM, project tracker, second brain. Local-first, Markdown on disk.
Replace Dropbox/Google Drive/iCloud. Peer-to-peer encrypted file sync. No cloud intermediary, files stay on your devices. Works across VPS and local machines.
Jellyfin: Media server. Stream your personal video/audio library to any device. On a Mac Mini with plenty of RAM it runs great; on a VPS, stick to audio streaming or a small library. ~300MB RAM idle.
Workflow automation. On VPS: light automations, integrations. On Mac Mini: AI-powered workflows with local LLM nodes. Replaces Zapier/Make at zero monthly cost.
Home Assistant: Replace Samsung SmartThings/Apple HomeKit/Google Nest. Full local control of smart home devices. No cloud dependency for core automations. Runs on Mac Mini, Pi, or VPS with add-ons.
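A "core automation with no cloud dependency" is a few lines of YAML. A sketch with hypothetical entity IDs (swap in your own from the device registry):

```yaml
# Hypothetical entities: a door sensor and a hallway light.
automation:
  - alias: "Hallway light on when front door opens"
    trigger:
      - platform: state
        entity_id: binary_sensor.front_door
        to: "on"
    condition:
      - condition: sun
        after: sunset
    action:
      - service: light.turn_on
        target:
          entity_id: light.hallway
```

Everything here evaluates on the box itself; unplug the internet and the door still turns the light on.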
Replace Notion/Confluence. Self-hosted wiki: fast, Markdown-native, no per-seat pricing; you pay only for what you host. Excellent for team knowledge bases.
Replace Slack/Discord. Self-hosted chat and collaboration. Synapse (Matrix) supports federation: your server talks to other Matrix servers. Full message history, no data leaving your infrastructure.
Vaultwarden → AdGuard Home → Nginx + Certbot → restic backups. Four critical tools, one weekend, permanent value.
Uptime Kuma → Glances → frp → FreshRSS. Now you can see everything, reach everything, and stay informed without algorithmic feeds.
Ollama + Qwen2.5-Coder → Whisper.cpp → Open WebUI. Local AI pipeline: code assistance, transcription, web UI access. Zero API costs.
ComfyUI + SDXL → local TTS → Ollama + RAG pipeline over your notes. Image generation and voice pipeline for your AI Twin.
n8n workflows → Nextcloud (if team) → Jellyfin media server → Home Assistant. Build the stack that makes everything else feel connected.