Perplexica: self-hosted open-source alternative to Perplexity AI search

1 min read
perplexica · self-hosting · ollama · open-source · ai-search · docker · searxng
Originally from vm.tiktok.com

My notes


Summary

Perplexica is a free, open-source, self-hostable alternative to Perplexity AI search. It pairs with SearXNG (a meta-search engine) and Ollama (local LLM inference) to give you a fully private AI-powered search tool that runs on hardware as modest as a Raspberry Pi or an old Mac Mini with 8 GB RAM.

Key Insights

  • The stack is three components: Perplexica (the AI search UI on port 3000), SearXNG (the search aggregator on port 8080), and Ollama (local LLM runtime on port 11434). All run in Docker containers.
  • With a small model like Qwen 3.5 2B, 8 GB RAM is sufficient. Larger models (4B, 6B, 8B) need more RAM but produce deeper answers. Alternatively, you can point Perplexica at Claude or ChatGPT APIs instead of running local inference.
  • SearXNG is highly configurable: you can enable/disable individual search engines per category (general, images, video, news, social media, IT/dev). Notably, enabling Reddit, npm, Hugging Face, and Ollama Hub searches makes this useful for dev work.
  • Perplexica offers three search modes (speed, balanced, quality) and supports social media search if SearXNG has those engines enabled.
  • The system prompt in Perplexica can be customized (e.g., “be to the point, factual, no fluff”) to shape response style.
  • Running SearXNG inside a VPN container is mentioned as an option for additional privacy.
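The three-container stack described above can be sketched as a Docker Compose file. This is a minimal illustration, not the project's official compose file: the Perplexica image name and volume wiring are assumptions, and only the port mappings (3000, 8080, 11434) come from the notes.

```yaml
# Sketch of the Perplexica + SearXNG + Ollama stack.
# Image names for SearXNG and Ollama are their official Docker Hub images;
# the Perplexica image/tag is a placeholder — check the project's README.
services:
  searxng:
    image: searxng/searxng:latest     # meta-search aggregator
    ports:
      - "8080:8080"

  ollama:
    image: ollama/ollama:latest       # local LLM runtime
    ports:
      - "11434:11434"
    volumes:
      - ollama-data:/root/.ollama     # persist downloaded models across restarts

  perplexica:
    image: perplexica:local           # placeholder — build or pull per the project docs
    ports:
      - "3000:3000"                   # AI search UI
    depends_on:
      - searxng
      - ollama

volumes:
  ollama-data:
```

Once the stack is up, a model still has to be pulled into the Ollama container (e.g. `docker exec -it <ollama-container> ollama pull <model-tag>`, with the tag chosen to fit your RAM budget).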
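The per-category engine toggles mentioned above live in SearXNG's `settings.yml`. A hedged fragment of what enabling the dev-oriented engines might look like — the `engines`/`name`/`disabled` keys follow SearXNG's configuration format, but the exact engine names should be checked against your instance's engine list:

```yaml
# Fragment of SearXNG's settings.yml.
# disabled: false turns an engine on; engine names are assumptions
# based on SearXNG's bundled engines — verify in your instance.
engines:
  - name: reddit
    disabled: false      # social media results
  - name: npm
    disabled: false      # package lookups for dev work
  - name: huggingface
    disabled: false      # model search
```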