For the past few years, I've been obsessed with a problem: my hard drive is a treasure chest of knowledge (PDFs, docs, notes) from the last decade, and I want modern AI to actually make use of it.
But every solution forces a trade-off: either convenience (upload everything to the cloud) or privacy (let my files sit dormant locally).
I got tired of this "hold your nose" compromise.
With capable SLMs (Small Language Models) exploding and on-device compute (Apple Silicon, NPUs) finally catching up, I believe privacy and intelligence are no longer a false choice.
So I (and my team) built KnowledgeFocus (currently Apple Silicon only).
It's an open-source (Apache-2.0) knowledge engine built with Tauri (Rust + Python + TS), and it's 100% local-first.
In v0.6.4, it focuses on one thing: unlocking your local file 'treasure chest'.
- Scans & Indexes: It scans your designated local folders (PDFs, .md, .txt, .docx, etc.).
- Auto-tagging: Uses a local model to auto-tag files, so you can aggregate and discover them.
- Local RAG: You can 'chat' with all your local files. It runs RAG 100% on-device. No data (vectors included) ever leaves your machine.
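To make the local-RAG idea concrete, here's a toy sketch of the retrieval step, everything in memory, nothing leaving the machine. This is not KnowledgeFocus's actual code; the bag-of-words "embedding" is a stand-in for a real local embedding model, and the file names are made up:

```python
# Toy illustration of the local-RAG retrieval pattern:
# embed locally, retrieve locally, then prompt a local model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Bag-of-words 'embedding' -- a stand-in for a real local embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: dict[str, str], k: int = 2) -> list[str]:
    """Rank indexed documents by similarity to the query; return top-k names."""
    q = embed(query)
    ranked = sorted(docs, key=lambda name: cosine(q, embed(docs[name])), reverse=True)
    return ranked[:k]

# Hypothetical local index (in KnowledgeFocus this comes from the folder scan).
docs = {
    "notes.md": "tauri rust python desktop app architecture",
    "paper.pdf": "retrieval augmented generation with local language models",
    "todo.txt": "buy groceries and call the dentist",
}
top = retrieve("local retrieval augmented generation", docs)
# The retrieved chunks would then be stuffed into a prompt for a local SLM.
print(top)  # "paper.pdf" ranks first
```

In the real app, the embeddings and vector store are persisted on disk, but the shape of the loop is the same: everything between query and answer runs on-device.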
GitHub (download): https://github.com/huozhong-in/knowledge-focus
This is just the first step of our "Data Workbench" vision.
I have a lot more thoughts on 'local-first agents', 'data aggregation' (like pulling in your cloud AI chat logs to store them locally), and building a real 'second brain' for knowledge workers.
I'll be in the comments to expand on these ideas and our future roadmap.
I'm here for all the feedback—especially the critical kind. Thanks, HN.