At Bee, we're building personal AI: technology that understands your context and adapts to your life. That means learning from what you say, what you do, and how you move through the world. It's a shift from general AI to something more contextual, more useful, and much more personal.
We believe this shift requires a new standard for privacy.
That's why we designed Bee with clear boundaries, intentional constraints, and full transparency around how your data is handled.
This post outlines how we approach privacy today, and where we're going next.
How Bee Handles Your Data
- You're in control: You decide when Bee is active, what it remembers, and when it forgets. You can mute it, pause it, or delete anything at any time.
- No raw audio is stored: Bee transcribes sound in real time and discards the original audio immediately (a minimal sketch of this pattern follows this list).
- Your data is never sold: We don't sell or share your data, and we don't use it to train general-purpose models. Everything Bee learns is for your experience only.
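To make the transcribe-then-discard idea concrete, here is a minimal sketch in Python. It's illustrative only: `capture_audio_chunks` and `local_transcriber` are hypothetical stand-ins, not Bee's actual pipeline.

```python
# Illustrative sketch only; the function names are hypothetical, not Bee's API.

def stream_transcripts(capture_audio_chunks, local_transcriber):
    """Yield transcript text from live audio without persisting raw samples."""
    for chunk in capture_audio_chunks():   # raw audio frames, held only in memory
        text = local_transcriber(chunk)    # real-time speech-to-text on the chunk
        del chunk                          # drop the audio as soon as it's transcribed
        if text:
            yield text                     # only the transcript leaves this loop
```

The key property is that raw audio never touches disk: each chunk lives only as long as it takes to transcribe it.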
Where We're Headed
Our goal is to make Bee smarter without ever requiring access to your raw personal data. We're actively building toward that future through clear near- and long-term priorities:
- Geo and Topic Fencing: Starting in Q3, users will be able to set boundaries around what Bee is allowed to learn. You'll be able to define topics that should never be processed and designate physical locations where Bee will automatically pause learning (a sketch of how such a gate could work follows this list).
- Advanced On-Device AI: We're testing next-generation local models that can run directly on device. These models are already being evaluated by trusted partners and early testers. While we're not ready to release this broadly yet, it's a major focus for Q4.
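Here is one way a geo and topic fence could gate learning before anything is processed. `FencePolicy`, `may_learn`, and the example values are assumptions made for illustration, not Bee's implementation or API.

```python
from dataclasses import dataclass, field
from math import asin, cos, radians, sin, sqrt

@dataclass
class FencePolicy:
    """User-defined boundaries: topics never processed, places where learning pauses."""
    blocked_topics: set[str] = field(default_factory=set)
    # each zone is (latitude, longitude, radius_meters)
    paused_zones: list[tuple[float, float, float]] = field(default_factory=list)

def _distance_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two coordinates in meters (haversine)."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def may_learn(policy: FencePolicy, topics: set[str], lat: float, lon: float) -> bool:
    """Return False if the moment touches a blocked topic or falls inside a paused zone."""
    if topics & policy.blocked_topics:
        return False
    return all(_distance_m(lat, lon, zlat, zlon) > radius
               for zlat, zlon, radius in policy.paused_zones)

# Example: never learn about "health", and pause learning near one location.
policy = FencePolicy(blocked_topics={"health"},
                     paused_zones=[(37.7749, -122.4194, 200.0)])
assert may_learn(policy, {"weather"}, 40.7128, -74.0060)      # no fence applies
assert not may_learn(policy, {"health"}, 40.7128, -74.0060)   # blocked topic
```

A check like this would run before any learning step; anything that fails the gate is simply never processed.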
We're building Bee for a future where AI is truly personal and privacy is built in.
As Bee grows more capable, so will the systems that protect your data. This is an inflection point.
The rules aren't defined yet, and we intend to lead by example: with integrity, transparency, and respect for the individual.