AJIT KRISHNA

Privacy as Moat: Why I Designed Myself Out of My Own Data

The standard product advice is simple: collect data, analyze patterns, improve the product. Every PM playbook says it. The more you know about users, the better decisions you make.

I'm building an AI that helps people work through relationship stuff. And I'm designing it so I literally cannot read user conversations.

Not "won't read." Can't.

The Problem with "Trust Me"

When you're building for sensitive categories, that playbook breaks down.

People don't open up about their relationships if they think someone is watching. They filter. They hedge. They give you the sanitized version.

This is the core tension I keep coming back to. I want to improve SAM. That means understanding where conversations go wrong, what questions land, what reflections help. The typical approach would be to log conversations and analyze them.

But logging conversations would undermine the very thing that makes them valuable.

Think about the difference between venting to a friend versus posting on social media. With a friend, you say what's actually going on. On social media, you perform a version of yourself. If users believe their conversations with SAM might be read, they'll share the version that makes them look reasonable. The messy parts, the things they're embarrassed about, the stuff they've never said out loud, those stay hidden.

The whole point of SAM is to be that friend who really listens. The one you can tell the unfiltered version to. Privacy isn't a feature. It's a prerequisite.

Technical Inability Over Policy Promises

Here's the design decision I'm making: user-keyed encryption.

Every conversation will be encrypted with a key derived from a passphrase the user sets. I won't have it. I can't recover it. If a user forgets their passphrase, their conversation history is gone. There's no "forgot password" flow.

This sounds like a product failure. It's actually the point.

"I won't see your data" is a policy promise. It requires users to trust my intentions, my security practices, my team (if I ever have one), my investors (if I ever take them), and every future version of myself.

"I can't see your data" is a technical reality. It doesn't require trust in my character. Even if I wanted to read conversations, even if I was subpoenaed, even if the database was hacked, the content is inaccessible without the user's key.

The same principle that makes Signal and WhatsApp trustworthy for messaging should make SAM trustworthy for talking through relationship stuff.
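To make this concrete, here's roughly what user-keyed encryption looks like. This is a minimal sketch, not SAM's actual code: the function names are mine, and it assumes Python's cryptography library, a standard passphrase-to-key derivation (PBKDF2), and symmetric encryption (Fernet).

    # Sketch of passphrase-derived, client-side encryption.
    # Hypothetical names; not SAM's actual implementation.
    import base64
    import os

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC


    def derive_key(passphrase: str, salt: bytes) -> bytes:
        # Key derivation happens on the user's device; the server only
        # ever sees the salt and the resulting ciphertext.
        kdf = PBKDF2HMAC(
            algorithm=hashes.SHA256(),
            length=32,
            salt=salt,
            iterations=600_000,  # deliberately slow to resist brute force
        )
        return base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))


    def encrypt_conversation(passphrase: str, plaintext: str) -> tuple[bytes, bytes]:
        # Encrypt so only the passphrase holder can ever read it back.
        salt = os.urandom(16)
        token = Fernet(derive_key(passphrase, salt)).encrypt(plaintext.encode())
        return salt, token  # both are safe to store server-side


    def decrypt_conversation(passphrase: str, salt: bytes, token: bytes) -> str:
        # Decryption needs the same passphrase. There is no recovery path.
        return Fernet(derive_key(passphrase, salt)).decrypt(token).decode()

The server stores the salt and the ciphertext, nothing else. Lose the passphrase and decryption is impossible by construction, which is exactly the "no forgot password flow" tradeoff above.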

What I'm Giving Up

This decision has real costs. Let me be specific about what I'm trading away:

Debugging. When something goes wrong, I won't be able to look at the conversation to understand what happened. I'll have to rely on abstracted signals: did the user come back? How long was the session? Did they report it as helpful?

Content analysis. I can't train future models on user conversations. I can't analyze what topics come up most. I can't identify common patterns by reading actual exchanges.

Quality improvement. Most AI products improve by examining where the model failed and fine-tuning on those cases. I can't do that at the content level.

Troubleshooting. When a user says "SAM said something weird," I'll have to take their word for it. I can't pull up the transcript.

These are significant limitations. Any PM would tell me I'm handicapping myself.
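Concretely, the "abstracted signals" I mentioned under debugging look something like this: metadata about a session, never its content. This is a rough sketch with illustrative field names, not a real schema.

    # Content-free quality signals: session metadata only, never the
    # conversation text. Field names are illustrative.
    from dataclasses import asdict, dataclass
    from datetime import datetime, timezone


    @dataclass
    class SessionSignal:
        session_id: str                      # random identifier, not tied to content
        started_at: datetime
        ended_at: datetime
        message_count: int                   # how long the exchange ran
        returned_within_7d: bool             # did the user come back?
        self_reported_helpful: bool | None   # optional thumbs up / thumbs down

        @property
        def duration_minutes(self) -> float:
            return (self.ended_at - self.started_at).total_seconds() / 60


    signal = SessionSignal(
        session_id="a3f9",
        started_at=datetime(2024, 1, 5, 21, 0, tzinfo=timezone.utc),
        ended_at=datetime(2024, 1, 5, 21, 34, tzinfo=timezone.utc),
        message_count=18,
        returned_within_7d=True,
        self_reported_helpful=True,
    )
    print(asdict(signal))  # all of this can go into ordinary analytics

Everything in that record can be analyzed in aggregate. The transcript itself stays behind the user's key.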

What I Hope to Gain

But here's the bet I'm making:

When users actually believe their conversations are private, they'll tell you what's really going on.

Not the version they'd tell a coworker. Not the version that makes them look reasonable. The actual situation, with their actual fears, with the things they'd only tell their closest friend.

SAM only works when someone is honest about their own role in the dynamic. If users are filtering for an audience, they filter out exactly the material that would make the conversation useful.

The privacy architecture doesn't just protect users. It should make the product better at its core function.

I'm betting that what I lose in debugging capability, I gain back in the quality of what users are willing to share.

Privacy as Structural Advantage

Most companies treat privacy as a compliance checkbox. GDPR, CCPA, cookie banners, privacy policies no one reads.

In sensitive categories, privacy can be something else entirely: a structural competitive advantage.

Here's the logic:

  1. Users talking about relationships need to feel safe to get value from the product
  2. Feeling safe requires genuine privacy, not just promised privacy
  3. Genuine privacy means technical constraints, not policy commitments
  4. Technical constraints are hard to copy because they require giving something up
  5. Competitors optimizing for data collection can't easily switch

The companies that could compete with SAM on conversation quality would struggle to compete on privacy. They've already built their businesses around data access. Switching to user-keyed encryption would break their analytics, their ML pipelines, their ability to debug. It's not a feature they can add. It's an architecture they'd have to rebuild.

When This Strategy Makes Sense

I'm not arguing every product should encrypt user data away from the company. That would be ridiculous. Most products benefit from data access and don't have the same trust dynamics.

But if you're building in sensitive categories where user honesty is the product, consider whether data access is actually serving you.

Some signals that privacy-as-moat might fit:

  • Users need to share things they're embarrassed about
  • The value of the product depends on users being honest, not performing
  • Trust barriers are high and slow down adoption
  • Competitors are optimizing for data collection and analytics
  • You can build quality signals without accessing content

The tradeoff is real. I'll be flying partially blind. But sometimes what you can't see is exactly what allows users to show you everything.

The Bet

I'm betting that users who genuinely trust SAM will share more, engage more deeply, and get more value than users who are filtering for an audience they suspect is watching.

I'm betting that word-of-mouth from people who experience real privacy will outweigh the optimization I could do with content analysis.

I'm betting that "I can't see your data" is a more durable moat than any feature I could build with that data.

We'll see if I'm right. SAM isn't live yet. These are pre-launch decisions, not proven strategies.

But I'd rather build this way and learn I was wrong than build the other way and never know what users would have shared if they'd felt truly safe.
