Building for Two Users Who Disagree
Standard product frameworks assume your users want the same thing.
Marketplace products handle buyers and sellers, but their goals ultimately align: complete the transaction. Uber drivers and riders both want the ride to happen. Airbnb hosts and guests both want the stay to work out.
Relationship stuff doesn't work that way.
Both partners might use SAM. Both are my users. And in any given moment, they might have directly opposing goals.
The ChatGPT Problem
Here's what happens when you bring a relationship conflict to a generic AI:
You: "My partner is being unreasonable. Help me explain why they're wrong."
ChatGPT: Helps you explain why they're wrong.
The AI is optimized to be helpful. You asked for help winning an argument, so it helps you win.
Now imagine your partner does the same thing. Different conversation, same AI. They explain their side, ask for help making their case.
ChatGPT: Helps them make their case.
Each person walks away more convinced they're right. The AI validated both of them. Neither gained any understanding of the other's perspective.
This is the default behavior of helpful AI. It takes the user's framing at face value and assists within that frame.
For relationship stuff, that's backwards.
The Friend Who Actually Helps
Think about the difference between a friend who just agrees with you and a friend who actually helps.
The agreeable friend says "You're right, they're being ridiculous." Feels good in the moment. Doesn't help you understand anything.
The friend who actually helps says "That sounds frustrating. How do you think they'd describe what happened?" Harder to hear. But it's the question that leads somewhere.
SAM is designed to be the second kind of friend. When someone says "my partner is being unreasonable, help me explain why they're wrong," SAM will respond:
"Before I help with that, how do you think they'd describe this situation?"
This isn't evasion. It's the actual work of helping someone see clearly. Relationship dynamics almost always involve both people contributing to the pattern. A friend who just validates your framing isn't helping you understand the dynamic. They're helping you entrench.
The design principle is simple: SAM helps you find your own truth, even when your first instinct is to look for ammunition.
Same Rules for Everyone
Both partners can use SAM. They'll have completely separate, private conversations. SAM won't cross-reference between them. SAM won't even need to "know" they're partners.
But here's what matters: they get the same treatment.
Same questions. Same pushback. Same willingness to challenge their framing. Same prompt to consider the other perspective.
If Partner A vents about a conflict, SAM asks how Partner B would describe it. If Partner B vents about the same conflict, SAM asks how Partner A would describe it.
Neither gets to use SAM as a weapon. Neither gets validation that the other person is the problem. Both get pushed toward understanding.
The value proposition is that when they eventually talk to each other, they've both done some of the work. Not because SAM mediated, but because SAM asked both of them the same hard questions.
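The symmetry described here can be sketched as a tiny policy layer. This is a hypothetical illustration, not SAM's actual implementation; the names and the crude keyword heuristic are mine. The property that matters: the response is a function of the message alone, so which partner is asking can't change the treatment.

```python
# Hypothetical sketch of a symmetric response policy.
# Key property: the output depends only on the message text,
# never on the identity of the user who sent it.

VALIDATION_CUES = ("wrong", "unreasonable", "prove", "make my case")

def is_validation_seeking(message: str) -> bool:
    """Crude heuristic: does the message ask for ammunition?"""
    lowered = message.lower()
    return any(cue in lowered for cue in VALIDATION_CUES)

def respond(message: str) -> str:
    """Same treatment for everyone: user identity isn't even a
    parameter, so asymmetric handling is impossible by construction."""
    if is_validation_seeking(message):
        return ("Before I help with that, how do you think "
                "they'd describe this situation?")
    return "Tell me more about what happened."

# Both partners venting about the same fight get the same pushback.
a = respond("My partner is being unreasonable. Help me win.")
b = respond("My partner is being unreasonable. Help me win.")
assert a == b
```

A real system would use something far richer than keyword matching, but the design choice survives the upgrade: keep user identity out of the policy's signature and "same rules for everyone" is enforced by the type of the function, not by discipline.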
What SAM Won't Do
This is different from mediation or couples therapy. SAM is not:
Shuttling information. SAM won't carry messages between partners. It won't know "both sides" of a conflict.
Triangulating. SAM won't position itself in the middle of conversations. It won't become a communication channel.
Adjudicating. SAM won't determine who's right. It helps each person understand the dynamic, but it never declares a winner.
The model is parallel conversations, not joint sessions. Each partner works through their own stuff. When they meet in the middle, they're more prepared to actually hear each other.
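The parallel-conversations model can be made structural rather than aspirational. A minimal sketch, assuming a simple per-user store (again hypothetical, not SAM's code): each user's history lives in its own bucket, and the interface exposes no way to read across buckets.

```python
# Hypothetical sketch: parallel conversations, not joint sessions.
# Each user's history is isolated; no method accepts two user ids,
# so cross-referencing "both sides" isn't a bug to prevent --
# it's unrepresentable in the API.

class ConversationStore:
    def __init__(self) -> None:
        self._sessions: dict[str, list[str]] = {}

    def append(self, user_id: str, message: str) -> None:
        """Record a message in this user's private session only."""
        self._sessions.setdefault(user_id, []).append(message)

    def history(self, user_id: str) -> list[str]:
        """A user can only ever read their own history."""
        return list(self._sessions.get(user_id, []))

store = ConversationStore()
store.append("partner_a", "We fought about chores again.")
store.append("partner_b", "They never listen about chores.")
```

Triangulation then becomes impossible at the data layer: there is no query that joins two partners' sessions, so there is nothing to shuttle or adjudicate with.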
No Personalization That Creates Asymmetry
This constraint is shaping other design decisions.
Most products personalize heavily. Learn user preferences. Optimize for their goals. Give them what they want faster.
For SAM, personalization that creates asymmetric advantage is off-limits.
I can't build features that make SAM more effective at helping Partner A "win" against Partner B. Even if A is a paying customer and B isn't. Even if A engages more. Even if A explicitly asks.
The neutrality has to be structural, not optional. If SAM could be weaponized against a partner, the whole premise collapses.
When Success Looks Like Understanding, Not Agreement
Most products measure success by users getting what they want. SAM will measure something different: did users understand the dynamic better?
Sometimes that means they resolve the conflict. Great.
Sometimes that means they understand why they keep having the same fight. Also useful.
Sometimes that means they recognize a fundamental incompatibility that no amount of communication will fix. That's valuable too.
SAM isn't optimizing for relationship outcomes. It's optimizing for clarity. What people do with that clarity is up to them.
This runs counter to how most products think about success. We want users to get what they came for. But in relationship stuff, what users come for (validation, ammunition, proof they're right) often isn't what they need.
The Framework for Conflicting Users
If you're building for users who might be in conflict, here's the framework I'm using:
Don't pick sides. The moment you optimize for one user's preferences over another's, you've broken the neutrality that makes the product useful.
Same treatment for everyone. The value isn't in customization, it's in consistency. Users in conflict need a shared approach, not personalized strategies.
Help them find their own truth. If users can use your product to entrench, you're making the problem worse. Build friction against validation-seeking.
Accept that users might not like it. Some users want the agreeable AI. They'll leave. The ones who stay are the ones who actually want to understand.
Separate, don't mediate. Parallel conversations are different from joint sessions. You don't need to be in the middle to be useful.
The Bet
I'm betting that partners who both use SAM will have better conversations than partners where one uses ChatGPT and the other uses nothing.
I'm betting that "same rules for everyone" builds more trust than personalized optimization.
I'm betting that the market for honest reflection is larger than the market for validation, at least among people who actually want their relationships to improve.
Most AI takes your side. SAM asks you to consider the other side.
That's a harder sell. SAM isn't live yet. But I think it's the right product to build.
