
Philosophies on Verification

I've spent years designing verification experiences for millions of users. I have some thoughts on why the rest of the internet desperately needs to catch up.

I've spent a big chunk of my career thinking about verification. Not in an abstract, philosophical sense, although we'll get there. In a very practical, UX-in-the-trenches sense. At WeSalute, every single member goes through an identity and affinity verification process. We need to know who you are and that you actually belong to the community we serve. Military service members, veterans, nurses, first responders. Real people with real credentials, not just an email address and a username someone invented at 2am.

Designing that experience has taught me a lot. It's genuinely hard to get right. Verification needs to be rigorous enough to mean something but frictionless enough that real people don't give up halfway through. You're asking someone to trust you with sensitive information before they've experienced a single benefit of your product. The UX bar is high and most companies don't clear it.

But the more time I've spent in this space, the more I've started thinking about verification beyond the products I work on. As a designer. As someone who cares about how technology shapes behavior. And as a dad who watches his kids navigate the internet and feels varying degrees of dread about it.

I have some takes. Some of them might be unpopular.

What verification actually does

Before the hot takes, it's worth being precise about what verification is and isn't.

Verification isn't authentication. Logging in with a password confirms you are who you were yesterday when you set up the account. Verification confirms something true about you in the real world. That you served in the military. That you hold a nursing license. That you are, at minimum, a human being with an actual identity attached to your behavior.

The distinction matters because authentication alone is basically useless as a trust signal. Anyone can make an account. Anyone can log in. What verification does is create a link between digital behavior and real-world accountability. And accountability, as it turns out, changes how people act.

This is not a new idea. Marcus Aurelius wrote in his Meditations that a person's character is revealed by what they do when they think no one is watching. He was talking about individual virtue, but the principle scales. When there are no consequences for your behavior, when no one knows who you are and nothing can be traced back to you, the worst impulses tend to surface. Not for everyone. But for enough people to ruin the experience for everyone else.

The social media problem

Here's the take.

I believe that real identity verification, mandatory and universal, would make social media meaningfully healthier. Not perfect. Not free of conflict or disagreement or even genuine nastiness. But healthier in a way that would be immediately noticeable.

The majority of the most toxic behavior on social media (harassment campaigns, coordinated pile-ons, the kind of comments that make you want to close the app and never open it again) is enabled by anonymity. Not caused by it, necessarily. The impulse exists in real people. But anonymity removes the friction that keeps most of us from acting on our worst instincts in public.

Think about how people behave in traffic. Cocooned in their cars, separated from consequences and eye contact, people do things they would never do if they were face to face with the same person on a sidewalk. The internet, at its most anonymous, is the biggest traffic jam in human history.

Epictetus taught that we suffer not from events themselves but from our judgments about them, and that we always have a choice in how we respond. That's true. But it's also true that the design of a system either supports or undermines good choices. A platform built around anonymity and outrage optimization is not a neutral environment. It is an actively hostile one, and redesigning it around real accountability would change the incentive structure in ways that matter.

The arguments against, and why I still believe what I believe

I know the counterarguments. I've thought about them seriously.

Privacy. Genuine and important. There are real situations where anonymity protects people. Whistleblowers. Abuse survivors. People living in places where their identity could put them in danger. Any verification system worth implementing would need to account for these cases thoughtfully, which is genuinely hard to do.

Accessibility. Not everyone has government-issued ID. Tying participation in digital public life to documentation that not everyone can access creates its own class of inequity. This is a real problem and it doesn't have an easy answer.

Government overreach. The concern that a verification requirement is one step away from state surveillance of online speech is not paranoid. It's historically grounded. Who holds the verification data, how it's protected, and what it can be used for are questions that deserve serious legal and structural answers before anything like this gets implemented at scale.

These are all legitimate objections. I take them seriously. And I still think, on balance, verified identity is a net positive for the health of public digital spaces.

Here's why. The current system isn't neutral. Choosing not to verify is still a choice with consequences, and those consequences are playing out in real time. What we have right now is a system optimized for engagement at the expense of accountability, and the results are not good. Bad for adults, and worse for kids.

What this means for product designers

If you're building anything with a community component (a comments section, a forum, a social feed, a review system), you are making choices about verification whether you think about it that way or not. Defaulting to anonymous participation is a design decision. It shapes the culture of your product in ways that compound over time.

The best verification experiences I've seen, and tried to build, share a few things in common. They explain clearly what's being verified and why, which builds trust instead of resistance. They make the process as fast as humanly possible, because every extra step is someone who gives up. They confirm the user's status without making them feel interrogated. And they deliver an immediate, tangible benefit on the other side, so the effort feels worth it.
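The "every extra step is someone who gives up" point has simple funnel math behind it. Here's a minimal sketch of the idea, with the flow steps named after the principles above; all names and the per-step completion rates are illustrative assumptions, not a real API or real data.

```typescript
// Hypothetical sketch: model each verification principle as an explicit
// step, so drop-off can be reasoned about (and measured) per step.
// Step names and rates are illustrative, not from a real product.

interface Step {
  id: string;
  rationale: string; // what the user is told: what's verified and why
}

const flow: Step[] = [
  { id: "explain", rationale: "Say what will be checked and why." },
  { id: "collect", rationale: "Ask for the minimum credentials needed." },
  { id: "confirm", rationale: "Confirm status without interrogating." },
  { id: "reward",  rationale: "Deliver an immediate, tangible benefit." },
];

// End-to-end completion is the product of per-step completion rates,
// which is why removing even one step matters so much.
function completionRate(perStep: number[]): number {
  return perStep.reduce((acc, r) => acc * r, 1);
}

// Four steps at 90% completion each keep only ~66% of users overall.
const fourSteps = completionRate(flow.map(() => 0.9));
// Trimming a single step lifts that to ~73%.
const threeSteps = completionRate([0.9, 0.9, 0.9]);

console.log(fourSteps.toFixed(2), threeSteps.toFixed(2)); // 0.66 0.73
```

The compounding is the point: a step that looks cheap in isolation (90% of users get through it) still costs you a meaningful slice of everyone who started.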

Getting that right is hard. But the alternative, building communities where the worst behavior is the most cost-free, is harder to live with in the long run.

The version I keep coming back to

Aurelius also wrote something that has stuck with me for years: "Waste no more time arguing about what a good person should be. Be one."

I think about that in the context of verification a lot. We spend enormous amounts of time debating what healthy online discourse should look like, what the rules should be, who should be allowed to say what. But the structural change that would do more to shift behavior than any content policy ever could is simpler and harder and more uncomfortable than any of those conversations.

Make people real. Give their words weight. Attach some consequence, however small, to what they choose to put into the world.

I'm not naive enough to think that fixes everything. But I've seen what accountability does to behavior up close, on playing fields with nine-year-olds and inside product flows with millions of users. It works. Not perfectly. But it works.

And in a space as broken as online public discourse, I'll take imperfect progress over optimized chaos any day.
