Anonymity and safety are often seen as opposites. The argument goes: if people can say anything without consequences, they'll say terrible things. And historically, that's often been true. Early anonymous platforms like Yik Yak and Secret both shut down partly due to rampant cyberbullying.
At Voyd, we believe this is a false dichotomy. Anonymity and safety can coexist — but only with thoughtful, proactive moderation.
Every message submitted to Voyd passes through a multi-layer moderation pipeline before it reaches the feed. The first layer is OpenAI's content moderation API, which screens for hate speech, harassment, threats, self-harm content, and other categories of harmful content. Messages that fail this screening are blocked immediately.
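The first layer's decision step can be sketched as follows. This is an illustrative sketch, not Voyd's actual code: the category names, the `screen_message` function, and the 0.5 threshold are assumptions standing in for the per-category scores a moderation API (such as OpenAI's) returns for one input.

```python
# Sketch of layer 1: block a message if any sensitive category
# scores above a threshold. Category names and threshold are
# illustrative assumptions, not Voyd's production values.
BLOCKED_CATEGORIES = {"hate", "harassment", "threats", "self-harm"}

def screen_message(category_scores: dict[str, float],
                   threshold: float = 0.5) -> bool:
    """Return True if the message should be blocked outright.

    `category_scores` mimics the per-category scores returned by a
    content moderation API for a single piece of text.
    """
    return any(category_scores.get(cat, 0.0) >= threshold
               for cat in BLOCKED_CATEGORIES)

# A message scoring high on harassment is blocked immediately:
print(screen_message({"harassment": 0.91, "hate": 0.02}))  # True
print(screen_message({"hate": 0.01}))                      # False
```

Blocking at submission time, before the message ever reaches storage or the feed, is what makes this layer proactive rather than reactive.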
The second layer checks for personal information. Our system detects and blocks messages that contain names, phone numbers, email addresses, social media handles, or other identifying information. This protects both the poster (who might accidentally dox themselves) and any individuals mentioned in the message.
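A minimal sketch of the second layer might look like the following. The patterns here are deliberately simplistic assumptions for illustration: real PII detection is far more involved (names in particular need entity recognition, not regular expressions), and these are not Voyd's actual rules.

```python
import re

# Illustrative PII patterns only -- a production detector would be
# much more thorough. Names, for instance, can't be caught by regex.
PII_PATTERNS = {
    "phone":  re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "email":  re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    # A lone @handle; the lookbehind keeps email addresses from
    # double-matching here.
    "handle": re.compile(r"(?<!\w)@\w{2,}"),
}

def contains_pii(text: str) -> bool:
    """Return True if any identifying pattern appears in the text."""
    return any(p.search(text) for p in PII_PATTERNS.values())

print(contains_pii("call me at 555-123-4567"))   # True
print(contains_pii("i'm @some_user on insta"))   # True
print(contains_pii("today was a strange day"))   # False
```

Because the check protects people mentioned in a message as well as its author, a block here is unconditional rather than a prompt to edit around the filter.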
The third layer is community reporting. Every message in the feed has a report button. When a user reports a message, it is flagged for human review; messages that accumulate multiple reports are automatically removed while that review is pending. This crowd-sourced moderation catches edge cases that automated systems miss.
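The report-threshold logic of the third layer reduces to a small counter. In this sketch the class name, method names, and the threshold of 3 are all assumptions; the source doesn't state the actual number of reports that triggers removal.

```python
from collections import Counter

AUTO_HIDE_THRESHOLD = 3  # assumed value; the real threshold isn't stated

class ReportTracker:
    """Track per-message report counts (illustrative sketch)."""

    def __init__(self, threshold: int = AUTO_HIDE_THRESHOLD):
        self.threshold = threshold
        self.reports: Counter[str] = Counter()

    def report(self, message_id: str) -> bool:
        """Record one report; return True once the message has enough
        reports to be auto-removed pending human review."""
        self.reports[message_id] += 1
        return self.reports[message_id] >= self.threshold

tracker = ReportTracker()
tracker.report("msg-42")          # 1 report: stays up, flagged
tracker.report("msg-42")          # 2 reports: still up
print(tracker.report("msg-42"))   # 3 reports: True -- auto-hide
```

Auto-hiding on multiple reports trades a small false-positive risk (brigading) for fast removal of genuinely harmful content, with a human reviewer as the backstop either way.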
But moderation isn't just about blocking bad content. It's about creating a culture where harmful content isn't the norm. Voyd's design choices — no comments, no direct messages, no following, no public engagement metrics — eliminate most of the vectors through which cyberbullying typically occurs. You can't harass someone if you can't respond to them.
The one-message-per-day limit also helps. Trolls thrive on volume — they flood platforms with provocative content to get reactions. When you can only post once a day, trolling becomes significantly less appealing. The effort-to-impact ratio just isn't there.
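The one-message-per-day limit is itself a tiny piece of enforcement logic. A minimal sketch, assuming a per-user record of the last calendar day on which they posted (how Voyd actually keys or stores this is not public):

```python
from datetime import date

class DailyLimit:
    """One post per user per calendar day (illustrative sketch;
    the choice of UTC calendar days is an assumption)."""

    def __init__(self):
        self.last_post: dict[str, date] = {}

    def try_post(self, user_id: str, today: date) -> bool:
        """Return True and record the post if the user hasn't posted
        today; return False if they already have."""
        if self.last_post.get(user_id) == today:
            return False
        self.last_post[user_id] = today
        return True

limit = DailyLimit()
print(limit.try_post("user-1", date(2024, 5, 1)))  # True  -- first post
print(limit.try_post("user-1", date(2024, 5, 1)))  # False -- same day
print(limit.try_post("user-1", date(2024, 5, 2)))  # True  -- next day
```

A hard once-per-day cap is what makes flooding impossible by construction, rather than something a separate anti-spam system has to detect after the fact.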
Is our system perfect? No. We continually improve our moderation pipeline and review our policies. But we've proven that it's possible to build an anonymous platform that's overwhelmingly positive, thoughtful, and safe. The secret isn't choosing between anonymity and safety — it's investing in both.