Twitter recently decided to ban a user for inciting harassment. I am not going to weigh in on that, as I feel it will be explored enough by others. Instead, I find it interesting that the decision has created a larger conversation around Twitter’s verified identity program. The program, available only to people of note and to companies, is designed to give Twitter readers confidence that an account claiming to be a particular person or company really is who it says it is.

I believe there are two issues that merit discussion here. First, verified identification shouldn’t be restricted just to celebrities and corporations. Second, users should be given more control over their feeds.

I think anyone should be allowed to verify their identity, and users should be encouraged, but not required, to do so. To be clear, I am not proposing something like what is in place in China and some other countries, where every internet user must be identified. Instead, I’d like to see voluntary identification. I envision three tiers:

  1. Verified users, who meet today’s verification standards, including the requirement to have a semblance of their real or stage name on their account.

  2. Pseudonym users, who have verified their identity with the service but whose accounts don’t expose their real information in a publicly accessible manner.

  3. Unverified users, who just have an account.

I feel these three options protect privacy while also leaving room for anonymity. Users can choose to speak anonymously, pseudonymously, or in a fully publicly accountable manner.
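To make the tiers concrete, here is a minimal sketch of how a platform might model them. This is purely illustrative; the type and value names are my own assumptions, not anything Twitter actually exposes.

```python
from enum import Enum

class VerificationTier(Enum):
    """The three proposed identity tiers (names are illustrative, not Twitter's)."""
    VERIFIED = "verified"      # real or stage name verified and shown publicly
    PSEUDONYM = "pseudonym"    # identity on file with the service, hidden from the public
    UNVERIFIED = "unverified"  # just an account, no identity information
```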

I believe that having anonymous users can be useful in a conversation. Some of them may be whistleblowers or people hiding from governmental persecution. Some of them will also be asshats. But, at the end of the day, research tells us that people are less likely to contribute to the conversation if anonymity is not an option, and even our simplest opinions could have a negative impact in some situations. This set of tiers preserves that option. In particular, the pseudonym tier lets someone be effectively anonymous, assuming the company cannot be compelled to turn over their identity. Together, the tiers allow for the positive contributions anonymity can bring while still letting asshats be held accountable or hidden.

Building on these user types, we can create some new feed filtering options. Specifically, filtering should be available as follows:

  1. No messages, retweets, etc. in a timeline or via direct message except from verified users.

  2. No messages, retweets, etc. in a timeline or via direct message except from verified or pseudonym users.

  3. Messages from all users are allowed in a timeline or via direct message.

Finally, to filters 1 or 2 we can optionally add: messages from anonymous (unverified) accounts are allowed in a timeline or via direct message only if that account has crossed some threshold of followers, retweets, etc. This lets your community help you surface anonymous content of interest, providing exposure for the whistleblower’s message while hiding the troll’s.
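As a rough sketch of how the filter levels and the optional threshold might compose, building on the VerificationTier enum above: this is not a real Twitter feature or API, and every name and number here is an assumption for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Account:
    tier: VerificationTier      # from the enum sketched earlier
    follower_count: int

@dataclass
class FilterPolicy:
    """A reader's feed filter: which tiers to accept, plus an optional reach
    threshold that lets popular unverified accounts through anyway."""
    allowed_tiers: set[VerificationTier]
    unverified_follower_threshold: int | None = None  # None = no exception

def allow_in_feed(sender: Account, policy: FilterPolicy) -> bool:
    """Decide whether a message, retweet, or DM from `sender` reaches this reader."""
    if sender.tier in policy.allowed_tiers:
        return True
    # Optional add-on to filters 1 and 2: surface an unverified account once
    # the community has pushed it past the reader's follower threshold.
    return (
        sender.tier is VerificationTier.UNVERIFIED
        and policy.unverified_follower_threshold is not None
        and sender.follower_count >= policy.unverified_follower_threshold
    )

# Filter 2 from the list above, with the optional threshold enabled:
policy = FilterPolicy(
    allowed_tiers={VerificationTier.VERIFIED, VerificationTier.PSEUDONYM},
    unverified_follower_threshold=10_000,  # arbitrary illustrative number
)
```

The point of the sketch is simply that the decision is per-reader policy, not a platform-wide rule: each user picks the tiers they want to hear from, and the threshold is the community-driven escape hatch for anonymous content.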

I am sure there are challenges here, but this framework seems to work for promoted and suggested content as well, and to be fairly portable across social media platforms.

I recognize that this kind of filtering has the potential to increase the isolation or echo-chamber effect of social media; however, that is a personal choice. If someone wants to build an echo chamber to live in, freedom says we let them. We can argue that it is bad for lots of reasons, but in the end it is a personal choice. It is not Twitter’s, or anyone else’s, responsibility to convince people that listening to voices they don’t agree with is a good thing. Regrettably, this is a lesson that must be taught through experience and time.

Note: None of this should be construed as eliminating the requirement that harassment be dealt with.