I asked Twitter your questions about verification – here’s what they said

With the #VerifyDisabledTwitter campaign raising more questions than answers about Twitter’s elusive verification process, campaign founder and Deaf journalist Liam O’Dell secured an exclusive chat with Twitter to learn more about the blue tick.

“Great point, [it’s] something we should fix,” a Twitter spokesperson tells me on a Google Meet call last week. Rarely do individuals have direct access to Twitter, and rarer still does the platform agree that a problem you’ve identified needs fixing. On a weekday evening, both happened to me.

The issue itself lies in Twitter’s verification system. More than 100 disabled activists and organisations have had requests to be verified denied by the platform. If those activists were to apply again now, under the activist criteria, they would be told that “you aren’t eligible for verification in this category” because they don’t meet the 100k follower threshold. Nor can rejected disabled applicants complete the demographic survey, which Twitter launched to identify any potential algorithmic bias; as a result, the platform doesn’t have a “statistically significant volume yet” of responses, either.

“The way we designed [the survey] was that we wanted it not to feel like it was going to be part of the application,” the spokesperson explains. “We wanted it to be like ‘this isn’t going to impact [your application], we’re not going to verify you because of what you put here, or reject you because of what you put here’.”

“So we wanted them to feel really separate and make it come at the end,” they continue. “That’s actually a really good point that we are preventing folks from finishing the application early, and getting them into the survey would be valuable.”

Our conversation comes just weeks after Twitter chose to remove its option in the activist category to have a ‘hashtag movement’. By providing a URL, accounts could try to demonstrate that they had created a hashtag campaign “that is capturing a large volume of conversation within a given community”. After this criterion was removed in mid-July, campaigners were left with two on-platform options: to be in the top 0.05% for either following (just under 100k in the UK and US) or engagement (which, according to the spokesperson, takes into account “likes, retweets [and] quote tweets”).

I ask them about the 100k follower requirement, an “unobtainable” target which is hard to reach when you’re a marginalised person belonging to a small community. “We added the hashtag requirement particularly with this kind of consideration in mind,” they explain. “We wanted to be able to capture folks who are leading a movement who maybe hadn’t built a large following, but what ended up happening is just that the applications we were getting with the hashtag were really hard to evaluate.”

“There’s a lot of people who can tweet a hashtag for the first time and generate something really notable,” they add, “so it just means that trying to be fair and consistent about the hashtag was trickier than we had anticipated. It was way murkier, in terms of actually being useful.”

“Trying to be fair and consistent about the hashtag was trickier than we had anticipated.”

The verification team – which, I learn, includes around 50 to 100 people working on applications – weren’t able to “make good decisions” with the criteria, and so took it down, leaving the remaining two options.

The spokesperson tells me: “I think that the high follower requirement, you should take the requirements we’ve launched with as sort of the first pass, right? We didn’t want to launch it with something too low, where there were a lot of folks who didn’t make sense to be verified, from all across Twitter, or getting verified sort of [suspiciously].

“That requirement may end up being too high, we may lower it for everybody,” they continue, “but I think more likely and what the team is focused on is looking for more ways to find the notable folks in these communities and reliably identify them.

“I think, currently, we have a gap in our criteria about blogging,” reveals the spokesperson. “So we do have people who are very notable bloggers, who are building a following and saying important stuff, but it’s not a verified news outlet and we don’t capture that yet. So, I think there’s a lot of room for us to find more and better signals that help us capture the nuance of communities, and it can be tricky.”

One thing which isn’t such a signal is impersonation. In a Twitter Space in June, an employee confirmed that the action – a flagrant breach of the platform’s rules – will not be factored into verification decisions, even when some might consider it evidence of notability.

“We really do want to enforce against [impersonation] and protect people from it, and the blue check just isn’t really the right solution for it,” the spokesperson says. “The blue check is much more about protecting the conversation, not about enforcing policy for an individual.”

“The blue check is much more about protecting the conversation, not about enforcing policy for an individual.”

They also confirm that the impact of verification on marginalised communities is a high priority. “Of course, from community to community, some of the needs and requirements vary, but we really want to create a fair verification system,” they say. “We know it matters a lot.

“We also know that we can’t get it perfect, so I will say that we want it to prioritise transparency and consistency,” they confirm, “so that we could get to a point where we could have a productive conversation about the gaps in the policy and it wasn’t just, like, ‘well, seems random’ or anything like that, and then focus on improving from there.”

Yet questions have already been raised over consistency when one autistic writer, Pete Wharmby, was verified. With just over 41,000 Twitter followers at the time of writing, he’s far below the 100,000-follower threshold required by the platform. When I asked supporters of the #VerifyDisabledTwitter campaign to suggest questions for me to ask, one pointed out that they know of only one disabled activist, Imani Barbarin, who is verified with more than 100k followers.

Accompanied by the usual caveat that they can’t talk about specifics, and they don’t know about the above accounts (namely because I failed to recall the exact accounts during the interview), the spokesperson explains: “It’s really hard to know how people are applying for verification. Very often, we’re seeing people get rejected, and there’s an assumption that they applied under a certain category, and that anybody looking at their account, who has followed them for a long time, would know that they meet that category, for whatever reason.

“It’s really hard to know how people are applying for verification.”

“Then I say like, ‘wow, how did we reject that person?’ So I go look, it turns out they applied under a totally different category with requirements [they] obviously didn’t meet,” they go on to add. “The person who made the decision made the right call based on the application, even though a [reasonable] person who has followed them for a long time might say like they obviously meet some other criteria.

“So I’d say that this part of the process is still opaque. [What] can leave these decisions confusing is how did the person apply? It’s possible that they met the criteria for a different category. It might surprise you,” they conclude.

Even though I was unable to recall specific accounts, in Wharmby’s case, he informs me that the blue tick was given to him following his application under the activist category – a category which, in addition to the on-platform criteria, requires applicants to meet one off-platform option.

This takes the form of a Google Trends profile; a Wikipedia article; a reference to their account, which indicates a leadership position, published on a website known for advocacy work; or three or more appearances in news outlets within the past six months.

Can an application be rejected if, say, there are 99 pieces of off-platform evidence indicating notability, but the applicant fails to demonstrate a significant on-platform presence? “I’ll say that the straightforward answer is that yes,” the spokesperson replies. “We really want to publish criteria that defines how we make a decision and to be really consistent with it.

“If somebody doesn’t meet the criteria, they will reliably not get the badge,” they add. “When that’s egregious, like when we see folks who should have been verified but got rejected – which we see a bunch – that pushes us to change the policy, not to make exceptions.”

“When we see folks who should have been verified but got rejected – which we see a bunch – that pushes us to change the policy, not to make exceptions.”

The recent situation with Tom Cruise prompts me to ask the question. The Mission: Impossible star was ‘deepfaked’ (as in, his face was transposed onto someone else’s, who impersonated him) on a TikTok account known as @DeepTomCruise. Not long after the videos surfaced, Cruise set up his own account on the platform – most likely to avoid confusion – and was verified, even though the actor has yet to post a single video.

In Twitter’s case, the representative tells me, Cruise would have been captured by their entertainment category, even if he had zero followers on the platform.

Followers remain the main point of contention for disabled activists on Twitter. While many accounts are far from the 100k target to apply for verification, six bot accounts with only a few thousand followers each gained the blue tick before all of them were taken down. More recently, a fake account for author Cormac McCarthy was verified for a while, before it too was suspended by the platform.

“What I’d say for folks who are passionate about [and] who care about verification who want to change the way it works,” the spokesperson says towards the end of our conversation, “[is that] we know it’s not done, it has a long way to improve. The most helpful feedback is that kind of criteria. How could we know?

“How would we know which folks in the community are deserving, in a way that would like only isolate that, so we can publish that criteria, we can be consistent in evaluating it, and we would get the right people, but it wouldn’t just create a ton of noise? That’s what we struggle with all day and the more a community can help us understand that about their community, the easier it is for us to support them,” they conclude.

When I ask about the best way to share feedback, I’m informed that it’s through a dedicated hashtag, #VerificationFeedback, and by tweeting @Verified, as tweets to that account get put into a report.

It’s also worth tweeting with #VerifyDisabledTwitter, as the spokesperson tells me they are aware of the hashtag, and they have been following it.

Twitter’s full verification policy can be found on its website.

