Published: 16:52, July 15, 2021 | Updated: 17:00, July 15, 2021
Twitter, Facebook struggle to control racist use of emojis
By Bloomberg

England's players watch the penalty shootout during the UEFA EURO 2020 final football match between Italy and England at the Wembley Stadium in London on July 11, 2021. (LAURENCE GRIFFITHS / POOL / AFP)

A wave of online racism aimed at some of England’s Black soccer players has highlighted how social media companies’ content moderation systems are failing to monitor the use of emojis.

On Sunday, England’s men’s soccer team, playing in their first major tournament final since 1966, fell to Italy on penalties. In the aftermath, a wave of racist abuse was leveled at three Black England players - Marcus Rashford, Jadon Sancho and Bukayo Saka - and messages on social networks like Twitter, Facebook and Instagram included monkey and banana emojis.

The digital abuse isn’t a new phenomenon. The Professional Footballers’ Association and data science company Signify found in a 2020 study of tweets sent to some players that there were more than 3,000 explicitly abusive messages, with 29 percent of the racially abusive posts in the form of emojis, the tiny images and symbols used in texts, emails and other digital communications.

“Twitter’s algorithms were not effectively intercepting racially abusive posts that were sent using emojis,” the study found. “This highlights a glaring oversight.”

But despite the long-standing problem, abuse via emojis has continued. A more recent analysis, published Monday, flagged almost 2,000 potentially abusive tweets targeting Black England players during the European tournament, and said that although a number of the tweets were deleted, Twitter Inc didn’t permanently suspend the accounts.

Social media companies such as Facebook Inc, Twitter and Google, which owns YouTube, have spent years developing algorithms to detect offensive speech so that it can be removed. But experts say that they have put in a smaller effort and developed less expertise in analyzing emoji language - and that has left an opening.

“It’s OK to send a monkey emoji to someone, but if you call someone a monkey, you get banned so that’s the contradiction,” said Vyvyan Evans, a linguistics expert who wrote a book on the subject. “Insufficient effort to date has been focused on policing emojis.”
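The contradiction Evans describes maps directly onto how a simple word-list filter behaves. As a hypothetical, minimal sketch (a toy illustration, not any platform’s actual system), a filter that matches banned words in text will pass the same slur expressed as an emoji unless emoji code points are explicitly added to the list:

```python
# Toy illustration of the emoji loophole in word-list moderation.
# The term list and function are hypothetical, not any platform's real policy.
ABUSIVE_TERMS = {"monkey", "banana"}  # text words only, no emoji code points

def flag_text(message: str) -> bool:
    """Flag a message if any word matches the banned-term list."""
    words = message.lower().split()
    return any(w.strip(".,!?") in ABUSIVE_TERMS for w in words)

print(flag_text("you monkey"))       # the written slur is caught
print(flag_text("you \U0001F412"))   # the monkey emoji slips through
```

Real moderation systems are far more sophisticated than a word list, but the underlying gap is the same: models trained overwhelmingly on words have little signal about what an emoji means in a hostile context.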

Spokespeople for Twitter and Facebook said the companies have been removing posts and disabling accounts since Sunday’s final. Twitter said it acted proactively, removing more than 1,000 tweets and permanently suspending accounts in the hours after the game.

“Using emojis, like monkey or banana emojis, to racially abuse someone is completely against our rules,” said a Facebook company spokesperson. “We use technology to help us review and remove harmful content, but we know these systems aren’t perfect, and we’re constantly working to improve.”

UK leaders condemned the hate speech, with Prime Minister Boris Johnson saying he warned executives from Facebook, Twitter, ByteDance Ltd.’s TikTok, Snap Inc’s Snapchat and Instagram at a Tuesday meeting that they need to crack down on online abuse.

Players and officials also spoke out, including Rashford in a widely shared statement on social media. “I’ve grown into a sport where I expect to read things written about myself,” he wrote. “I can take critique of my performance all day long, my penalty was not good enough, it should have gone in, but I will never apologise for who I am and where I came from.”

Bertie Vidgen, a research fellow in online harms at the Alan Turing Institute, has been working with colleagues from Oxford University to test how speech detection models, including one from Jigsaw, a unit of Google, respond to offensive emojis. The findings so far have not been encouraging, and Vidgen said it’s not because emojis necessarily pose a more difficult technical challenge.

“They have really low performance. You can say something hateful, which if you wrote out in text would definitely be picked up,” Vidgen said. “There’s zero justification for having that loophole. They just need to enforce their policies.”