Facebook says it’s getting better at automatically detecting and removing hate speech and misinformation about the coronavirus pandemic.
It has pioneered a number of artificial intelligence techniques to help it police content across its social networks, Facebook said Tuesday in a series of blog posts.
The details about the technology Facebook is using came on the same day the company released its latest quarterly update on its efforts to combat hate speech, child pornography, fake accounts, political misinformation, terrorist propaganda, and other violations of its community standards. The report showed the company has been combating a big surge in hate speech and COVID-19-related misinformation since the start of the year.
The new AI systems Facebook highlighted on Tuesday include ones that better understand the meaning of language and the context in which it is used, as well as nascent systems that combine image and language processing to detect harmful memes.
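To make the meme-detection idea concrete, here is a minimal toy sketch of "late fusion" multimodal classification, the general technique behind combining image and language processing. This is not Facebook's actual system; the encoders, dimensions, and weights below are all illustrative stand-ins.

```python
# Toy late-fusion classifier: an image encoder and a text encoder each
# produce a feature vector, the two vectors are concatenated, and a
# linear scorer decides. All names and sizes here are assumptions.

def image_features(pixels):
    """Stand-in for a CNN: average brightness per quadrant of a 4x4 image."""
    quads = [pixels[i:i + 4] for i in range(0, 16, 4)]
    return [sum(q) / 4 for q in quads]

def text_features(tokens, vocab_size=8):
    """Stand-in for a language model: bag-of-words token counts."""
    counts = [0] * vocab_size
    for t in tokens:
        counts[t % vocab_size] += 1
    return counts

def fused_score(pixels, tokens, weights, bias=0.0):
    """Late fusion: concatenate both feature vectors, apply a linear scorer.
    The point of joint modeling: a benign image plus benign text can still
    score as harmful *in combination*, which is what makes memes hard."""
    combined = image_features(pixels) + text_features(tokens)
    return sum(w * x for w, x in zip(weights, combined)) + bias

# Toy inputs: a flat 4x4 "image" and a short list of token ids.
pixels = [0.5] * 16
tokens = [1, 3, 3, 5]
weights = [0.1] * 12   # 4 image dims + 8 text dims
print(fused_score(pixels, tokens, weights))
```

In a real system the encoders would be deep networks and the weights learned from labeled examples, but the fusion step, concatenating per-modality features before classifying, is the same shape.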
Good news, bad news for Facebook content moderators. You’ll no longer have to worry about getting PTSD while doing your job. You’ve been replaced. The other losers in all this, of course, are the users. While Facebook and other Big Tech giants attempt to create a “safe space” on their platforms, all they are doing is quashing people’s ability to communicate and share information freely. Basically, the smarter the AI gets, the less freedom you’ll have to express yourself.
Our Mission: to bring awareness (and a bit of humor) to dystopian technological innovations that will impact our lives and the world in general.