How AI Is Creating A Safer Online World

Over the years, the online world has grown more dangerous. If not cyberbullying, you'll find misinformation and nudity, among other vices. So, companies such as Twitter and Facebook have turned to online content moderation to create a safer environment for their users.

The sheer volume of content generated online makes moderation an uphill task. Moderators must deal with terrorist propaganda, hate speech, and nudity, among many other things. Moreover, the fact that most content is user-generated makes identifying and categorizing it challenging.
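To make the categorization task concrete, here's a minimal sketch of how a moderation pipeline might score a post against several categories at once. It uses scikit-learn with tiny invented examples; the category names and sample texts are illustrative, not a real moderation dataset:

```python
# Minimal multi-label moderation sketch (toy data, not a real dataset).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Tiny hand-made examples; a real system needs large, carefully labeled corpora.
texts = [
    "I will hurt you if you post again",          # harassment
    "Join our cause and strike fear everywhere",  # propaganda
    "What a lovely sunset over the harbour",      # safe
    "You are worthless and everyone hates you",   # harassment
    "Support the movement, spread the message",   # propaganda
    "Here is my recipe for banana bread",         # safe
]
labels = [["harassment"], ["propaganda"], [], ["harassment"], ["propaganda"], []]

binarizer = MultiLabelBinarizer(classes=["harassment", "propaganda"])
y = binarizer.fit_transform(labels)

# One binary classifier per category lets a single post carry several labels.
model = make_pipeline(
    TfidfVectorizer(),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(texts, y)

post = "Everyone hates you, stop posting"
scores = model.predict_proba([post])[0]
for category, score in zip(binarizer.classes_, scores):
    print(f"{category}: {score:.2f}")  # flag if a score exceeds a review threshold
```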

Even the popular iGaming sector isn't spared from cyber threats. Online casinos hold vital player information that is valuable to fraudsters, and that's where AI comes in. With machine-learning algorithms, AI can monitor login patterns, personal information, bank credentials, and much more, helping keep online casinos safe.

If there's irregular player behavior, such as an unusual withdrawal amount or account login, AI can flag the deviation and alert the player immediately. For content-based companies, unsafe content is identified as soon as it's published, rather than waiting for human review, which can take far longer.
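As a rough sketch of how that flagging could work, an operator might fit an anomaly detector to a player's normal sessions and alert on deviations. The features below (withdrawal amount, login hour) and the numbers are assumptions for illustration, not any casino's actual system:

```python
# Sketch: flag unusual withdrawals/logins with an anomaly detector.
import numpy as np
from sklearn.ensemble import IsolationForest

# Made-up history: [withdrawal_amount, login_hour] per session.
normal_sessions = np.array([
    [50, 20], [40, 21], [60, 19], [55, 22],
    [45, 20], [65, 21], [50, 18], [70, 22],
])

detector = IsolationForest(contamination=0.05, random_state=0)
detector.fit(normal_sessions)

# A new session: a large withdrawal at 4 a.m. looks nothing like the history.
new_session = np.array([[5000, 4]])
if detector.predict(new_session)[0] == -1:  # -1 means "anomaly"
    print("Flagged: unusual activity, alert the player")
else:
    print("Looks normal")
```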

Twitter leverages AI to track terrorist propaganda, harassment, and abuse, among other issues. AI flags any tweet that violates the platform's terms of service, and Twitter owner Elon Musk has vowed to use AI to rein in people and bots that perpetuate misinformation. Facebook's AI flags up to 90% of misinformation on its platform. However, social media giants still need to do more to combat harmful content.

AI-based content moderation still needs to overcome various pitfalls despite its promise. One is unknowingly flagging safe content as unsafe: Facebook flagged legitimate news articles about COVID-19 at the onset of the pandemic, and the Meta-owned platform also marked posts about the famed Plymouth Hoe landmark in England as offensive.

On the flip side, failing to catch disastrous content can have devastating consequences. The perpetrators of the infamous El Paso and Gilroy shootings had earlier posted their motives on Instagram and 8chan. Racial bias is another impediment in AI that needs attention to ensure there's no discrimination online.

To deal with these hindrances, moderation systems need better-quality training data. Most organizations outsource the labeling of their training data to developing countries, where personnel may lack the language skills and cultural context required to moderate content accurately.

A labeler who isn't a native English speaker will tend to over-index on profanities and mark them as offensive even when they're used in an innocuous context, as in the Plymouth Hoe situation.
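This failure mode is easy to reproduce: a naive keyword filter with no sense of context behaves just like a labeler who only knows the words are profane. The tiny blocklist and allowlist below are invented for illustration:

```python
# Sketch: why keyword-only moderation misfires on "Plymouth Hoe".
import re

BLOCKLIST = {"hoe"}               # invented, tiny example list
KNOWN_PHRASES = {"plymouth hoe"}  # allowlist of innocuous named entities

def naive_flag(text: str) -> bool:
    """Flags any post containing a blocklisted word, regardless of context."""
    words = re.findall(r"[a-z']+", text.lower())
    return any(w in BLOCKLIST for w in words)

def context_aware_flag(text: str) -> bool:
    """Ignores blocklisted words that appear inside a known innocuous phrase."""
    lowered = text.lower()
    for phrase in KNOWN_PHRASES:
        lowered = lowered.replace(phrase, "")
    words = re.findall(r"[a-z']+", lowered)
    return any(w in BLOCKLIST for w in words)

post = "Lovely evening walk along Plymouth Hoe"
print(naive_flag(post))          # True  -> false positive
print(context_aware_flag(post))  # False -> the landmark survives moderation
```

A real system replaces the hard-coded allowlist with a model that learns context, but the underlying lesson is the same: word lists alone can't tell a landmark from a slur.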

Surge AI, a company that builds datasets capturing the subtleties of language, is stepping in to mitigate this hurdle. Facebook, for instance, has faced countless issues aggregating high-quality data to train its moderation systems in essential languages. The Meta-owned platform had insufficient data on toxic slurs in Arabic, which caused it to miss violating posts in Afghanistan.

Amid violence against ethnic groups in Assam, Facebook employees flagged hate-speech remarks, but their efforts proved futile because the company didn't have an Assamese hate-speech model.

However, Surge AI aims to develop high-quality datasets that organizations can use to improve their content-moderation algorithms and flag harmful content. OpenAI's GPT-3 shows what a state-of-the-art language model can do when trained on an enormous dataset, and there's no reason the same investment in data can't be made for AI moderation.

Unlike low-quality datasets, sufficient high-quality data lets machine-learning models flag harmful content accurately and with less bias.
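One practical way to check for that bias is to measure a model's false-positive rate separately for each language or group in a held-out test set. A minimal sketch, with invented predictions and labels standing in for real evaluation data:

```python
# Sketch: compare false-positive rates across groups to surface moderation bias.
# The tuples below are invented; in practice they come from a held-out,
# high-quality labeled test set.
from collections import defaultdict

# (group, true_label, predicted_label): 1 = flagged as harmful, 0 = safe
results = [
    ("english", 0, 0), ("english", 0, 0), ("english", 1, 1), ("english", 0, 0),
    ("arabic",  0, 1), ("arabic",  0, 0), ("arabic",  1, 1), ("arabic",  0, 1),
]

false_positives = defaultdict(int)
negatives = defaultdict(int)
for group, truth, prediction in results:
    if truth == 0:
        negatives[group] += 1
        if prediction == 1:
            false_positives[group] += 1

for group in negatives:
    rate = false_positives[group] / negatives[group]
    print(f"{group}: false-positive rate = {rate:.0%}")
# A large gap between groups signals that the training data under-serves one of them.
```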

While AI-aided content moderation isn’t a spot-on solution, it’s a resourceful tool that can assist organizations in keeping their sites safe and reducing abuse on social media. AI tech is continuously advancing, and it can potentially create a safer online world for everyone.