AI can be ‘sword and shield’ against misinformation, Sir Nick Clegg says


Artificial intelligence (AI) can be a “sword and a shield” against harmful content, not just a tool to spread it, Sir Nick Clegg has said.

The former Liberal Democrat deputy prime minister is now the head of global affairs at tech giant Meta, the parent company of Facebook, Instagram and WhatsApp.

Speaking during an AI event at Meta’s London offices, Sir Nick said that while it was “right” to be “vigilant” about generative AI being used to create disinformation to disrupt elections, AI was also the “single biggest reason” Meta was getting better at reducing the spread of “bad content” on its platforms.

I’d urge everyone… to think of AI as a sword, not just a shield, when it comes to bad content

Sir Nick Clegg

In 2024, billions of people are set to go to the polls, with elections due in many of the world’s largest democracies, including the UK, US and India.

It has led some experts to warn of the potential threat posed by the rapid rise of generative AI tools – including image, text and audio content apps – and the possibility of them being used to spread misinformation and disinformation with the aim of disrupting democratic processes.

A number of senior UK politicians have already been the subjects of so-called deepfakes, which have spread on social media.

And on Tuesday, fact-checking charity Full Fact said the UK was currently vulnerable to misinformation, and that more government intervention was needed on the issue with elections on the horizon.

Sir Nick said focus on the issue was important, but argued that good AI was a potent defence against bad AI, and that Meta and others had the tools needed to fight the spread of harmful material.

“I would urge everyone – yes, there are risks – but to also think of AI as a sword, not just a shield, when it comes to bad content,” he said.

“If you look at Meta, the world’s largest social media platform, the single biggest reason why we’re getting better and better in reducing the bad content that we don’t want on Instagram and Facebook is for one reason; AI.”

He added that using AI to scan Meta’s platforms to find and remove harmful content had reduced the levels of bad content by “50 to 60% over the last two years”, meaning that now “for every 10,000 bits of content, one bit of content might be hate speech”.

“Some of the work teams have been doing inside Meta to improve the way that we use our most advanced AI tools to triage content, so that we make sure that the 40,000 people we have working on content moderation really look at the most acute edge cases and they don’t waste a lot of their time looking at stuff that is inoffensive or not a problem, has really improved rapidly in recent months,” he said.

“It is right that there is an increasingly high level of industry wide cooperation, particularly this year because of this unprecedented number of elections.

“We should be vigilant, but I would urge you to also think of AI as a great tool to navigate that difficult landscape and I’m quietly optimistic that the whole industry is trying to really lean into this as cooperatively as possible.”

During the event, Sir Nick also announced that Meta’s next AI large language model – used to power AI tools, including chatbots built by Meta and other firms – would be released shortly.

Sir Nick said the new model, known as Llama 3, would begin to roll out “within the next month, hopefully less” and would continue over the course of the year.
