The Justice Department named its first-ever official focused on artificial intelligence (AI) on Thursday in anticipation of the rapidly evolving technology’s impact on the criminal justice system.
Jonathan Mayer, a professor at Princeton University who focuses on the “intersection of technology and law, with emphasis on national security, criminal procedure, consumer privacy, network management, and online speech,” according to his online biography, was chosen to serve as the DOJ’s chief science and technology adviser and chief AI officer, Reuters reported.
“The Justice Department must keep pace with rapidly evolving scientific and technological developments in order to fulfill our mission to uphold the rule of law, keep our country safe and protect civil rights,” U.S. Attorney General Merrick Garland said in a statement.
Mayer previously served as the technology adviser to Vice President Kamala Harris during her time as a U.S. senator, and as the Chief Technologist of the Federal Communications Commission Enforcement Bureau. In his new role, he is expected to advise Garland and DOJ leadership on matters related to emerging technologies, including how to responsibly integrate AI into the department’s investigations and criminal prosecutions, according to Reuters.
Mayer is set to lead a newly formed board of law enforcement and civil rights officials that will advise Garland and others at the Justice Department on the ethics and efficacy of AI systems, according to Reuters. He will also seek to recruit more technological experts to the department.
U.S. officials have been weighing how best to balance taking advantage of AI while also minimizing the dangers of the loosely regulated and rapidly expanding technology.
During a speech at Oxford University in the United Kingdom last week, U.S. Deputy Attorney General Lisa Monaco said the Justice Department has already deployed AI to classify and trace the source of opioids and other drugs, to help “triage and understand the more than one million tips submitted to the FBI by the public every year,” and “to synthesize huge volumes of evidence collected in some of our most significant cases, including January 6.”
“Every new technology is a double-edged sword, but AI may be the sharpest blade yet. It has the potential to be an indispensable tool to help identify, disrupt, and deter criminals, terrorists, and hostile nation-states from doing us harm,” Monaco said.
“Yet for all the promise it offers,” she continued, “AI is also accelerating risks to our collective security. We know it has the potential to amplify existing biases and discriminatory practices. It can expedite the creation of harmful content, including child sexual abuse material. It can arm nation-states with tools to pursue digital authoritarianism, accelerating the spread of disinformation and repression. And we’ve already seen that AI can lower the barriers to entry for criminals and embolden our adversaries. It’s changing how crimes are committed and who commits them — creating new opportunities for wanna-be hackers and supercharging the threat posed by the most sophisticated cybercriminals.”
Monaco highlighted the potential threat to election security posed by AI, describing how foreign adversaries could radicalize users on social media with incendiary content created with generative AI, misinform voters by impersonating trusted sources and spreading deepfakes, and spread falsehoods using chatbots, fake images and even cloned voices.
“This year, over half the world’s population – more than four billion people – will have the chance to vote in an election. That includes some of the world’s largest democracies – from the United States to Indonesia and India, from Brazil to here in Britain,” Monaco said. “We’ve already seen the misuse of AI play out in elections from Chicago and New Hampshire to Slovakia. And I fear it’s just the start. Left without guardrails, AI poses immense challenges for democracies around the world. So, we’re at an inflection point with AI. We have to move quickly to identify, leverage, and govern its positive uses while taking measures to minimize its risks.”