The Federal Trade Commission has released a report calling on lawmakers to consider developing legal frameworks to help ensure that the use of artificial intelligence tools does not cause additional harm.
The FTC said Thursday that the report fulfills a congressional requirement in the 2021 Appropriations Act to assess how AI may be used to detect and address “online harms,” such as misinformation campaigns, online fraud, fake reviews and accounts, hate crimes, media manipulation and bots.
The commission’s report to Congress raised several concerns about the use of AI tools, including inherent design flaws and inaccuracy; bias and discrimination; and commercial surveillance incentives.
“Our report emphasizes that nobody should treat AI as the solution to the spread of harmful online content,” said Samuel Levine, director of the FTC’s Bureau of Consumer Protection.
“Combatting online harm requires a broad societal effort, not an overly optimistic belief that new technology—which can be both helpful and dangerous—will take these problems off our hands,” added Levine.
The FTC report also outlined several recommendations: advancing transparency and accountability in the use of AI; identifying the source of specific content through authentication tools; and holding data scientists and their employers responsible for protecting privacy and security, particularly in the handling of training data.
“Any initial legislative focus should prioritize the transparency and accountability of platforms and others that build and use automated systems to address online harms,” the report reads.