February 29, 2024

Microsoft President Brad Smith said Thursday that his biggest concern around artificial intelligence was deepfakes: realistic-looking but false content.

In a speech in Washington on how best to regulate AI, an issue that went from wonky to widespread with the arrival of OpenAI’s ChatGPT, Smith called for steps to ensure that people know when a photo or video is real and when it is generated by AI, potentially for nefarious purposes.

“We’re going to have to address the issues around deep fakes. We’re going to have to address in particular what we worry about most, foreign cyber influence operations, the kinds of activities that are already taking place by the Russian government, the Chinese, the Iranians,” he said.

“We need to take steps to protect against the alteration of legitimate content with an intent to deceive or defraud people through the use of AI.”

Smith also called for licensing for the most critical forms of AI with “obligations to protect the security, physical security, cybersecurity, national security.”

“We will need a new generation of export controls, at least the evolution of the export controls we have, to ensure that these models are not stolen or not used in ways that would violate the country’s export control requirements,” he said.

For weeks, lawmakers in Washington have struggled with what laws to pass to control AI even as companies large and small have raced to bring increasingly versatile AI to market.

Last week, Sam Altman, CEO of OpenAI, the startup behind ChatGPT, told a Senate panel in his first appearance before Congress that the use of AI to interfere with election integrity is a “significant area of concern,” adding that it needs regulation.

Altman, whose OpenAI is backed by Microsoft, also called for global cooperation on AI and incentives for safety compliance.

Smith also argued, in the speech and in a blog post published Thursday, that people need to be held accountable for any problems caused by AI. He urged lawmakers to require safety brakes on AI used to control the electric grid, the water supply, and other critical infrastructure, so that humans remain in control.

He urged use of a “Know Your Customer”-style system for developers of powerful AI models to keep tabs on how their technology is used and to inform the public of what content AI is creating so they can identify faked videos.

Some proposals being considered on Capitol Hill would focus on AI that may put people’s lives or livelihoods at risk, like in medicine and finance. Others are pushing for rules to ensure AI is not used to discriminate or violate civil rights.

© Thomson Reuters 2023 

