Microsoft’s Brad Smith on 5-Point Blueprint for Public Governance of AI
Brad Smith

Brad Smith, vice chair and president of Microsoft, said there are five actions governments should take when addressing emerging and existing issues related to artificial intelligence through law, public policy and regulation. The first is implementing and building on new government-led AI safety frameworks.

He cited as an example the AI Risk Management Framework launched by the National Institute of Standards and Technology.

Another step governments should take is requiring effective safety brakes for AI platforms that control critical infrastructure, Smith wrote in a blog post published Thursday.

“In this approach, the government would define the class of high-risk AI systems that control critical infrastructure and warrant such safety measures as part of a comprehensive approach to system management,” he noted.

Smith added that governments should direct operators to assess high-risk platforms to ensure the effectiveness of safety measures.

Smith also cited the need to develop a legal and regulatory framework based on the technology architecture for AI, noting that the blueprint includes information about key components for developing and using generative AI models.

“Using this as context, it proposes that different laws place specific regulatory responsibilities on the organizations exercising certain responsibilities at three layers of the technology stack: the applications layer, the model layer, and the infrastructure layer,” Smith wrote.

The two other actions Smith discussed are promoting transparency, including ensuring that academic and nonprofit organizations have access to AI resources, and pursuing public-private partnerships to use AI as a tool for addressing the societal challenges that come with the adoption of new technology.