Microsoft laid out five principles it says the government should consider in regulating artificial intelligence, as lawmakers hurry to ensure laws and enforcement keep pace with the rapidly advancing technology.
The principles Microsoft President Brad Smith announced Thursday are:
— Implementing and building on government-led AI safety frameworks, such as the U.S. National Institute of Standards and Technology's AI Risk Management Framework.
— Requiring safety brakes when AI is used to control critical infrastructure.
— Creating a legal and regulatory framework for applications, advanced foundation models and AI infrastructure.
— Promoting transparency and funding academic and nonprofit research.
— Creating public-private partnerships that use AI to address the effects the technology will have on society, in areas such as democracy and the workforce.
Smith suggested AI services should adopt a framework from the financial services sector: Know Your Customer, or KYC. In this case, however, it would be KY3C, meaning AI developers should know their cloud, their customers and their content in order to limit fraud and deceptive use.
Smith announced the new framework at an event in Washington, D.C., on Thursday. It's the latest push from a top player in the industry for the government to create and enforce guardrails on how the technology is used, as others in the field have warned that the significant consequences of unregulated development warrant a pause.
Last week, Sam Altman, CEO of ChatGPT-maker OpenAI, urged a Senate subcommittee to implement protections and guardrails on the technology. While some lawmakers on the panel praised Altman's openness to regulation, prominent researchers who spoke with CNBC after the hearing warned that Congress should not be overly swayed by proposals backed by corporate interests and should instead consider a broad array of expert voices.
Microsoft has said it is investing billions of dollars in OpenAI as it seeks to be a leader in the field.