ChatGPT parent OpenAI wants humans to regulate AI, proposes a regulatory body

ChatGPT creator OpenAI has proposed a new international body to regulate artificial intelligence (AI). Led by CEO Sam Altman, the company said that within the next ten years AI systems could exceed expert skill in most domains and carry out as much productive activity as one of today's largest corporations.

In a blog post on Monday, OpenAI explained its reasoning for regulating AI: "There should be strong public oversight over the governance of the most powerful systems, as well as decisions regarding their deployment. We believe that people around the world should democratically decide on the bounds and defaults for AI systems."

"We don't yet know how to design such a mechanism, but we plan to experiment with its development," OpenAI added.

The blog post was co-authored by OpenAI CEO Sam Altman, President Greg Brockman and Chief Scientist Ilya Sutskever. It compared 'superintelligence' to nuclear energy and suggested the creation of an authority similar to the International Atomic Energy Agency to reduce AI risks.

How OpenAI plans to address the challenges presented by AI:

OpenAI proposed a three-point agenda to reduce the risks of future superintelligent AI systems.

1) Coordination among AI makers: OpenAI's blog post suggested that companies building leading AI systems, such as those behind Bard, Bing and Anthropic's models, should make a coordinated effort to ensure that superintelligence is developed safely and integrated smoothly into society.

The ChatGPT creator suggested two ways in which this coordination could occur: governments around the world could set up a regulatory project involving the major AI developers, or these companies could mutually agree to limit the rate of growth in AI capabilities to an agreed limit per year.

2) International regulatory body: OpenAI suggested a new international body, modelled on the International Atomic Energy Agency, to reduce the existential risks posed by superintelligent AI systems. According to OpenAI, the proposed body should have the authority to inspect systems, require audits, test for compliance with safety standards, and impose restrictions on degrees of deployment and levels of security.

3) Safe superintelligence: OpenAI says it is working on making artificial intelligence systems safe and aligned with human values and human intentions.
