What would happen if a super-intelligent AI, smarter than the smartest humans, one day became malicious? A new OpenAI team wants to make sure that we will never find out.
In an announcement made this week, OpenAI said it was putting together a team responsible for “directing and controlling AI systems that are much smarter than we are”.
OpenAI goes on to say that superintelligence, which could emerge within the next decade, would be the most impactful technology ever created and could help solve many of the world's most important problems. But it also issues a rather stark warning: “It could also be very dangerous and lead to the disempowerment of humanity, or even to its extinction”.
Humans can control artificial intelligence because they are smarter
As things stand, the company claims that humans can control artificial intelligence because they are smarter. But what will happen when AI overtakes humans?
This is where the new team comes in, led by the research laboratory's chief scientist. It will bring together OpenAI's top researchers and engineers and be allocated 20% of the company's current computing capacity.
The end goal is to build an AI system that achieves an assigned objective without straying outside established parameters. The team plans to get there in three stages:
- Understand how an AI can evaluate other AIs without human intervention.
- Use AI to search for problem areas or exploits.
- Deliberately train part of the AI incorrectly to see whether this is detected.
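The third stage can be sketched with a toy example. The code below is purely illustrative and assumes nothing about OpenAI's actual systems: a “model” is just a lookup table, and `train` and `evaluate` are hypothetical helpers. The idea is that a model deliberately trained with some wrong labels should be flagged by an automated evaluator that checks it against held-out ground truth.

```python
# Toy illustration (not OpenAI's method): deliberately mis-train part of
# a "model" and check whether an automated evaluator detects the flaw.
import random

random.seed(0)

def train(data, corrupt_fraction=0.0):
    """Build a lookup-table 'model'; optionally flip some labels on purpose."""
    model = {}
    for x, y in data:
        if random.random() < corrupt_fraction:
            y = 1 - y  # deliberate mis-training
        model[x] = y
    return model

def evaluate(model, held_out):
    """Automated evaluator: fraction of held-out examples answered correctly."""
    correct = sum(1 for x, y in held_out if model.get(x) == y)
    return correct / len(held_out)

# Ground-truth task: the label is the parity of the input.
data = [(i, i % 2) for i in range(100)]

honest = train(data, corrupt_fraction=0.0)
sabotaged = train(data, corrupt_fraction=0.3)

print(evaluate(honest, data))     # perfect score: 1.0
print(evaluate(sabotaged, data))  # noticeably lower, so the flaw is detected
```

In a real alignment setting both the “model” and the “evaluator” would themselves be learned systems, which is precisely why the team wants to verify that such detection works before relying on it.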
Humans use AI to train AI in order to keep a superintelligent AI under control
In short, a team of humans at OpenAI is using AI to help train AI, with the aim of keeping a superintelligent AI under control.
And they think they can achieve this within four years.
They admit that this is an ambitious goal and success is not guaranteed, but they are confident.
Source: ZDNet.com