OpenAI launches a Red Teaming Network for AI security, and you can apply

OpenAI’s ChatGPT has accumulated more than 100 million users worldwide, highlighting both the positive use cases of AI and the need for greater regulation. In response, OpenAI is putting together a network of experts to help it build more secure and robust models.


On Tuesday, OpenAI announced the launch of its OpenAI Red Teaming Network, composed of experts who can help inform risk assessment and mitigation strategies in order to deploy safer models.


This network formalizes OpenAI’s risk assessments into an ongoing process that spans multiple stages of the model and product development cycle, rather than relying on “one-off commitments and selection processes before major model deployments,” as OpenAI puts it.

No prior experience with AI systems required

OpenAI is looking for experts from all walks of life to join the network, particularly in fields such as education, economics, law, languages, political science and psychology.

However, OpenAI clarifies that it is not necessary to have prior experience with AI systems or language models.


Members will be compensated for their time and will be subject to non-disclosure agreements (NDAs). Since they will not be involved in every new model or project, joining the Red Team (editor’s note: in cybersecurity, the team that probes a system for flaws, as opposed to the Blue Team of defenders) could require a commitment of as little as five hours a year. You can apply to join the network on the OpenAI website.

“A unique opportunity to shape the development of AI technologies and policies”


Beyond red teaming campaigns commissioned by OpenAI, members will also be able to discuss “red teaming practices and results” with one another, according to the blog post. “This network offers a unique opportunity to shape the development of safer AI technologies and policies, as well as the impact that AI can have on the way we live, work and interact,” says OpenAI.


Red teaming is an essential process for testing the effectiveness of new technologies and helping to ensure their safety. Other tech giants, such as Google and Microsoft, run similar programs dedicated to their AI models.


Source: ZDNet.com
