Artificial intelligence industry leader OpenAI LP is calling for a diverse group of experts to join its newly established Red Teaming Network, which will evaluate and stress-test its AI models.
The initiative, announced today, aims to identify potential risks and improve the safety of AI applications such as ChatGPT and DALL-E.
Red teaming is a pivotal step in the AI model development process, one that has taken on greater importance as generative AI captures the public imagination. It involves putting AI models under intense scrutiny so that biases and vulnerabilities can be identified and resolved before outside parties discover them.
That’s particularly important because OpenAI’s DALL-E 2 model has previously been criticized for stereotyping. Red teaming is also useful for text-generating bots such as ChatGPT, since it can help ensure they adhere to safety rules. ChatGPT has drawn criticism of its own, most recently over alleged gender bias.
In a blog post, OpenAI explained that it has long relied on red teaming to ensure the safety and neutrality of its AI systems. However, whereas it previously engaged with AI experts on an ad hoc basis, the new Red Teaming Network will provide continuous, iterative input from multiple trusted experts at every stage of its models’ development.
OpenAI stressed that the assessment of AI systems requires expertise from a wide variety of domains, from people with “diverse perspectives and lived experiences.” As such, it emphasized that it’s seeking geographic diversity in its network of experts, with expertise across fields such as psychology, healthcare, law, education and more.
It’s striking that OpenAI is seeking experts beyond traditional computer science and AI research, and prioritizing geographic representation. That suggests a multidisciplinary approach aimed at capturing a more comprehensive view of AI’s risks and biases, as well as the opportunities it enables. It’s also an exciting chance for individuals to participate in the ongoing development of AI, and those interested in joining the network can apply now.
OpenAI said members of its Red Teaming Network will be required to sign nondisclosure agreements. It added that they’ll be compensated for their work on projects it commissions. Network members will remain anonymous, but their research may be published. In the past, OpenAI has published blog posts and articles with numerous insights derived from its red team collaborations.
According to OpenAI, the Red Teaming Network aligns with its mission of developing AI that will be broadly beneficial to everyone. It said it’s seeking participants ranging from individuals who are subject-matter experts to research organizations and even civil society groups.
Once the network is established, OpenAI will select members for individual projects based on their skills and expertise, so not every expert will participate in every project. The company said it will admit members to the network on a rolling basis until December, after which point it will reevaluate.