At the 2023 Defcon hacker conference in Las Vegas, AI companies partnered with algorithmic integrity and transparency groups to put thousands of attendees through red-teaming exercises designed to uncover weaknesses in generative AI platforms. The effort, backed by the US government, marked a significant step toward opening these influential yet opaque systems to outside scrutiny. Building on that momentum, the nonprofit Humane Intelligence has announced a new initiative that invites the public to evaluate AI office productivity software. In partnership with the US National Institute of Standards and Technology (NIST), Humane Intelligence is working to democratize the assessment of AI models' effectiveness and ethics.
The call for participation in the red-teaming effort is open to all US residents, from developers to members of the general public. The nationwide exercise, part of NIST's series of AI challenges known as Assessing Risks and Impacts of AI (ARIA), aims to expand the field's capacity for rigorously testing the security, resilience, and ethics of generative AI technologies. According to Theo Skeadas, chief of staff at Humane Intelligence, the goal is to give individuals the tools to judge whether an AI model meets their specific needs. By enabling a diverse range of participants to conduct evaluations, Humane Intelligence hopes to push the AI industry toward greater transparency and accountability.
Participants who advance through the qualifying round will take part in an in-person red-teaming event at the Conference on Applied Machine Learning in Information Security (CAMLIS) in Virginia. There, participants will be split into a red team that attacks the AI systems and a blue team that works on defense. Using NIST's AI risk management framework, specifically its generative AI profile known as AI 600-1, evaluators will measure whether the red team can produce outcomes that violate the systems' expected behavior. The structured approach is meant to draw on user feedback and real-world use to drive scientific evaluation of generative AI technologies.
The partnership with NIST is just the first of a series of red-team collaborations that Humane Intelligence plans to announce in the coming weeks, involving US government agencies, international governments, and NGOs, all aimed at promoting transparency and accountability in AI development. Mechanisms such as “bias bounty challenges,” in which individuals are rewarded for reporting problems and inequities in AI models, are intended to draw a broader community into testing and evaluating these systems. Skeadas emphasizes that involving policymakers, journalists, civil society, and non-technical people is essential to ensuring that AI technology is used responsibly.