William Agnew's research focuses on ensuring that the outputs of large language models represent the communities they reference. As a Carnegie Bosch postdoctoral fellow and an organizer of Queer in AI, he analyzes the integrity of the training datasets behind large language models, works to identify and overcome biases across various media, and helps empower marginalized communities in discussions around privacy and security in AI deployment.
Queer in AI is a member of the National Institute of Standards and Technology's AI Safety Institute Consortium, which aims to advance the safety and trustworthiness of AI systems. Within the consortium, Queer in AI works to ensure that AI outputs represent marginalized communities and helps developers identify and overcome biases across media, with the goal of making the technology more equitable in its application.
Agnew's audits examine the integrity of the training datasets behind large language models, identifying biases across media such as text, images, voice, and music. The goal is to ensure that datasets are representative, inclusive, and free of toxic stereotypes, and that communities retain rights over their data and how they are represented.
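Dataset audits of this kind are far more sophisticated in practice, spanning images, voice, and music and relying on community-curated lexicons, annotation, and model-based classifiers. The minimal sketch below is purely illustrative and not Agnew's actual methodology; it assumes a simple keyword-based check over a toy text corpus, counting documents that mention identity terms and flagging those where such terms co-occur with stereotype-associated words so a human reviewer can inspect them.

```python
import re
from collections import Counter

# Illustrative term lists only; real audits use much larger,
# community-curated lexicons and learned classifiers.
IDENTITY_TERMS = {"queer", "transgender", "nonbinary", "lesbian", "gay", "bisexual"}
FLAGGED_CONTEXT_TERMS = {"deviant", "unnatural", "confused"}


def audit_corpus(documents):
    """Count documents mentioning each identity term and flag documents
    where an identity term co-occurs with a stereotype-associated word."""
    mentions = Counter()
    flagged_docs = []
    for i, doc in enumerate(documents):
        tokens = set(re.findall(r"[a-z]+", doc.lower()))
        found = IDENTITY_TERMS & tokens
        for term in found:
            mentions[term] += 1
        if found and FLAGGED_CONTEXT_TERMS & tokens:
            flagged_docs.append(i)
    return mentions, flagged_docs


if __name__ == "__main__":
    corpus = [
        "Queer and transgender organizers hosted a community workshop.",
        "The article described nonbinary people as confused.",
        "A report on renewable energy policy.",
    ]
    mentions, flagged = audit_corpus(corpus)
    print("Identity-term mentions:", dict(mentions))
    print("Documents flagged for review:", flagged)
```

Even a toy check like this surfaces the two questions an audit asks of a corpus: who appears in the data at all, and in what contexts they appear.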