
As discussed in the VB AI Impact Tour, organizations audit their AI models primarily to evaluate their performance, fairness, and ethical standards. Auditing helps them identify and mitigate risks such as bias, inaccuracy, and ethical lapses; assess their risk landscape and the controls mitigating those risks; and ensure compliance with regulatory frameworks. It also helps organizations maintain transparency, accountability, and data integrity, enhancing the overall trustworthiness of their AI systems.

Justin Greenberger believes the risk landscape should now be evaluated almost monthly: it is constantly changing, and organizations need to stay current with the latest developments to understand their risks and the controls mitigating them.

According to Greenberger, human involvement in AI-enabled processes remains critical in several ways. First, humans still set the parameters of use cases and determine how they should be implemented, drawing on contextual understanding and critical thinking. Second, humans oversee and audit AI models for bias, performance, and ethical standards, ensuring alignment with societal norms and values. Finally, humans remain in the decision-making loop, though this may change over time as organizations grow more comfortable with AI audit controls and spot checks. Greenberger also suggests that human work may increasingly shift toward the creative and emotional aspects of work, areas where AI cannot easily replace human expertise.
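To make the audit-and-spot-check idea concrete, a bias review often starts with a simple quantitative check that a human auditor then interprets. The sketch below computes a demographic parity gap (the difference in positive-prediction rates between groups) over a model's outputs; the data, group labels, and threshold are illustrative assumptions, not details from the discussion.

```python
# Minimal sketch of an automated bias spot-check, one input to the
# human-led audits described above. Data and threshold are assumed
# for illustration only.

def demographic_parity_gap(predictions, groups):
    """Difference in positive-prediction rates between groups.

    predictions: list of 0/1 model outputs
    groups: parallel list of group labels (e.g. "A", "B")
    """
    counts = {}
    for pred, grp in zip(predictions, groups):
        total, positives = counts.get(grp, (0, 0))
        counts[grp] = (total + 1, positives + pred)
    rates = {g: pos / total for g, (total, pos) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Example: a hypothetical approval model's outputs for two groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(preds, groups)  # 0.75 - 0.25 = 0.5

# A human auditor sets and reviews the acceptable threshold;
# the metric only flags cases for review, it does not decide.
THRESHOLD = 0.25  # assumed policy value
flagged = gap > THRESHOLD
```

The metric itself is mechanical; the judgment Greenberger describes (choosing the threshold, deciding whether a flagged gap is acceptable in context) stays with the human reviewer.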