
The White House-sponsored summit on AI brought together experts, industry leaders, and policymakers to identify ways to harness AI's potential in support of American innovation while managing its risks. The summit highlighted innovative efforts by federal agencies that have already adopted AI, and explored opportunities for collaborative partnerships, workforce development, and AI technical standards. The event also focused on engaging international allies and partners to develop a framework for the responsible deployment and use of AI worldwide.

Helen Toner, a former OpenAI board member and the director of strategy at Georgetown’s Center for Security and Emerging Technology, expressed concern about relying solely on crisis-driven AI legislation. She highlighted several risks of such an approach:
Knee-jerk reactions: Laws made in response to crises rather than through careful deliberation may not be well thought out, leading to ineffective or even counterproductive regulations.
Inconsistency and confusion: Crisis-driven legislation often addresses specific incidents or problems rather than establishing broad, coherent frameworks. This could result in a patchwork of inconsistent regulations that are difficult for businesses and consumers to navigate.
Lack of preparedness: Without proactive legislation in place, society may be caught off guard by future AI-related crises. This lack of preparedness could exacerbate the negative impacts of such events.
Stifling innovation: Overly reactive regulations could limit the ability of AI developers and businesses to innovate. This might slow the progress of beneficial AI technologies.
Potential for unintended consequences: Hasty regulations might not fully consider the long-term implications of AI use, leading to negative outcomes that more deliberate policymaking could have mitigated.
Toner argued that even a high-level federal mandate would be preferable to the current state of affairs, in which many AI regulations are being proposed and enacted at the state level. She suggested that establishing common-sense guardrails now could prevent or mitigate future AI-related crises and reduce the likelihood of rapid, poorly thought-through responses later.

State laws vary in how they define and regulate "automated decision making." Some states define it broadly, covering any decision made by an AI algorithm, while others consider a decision automated only if it is made without any human involvement. Some states also regulate automated decision making more strictly, requiring companies to disclose their use of such systems and to ensure they do not produce discriminatory outcomes, while others have more lenient rules. This variation creates a patchwork of laws that can be difficult for businesses to navigate, especially those operating in multiple states.