California Wants To Regulate AI, But Can’t Even Define It


California’s Struggle with AI Regulation: A Need for Clarity

California is on the verge of enacting an AI regulation bill: the measure has cleared the legislature and now sits on Governor Gavin Newsom’s desk. However, the bill suffers from a major flaw. It never clearly defines what counts as artificial intelligence, and a law cannot effectively govern a technology it cannot define.

Absence of Clarity

Industry expert Andrew Ng highlights the fundamental issue, stating that “the bill makes the fundamental mistake of regulating a general-purpose technology rather than the applications of that technology.” Chris Kelly, former chief privacy officer at Facebook, points out that the bill is both “too broad and too narrow in a number of different ways.” Additionally, Ben Brooks from Stability AI warns that certain provisions could “pose a serious threat to open innovation.”

The core problem remains: unclear definitions lead to ineffective regulation. The bill defines AI as “an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.” This definition is so vague that it could apply to virtually any software program.

Breaking Down the Definition

Let’s examine why this definition lacks substance:

  • “Varies in its level of autonomy.” Any system or machine satisfies this; how autonomously it operates depends on how humans choose to use it.
  • “Can… infer from the input it receives how to generate outputs.” Every computer program works this way: it takes input and produces output.
  • “Can influence physical or virtual environments.” Whether a program’s outputs influence any environment depends on what people do with them, not on the program itself, as the sketch below makes concrete.
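
To see how little the definition excludes, consider a minimal sketch of a thermostat controller, about as far from AI as software gets, that nonetheless satisfies all three prongs. The names and setpoint here are invented for illustration:

    # A deliberately mundane thermostat controller. Every name here is
    # invented for illustration; nothing below involves machine learning.

    def thermostat(reading_celsius: float, setpoint: float = 20.0) -> str:
        # "Infers from the input it receives how to generate outputs":
        # the program derives an action from a temperature reading.
        if reading_celsius < setpoint:
            return "HEATER_ON"   # an output that can influence a physical environment
        return "HEATER_OFF"

    # "Varies in its level of autonomy": a human can invoke this once,
    # or leave it running unattended in a control loop.
    for reading in [18.5, 19.9, 21.2]:
        print(reading, "->", thermostat(reading))

By the bill’s wording, this trivial script is an AI system, which is precisely the problem.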

The bill attempts to narrow its scope to “AI models” that satisfy certain criteria, yet the result is no clearer. The term “model” could describe nearly any programming formalism. Worse, the criteria are quantitative, keyed to the number of calculations performed, rather than to any quality specific to AI. A raw compute threshold could sweep in programs with no connection to AI at all, such as one that breaks encryption by brute force through an enormous number of calculations.
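
To make the problem concrete, here is a minimal sketch of a purely quantitative test. The threshold and the operation counts are illustrative assumptions, not the bill’s actual figures:

    # A hypothetical, purely quantitative "covered model" test. The
    # threshold below is an illustrative assumption, not the bill's figure.
    COMPUTE_THRESHOLD_OPS = 1e26

    def is_covered(total_operations: float) -> bool:
        # Classifies a program solely by how many calculations it performed.
        return total_operations >= COMPUTE_THRESHOLD_OPS

    # A brute-force key search performs an enormous number of calculations
    # yet involves no learning or inference in any AI sense.
    brute_force_ops = float(2 ** 88)   # roughly 3.1e26 trials for an 88-bit keyspace
    print(is_covered(brute_force_ops))   # True, despite having nothing to do with AI

Counting operations tells you how much work a program did, not whether it is AI.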

A Call for Specificity

The nebulous language cuts both ways. It could render the bill toothless exactly where it is needed, since the wording invites a catch-all defense: “this law is too vague to apply to my specific program.” And it could be stretched to reach systems it was never meant to cover, leaving room for selective enforcement against software that is not AI at all.

European regulators have encountered similar challenges, describing AI in broad terms that unintentionally cover all types of software. Members of the Dutch Alliance on AI criticized this approach, asserting, “This definition… applies to any piece of software ever written, not just AI.”

Conclusion

The central issue is that “AI” functions as a buzzword rather than a well-defined technical category. There are real harms worth regulating, from algorithmic bias to autonomous weapons, but regulation built on a term this imprecise undermines its own effectiveness. Regulation is already a complex undertaking; vague terminology only complicates it further.

In summary, legislation should target clearly defined technologies rather than the elusive concept of “AI.” Doing so will strengthen the credibility and impact of any regulatory initiative.
