Regulation of AI in California, Colorado, and other states could undermine the dominance of U.S. technology companies.
- Opponents argued that California's recently vetoed bill on safe and secure frontier artificial intelligence systems could jeopardize the state's standing as a technology hub.
- In 2024, AI legislation was proposed in 48 jurisdictions, including states, territories, and the District of Columbia.
- At the federal level, the U.S. is among the few G20 countries without a comprehensive data privacy law like the EU's GDPR.
Technology hubs like California's Silicon Valley, and the U.S. more broadly, are expected to innovate consistently.
Although California's state bill on "safe and secure innovation for frontier artificial intelligence systems" was recently vetoed, U.S. tech leaders remain concerned about the potential impact of such legislation on innovation. Opponents of similar bills argue that stifled innovation could put California's status as a national and global technology hub at risk.
Yann LeCun, Meta's chief AI scientist, wrote on X that regulating basic technology will stifle innovation.
In 2024, 48 jurisdictions, including Puerto Rico and the U.S. Virgin Islands, introduced AI bills. The Colorado AI Act, which prohibits algorithmic discrimination in high-risk AI systems, was the first such law enacted in the U.S. and preceded the European Union's AI Act.
In September, California Governor Gavin Newsom vetoed the frontier AI bill but signed into law another that mandates transparency in generative AI systems. The critiques of the vetoed bill remain relevant for potential future regulation in California and beyond: the AI Alliance, a group of creators, developers, and adopters in the AI industry, worries that certain regulations could hinder innovation, impede advances in safety and security, and hurt California's economic growth.
The vetoed bill was authored by Democratic California state Senator Scott Wiener, whose District 11 includes San Francisco. Speaking at the AI Quality Conference in June, Wiener said, "As human beings, we have a tendency to ignore risk until there's a problem." He clarified that the bill was not intended to interfere with startup innovation but rather to keep tabs on "very large, powerful models" by applying only to models that cost at least $100 million to train.
According to Tatiana Rice, deputy director for U.S. legislation at the non-profit, non-partisan think tank Future of Privacy Forum, the U.S. is in a unique position as one of the only G20 nations without a comprehensive data privacy law such as the EU's General Data Protection Regulation (GDPR). Rice said a comprehensive data privacy regime could help mitigate the privacy risks associated with AI.
Historically, the U.S. has regulated data privacy through decentralized, state-by-state legislation. Federal attempts to change that have stalled: the American Data Privacy and Protection Act, later succeeded by the American Privacy Rights Act, faced opposition over the civil rights protections included in the text.
The White House Office of Science and Technology Policy has published a Blueprint for an AI Bill of Rights based on five principles. However, with a new administration on the way and President-elect Trump expected to take a more corporation-friendly approach to regulation, the White House may reverse course and bring the federal approach to AI in line with a broader ethos of minimal government involvement to drive competitive technological innovation. In that case, the onus will remain at the state level, where individual states will have to strike a balance between preserving their tech hub status and securing innovation.
'Common-sense AI regulation'
Jonas Jacobi, CEO of ValidMind, an AI risk management company for financial institutions, stated that "the incorrect regulation can completely stifle innovation." However, he emphasized that "this does not mean there should be no regulation. There should be sensible regulation, particularly around these large foundational models, which are highly potent."
Kolena CEO Mohamed Elgendy believes it is not logical to set regulatory thresholds on a model's raw capability rather than on its application, drawing a distinction between models such as GPT and the applications built on them, such as ChatGPT.
Elgendy believes that the risks associated with AI are not related to its capabilities but rather to security concerns, specifically the potential for malicious use. Jacobi agrees with this perspective, stating that developers cannot be held responsible for all outcomes resulting from the use of their models.
Rice believes that U.S. companies threatening to relocate over AI regulatory overreach do have some leverage, but that it would take a significant shift for California's tech hub to dissipate, even amid state-by-state fragmentation. Still, she said, some lawmakers are working to develop a unified approach.
Colorado Senate Majority Leader Robert Rodriguez and Connecticut State Senator James Maroney, both Democrats, are "striving to be mindful of avoiding a fragmented approach to AI regulations, as has occurred with data privacy," according to Rice. Referring to Colorado's successful AI bill, Maroney posted on Facebook, "It is unfortunate that Connecticut opted not to join Colorado as a leader in this area. However, we will return with a bill next year."
If such a model approach is done poorly, it could pose a significant risk to the U.S., particularly its tech industry, as companies might choose to leave the country entirely, which would also be a major cybersecurity concern.
Elgendy's company is conducting a nine-month case study, in collaboration with regulatory bodies, to establish AI "gold standards" for the finance and e-commerce sectors. Kolena aims to develop an actionable set of guidelines and standards that gives builders and regulators in these major industries clear direction, filling the void left by federal lawmakers and regulators.
"Industry leaders will make progress even without congressional movement," Elgendy stated.