Will Artificial Intelligence Save California… or Ruin It?

Technology Policy Brief #111 | By: Mindy Spatt | June 07, 2024
Featured Photo by Indy Silva for U.S. Resist News, 2024

__________________________________

While some are looking to the Artificial Intelligence (AI) industry to revive California's faltering tech sector, especially in San Francisco, city Supervisor Dean Preston is worried about its impact on elections. At a recent hearing he held, Preston learned that the San Francisco Department of Elections has no jurisdiction whatsoever over deepfakes.

“I am alarmed to learn … there is a lack of real ability to enforce against fake AI content,” Preston said. He comes by his concerns honestly. An online news outlet, BHH News, posted a false report last year saying Preston had resigned due to attacks on him by Elon Musk. The story was reposted by MSN before Preston's office was able to get it removed. The real story is that Musk, owner of San Francisco-based X, disagrees with Preston, a democratic socialist, on key issues and has been going after him.

Concerns over election misinformation are not new, but they are intensifying alongside the industry's explosive growth; ChatGPT reached a million users just five days after it launched in November 2022, had an estimated 100 million users by January 2023, and now has more than 180.5 million users. In a newly released poll, the advocacy group Free Press found that 79 percent of people worry that information they find online is “false, fake, or a deliberate attempt to confuse.”

Analysis

While the San Francisco Department of Elections may be unable to act, the California Legislature certainly has the ability to do so.  California has been a leader in privacy, and could become one in AI if a set of bills before the legislature this year passes.  But the bills will have to get past a wealthy and well-connected lobbying effort from an industry that always fights tooth and nail against any oversight.

One major area of contention is large language models (LLMs), a type of artificial intelligence program that can recognize and generate text and is trained on huge sets of data. California's proposed Safe and Secure Innovation for Frontier Artificial Intelligence Models Act would establish safeguards before LLMs could be deployed and would require reporting of safety incidents. The dangers include arbitrary code execution, data poisoning, data drift, biased predictions, and toxic output; the impacts can range from biased algorithms and misinformation to the creation of lethal weapons.

The California Chamber of Commerce leads a coalition of industry groups hostile to the bill and sent an opposition letter that included this choice bit of doublespeak:

This, unfortunately, does not better protect Californians. Instead, by hamstringing businesses from developing the very AI technologies that could protect them from dangerous models developed in territories beyond California’s control, it risks only making them more vulnerable.

Another effort moving forward is the California AI Transparency Act (CAITA), a bill that requires providers of large generative artificial intelligence systems to label AI-generated images, videos, and audio with embedded disclosures. It also requires a detection tool for users to determine whether content was created by AI. The author, State Senator Josh Becker (D-Menlo Park), said, “AI-generated images, audio and video could be used for spreading political misinformation and creating deep fakes. CAITA will advance provenance, transparency, accountability, and empower individuals to make choices aligned with their values.”

Other bills under consideration include California Assembly Bill 2930, which would place limitations on the use of automated decision-making tools. Assemblywoman Buffy Wicks, an East Bay Democrat, has authored a bill to compel online platforms to add watermarks to images and videos before elections this fall. And California Senate Bill 893 would create an Artificial Intelligence Research Hub to “facilitate collaboration” and identify risks from AI in both government and the private sector.

Taken together, the bills would provide the most robust legislative framework in the country for AI oversight. That is, if they are passed and signed, a scenario Governor Newsom recently cast doubt on. “We dominate in this space. I want to continue to dominate in this space. I don’t want to cede this space to other states or other countries,” he said during an AI summit in San Francisco. “If we over-regulate, if we overindulge, if we chase a shiny object we could put ourselves in a perilous position.”

But Supervisor Preston may take heart in Newsom's indication that he favors laws to prevent election misinformation and deceptive content, because “I’ve got personal reasons to believe that’s legit — the voice, videos, these AI bots, the persuasion campaigns.” In order to have an impact on November's election, legislation would have to be approved on an urgency basis, which requires a two-thirds vote in both the Assembly and Senate.

Engagement Resources:

Check out UsRenewNews.org/AI for more news on Artificial Intelligence policies, technologies, and trends.

Stay in the know with the latest updates from our reporters by subscribing to the U.S. Resist Democracy Weekly Newsletter. We depend on support from readers like you to aid in protecting fearless independent journalism, so please consider donating to keep democracy alive today!

