I’m beginning to think reality is overrated and AI may not be as awful as we think.
Not a day goes by lately when we don’t see a news story about AI doing something that people fuss about. But are those things really so bad?
Consider Waymo, the self-driving cars. One or two small-time accidents and cities want to ban them.
Now consider how many accidents human drivers get into. Go on, consider it. There are a lot more of those than Waymo accidents and they’re a lot more serious.
Yes, I know there are a lot more human-driven cars out there, but still, considering the stats, why would anyone be brave enough to get into a car driven by a human?
It’s not very different from being afraid to fly and not afraid to ride in a car. We fear the safer alternative.
Yes, there are problems with AI. The loss of human jobs thing is pretty bad. But at least some of the AI issues could be solved if only we properly used AI.
Consider the revelation last week that the South Coast Air Quality Management District may have been persuaded by an avalanche of real-looking emails generated by a computer program. The campaign apparently was created by a company called CiviClick that provides “advocacy software for public affairs professionals.”
This sounds bad, like a subversion of democracy by fake democracy, but there’s nothing stopping the opposite side of any debate from hiring CiviClick too. Then the computers can fight each other online and we humans can take a break. The playing field is evened once more.
I’d like this concept to apply to social media. Imagine X.com with nothing but AI campaigns from every point of view fighting it out.
No longer would we need to participate and be outraged on X. Let the bots do the arguing while we get on with our lives without worrying about what anyone is saying.
Actually, for all I know, this is already happening on social media. It’s hard to tell if anyone human is on there.
But what about hallucinations, you may ask? Shouldn’t we be concerned about AI making stuff up?
Well, yes, but once again, compare AI with humans. See what I mean?
Consider what we hear from Donald Trump, Kristi Noem, Pam Bondi, Tom Homan, JD Vance, etc., etc. The economy is great. Prices are down. The U.S. is hot, but not in the climate sense. That cell phone looked like a gun. The nurse was a terrorist. Coal is clean.
It’s one hallucination after another. How is AI not more trustworthy?
So maybe we should be kinder to and more understanding of AI programs. After all, they’re only human (or human-like).
Too helpful?
How much AI is too much AI?
I don’t know, but someone in the city of West Covina, California, is very annoyed. A weird petition for a writ of mandate was filed last week on behalf of the city seeking records of a city council member’s use of AI.
I’m guessing the “city” is really the other city council members who seem mad because one of their own, Brian Gutierrez, uses AI a lot. According to the petition, he uses it for “ad nauseum” emails and for questions and answers during council meetings.
I have to pause here to note that many, many years ago, when I would report on the occasional local government meeting, I’d be astonished at how dumb and uninformed elected officials could be. You’d think people would be pleased to see an official who is informed and willing to do some research.
But, no, they’re not.
Also, it turns out, according to the petition, Gutierrez may be autistic. If true, you’d think they’d cut him some slack and maybe appreciate his skills.
There’s no explanation as to what these AI records would prove other than he used AI a lot (which they already know). Is it illegal to be annoying? There’s no mention of a crime being investigated here.
I’m hoping an AI program files a response to this.