More Dogs on Main: Mitigating the risk of extinction

Tom Clyde


The hot topic these days is artificial intelligence, or AI. I don't understand it, but apparently it's computers that don't just store and retrieve data, but are able to create things from that cache of data. My only direct experience with it is trying to get an AI art app to draw a picture of a Holstein cow driving a red tractor. It was important at the time for reasons I don't remember now. It came up with a lot of interesting drawings of pleasant farm scenes, but it couldn't figure out that I wanted the cow on the driver's seat, hoofs on the steering wheel, putt-putting across the field. The closest it got was a very tiny cow on all fours, grazing on the tractor seat. In other words, it lacked imagination.

Friends had come up with some interesting drawings; a couple of them made it into the Follies show this year. But as soon as we tried to get very specific, it crashed or came up with completely wrong stuff. So I dismissed it as an interesting computer game and moved on.

This week, a group of people who seem to know what they are talking about said it is considerably more than that. They think it has some real risks. They issued a short and chilling statement that said, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” 

Well, alrighty then, who wants pie?

Extinction is kind of a big deal. At first I assumed this dire pronouncement came from the tinfoil hat brigade. But no, this was put out by the very people who are building AI: Sam Altman, the CEO of the company that makes ChatGPT, and Geoffrey Hinton, who is called the godfather of AI. They were joined by hundreds of other people deeply involved in the development. And they are worried about extinction if something goes wrong and the computers take control.

So what are they doing about it? Are they shutting down this evil menace? Hell, no. They are full speed ahead. Have you seen Nvidia’s stock price lately? Might as well make some bank on the way to doomsday. Anything with a tangential connection to AI is on fire. Even veterinary supply companies who make equipment for Artificial Insemination, which is an entirely different thing, are taking off. 

I had kind of assumed that the risk would be a loss of a lot of jobs to computers that could think their way through things. Stock analysts, newspaper columnists, and so on. It presents an opportunity to make customer service even worse. You call about an order for a pair of shoes, and end up with a truckload of snow tires instead.

The big risk is that it could fall into the wrong hands. What if humans get ahold of it, and before long, somebody asks ChatGPT how to arrange a nuclear strike on Provo? The machine generates the whole plot, but goes a step beyond, worms its way into the nuclear codes, and before long decides on its own that nuking Hideout is a good idea. And does it.

But why would the AI computer want to destroy the banking system or blow up cities? Who knows. Why do a lot of people want to do that sort of thing? There’s always the idea that if things began to get out of hand, we could just pull the plug on the computers that power it. Unless the machines have come up with some poison pill against that. Pull my plug, you say? Pull my plug and everything is programmed to blow to smithereens. Turn off your air conditioner, the machine demands more power. Touch the red button and humanity is extinct. Have a nice day.

Just recently, in New York, a lawyer used ChatGPT to generate a brief arguing on behalf of a client. It cited several cases that nobody could locate. It turns out that the AI software had “hallucinated” and just made up convincing legal cases to support its argument. When it couldn’t find precedent, it created it out of thin electrons. It even printed out fictitious texts of the made-up court decisions. With a tool like that, who needs Clarence Thomas on the Supreme Court?

On the most optimistic side, the theory is that if the computer is fed the whole of human knowledge, it can sort through it all and find solutions to very complex problems. Cure cancer, solve climate change, or figure out what to do with the Arts & Culture District. Sounds promising, but the whole of human knowledge includes a lot of really bad stuff, too. With the full benefit of “real” intelligence across the whole sweep of humanity, we’ve managed to do some really stupid, horrible things. Is there any reason to think that an artificial version of our thought processes will be any less nuts?

It’s all very concerning when the people building AI say it presents a serious risk of extinction — and keep right on building it. The problem is building an artificial moral compass to go with it, and human history already proves that’s not an easy proposition.

Turns out the Amish got it right.


