Steve Gruber (https://www.stevegruber.com)

Artificial Intelligence May Not Limit Itself To Being A Servant

I’m a big fan of technology. During my life it has transformed the way we live, mostly for the better. But too much of a good thing can be dangerous. Like wiping mankind off the planet dangerous.

No? Ever seen the 1970 film Colossus: The Forbin Project [1]? Probably not. I’m a connoisseur of bad science fiction, and this is an excellent example of it. I’ll let the link above provide the details, but suffice it to say it has current relevance.

In it, two AI supercomputers, one American and one Soviet, both built as national defense systems, team up to blackmail mankind into submission: accept an AI-run world, or they will unleash nukes on everybody. Silly, right? Wellllll…

Eliezer Yudkowsky, a decision theorist at the Machine Intelligence Research Institute, thinks we’re going too far, too fast with AI. He wrote as much after Elon Musk and more than 1,000 other tech figures expressed their concern, in an open letter, over the burgeoning advances in Artificial Intelligence.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the Musk letter said.

Yudkowsky doesn’t think Musk and his pals go far enough. “The key issue is not ‘human-competitive’ intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence,” Yudkowsky wrote for Time Magazine.

“Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die,” he asserts. “Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen.’”

“Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers — in a world of creatures that are, from its perspective, very stupid and very slow,” he writes. “It took more than 60 years between when the notion of Artificial Intelligence was first proposed and studied, and for us to reach today’s capabilities. Solving safety of superhuman intelligence — not perfect safety, safety in the sense of ‘not killing literally everyone’ — could very reasonably take at least half that long.”

Let’s say, given current standard scientific theology, that AI assumes leftist talking points on climate change to be correct. We know it’s mostly a hoax, but indulge me.

And what causes the climate threat, according to the reigning priests of the climate change religion? Well, of course: Man! So what would a leftist-programmed AI do to logically solve this problem and “save the planet”? Uh huh.

Thus Musk and Yudkowsky are right. We need to put a brake on runaway AI and think a bit. I have no problem with AI as a servant. But I’d rather not have it as a smarter, quicker-thinking master. I already have a fiancée for that.