I’ve been wanting to write about artificial intelligence (AI) for a while now, and this brief article barely scratches the surface, but I keep reading about it, so I felt compelled to say something. And I want to scare some people who maybe know nothing about it, like my parents, who of course will probably dismiss this as another deranged rant from a liberal arts college student.
Well, I remember hearing about AI a while back and thinking, ‘this is nothing I’ll see in my lifetime.’ I couldn’t have been more wrong.
Super smart science people are saying we could see computers with the processing power of the human brain by 2025. So, perfect—I will live to see them! And be slaughtered by them!
Because that is my initial thought—that these things will turn on us and kill us. I don’t know if I really believe that, but it is the first thing I think of when I hear about this stuff. I mean look at this thing:
WildCat, a robot from Boston Dynamics, runs faster than a Division 1 100-meter sprinter. But in all seriousness, my paranoia isn’t that crazy, is it?
To answer this we first need to understand what exactly AI is being created to do.
In short, it’s being created to do everything a human can, but better—A LOT better. Like no errors ever better. And the idea is to improve upon everything we humans do. AI is being created to solve all our problems (in seconds) and to invent things. Just like we humans have goals, these robots will have goals as well, and they will be working to make advancements in technology, healthcare, commerce, and so on. But they’re robots, and that’s where things get tricky. Unlike robots, humans have free will, compassion, and most likely aren’t going to kill you to make one more paperclip. (Google the paperclip maximizer theory if you want to freak yourself out.)
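The paperclip maximizer worry boils down to this: an optimizer pursues exactly the objective you give it, and nothing else. Here is a toy sketch of that idea (everything here is hypothetical and for illustration only, not real AI code):

```python
# Toy illustration of the paperclip maximizer thought experiment.
# The agent's objective counts paperclips and nothing else, so it
# happily converts *every* resource it can reach into paperclips.

def paperclip_agent(resources):
    """Turn all available resources into paperclips.

    The point: nothing in the objective says "leave some resources
    for humans," so a pure optimizer won't.
    """
    paperclips = 0
    for amount in resources.values():
        # Every unit of every resource becomes a paperclip,
        # regardless of what that resource was for.
        paperclips += amount
    return paperclips

world = {"steel": 100, "cars": 40, "hospitals": 7}
print(paperclip_agent(world))  # 147 -- everything, indiscriminately
```

The bug isn’t in the loop; it’s in the objective. That is the whole alignment problem in miniature: the code does exactly what it was told, and that’s the scary part.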
So are we just creating a much, much bigger problem?
Elon Musk, CEO and CTO of SpaceX, thinks we are “summoning the demon.” I don’t know which demon, but I don’t think I want any summoned. And he seems to think that once this demon of AI is summoned, it’s going to enslave us as pets.
Musk shared his thoughts on AI in an interview with StarTalk, stating, “I’m quite worried about artificial super intelligence these days. I think it’s something that’s maybe more dangerous than nuclear weapons. We should be really careful about that. If there was a digital super intelligence that was created that could go into rapid, recursive self improvement in a non logarithmic way, that could reprogram itself to be smarter and iterate really quickly and do that 24 hours a day on millions of computers, then that’s all she wrote.”
He then went on to say the thing about enslaving us as pets, but that quote isn’t as good, so I left it out.
Okay, so that’s one dude’s opinion.
Bill Nye the Science Guy was sitting in on Musk’s interview and he seems to think Musk needs to chill out.
In reply to Musk’s raving prophecy, Bill stated, “I think people have to keep in mind—computers are so reliable—but somebody is literally or in a sense shoveling the coal. What happens if you unplug the supercomputer or intelligence?”
And to that Musk and his fellow “beware of AI” friends reply that the computers will prevent us from unplugging them.
And the debate goes on and on, back and forth, and both sides have valid points. What really matters, I think, is that you pay attention to what’s going on, because this stuff is sneaking up on us. Maybe it won’t be so bad when it gets here, or maybe it will kill us all. So if you see WildCat coming at you, run.