It could usher in a golden age for humanity, or destroy us. Is it worth the risk?
At the risk of sounding trite here, perhaps we should cultivate some REAL intelligence first.
Do we have a choice? We are already at the point where our technology is better than humans at creating technology in many respects. Since we have no hard definition of what consciousness is and is not, I’d say we’re already flirting with the blurred boundaries of AI right now.
Can we?
I’m not convinced we will really have artificial intelligence. I have sincere doubts that AI will be conscious in any way like humans. Sure, it will be “intelligent” in that it will be able to crunch numbers 20,000 times faster than a human brain and will go find a better airline deal way more efficiently than us, but is that actually “intelligence”? Or is it just a good emulation of it?
We may have to worry about a paperclip-production AI accidentally turning all of humanity into paperclips because someone left a line out of the code, but otherwise I don’t really worry about AI, because I’m not convinced we’ll actually develop it on any remotely close timescale.
Based on what I’ve read, this is already happening, and I don’t see any signs that anyone will put the brakes on until it’s too late. Recent advances in robotics and artificial intelligence have shown enormous progress. The impact will be huge, and nobody in a position of power is taking a hard look at how we will need to respond.
I find it interesting that Bill Gates of all people is raising this concern. Is anyone really worried that anything from Microsoft will ever be too smart?