Will the human race end because of unchecked Venus-like global warming, or the rise of artificial intelligences that will find us disposable, or will aliens simply follow our radio broadcasts, come to Earth, and eat us or subjugate us or merely cause us to lose our will to live because they are so much better than us?
If you listen to Stephen Hawking, at least as he’s covered in the legacy media, the answer is “yes.”
Hawking is certainly the most famous theoretical physicist since Albert Einstein, and rightly so: he has been remarkably creative, has developed theoretical ideas that turned out to explain real physical observations, as well as a lot that haven't yet been verified, and has done all this while setting an apparent world record as the longest-surviving Lou Gehrig's disease patient.
This means that anything Hawking says about any scientific topic is news. On the other hand, that doesn't make it right, especially when he strays beyond the edges of his own field. (See also Neil deGrasse Tyson on computer science.) Hawking made headlines in July by claiming that Donald Trump could turn Earth into a Venus-like planet with a temperature of 250°C and sulphuric acid rain.
In the case of the warming-Earth-into-Venus idea, the claim was quickly debunked: there simply isn't enough CO₂ to do the job, and in any case the Paris climate accord would only have reduced total warming by roughly 0.2 degrees Celsius, and then only if its targets were actually met, a notion for which there is little evidence.
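To see the scale of the problem with the Venus scenario, here is a rough back-of-envelope comparison; the figures are approximate textbook values for the two atmospheres, not numbers from Hawking or his critics:

\[
\begin{aligned}
p_{\mathrm{CO_2}}(\text{Venus}) &\approx 0.965 \times 92~\text{bar} \approx 89~\text{bar},\\
p_{\mathrm{CO_2}}(\text{Earth}) &\approx 400~\text{ppm} \times 1~\text{bar} \approx 4 \times 10^{-4}~\text{bar},\\
\text{ratio} &\approx \frac{89~\text{bar}}{4 \times 10^{-4}~\text{bar}} \approx 2 \times 10^{5}.
\end{aligned}
\]

Even burning every fossil fuel reserve we know of would multiply Earth's atmospheric CO₂ by a factor of a few, nowhere near the factor of roughly 200,000 that separates us from Venus.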
It’s similar for the other two Dooms.
Artificial intelligence, as a field, has the odd problem that every time someone manages to make a computer do something we thought only a human could do, we then discover that the problem wasn't all that hard after all, and that the result is very different from Commander Data, HARLIE, or Colossus.
What people are worrying about is what the theoreticians call Strong AI: an artificial intelligence that can operate as a human does, a computer program capable not just of reasoning but of intellection, of being something recognizable as "like" a human being. And the honest truth is that not only do we not know how to build Strong AI, we don't even know how we would tell that we had created it.
The third Doom was that we might make contact with an alien intelligence, and they'd be Evil. Honestly, this could be trouble. (Hell, I've been shopping a screenplay based on it being trouble for years.) Even Hawking raised it only as an interesting possibility, one he mentioned while he (along with others) was announcing another big project to search for extraterrestrial intelligence. But Hawking's worry has been reported in the sort of legacy-media "brilliant scientist says we're doomed" way that usually indicates the reporter knows what sells, even if they had trouble getting a C in "Basic Science for Arts Students."
That, I think, is the biggest problem with this and most other science reporting. Follow science coverage in the legacy media and you'll continually see stories about a famous scientist predicting impending doom, or a study showing that something we eat or drink will kill us all. Reporters know that impending doom sells papers, so impending doom is what we get.