Here’s a snippet, but this is one where you should really read the whole thing:

It’s called the Fermi Paradox, after the great physicist who once asked, “Where is everybody?” Or as was once elaborated: “All our logic, all our anti-isocentrism, assures us that we are not unique — that they must be there. And yet we do not see them.”

How many of them should there be? Modern satellite data suggest the number should be very high. So why the silence? Carl Sagan (among others) thought that the answer is to be found, tragically, in the high probability that advanced civilizations destroy themselves.

In other words, this silent universe is conveying not a flattering lesson about our uniqueness but a tragic story about our destiny. It is telling us that intelligence may be the most cursed faculty in the entire universe — an endowment not just ultimately fatal but, on the scale of cosmic time, near instantly so.

This is not mere theory. Look around. On the very same day that astronomers rejoiced at the discovery of the two Earth-size planets, the National Science Advisory Board for Biosecurity urged two leading scientific journals not to publish details of lab experiments that had created a lethal and highly transmissible form of bird-flu virus, lest that fateful knowledge fall into the wrong hands.

Is Krauthammer correct that the reason we have not yet encountered intelligent life is that none has survived long enough to reach us before destroying itself?

I’m sympathetic, though without the same pessimistic spin. My hypothesis for why we haven’t yet replicated the scene in First Contact where humanity meets the Vulcans (and why we never will, because such creatures do not exist): by the time any extraterrestrial species developed the technology to travel to Earth in Star Trek-style starships, its technology would already be growing exponentially, and the pace of change would be so drastic that its members would not resemble us at all. I don’t know how many of the technological predictions I should take seriously in Ray Kurzweil’s The Singularity Is Near: When Humans Transcend Biology and its documentary counterpart Transcendent Man. But the starting point for Kurzweil’s analysis — Moore’s Law, the observation that technological progress builds on itself, with computing power doubling at regular intervals while its cost falls — is not controversial. It seems to me that any discussion about intelligent life beyond the stars needs to include this insight about the nature of technological growth.
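To see why fixed-period doubling matters on cosmic timescales, here is a toy sketch of the dynamic. The two-year doubling period and the unitless "capability" measure are illustrative assumptions, not figures from Kurzweil or Moore:

```python
def capability(years, doubling_period=2.0, start=1.0):
    """Toy model of fixed-period doubling: capability after `years`,
    assuming it doubles every `doubling_period` years (an assumed value)."""
    return start * 2 ** (years / doubling_period)

# A single century of two-year doublings is 50 doublings: a 2**50-fold
# (roughly a quadrillion-fold) increase. On the billions-of-years scale
# of the galaxy, a civilization a mere century ahead of us would already
# be unrecognizable.
print(f"{capability(100):.3e}")
```

The point of the sketch is only that the curve is steep: the interesting window in which another civilization resembles us at all is vanishingly narrow compared to cosmic time.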

Or is this principle applicable only to carbon-based life on our planet? Just because we do not yet know how to perceive other forms of life does not mean they are not out there watching us…