
PJM Lifestyle

The Greatest Genius No One Has Heard Of

One man in the 20th century has had more effect on our daily lives than any other. He is directly responsible for everything digital, and for much of modern communication. And hardly anyone knows his name.

by
Charlie Martin


September 13, 2013 - 4:00 pm


In the 1930s, “computer” was a job description: someone, usually a woman of mathematical bent, with an adding machine and a big sheet of columnar paper who performed a rigorous routine of hand calculations, using paper and pencil, slide rules and tables of logarithms. Stone knives and bearskins weren’t involved, but to modern eyes they might as well have been.

Large research organizations and the Department of War had a few special purpose mechanical computers intended to integrate differential equations. Vannevar Bush (who deserves his own article someday) brought a young grad student to MIT to work on the differential analyzer, a relatively advanced version of these. This video shows a version of the differential analyzer being applied to a problem for which it was utterly unsuited in Earth vs. the Flying Saucers:


This young man, a recent graduate of the University of Michigan, was named Claude Shannon, Jr. While working on the differential analyzer, Shannon had the insight that these same computations could be done using combinations of a few simple circuits that performed basic logical operations on true and false values. He described how this could be done, and in doing so invented the whole concept of digital circuits, which derive from Shannon's thesis on what he called switching theory.

His Master’s thesis.


At about the same time, Alan Turing wrote his series of famous papers on computability; those papers included an idea of how a computer with memory might work, but without Shannon's switching theory, no one knew how to actually build one. (Google did a great Google Doodle for Turing's 100th birthday.)

Vannevar Bush then sent Shannon to the Cold Spring Harbor laboratory. Shannon worked with biologists and geneticists, and — remember, this was before DNA's role in heredity had been discovered — described how genetics could be understood as an algebra using a small collection of symbols. This was enough to get Shannon a Ph.D. but attracted little attention at the time. However, his Ph.D. thesis is now recognized as prefiguring what we now call bioinformatics.

During the war, Shannon, still working for the War Department, was put to work on cryptography, where he merely invented a general mathematical basis for nearly all cryptography, and in the meantime proved that there is exactly one method of making an unbreakable cipher: the one-time pad.

But this wasn’t enough. He went to work for Bell Labs, and began thinking about radio or telephone signaling. (His original switching theory was already the basic for new telephone switches — direct telephone dialing depended on Shannon’s Master’s.) What was common to all these different ways of signaling we already used: telegraph, telephone, radio, and that new-fangled thing television? Shannon had a surprising insight: what made a signal a signal was whether or not you could predict it.

To understand this, think about a game of 20 questions. You and an opponent are playing. Your opponent thinks of something, you ask the standard first question of "animal, vegetable, or mineral?", and then you have to guess your opponent's "thing" with no more than 19 further questions. The only other rules are that your opponent can't lie, and the questions have to be yes or no questions. If you guess it correctly, you win; if you run out of questions, your opponent wins.

Surprisingly often, a skillful player can guess in considerably fewer than 20 questions, as each question reduces the collection of possible answers.

Now, here’s Shannon’s big insight — and if it doesn’t seem big now, just wait a minute: if you have fewer than about 1.6 million choices (really, 1,572,864) then you can always find the answer in 20 questions, or looking at it the other way, a game of 20 questions can distinguish about 1.6 million possible guesses. So getting a 20 questions game right on the first question is literally a million to one shot.

So, if you have two choices, say Republican or Democrat, then you can predict the answer after one question.

With three or four choices, say Ford, Mercedes-Benz, Volkswagen, or Chrysler, you can be sure you have the answer after two questions. Eight choices means three questions.

So, if you have two choices, and you guess right the first time, you're not very surprised. With eight, if you guess right the first time, you're more surprised. With 1,572,864, if you get it on the first guess, you're very surprised.

Shannon’s first insight was that what we call “information” was basically a measure of the size of the surprise, and he could measure that with the number of yes or no questions you need to ask to distinguish among all the possibilities.

We call that count of yes/no questions, this measure of “the size of the surprise”, a bit.

Information theory shows up in communications, too. In communications, the idea is to think of the amount of information as how well you can predict what the next message will be. You can see this every day on the news: watching MSNBC is usually very predictable, but other channels are less so. By the way, when something is completely unpredictable, we call it “random”. A random number is like throwing a fair die: getting a 5 shouldn’t give you any information about what the next throw will be.

Mathematically, this is the logarithm to the base 2 of the number of different possibilities, but if that doesn’t mean anything to you (what do they teach kids in school these days?) don’t worry about it. What matters is that this one insight is the basis of what’s now called information theory, and as time has gone on, information theory shows up over and over again in describing the real world.
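For anyone who does want the logarithm spelled out, here's a minimal sketch (my own illustration, standard library only; the numbers just restate the examples above):

```python
import math

# Bits of information needed to pick out one of n equally likely choices:
def bits(n: int) -> float:
    return math.log2(n)

print(bits(2))  # 1.0: Republican or Democrat, one question
print(bits(8))  # 3.0: eight choices, three questions
print(bits(6))  # ~2.58: one roll of a fair die
```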

This one man, Claude Shannon, is directly responsible for computers, the internet, CDs, digital TV, really for digital anything. Although I haven't gone into the details here, information theory shows up in communications — Shannon's information theory is directly responsible for the way cell phones and communications with space probes work — in biology, in finance, even in physics, where information theory is at the heart of much of what Stephen Hawking has been doing for the last 20 years. Nearly every bit of technology we use today that's more complicated than a Phillips-head screw is based on what Shannon did. And yet, most people have never heard of him.

Well, now you have.

******

image courtesy shutterstock / ollyy

Charlie Martin writes on science, health, culture, and technology for PJ Media. Follow his 13-week diet and exercise experiment on Facebook and at PJ Lifestyle.


Top Rated Comments   
As an electrical engineer, I've long been a beneficiary of Dr. Shannon's insights. Thank you for talking about them here.

But as someone with fairly suspect genes, it's always been somewhere between terrifying and metaphysically mysterious to contemplate what it must have been like for a genius like him to die of Alzheimer's (a few months before the original 9/11). Especially during its onset, when he would have been aware of what was happening to his brain. Shudder ..
1 year ago
All Comments   (76)
For us laymen, I like the response which Ayn Rand gave to Phil Donohue when he posed: "doesn't the order in the universe prove there is a God?" She replied: "What do you think a DISordered universe looks like?"
1 year ago
CE Shannon's works always made me feel that the world was right and happy. Whereas Von Neumann and Turing were pathological.

Very nice article.
1 year ago
Claude Shannon is well known to the Intelligent Design community. His concepts on information are used frequently as a basis for evaluating just how complex and organized the information in a phenomenon is. As such they are used as a basis for evaluating just how likely the type of information in a phenomenon could have occurred by the laws of nature plus random events.

As an aside: before one starts to equate Intelligent Design with Young Earth Creationism, know that most of the scientists using Intelligent Design in their work believe in a 4.5 billion year old earth and a 13+ billion year old universe. The YECs have certainly glommed onto ID and helped with many of its ideas, but the two are not the same.
1 year ago
If you mean the argument that, in effect, the physical structures of the universe are fine-tuned for life and that this is improbable, there are a few obvious problems.

First, we do not know it's improbable. They might not be random at all but the "only way to make universes," so to speak.

Second, it is quite possible that our universe is only one of many in a much larger multiverse, each universe with its own varying constants, and thus it is not at all surprising that we exist in one of the possibly very few universes that can support life (where else could we exist?).

Third, even if this is the only universe and the event happened purely by chance, it proves very little, since it is completely a posteriori reasoning. If the universe were *not* hospitable to life, we wouldn't be here to wonder about it. After all, we are all here due to a particular sperm meeting a particular egg, an event whose a priori chances are so close to zero as to make winning the lottery seem like a sure bet, but that hardly implies there was a designer involved in the event.

Fourth, the universe is *not* hospitable for life. Even if life is common in the universe, it still exists only on planets that fit certain characteristics around particular types of suns, which means they are separated by at the very least dozens or hundreds of light years. And consciousness evolved in at most a few species (for our purposes I include monkeys and dolphins, but possibly only man) late in the history of life, a ratio of just about one conscious species per 10,000,000 or so that have existed. A designer who wished to make a life-bearing universe, let alone a universe with intelligent life, could have done a lot better.

Fifth, and above all, the whole argument is really the same old argument for God - oh I'm sorry, did I say that out loud? I meant the "intelligent designer" we "make no assumptions about" - namely, the argument from incredulity: we cannot explain right now why natural phenomenon X exists, so it must be God. In ancient times lightning and thunder were used for this, then solar eclipses, and today the values of the universe's constants. There is no reason to use this failed argument for the umpteenth time.

I, for one, do not see much difference, except for a rather badly-fitting attempt to look "scientific", between the creationists' "in the beginning God created the heaven and the earth so he could make a man in his image" and the IDers' "in the beginning an unnamed intelligent designer dickered with the universe's fundamental constants so that intelligent creatures such as man could eventually evolve".

Seems to me that ID is just creationism in a lab coat and fake glasses, carrying a sign saying "I am too a scientist!".
51 weeks ago
On the other hand, the ID information-theoretic arguments are usually pretty weak on the Law of Large Numbers and how ln(P(X)) works where the PDF is *not* uniform, so maybe it balances out.
1 year ago
"are usually pretty weak"

Are they? I am not sure I would agree. But, this is not the place to have this discussion.

All I was saying is that Shannon information plays a part in Intelligent Design so his ideas have been known to those who support ID. Information (which has many definitions) and how it is created is at the core of the ID arguments so it is natural that Shannon's concepts would be part of ID.
1 year ago
Sunday morning and my mind isn't warmed up. But even if it was I'd sure like to hear a bit (or four bits) more on ln(P(X) where non-uniform PDF, and the layout of same mapping to ID.
1 year ago
BTW, proving that some of you folks really are gluttons for punishment, I've had several requests to go on about information theory, so you can expect more on ln(P(X)) in the future.
1 year ago
Basically, the information-theoretic arguments all depend on the notion that all those things, like the value of the fine-structure constant, are somehow improbable. We don't know that, it's assumed a priori. If it turned out that it's not improbable, the "size of the surprise" is smaller. If, as some M-theorists suggest, "universes" with all the different possible values exist, it reduces to the weak anthropic principle: we're in a universe that perfectly matches what is needed for our kind of life because it's one we happen to be in; the one next door is just different.

The other part of that is it's essentially unfalsifiable. Let's say that it turned out that our particular set of values is the *only* one possible. The information content of having that set of values then is zero (ln(1)=0). But from the standpoint of an ID argument, it's *still* evidence for ID, because now isn't it cool that the only possible way physics can work is perfectly suited for us? Must mean a Creator!
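To put numbers on that ln(1)=0 remark, a tiny sketch (my own illustration, not Shannon's notation): the surprise of an outcome is -log2 of its probability, so rarer outcomes carry more bits, and a certain outcome carries none.

```python
import math

def surprise_bits(p: float) -> float:
    """Self-information of an outcome with probability p, in bits."""
    return -math.log2(p)

print(surprise_bits(0.5))            # 1.0 bit: a fair coin flip
print(surprise_bits(1 / 6))          # ~2.58 bits: one face of a fair die
print(surprise_bits(1 / 1_048_576))  # 20.0 bits: a one-in-2^20 long shot
# A certain outcome (p = 1) gives log2(1) = 0: no surprise, no information.
```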
1 year ago
"Basically, the information-theoretic arguments all depend on the notion that all those things, like the value of the fine-structure constant, are somehow improbable. "

No, it's really much simpler than that. The bottom line, as Shannon proved, is that information does not arise from randomness. You can't get DNA from random chance. Not that it's highly improbable. It's IMPOSSIBLE.
1 year ago
Sorry, no, you don't know that; you are making a bunch of a priori assumptions. At a first approximation, I see two: that replicating RNA can't arise without some kind of intervention, and that DNA is the one and only molecule capable of doing what DNA does and so it's improbable to arise "spontaneously". The first one has a law of large numbers issue -- there were, oh, 10^40 amino acid molecules available in a post-Urey sort of world, and they only have to combine randomly *once* into a replicating RNA; it then takes over. The second one has the problem that we don't know that -- just because we only know of DNA-based life doesn't mean some other possibility doesn't exist. Plus, a non-theological definition of "life" is harder to find than you might think.

But then, from the theological standpoint, even if "life" arose "randomly", it would be because the universe has characteristics that lead to it. (I'm avoiding saying "made that way" because that would be begging the question; grammatically it assumes a Maker.) Now, here you've got your standard omnipotent, omniscient God; it would be a peculiarly limited sort of omnipotence if It couldn't build the universe so the universe, from first principles, does life "spontaneously", and a particularly limited sort of omniscience to not know that the universe It built, with the particular values of those constants It chose, would lead to life arising "spontaneously".

But then if It built the universe in such a way that life is inevitable, then we're back to the case where life arising is no surprise at all, ln(1).

Now, notice I'm not saying this *disproves* a notion of Intelligent Design. I'm just saying the information theoretic approach to *proving* Intelligent Design seems pretty likely to be empty.
1 year ago
"I see two: that replicating RNA can't arise without some kind of intervention, and that DNA is the one and only molecule capable of doing what DNA does and so it's improbable to arise "spontaneously""

No to both. Shannon's work wasn't about DNA or RNA, it was about information, in the purest mathematical sense. Since DNA & RNA are coded information storage and retrieval systems, it follows that any mathematical truths about information apply to DNA & RNA. What information theory proves is that randomness cannot beget information. It's not even a LITTLE bit about probability. It doesn't matter how much time you have. It doesn't matter how favorable your conditions are.

BTW, speaking of probability, Sir Fred Hoyle calculated the probability of getting a living cell together by chance. Took a few years, some supercomputer time, and help from a few other disciplines, but he did come up with a number. The number is quite interesting: 10^40,000. That's kinda big.

Big, and interesting, but incorrect. Shannon proved the correct number is ZERO. No chance. Can't happen.

It seems Hoyle didn't know about Shannon's work. But his own work made him a believer in pan-spermia. (Like Richard Dawkins. See "Expelled" with Ben Stein.)

Of course, pan-spermia doesn't answer any questions, it just throws them back a few billion years where they are less troubling.
1 year ago
Except, of course, that RNA and replicating molecules DID NOT arise by chance, but by chemical natural selection.

It is enough that some very *badly* replicating molecules arise by chance and that changes in them might make them replicate better, both assumptions being probably true, for a replication "arms race" to begin, ending, naturally, with good replicators that dominate the bad ones.

The mistake here is the same old mistake Paley made with his "watch in the desert" argument - watches do not reproduce on their own, nor is there any selection pressure on such mechanical devices to make them keep good time.

When reproduction and mutation, together with natural selection, act in a situation where keeping good time confers evolutionary advantage, we sure as hell DO get accurate natural clocks - such as plants' inner clocks, cicadas' 17-year cycle, and numerous other examples.
51 weeks ago
Well, I may not know that, but Shannon proved it mathematically.

Sorry.
1 year ago
Mark, if you think that information theory isn't about randomness, you don't know enough about it to have an opinion. What's more, I even explained it mathematically in these very comments.
1 year ago
By the way, I have no idea if Shannon was aware of the theological implications, or if he was in any way religious. He was just doing science.
1 year ago
"It's assumed a priori"

Is this true? They certainly can make calculations based on different values of the various constants and describe the potential universes that then result. I never saw where anyone said they believed that any of the constants are determined somehow. It would be interesting to explore the rationale behind such a point of view.

"we're in a universe that perfectly matches what is needed for our kind of life because it's one we happen to be in; the one next door is just different."

Three problems here:

First, what is generating all these universes and where did this generating mechanism come from?

Second, why does this mechanism spread the values of the constants around? Why wouldn't this mechanism just generate the same constants?

Finally, one still has to deal with the problem of how life arose in our universe. No one has a clue how this could happen. Lots of speculation but nothing of consequence. Again, information is essential to understanding the complexity of this issue.

"Must mean a Creator!"

No, just that one is very likely and to dismiss the idea of a creator is specious reasoning and certainly not one based on science. And no need to capitalize "creator." There could be more than one and maybe there are generic creators out there.

Your replies leave one with the impression that ID is based on spurious reasoning when in fact one could make a strong argument that criticism of it is often baseless. Again not the place to have this discussion.
1 year ago
Why does it have to be created by some God? Perhaps whatever conditions and set of natural laws generated all these universes always were. And if you reply that everything must have been created by something, that nothing can just always have been, then you must answer the question: who created God?
1 year ago
Wow, you really should go over and look at that Buddhism column: http://pjmedia.com/lifestyle/2013/09/01/there-is-no-god-and-he-is-always-with-you/
1 year ago
Or for the science column, rather. It all blurs together after a while....
1 year ago
See, that's my point. It's certainly true they're assuming the distribution a priori, there's nothing else they can do. What other universe do they have to observe? But even then, you just go back to a First Cause argument, as you did.

This is a bit off track for the diet column. I talked a little bit more about this kind of thing in last week's Buddhism column: http://pjmedia.com/lifestyle/2013/09/01/there-is-no-god-and-he-is-always-with-you/
1 year ago
In my post I was undecided and lazy wrt the parenthetic grammar rule, right or left. I think I might have written ".. more on ln(P(X where non-uniform PDF, and the .." to be more honest.
1 year ago
Yeah, it's sometimes frustrating that the Evangelical Atheists have such an incomplete and essentially unschooled view of what religious people really think.
1 year ago
The binary nature of the observation in the article above is also used in my job within B-tree type databases and the math that goes with their algorithms to predict their solution times. Explaining this to my non-math inclined peers and supervisors requires more processing time than log2(n)....
1 year ago
One way to tell a seminal genius is to read the critical paper. Shannon's original information theory paper is quite readable with but a little knowledge of statistics. That's because it was a whole new field, and thus there's no jargon to know. The other reason is that a genius like Shannon didn't have a need to impress people by encrypting his paper in fancy-sounding math and language.

Another 20th century genius, Richard Feynman, applied Shannon's ideas to the makeup of the universe itself. He also used them to derive theoretical lower bounds on the energy required for computations.
1 year ago
I agree that Shannon is an unsung genius, but in reality it was Tommy Flowers, the British electrical engineer who built the first programmable vacuum-tube computer, who is the unsung genius of the twentieth century.

Without him Turing and Co. would have never been able to decode Lorenz messages sent by Hitler.
1 year ago
Thanks for writing this well-deserved tribute to Shannon. I wanted to mention that he's prominently featured in James Gleick's latest book, The Information.
1 year ago
Thanks! You know, I need to read that.
1 year ago
The linkage of information to surprise is interesting after having listened to an interview Jerry Bowyer of Forbes conducted with George Gilder about his new book, Knowledge and Power. Gilder sees Capitalism as a knowledge system and apparently builds most of the argument of the book on information theory. He sees surprise as the best description of discovery of new information and the competitive advantage it brings in the marketplace.

bowyerbriefing.com/upload/gilder_8-7-13.mp3

http://www.forbes.com/sites/jerrybowyer/2013/08/14/george-gilder-has-a-very-big-economy-boosting-idea/
1 year ago
Absolutely. Market theory comes down to a method for exchanging information to arrive at a mutual agreement on price.
1 year ago
In Alabama jobs are being added in a number of locations. GE is doubling the size of the Roper plant in LaFayette, GA. They make ranges there. There are lots of jobs being added in the Southeast and TX. Want to venture to guess why in these areas?
1 year ago
I think you're in the wrong office.
1 year ago
You mean, he came here for an argument, but will get abuse?
51 weeks ago
Thank You!
1 year ago