The rise of the meme

[Photo caption: An editorial meeting at the Belmont Club]

The World Wide Web is almost 20 years old. It was originally created to solve the problem of sharing and updating information among collaborators. It seemed natural at the time to create a one-to-one correspondence between a document and a specific Uniform Resource Locator (URL). A document, and hence an information object, lived at a particular address. If you wanted to visit it, you went there. This had the architectural consequence of making all documents — hence ideas and information — technically equal. From the point of view of the computer, all destinations resolved to IP addresses; descriptors like “http://www.theatlantic.com/” and “http://pjmedia.com/richardfernandez/” are really just human-readable aliases for those numeric addresses, adopted for the convenience of the human user. But technically they were equal; and more to the point, equally accessible to users on the Internet.

Availability in the print and broadcast age depended on many factors besides the quality of the product. The barriers to entry in the newspaper and broadcast industries were so high that monopoly position, financial clout and even political permission were prime determinants of market share. The advent of the World Wide Web equalized physical distribution and meant that content would play an increasingly large role, and logistics a smaller one, in determining the popularity of a given document. Overnight, a number of formerly unknown information providers shouldered their way to the top of the Web traffic heap on the strength of what they wrote, rather than who they wrote for. The sudden shift of authority away from the publisher to often unheard-of authors prompted Peter Steiner of the New Yorker to draw a cartoon in 1993 captioned “On the Internet, nobody knows you’re a dog.”

But it wasn’t quite true. Readers could — and did — distinguish between authors who wrote like dogs and those who didn’t. In the beginning, online reputations were built and spread by word of mouth. But this was a slow process which best served individuals who were already “wired” into social networks or belonged to bulletin boards where such information could be exchanged. For those new to an ever-expanding and soon incomprehensibly large Web, the problem of where to start, of which sites to visit that weren’t dogs, became a major hurdle.

Enterprising developers who saw a market opportunity in helping readers sift nuggets from the vast ocean of the Web attacked the problem in various ways. Site counters were developed to show at a glance how heavily visited a site was, serving as a proxy for quality. A seven- or eight-figure visit count became the Web equivalent of the New York Times masthead. This was good as far as it went. But the problem for newbies was how to find such sites in the first place. To answer this challenge developers began to create search engines. The idea of a search engine is to allow someone who knows nothing about a subject to find the best places to start learning about it. Initially, search engines like AltaVista relied on linguistic matching to return the most relevant and useful pages. However, even the best keyword matches produced too many false positives. AltaVista was good, but it was not good enough.
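To make that limitation concrete, here is a deliberately naive sketch in Python of keyword-count ranking. The sample pages and the query are invented for illustration, and this is keyword matching in general rather than AltaVista’s actual algorithm; the point is only that counting words carries no judgment about quality.

# Naive keyword matching: score a page by how many times it contains the
# query's words. The pages and query below are made-up examples.
def keyword_score(query, text):
    words = query.lower().split()
    body = text.lower()
    return sum(body.count(w) for w in words)

pages = {
    "dog-breeding-forum": "dog dog dog kennel dog show dog food",
    "internet-essay": "on the internet nobody knows you are a dog",
}

query = "internet dog cartoon"
ranked = sorted(pages.items(), key=lambda kv: -keyword_score(query, kv[1]))
for name, text in ranked:
    print(name, keyword_score(query, text))

# The page that merely repeats "dog" outranks the page actually about the
# famous cartoon: word counts alone say nothing about relevance or quality.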

The most astute developers realized that the problem of separating the dogs from the good authors was really about effectively capturing the judgments of human beings about the value of a site. Only a human being could tell a dog from an author by reading the content of a site. The reason site counters worked was that they tallied the “votes” of the Internet audience in a way that was visible to the reader. But helping readers find their way to such sites in the first place required a further step: capturing that human-generated information and embedding it into the search engine itself. Once this problem was solved, the result was Google.

Google’s algorithm uses a patented system called PageRank to help rank web pages that match a given search string … a PageRank results from a “ballot” among all the other pages on the World Wide Web about how important a page is. A hyperlink to a page counts as a vote of support. The PageRank of a page is defined recursively and depends on the number and PageRank metric of all pages that link to it (“incoming links”). A page that is linked to by many pages with high PageRank receives a high rank itself. If there are no links to a web page there is no support for that page.
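To make that “ballot” concrete, here is a minimal power-iteration sketch in Python of the idea behind PageRank. The toy link graph, the page names, and the damping factor of 0.85 are illustrative assumptions rather than details from the passage above; the real system operates over billions of pages and adds many refinements.

# A toy PageRank: each page repeatedly passes a share of its rank along its
# outgoing links, so pages linked to by highly ranked pages end up highly
# ranked themselves. Graph, names, and damping factor are invented examples.
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                # A page with no outgoing links spreads its rank evenly.
                share = damping * rank[page] / len(pages)
                for p in pages:
                    new_rank[p] += share
            else:
                # Each outgoing link counts as a "vote" carrying a share of rank.
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

example = {
    "home": ["news", "blog"],
    "news": ["home"],
    "blog": ["home", "news"],
    "orphan": ["home"],   # links out, but nothing links to it
}
print(pagerank(example))

# "orphan" receives no incoming links, so it keeps only the small baseline
# share — the code's analogue of "no links, no support."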

The concept of the Internet as an implicit voting system for ideas is a powerful one which has not yet been carried to its logical and ultimate conclusion. Along the way to reaching it, entrepreneurs and developers will have to overcome a number of challenges, principally involving how human judgments about the value of information are captured and combined. Part of the problem is that today’s online world is still driven by concepts inherited from the newspaper and broadcast world. But that’s going to change, and change faster than we think.

