The Recreation of Time and Space:
Google and Facebook
INTRO, INCLUDING SOME ANCIENT HISTORY
One group of scholars believes that today’s Internet technology is degrading society. The World Wide Web (henceforth “the web”), in their view, enables us to do so many things so quickly that new problems are created and old ones reemerge. Scholars such as Matthew Hindman, Evgeny Morozov, Nicholas Carr, and Eli Pariser espouse this view.
This debate will not likely be concluded until the web has been supplanted by other technologies. However, it is instructive to look at some ancient history for context. In his analysis of the telegraph system—the Victorian Internet—Tom Standage says,
More striking still are the parallels between the social impact of the telegraph and that of the Internet. Public reaction to the new technologies was, in both cases, a confused mixture of hype and skepticism. Just as many Victorians believed the telegraph would eliminate misunderstanding between nations and usher in a new era of world peace, an avalanche of media coverage has lauded the Internet as a powerful new medium that will transform and improve our lives.
Standage implies that the Internet may not have the impact utopians hope for. But neither, he suggests, will it be as bad as digital skeptics such as Jaron Lanier or the late Neil Postman believe it to be.
This essay will argue that today’s web, as it is being used and shaped by society, is recreating time and space. (I inherit David Harvey’s conception of time and space.) Like Randall Munroe’s version of Luke Skywalker in the webcomic above, we struggle to “let go.” We are locked into the web services (which I treat as spaces) we use every day.
First, I give a brief history of the web. After that has been established, I explain Web 1.0 and 2.0, showing the evolution to the type of websites that enable the recreation of time and space we experience today. I then examine at length Google and Facebook, two Web 2.0 services that recreate time and space. This examination shows exactly how they maneuver and filter content in ways that change our spatial and temporal existence.
HISTORY AND AN ELEMENT OF TRUTH IN THE WEB
For the purposes of this essay, history starts with the invention of the web. Physicist Tim Berners-Lee, while working at CERN in Switzerland, realized that CERN needed a standardized way to easily connect computers. The hypertextual software package he wrote was called “Enquire”. This piece of software, along with Berners-Lee’s version of html, was eventually rewritten and open-sourced as the web.
Around 1993, the number of Internet connections was increasing dramatically, even though no graphical web browser yet existed. University of Illinois at Urbana-Champaign undergraduate Marc Andreessen created Mosaic, the first widely adopted graphical browser for displaying html web pages. Today’s web browsers display html in largely the same way that Mosaic did.
Thus far, there have been two paradigms of the combined use of the web and html, sensibly called Web 1.0 and 2.0. Web 1.0 was characterized by sites where users simply read content. Links are often present on these pages, but they are navigational rather than interactive. These pages are often graphically simple and created by single owners. A good example of a Web 1.0 page is the website below.
This type of page enables the broader spread of information. It annihilates time and space because the information is available instantly to anyone with an Internet connection. There is, therefore, an element of truth to the claim diametrically opposed to the title of this paper: the Internet does, in some cases (especially Web 1.0 cases), annihilate time and space by making access to information and communication instantaneous. Location does not matter when information travels at the speed of light. But, as the effects of Web 2.0 will show, the kinds of information and experiences we have as a result of the speed of the Internet actually recreate time and space.
After the dot-com bubble burst in 2000 and 2001, the Web 2.0 paradigm emerged. Tim O’Reilly, the Internet scholar and businessman widely credited with coining the term, defines Web 2.0 in terms of how its services go beyond Web 1.0 pages to connect people to resources while at the same time creating communities. Web 2.0 services are interactive; they are collaborations of users and content providers. This paradigm builds upon the technology platform of the Internet, the web, and graphical web browsers. It added technologies such as simple syndication and dynamic, Ajax-style page updates, among many others. Combined, these technologies amount to a new approach to the web, invented to allow for the dynamic nature of Web 2.0 services.
CONTENT AND FILTERING
There are many ways that Web 2.0 services that personalize content recreate time and space for their users. Google and Facebook employ two of the most prevalent ways of doing this: algorithmically filtering search results and algorithmically choosing what content to display to users.
Google started as a research paper by Stanford University students Larry Page and Sergey Brin. The math behind Google’s genesis is complex, but essentially Page and Brin realized that hypertext links to and from web pages can be treated as citations. Pages with the most links, like scholarly papers with the most citations, are seen as the most authoritative. Their search engine analyzed the interconnections between the links of as many web pages as they could find and was able to output authoritative results. They called this algorithmic method of generating results PageRank (simple version pictured below).
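The citation idea behind PageRank can be illustrated with a short sketch. This is a toy version, not Google’s production system; it assumes the standard damping factor of 0.85 from Brin and Page’s paper and a tiny hypothetical link graph.

```python
# Toy PageRank: pages that receive links from important pages become
# important themselves, just as heavily cited papers are authoritative.
# Simplified sketch; d = 0.85 is the damping factor from Brin and Page.

def pagerank(links, d=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with equal rank
    for _ in range(iterations):
        new_rank = {p: (1.0 - d) / n for p in pages}
        for page, outgoing in links.items():
            share = rank[page] / len(outgoing)  # split rank among outlinks
            for target in outgoing:
                new_rank[target] += d * share
        rank = new_rank
    return rank

# A tiny hypothetical web: both B and C link to A, so A ranks highest.
web = {"A": ["B"], "B": ["A"], "C": ["A"]}
ranks = pagerank(web)
assert ranks["A"] > ranks["B"] > ranks["C"]
```

The interesting property, and the one the essay relies on, is that the scores are global: every user running the same query over this graph would see the same authoritative ordering.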
But PageRank was only the first way in which Google generated results. Today, PageRank is still used, but Google has added other forms of data, called signals, to its ranking scheme. These newer signals allow for the personalization of search results: individual users now see different results when typing the same queries. These filtering methods isolate people by their locations, their search histories, their interests, and their circles of friends. Many other signals are used, but Google has not fully revealed them to the public beyond what is shown in the video below, and it is unlikely ever to do so.
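How such signals can reorder identical queries for different users can be sketched in a few lines. The signal names, weights, and example pages here are hypothetical illustrations of the mechanism, not Google’s actual ranking factors.

```python
# Hypothetical signal-based personalization: the same base (PageRank-
# style) scores are re-weighted per user, so two users typing the same
# query see the pages in a different order. Weights are illustrative.

def personalize(results, user):
    """results: list of (url, base_score, topic, region) tuples."""
    def score(result):
        url, base, topic, region = result
        boost = 1.0
        if topic in user["interests"]:   # search-history/interest signal
            boost *= 1.5
        if region == user["location"]:   # location signal
            boost *= 1.3
        return base * boost
    return sorted(results, key=score, reverse=True)

results = [
    ("bp.com/invest", 0.9, "finance", "us"),
    ("news.example/spill", 0.8, "news", "us"),
]
investor = {"interests": {"finance"}, "location": "uk"}
reader = {"interests": {"news"}, "location": "us"}
# Identical query, identical pages, different orderings per user.
assert personalize(results, investor)[0][0] == "bp.com/invest"
assert personalize(results, reader)[0][0] == "news.example/spill"
```

Note that neither user can see the other’s ordering, or even that a boost was applied; that invisibility is what the filter-bubble argument below turns on.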
Google’s multi-signal analysis of the web filters results and pushes people toward various niches based on what Google knows about them. And Google continues to learn with every query submitted. One example of this filtering into minute niches is the Internet’s “rule 34,” which is best explained in Randall Munroe’s webcomic below:
Like Google, Facebook started in a much different form than it has today. According to Ben Mezrich’s novelization of the founding period, Mark Zuckerberg created Facebook “to mimic what went on at college every day—the thing that drove the college social experience, drove people to go out to the clubs and bars and even the classrooms and dining halls”. To start, Facebook was limited to mimicking the experience of one space: college. It has since expanded beyond that, as it is no longer limited to people with college email addresses.
The functional model of Facebook limits people in terms of time and space. Users only see information about people who pass the threshold of “friending.” We must have connections with these Facebook friends, and the connection is often one of geographic proximity; simply put, you have to meet people somewhere. After Facebook gained a huge audience, it added personalization features, just as Google did. Facebook added the News Feed, allowing people to see a reverse-chronological stream of friends’ posts rather than having to manually travel to various profile pages. The initial effect of this change was to annihilate time and space on Facebook. People were no longer limited to viewing content from one profile at a time; rather, everyone’s content flooded one page. Everything from everyone became available at one time on one page.
But that oversimplifies the effect of the News Feed. The feed does not display all content; instead, it displays a portion of it. The News Feed, like Google’s results, is powered by an algorithm called EdgeRank, which analyzes three signals. The first is affinity: every interaction on Facebook is ranked in terms of how close the user is to the friend in question. The second is the relative importance of different kinds of Facebook interactions; the average status update matters less than a change in relationship status, for example. The third is time, with newer content ranking above older. In its latest redesign, Facebook added the ability for users to correct its EdgeRank rankings. Once enough of these corrections accumulate, Facebook will be able to filter social interactions in the same way that Google filters links.
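The commonly described multiplicative form of EdgeRank can be sketched as follows. The particular weights and decay rate are my own illustrative assumptions; Facebook has never published the real values.

```python
# Sketch of EdgeRank's commonly described form: each story's score is
# the product of the three signals named above -- affinity (closeness
# to the friend), weight (type of interaction), and time decay.
# The weights and decay rate below are assumptions, not Facebook's.

WEIGHTS = {"status": 1.0, "photo": 2.0, "relationship_change": 4.0}

def edge_score(affinity, kind, age_hours, decay=0.05):
    """affinity: 0..1 closeness to the friend; newer stories score higher."""
    return affinity * WEIGHTS[kind] * (1.0 / (1.0 + decay * age_hours))

def rank_feed(stories):
    """Order (affinity, kind, age_hours) stories by descending score."""
    return sorted(stories, key=lambda s: edge_score(*s), reverse=True)

stories = [
    (0.9, "status", 1),                # close friend, fresh status
    (0.9, "relationship_change", 48),  # close friend, older but weighty
    (0.1, "photo", 1),                 # distant acquaintance's photo
]
# The heavyweight interaction outranks the fresh status despite its age.
assert rank_feed(stories)[0] == (0.9, "relationship_change", 48)
```

Under this model, low-affinity friends fall out of the feed entirely unless their content is very recent or very heavily weighted, which is exactly the narrowing effect discussed below.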
Examples and Discussion
As I have briefly mentioned, there is an element of truth to any statement that the Internet has annihilated time and space. We can access information from anywhere, instantly. But the effect of our newly unfettered access to everything is actually to recreate time and space.
Google search results are different for each user. Scholar Eli Pariser believes the problem is so large that it constitutes a bubble enveloping each of us. He asked two friends to search for the term “BP” after the Deepwater Horizon oil spill; one saw news results, the other saw information about investing in the company. This is but one example,*** but the difference is dramatic. It is impossible for individuals to determine that they are being filtered in this way: there is no baseline to compare filtered results to, and the methods of the filtering are inscrutable. We are therefore all shunted into our own niches, our own filter bubbles, with little hope of escaping them to gain context beyond what Google thinks we want to see.
It therefore becomes important to figure out how many of these niches there are. If there are few niches, large numbers of people are forced into the small number of niche groups available, and socialization occurs across time and space. But it turns out that the number of niches is nearly limitless. In Free and The Long Tail, Chris Anderson identifies two trends that make the number of niches essentially infinite: the costs of producing content and posting it online are decreasing, and there is pent-up demand for the content created because of those falling costs. We always wanted these things; we just could not find them, or they were not available. Google makes them available and helps us find them. We separate into these newly available niches, which are essentially mutually exclusive spaces online.
Facebook also appears different to each of its users. Each user has his own profile and social graph, or network of friends and interactions with them. This system is user-centric; it does not widen one’s perception of the world but instead narrows it. In the example below
you can see my Facebook home page as it appeared when I logged in at 2:48 pm on 11/19/11. This view is unavailable to any other person in the world. Each person is separated from everyone else, as if each alone were important. As the ill-fated News Corporation social network asserted, this, essentially, is each user’s individual space: ‘MySpace.’
This is a post that Facebook believes is important to me. Based on its algorithms, Facebook has decided that I should see this post. Facebook literally decides what its users should see. And it even attempts to tell them what is important. This flies in the face of every form of social interaction of the past. There have always been constructs of social interaction that have specific rules and norms. But none of those rules and norms were set in stone. None of them were hidden behind massive proprietary databases in commercially owned server farms.
This is a filtered sub-feed of people in my social graph who live in the Champaign-Urbana area. It is an example of a way in which Facebook both speeds up and slows down time. My interactions with this group are made more efficient, since I can click to this feed and see all of their updates consolidated in one place. But, again, any user who relies on a location-based list like this is limited. Beyond the location-based signal, EdgeRank still filters the content: users see only the selection of updates that Facebook decides to populate the location-based feed with. As with Google, the exact workings of this system are not public, and there is no baseline feed to compare to. We have no way of even knowing how much we are being isolated by Facebook.
It is also important to consider what exactly people are doing online: what they are sharing, what they are commenting on, what videos and images they see. According to digital scholar Matthew Hindman, some of the most common uses of the Internet are for adult content, webmail, and news. These are individual activities that the web speeds up. But in speeding them up, more space is created between individuals.
In the past we communicated with wide ranges of people about wide ranges of things. Our civic society was vibrant. Ray Oldenburg conceptualized our social spheres in terms of places. The first was the home; the second, the workplace. But the third place was one that no one was obligated to be in, a place where ideas, opinions, and information were traded freely with few concerns. The third place was where political socialization happened and where much social capital was created. In terms of Robert Putnam’s conception of social capital, it was the important bridging capital that was created in these third places. The various people who fraternized in pubs, coffee shops, and bowling alleys were not closely related. They came from different walks of life, which were bridged as they interacted in third places. These bridging relationships create social capital.
Google and Facebook, through their filtering schemes, do not send us to the equivalent of third places on the Internet. Simply put, no one is forced to share the same experience online, whereas people in third places cannot escape socialization. Google and Facebook know who our friends are and what we have searched for in the past. Facebook knows what we think is important (what we “like”), and Google is starting to learn that with its new +1 buttons. Google sends us to more and more diverse niches. Neither of these Web 2.0 services helps us accumulate social capital through acquiring bridging capital. Instead, they figure out exactly who we are and what we want with their statistical algorithms. And once the computers have made those calculations, Google and Facebook route us appropriately.
The designers of the filtering systems of Google, Facebook, and other web services like them likely started with the goal of simply improving their services for users. And, unfortunately, they did just that. As Langdon Winner would say, Google, Facebook, and others have recreated time and space by design. They improved their sites for individual users, not for the community writ large. These sites now cater to every user individually; their services have “improved” as a result of the personalization features possible in the Web 2.0 paradigm.

In short, we are experiencing the beginnings of a communication breakdown, to quote Led Zeppelin. Facebook and Google enable us to annihilate time and space within our niches and cliques, but they recreate time and space by isolating us in those niches and cliques, separating us from those who do not participate in them. In the tooltip of his famous webcomic “Online Communities,” Randall Munroe comments, “I’m waiting for the day when, if you tell someone ‘I’m from the Internet’, instead of laughing they just ask ‘oh, what part?’”. That day is already here, except the “parts” are not visible. The parts of the Internet are amorphous; they are created and destroyed as Google, Facebook, and others constantly and dynamically re-personalize the Internet for us as new information permits. As these personalizing web services learn from more data, they get ever better at separating us into our own spaces online.
 Tom Standage, The Victorian Internet: The Remarkable Story of the Telegraph and the Nineteenth Century’s On-line Pioneers (New York: Walker and Company, 1998, 2007), 207.
 David Harvey, The Condition of Postmodernity: An Inquiry into the Origins of Cultural Change (Cambridge: Blackwell Publishers, 1989).
 Janna Anderson and Lee Rainie, Imagining the Internet: Personalities, Predictions, Perspectives (Lanham: Rowman & Littlefield Publishers, 2005), 7.
 Ibid., 9.
 Ibid., 10.
 Graham Cormode and Balachander Krishnamurthy, “Key differences between Web 1.0 and Web 2.0,” First Monday 13.6 (2008), accessed September 9, 2011, http://www.uic.edu/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/21251972/.
 Tim O’Reilly, “What Is Web 2.0: Design Patterns and Business Models for the Next Generation of Software,” O’Reilly Media, accessed October 16, 2011, http://oreilly.com/web2/archive/what-is-web-20.html.
 Jesse Garrett, “Ajax: A New Approach to Web Applications,” accessed November 19, 2011, http://www.adaptivepath.com/ideas/ajax-new-approach-web-applications.
 Steven Levy, “Exclusive: How Google’s Algorithm Rules the Web,” Wired Magazine 17.12 (2010), accessed November 19, 2011, http://www.wired.com/magazine/2010/02/ff_google_algorithm/all/1.
 Paul Adams, “The Real Life Social Network v2,” Google, accessed October 3, 2011, http://www.slideshare.net/padday/the-real-life-social-network-v2.
 Ben Mezrich, The Accidental Billionaires: The Founding of Facebook (New York City: Anchor Books, 2009), 93.
 Facebook, “News Feed basics - Facebook Help Center,” accessed December 3, 2011, https://www.facebook.com/help/?page=132070650202524.
 Eli Pariser, The Filter Bubble: What the Internet Is Hiding from You (New York City: The Penguin Press, 2011), 37-38.
 Ibid., 2-3.
 Chris Anderson, Free: The Future of a Radical Price (New York City: Hyperion, 2009), 3.
 Chris Anderson, The Long Tail: Why the Future of Business Is Selling Less of More (New York City: Hyperion, 2006), 128-130.
 Matthew Hindman, The Myth of Digital Democracy (Princeton: Princeton University Press, 2009), 61.
 Ray Oldenburg, The Great Good Place (New York City: Marlowe & Company, 1999), ix, xvii-xviii.
 Robert Putnam, Bowling Alone: The Collapse and Revival of American Community (New York City: Simon & Schuster Paperbacks, 2000), 22-24.
 Langdon Winner, “Do Artifacts Have Politics?” Daedalus 109.1 (Winter 1980).
Adams, Paul. “The Real Life Social Network v2.” Google. Accessed October 3, 2011, http://www.slideshare.net/padday/the-real-life-social-network-v2.
Anderson, Chris. Free: The Future of a Radical Price. New York City: Hyperion, 2009.
Anderson, Chris. The Long Tail: Why the Future of Business Is Selling Less of More. New York City: Hyperion, 2006.
Anderson, Janna and Lee Rainie. Imagining the Internet: Personalities, Predictions, Perspectives. Lanham: Rowman & Littlefield Publishers, 2005.
Brin, Sergey, and Larry Page. “The PageRank Citation Ranking: Bringing Order to the Web.” January 29, 1998. Accessed November 19, 2011, http://ilpubs.stanford.edu:8090/422/1/1999-66.pdf.
Cormode, Graham and Balachander Krishnamurthy. “Key differences between Web 1.0 and Web 2.0.” First Monday 13.6 (2008). Accessed September 9, 2011, http://www.uic.edu/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/21251972/.
“Find Friends.” Facebook. Accessed December 3, 2011, https://www.facebook.com/find-friends/browser/.
Garrett, Jesse. “Ajax: A New Approach to Web Applications.” Accessed November 19, 2011, http://www.adaptivepath.com/ideas/ajax-new-approach-web-applications.
“Google +1 Button.” Google. Accessed December 3, 2011, http://www.google.com/+1/button/.
Harvey, David. The Condition of Postmodernity: An Inquiry into the Origins of Cultural Change. Cambridge: Blackwell Publishers, 1989.
Hindman, Matthew. The Myth of Digital Democracy. Princeton: Princeton University Press, 2009.
Horling, Bryan and Robby Bryant. “Personalized Search.” Google. Accessed December 3, 2011, http://www.youtube.com/watch?v=EKuG2M6R4VM.
Kleehammer, Michelle. “History 342 – Cultural History of Technoscience.” University of Illinois at Urbana-Champaign. Accessed November 19, 2011, https://netfiles.uiuc.edu/kleehmmr/www/342/.
Levy, Steven. “Exclusive: How Google’s Algorithm Rules the Web.” Wired Magazine 17.12 (2010). Accessed November 19, 2011, http://www.wired.com/magazine/2010/02/ff_google_algorithm/all/1.
“Like.” Facebook. Accessed December 3, 2011, https://www.facebook.com/help/like.
Mezrich, Ben. The Accidental Billionaires: The Founding of Facebook. New York City: Anchor Books, 2009.
Munroe, Randall. “Let Go.” xkcd. Accessed November 19, 2011, http://xkcd.com/862/.**
Munroe, Randall. “Online Communities.” xkcd. Accessed November 19, 2011, http://xkcd.com/256/.**
Munroe, Randall. “Rule 34.” xkcd. Accessed November 19, 2011, http://xkcd.com/305/.**
“News Feed basics.” Facebook. Accessed December 3, 2011, https://www.facebook.com/help/?page=132070650202524.
Oldenburg, Ray. The Great Good Place. New York City: Marlowe & Company, 1999.
O’Reilly, Tim. “What Is Web 2.0: Design Patterns and Business Models for the Next Generation of Software.” O’Reilly Media, September 30, 2005. Accessed October 16, 2011, http://oreilly.com/web2/archive/what-is-web-20.html.
Pariser, Eli. The Filter Bubble: What the Internet Is Hiding from You. New York City: The Penguin Press, 2011.
Putnam, Robert. Bowling Alone: The Collapse and Revival of American Community. New York City: Simon & Schuster Paperbacks, 2000.
Standage, Tom. The Victorian Internet: The Remarkable Story of the Telegraph and the Nineteenth Century’s On-line Pioneers. New York: Walker and Company, 1998, 2007.
Wilson, Brian. “Facebook Screenshots.” Facebook. Accessed November 19, 2011, https://www.facebook.com/brianjameswilson/.*
Winner, Langdon. “Do Artifacts Have Politics?” Daedalus 109.1 (Winter 1980).
* As described in the essay, the exact content is only available to the author. The screenshots were edited to protect the privacy of Brian’s friends.
** Webcomics reproduced in this essay.
*** Specific screenshot examples of personalized Google search results are omitted from this essay because they are simply not as explicit as the Facebook personalization examples.
© Brian Wilson 2011
last updated 12/3/2011