As designed by Jeremy Bentham in 1791

If one examines the modern popular press’s position on the effects of current technology on knowledge (or intelligence), one finds an ontological struggle between two dominant systems of thinking – that of the technocrats, technofundamentalists, or cyber-utopianists, whose belief systems rely primarily on scientific and mathematical inquiry, and that of post-Marxist critical analysts, who examine the historical, economic, and political implications of technology.  Most recently this debate was popularized in The Atlantic Monthly, where writer Nicholas Carr, in his article “Is Google Making Us Stupid?”, bemoaned the positivist thinking that a search engine like Google puts knowledge at our fingertips.  In reaction to this popular article, WIRED magazine released a response bemoaning the bemoaners as “reflexive anti-intellectualism,” “cancerous irrationalism,” and “moronic” (Wolman, ¶10).  Clearly, Carr’s article struck a chord with the self-described ‘egghead’ rag, and it opens a great debate not over who is right, but over how each position reflects a metanarrative about knowledge, power, and their relation to technology as examined by writers like Lyotard, Foucault, Hayles, and Benjamin. The debate over the Google search engine’s long-term effects on society is also a debate over who wields power over knowledge and what that power is beginning to look like in a post-Industrial Internet society.

According to Carr, the Internet alters the ways we read and think. The economics and architecture of the Internet mean we are constantly performing quick scans rather than in-depth readings.  He cites developmental psychologist Maryanne Wolf, who says, ‘we are not only what we read…we are how we read’, and who is concerned about the Internet’s preference for quick and efficient information gathering at the cost of deep textual analysis and attention spans. Carr cites a neuroscientist who claims our brains have the ability to adapt and ‘reprogram’ quickly, such that our brains have the capacity to think like our technologies – what sociologist Daniel Bell calls ‘intellectual technologies’.   According to Lyotard, “technical devices originated as prosthetic aids for the human organs or as physiological systems whose function it is to receive data or condition the context” (44).   Hayles states in How We Became Posthuman, “We become the codes we punch…the computer molds the human even as the human builds the computer” (46 – 7).   Carr echoes this prosthetic-brain question when quoting the founders of Google in a 2004 interview: ‘certainly if you had all the world’s information directly attached to your brain, or an artificial brain that was smarter than your brain, you’d be better off’ (¶27).  Clearly, such a belief system is endemic to Internet positivists like Google’s founders and is at the heart of Carr’s ontological query.

But what of the effects of this prosthetic brain on our own human brains and thought processes, Carr asks.  In what ways does ceding personal inquiry and exploration to the modern conveniences of the Internet’s vast network of search engines surrender our own forms of autonomy?  According to Walter Benjamin, “the representation of human beings by means of an apparatus has made possible a highly productive use of the human being’s self-alienation” (32).  Such shifts in the way we think are not a new phenomenon, says Carr.  When humans began using the clock on a wide scale, we began to reorganize our internal habits of eating and sleeping around the times of the clock (¶15).  Lewis Mumford states in Technics and Civilization that the clock under monastic obligation gave “human enterprise the regular collective beat and rhythm of the machine; for the clock is not merely a means of keeping track of the hours, but of synchronizing the actions of men” (14).  Carr echoes McLuhan’s “the medium is the message”, but not in a positivist “global village” light.  Rather, Carr sees the Internet medium as an absorbent of most media, remaking them in the likeness of its own image, full of distracting hyperlinks, ad banners, and “other digital gewgaws” which dilute our attention and concentration (¶19).   These distractions, argues Carr, inhibit our personal performance for the benefit of the machine’s performance, and are inherent in the long history of industrial manufacturing.  In the instance of the Internet and Google’s search engines, it is our minds that are up for grabs on the auction block.

Carr sees a direct relationship between Frederick Winslow Taylor’s principles of scientific management, which could “bring about a restructuring not only of industry but of society, creating a utopia of perfect efficiency,” and the current methodologies around Internet technology (¶23).  Such models of efficiency in production and industry are well documented and theorized.  Lyotard examines these models of “optimal performance”, where one is “maximizing output (the information or modifications obtained) and minimizing input (the energy expended in the process)” (44).  Within Lyotard’s framework of “language games” (40), where many ways of denoting and connoting “proof” emerge, truth is replaced with the provision of efficient methods, and in such a shift technology becomes a performative measure and is therefore deemed ‘good’ (44).  When optimal speed of performance is deemed positive and is historically entrenched in modern industrial society, it is easy to see how such an argument supports Wolman’s thesis in WIRED’s “The Critics Need a Reboot”.

David Wolman is a regular contributing writer to WIRED, a magazine dedicated to popular technological and computer innovation.  In his response to the Carr article, Wolman’s vitriolic disdain for Carr’s concerns draws a clear line in the sand between those who praise and those who question the Internet’s benefits. By arguing for a “collective brainpower” inherent in wikis and the like, Wolman engages in what Lyotard calls “sociopolitical legitimacy,” in which the people’s opinion creates consensus and eventually “proofs” (¶6, 30).  Clearly favoring scientific and mathematical inquiry over other kinds, Wolman concludes, “we need… a renewed commitment to reason and scientific rigor so that people can distinguish knowledge from garbage” (¶7).  Assuming the “garbage” Wolman refers to is Carr’s essay, or the essay’s concern about knowledge, the ontological seeds have been planted – knowledge and what it entails remains up for debate.  It also leaves in question the implications of the “collective brainpower” Wolman invokes.  Do these systems of knowledge transmission reflect systems of power and control, or is the Internet as we know it truly a semiotic democracy?

Foucault examines how the classical age made the body an object and target of power (136).  Carr’s argument brings into question whether the Internet Age’s target of power and objectification is the mind.   As an object of control, the mind is then reduced to an economy, “an efficiency of movements” (Foucault 137).  Carr points to this tension between the economics of the mind and that of the Internet.  He argues, “The faster we surf across the Web — the more links we click and pages we view — the more opportunities Google and other companies gain to collect information about us and to feed us advertisements” (¶31).

Lyotard echoes this economic shift in learning systems when he prophetically claims that a computerized society will affect the ways we learn just as modern innovations in transportation and media technologies did (4).   Knowledge, as an economic commodity system, “has become the principal force of production” and “will continue to be, a major – perhaps the major – stake in the worldwide competition for power” (5).   The relationship between knowledge and power is then “indispensable” in relation to systems of production.  Google’s ability to increase the production of knowledge through scientific and mathematical experimentation and algorithms creates a system of power, control, and knowledge in the image of its creators.   Carr, rather than asking “does Google make us stupid?”, may want to ask a version of what Lyotard asks: Does Google “decide what knowledge is and who knows what needs to be decided?” (8).

If one sees the Internet as a mechanics of control and power relationships, then Foucault’s examination of the disciplinary and disempowering effects of isolation on humans within a panoptic institution applies.  Arguably, if Foucault were alive today, he would likely claim the Internet is a system of control and discipline perfected.  As a perfect disciplinary tool, the Internet, with Google’s control over much of its content, is a hyperextension of the industrial factory: a partitioned and analytical space, cellular, functional, and coded, one which “might isolate and map” individuals (143 – 4).  Its machinery’s aim is “to establish presences and absences, to know where and how to locate individuals, to set up useful communications, to interrupt others, to be able at each moment to supervise the conduct of each individual, to assess it, to judge it, to calculate its qualities” (143).  Such a prescient thought is clearly exemplified in today’s Internet, where Google’s behavioral algorithms and GPS and ISP tracking of one’s location follow our every search move while we privately “work” in our homes on our personal computers.

While the computer has been portrayed as a tool of leisure, the ability of companies like Google to track our every move through their browser essentially lends our movements as free labor input into their system of behavior analysis, which in turn shapes our spaces. Google has the capacity to reduce the complexity of the vast network of information through its search filters while improving upon itself by better understanding “the adaptation of individual aspirations to its own ends.” The benefit is mutual: increased power for Google’s searching capacities, and increased speed with which individuals may access the information they seek – increasing performativity and reducing the time in which Google gains access to our browsing information. The speed at which the Internet operates “is a power component of the system” (Lyotard 61).

“It was more the desire for wealth than the desire for knowledge that initially forced upon technology the imperative of performance improvement and product realization” (Lyotard 45).

“Power is not only good performativity, but also effective verification and good verdicts. It is self-legitimating, in the same way a system organized around performance maximization seems to be… The performativity of an utterance… increases proportionally to the amount of information about its referent one has at one’s disposal.  Thus the growth of power, and its self-legitimation, are now taking the route of data storage and accessibility, and the operativity of information” (47).

Google’s paid advertisements invoke a preferential system based on the economics of wealth and control.  So, the ways in which the scientific algorithms are an attempt to fairly distribute information via complex scientific and mathematic systems, the arguments in their defense become more of a “game of scientific language… of the rich” where “an equation between wealth, efficiency, and truth is thus established” (Lyotard 45).  Furthermore, the ways in which Google’s science and corporate control through paid ad spaces benefit each other are when “a portion of the sale is recycled into a research fund dedicated to further performance improvement.  It is at this precise moment that science becomes a force of production, in other words, a moment in the circulation of capital” (45). In turn, as the production of thought and ideas become more entrenched in Google’s research and experimentation in Artificial Intelligence through behavioral algorithms, a permanent link is created between our thought and the political economic agendas of those who wield financial and informational control over our computing systems.

“The perfect disciplinary apparatus would make it possible for a single gaze to see everything constantly” (Foucault, Discipline & Punish 173).

The Internet then becomes a perfect training tool.  Whether we like it or not, it has absorbed all media, as Carr has explained, and we are in many instances forced to rely on its functions.  In using the Internet as a primary tool, we need to train ourselves in its mechanics and all its details.  As Carr and many media theorists have stated, tools of thought do not come naturally to humans.  Dating as far back as writing, reading, and typing, humans have had to learn how to interpret symbols into distinct thought processes and forms.  The same training is necessary in order to navigate the Internet.

In our training, we engage in a complex panoptic institution of hierarchical observation, one of “eyes that must see without being seen,” employing “techniques of subjection and methods of exploitation” and, in effect, “a new knowledge of man” (Foucault 171).  The architecture of the Internet serves to “render visible those who are inside it…transform individuals…, to carry the effects of power right to them, to make it possible to know them, to alter them” (Foucault 172).  Through complex networks of information and socialization, and the capacity of the companies that provide these systems to monitor this behavior, we can be altered and, in effect, conditioned.   In Foucault’s perspective, such disciplinary tools have traditionally resulted in obedient and moral citizens – “a political utopia” (174).  As subjects of the Internet, knowing that we are constantly monitored by these systems in turn regulates our behavior and subjects us to certain norms (Foucault 187).  Such power, though, is anonymous rather than literal.  It is hard to argue concretely, without sounding in some way anti-technological, that Google’s intentions are purely antagonistic; but nor is it hard to argue that Google is, by sheer numbers, “the Internet’s high church” (Carr ¶25) and therefore must be examined as an institution of immense power.  In its ability to anonymously survey our browsing habits, “those on whom it is exercised tend to be more strongly individualized” (Foucault 193).   Such individualization is clear when we examine the vast economies of personal computers and social networks whose very names announce it: “MySpace,” “YouTube,” “iTunes,” “Wii,” and so on. According to Lyotard, such “administrative procedures should make individuals ‘want’ what the system needs in order to perform well” (62).  By giving us what we “want” through individualization via isolated and controlled environments, we are supporting the performance and economies of the Internet.  We are choice-laden actors within these environments, and in choosing them, we “assume responsibility for the constraints of power, inscribe in [ourselves] the power relation in which [we] simultaneously play both roles; [we] become the principle of [our] own subjection” (Foucault 203).

This is the nature of the Panopticon, of which the Internet is the perfect contemporary example, because it reduces the number of those who surveil and increases the number of those surveilled, producing “‘power of mind over mind’” (Foucault 206).  Within a panoptic Internet system, our autonomy is both prescribed to us and simultaneously taken away and abstracted, enacting what Lyotard calls “a vanguard machine dragging humanity after it, dehumanizing it in order to rehumanize it at a different level of normative capacity” (63).

Lyotard’s postmodern reflection on knowledge offers a silver lining to Foucault’s historical and structural analysis of institutional systems of power and control.  His discussions of rehumanization, logical positivism, entropy, metanarratives, language games, and paralogy provide perspectives on how computerized systems can function to humanize and support knowledge within a postindustrial society.   Negentropy is the idea that performance is stable and predictable if all variables are known and follow the patterns of logic, physics, and mechanics – an argument thus far supported by Foucault and Carr.  However, Brillouin argues that “perfect control over a system, which is supposed to improve its performance, is inconsistent with respect to the law of contradiction: it in fact lowers the performance level it claims to raise” (55).  This can mean two things in relation to the Internet and Google: 1) in support of Carr’s argument, the high performativity of the Internet lowers the performativity of that which it claims to raise (human intelligence); or 2) in support of Wolman’s argument, the Internet will never perfectly control knowledge or intelligence, will not entirely replace current media of knowledge like books, and perhaps could even renew (in a contradictory, DIY fashion) older systems of media.

Says Lyotard in conclusion:

“The line to follow for computerization to take…is quite simple: give the public free access to the memory and data banks.  Language games would then be games of perfect information at any given moment.  But they would also be non-zero-sum games, and by virtue of that fact discussion would never risk fixating in a position of minimax equilibrium because it had exhausted its stakes.  For the stakes would be knowledge (or information)… and the reserve of knowledge – language’s reserve of possible utterances – is inexhaustible” (67).

If the Internet is a disciplinary tool of perfected observation of which we are clearly aware, then the mind is undergoing, in a sense, self-inflicted containment and imprisonment, subject to the panopticism of normalized technologies – the Internet, personal computers, Google’s search engines, social networks, and so on.  Rather than arguing whether such technologies’ effects on our intelligence are “good” or “bad,” Lyotard and Foucault allow us to examine the nature and history of normalized machines meant to support the functions of everyday life, the economies of thought production, and the discourses of institutions and their subjects’ and audiences’ relations to them.

Rather than questioning his own intellectual freedom or autonomy in challenging the positivist claims of the “good” the Internet brings us, Carr appeals to grand narratives and key players in the history of thought and technology – Aristotle, Socrates, Nietzsche, Gutenberg – to legitimate his argument.  If we accept the “good” of Google’s artificial intelligence, as Wolman does, and allow the machine to think for us in order to gain more individualized attention via algorithms, search surveillance, social networks, and the like, we must ask ourselves not “Does Google Make Us Stupid?” but whether Google renegotiates the right to free human inquiry for the purposes of scientific efficiency and the general good of the state – in effect, “Does Google (re)Make Us?”

References

Benjamin, Walter.  The Work of Art in the Age of Its Technological Reproducibility, and Other Writings on Media. Cambridge, MA: Harvard University Press, 2008.

Carr, Nicholas. “Is Google Making Us Stupid?” The Atlantic Monthly, vol. 302, no. 1, Jul/Aug 2008: 56-8, 60, 62-3.

Hayles, N. Katherine. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago, IL: University of Chicago Press, ??. (from course packet)
Foucault, Michel. Discipline & Punish: The Birth of the Prison.  New York: Vintage Books, 1979.
Lyotard, Jean-François.  The Postmodern Condition: A Report on Knowledge. Minneapolis, MN: University of Minnesota Press, 1984.
Wolman, David. “The Critics Need a Reboot. The Internet Hasn’t Led Us into a New Dark Age.” WIRED Magazine, 18 Aug. 2008. Accessed 15 Dec. 2008. <http://www.wired.com/culture/culturereviews/magazine/16-09/st_essay>