Katie McGowan
Aesthetics of Interactive Design: David Parisi
Response Paper: July 3, 200

Peters traces the history of the term “information” – reflecting on four critical stages of its development – from hylomorphism, to empiricism, to statistics, and finally to modern computing.  “A term that does not like history”, information has gradually changed meaning, and has greatly affected societies’ concepts of the embodiment of form, meaning, and communication (Peters 10).   At the crux of the word lie its roots – in and form – both of which Peters examines at some length in relation to modern philosophical thought.  Munster, on the other hand, argues that modern computing aesthetics must be examined in light of the speed with which information is “constantly updating and transforming itself” (151).  One can draw parallels between the two authors on account of their shared concern for the embodiment of communicative form, known as “information”.

During the late Middle Ages, Thomas Aquinas introduced information as a means of ordering the universe – in other words, of giving form to (informing) the matter that surrounds us, endowing it with identity (Peters 10).   Known as hylomorphism, this concept was drastically different from current understandings of “information”.    In this case, information gives life to, or animates, matter and objects outside of ourselves.  But during the 16th and 17th centuries, thinkers such as Descartes, Hume, Kant, and Locke shifted the discussion from the external universe of forms toward the forms within one’s mind.

“Cogito ergo sum”, coined by Descartes and translated as “I think, therefore I am”, helped shift the discourse from the external existence of objects outside of oneself toward the internal.   Now more than ever, as Munster claims, “the machine is more intimately… an arranger of our perceptual apparatus”, and we are tasked with examining the Cartesian philosophy of embodied forms, in which the act of thinking has been so influenced by the digital form (151).  She claims that while we ourselves, through digital connectivity and closer proximity, are becoming more socially and politically dependent on others, we are similarly accessing this connection in solitude.

But Peters traces how we arrived at this external/internal world paradigm.  As the world became a world of nation-states and bureaucracies developed, modern Western thinkers like Hume began to move away from the external noise of the world toward one where the mind became the locus of order.   Known as empiricism, this shifted the discourse on information from forming or shaping objects and matter toward making sense of our perceptual senses, or one’s own consciousness (13). Furthermore, as information became a way of making sense of our internal worlds through empiricism, statistics became a way of making sense of the political world surrounding us.

During the same period, Immanuel Kant’s Critique introduced aesthetics as “the science of the beautiful”, expanding the Platonic definition of aesthetics as forms and mimesis to include “the logical and the perceptual” (Fishwick 4).   Over time, then, aesthetics and information became interconnected through their relation to form, logic, and perception, making computer aesthetics, or information aesthetics, central to the discourse on human interaction and sensual perception.

“Statistics offer a kind of gnosis, a mystic transcendence of
individuality, a tasting of the forbidden fruit of knowledge.” (Peters 15)

Information, during this important historical shift, “refers to the possible experience of no body” (Peters 15).  As computers become the direct result of statistics and the formation of modern nation-states, our interpersonal communication becomes increasingly disembodied, replaced by an embodied medium that takes on an ethological quality (Peters 15, Munster 152).  As the Industrial Revolution began to shift tasks toward machines, the mind became further alienated from information.  This human-to-information alienation continued as the computing machine developed.  The Industrial Revolution saw a direct aesthetic mimesis of the human mind in information technology and the development of machines.   Vannevar Bush presciently saw the scientific application of the computing machine as a means of collecting, storing, and sharing the human experience in ways that would significantly advance the progress of society and nation-states.   To Bush, with logic, math, databasing, and codes, the “memex” machine would act as “an enlarged intimate supplement to his [man’s] memory” (45). At the same time, computer information removed itself from all aspects of sensory perception or feeling, becoming merely “a network with discrete interconnected nodes” (Murray 9). Such anthropomorphic attribution to a machine raises the question brought forth by writers like Munster, Manovich, and Vesna: who are we in light of a newly embodied medium, and how do we relate to it?

Licklider laid claim to an interdependent relationship between man and machine, or symbiosis, in which one cannot function without the other (74).  This “symbiotic” relationship works toward mutual benefit and utilizes the strengths of each.  A computer will perform the complicated algorithms and logic that would take days for a human to solve, and a human will function as the decision maker, motivated by an end goal (77).   Over time, Licklider predicted, it would become impossible to separate the two – a prediction that does not seem at all inaccurate as I type on my computer for my online course before e-mailing my work to my professor.  Other than the obvious (a class using computers to talk about interactive technology), would the class enrollment be the same were the online option not available to the students, some of whom are located all across the country?

While the capacity of this network of information grew, as Licklider and Bush predicted, computers began to expand out from science and technology toward the humanities and the arts (Murray 10).  Engagement from these communities has led to both design- and interaction-oriented programs that reflect the aesthetics of both old and new mediums – programs that have made the transition between forms more acceptable and worth examining through aesthetics.

When reflecting on the Visible Human Project, Vesna sees how technology can significantly improve the understanding of human anatomy.   But she also sees how this emphasis on “information” gathering further disembodies us from the sensual aesthetic of feeling and understanding our own bodies (7).   Humans are no longer feeling beings with consciousness, but stored and sorted information, in the modern computerized sense that Peters refers to and that Bush and Licklider advocate.  Everything that identifies “me” is codified, stored, accessed, and detached from emotional human experience.

The human relationship to information and knowledge has shifted from a curated and subjective narrative (like a museum whose contents are stored in the ether) toward one that erodes form into a more basic, objective, and less connected paradigm (Vesna 29, Manovich 49).  Like the empiricists and statisticians before them, computers seek ways to make order and meaning out of the vast amounts of information readily accessible to human thought – a place where the interactive aesthetic of the computer becomes extremely significant. Aesthetic attempts to build a psychological narrative adjunct to information on the computer can be attributed to human dissatisfaction with the computer’s encyclopedic essence (Manovich 54).

No longer does the Vannevar Bush model of computing for scientific gain hold sway. Enter humanists and artists, who begin to explore and expose the ways in which the psychological and aesthetic relationship between human and computer is formalized.  For example, Munster sees Graham Harwood’s Uncomfortable Proximities as releasing “momentary flashes of astonishment, discomfort, and squeamishness by mobilizing the capacities of digital technologies to forge extreme juxtapositions, unbearable proximities and unspeakable intimacies” (155), thereby challenging the notion of digital mediums as strictly neutral spaces that hold information (157).   Harwood and the makers of other digital art projects, like Carnivore PE, are repoliticizing and rearranging alienated distances between information spaces and their participants (Munster 156).

While Harwood attempts to expose uncomfortable distances between a participant and a digital medium, Krueger seeks to build virtual landscapes that establish working relationships between the participant and the digital piece.   Krueger’s virtual worlds invoke what he describes as “dialogue… a personal amplifier… an environment… a game” and “an experimental parable”, which can be used to improve cognition and learning, to participate in art, and in psychotherapy, among other relationships (386 – 389).   Krueger seeks the aesthetic benefits of interactive digital environments in which “the computer should perceive as much as possible… the participant’s behavior” in order to create a meaningful “responsive environment” (379).  These aesthetic benefits, however, are today more often pursued for commercial rather than artistic purposes, a fact that has prompted critical introspection about creating work with cultural and meaningful implications (378).  Still, artificial intelligence programs like Google’s AdSense show how computing technology’s ability to perceive behavior is not always meaningful or culturally advantageous, promoting only consumer culture and disembodied communication with algorithmic technologies.

Despite the structure of computers privileging database and algorithmic form, humans seek cinematic and narrative landscapes (Manovich 46).  Interactive designers understand this paradigm, creating human–computer interfaces that reflect and mimic human desires and processes.   From the basic word-processing functions of typing an essay to the more “direct manipulation” programming that Shneiderman speaks of, like Photoshop or a computer game, interfaces are constantly revising and rewriting forms that use aesthetics to connect to humans (485). These interface programs employ aesthetic conventions like “presence, engagement, and immersion which facilitate human sensory connection to otherwise invisible information, or information that has minimal sensory qualities” (Fishwick 8).

Fishwick applies three components of aesthetics – modality, culture, and quality – to computers.  In that order, these components mean the “ways in which we interface and interact with objects… specific artists, art movements, and genres”, and finally “qualities such as mimesis, symmetry, complexity, parsimony, minimalism, and beauty” (Fishwick 13).  So, whether it is a concern for the historical foundation of the term “information”, as Peters examines; the ways in which artists use the digital medium as influenced by art movements, as Vesna, Manovich, and Munster describe; or the forms with which interactive technologies mimic or relate to humans, as Krueger saw in virtual environments – each examines these relationships to new media through a historical aesthetic lens.   The humanists and artists in the new media sphere are beginning to examine the world that technology-minded thinkers like Bush, Licklider, and Krueger predicted.  In doing so, new mediums and aesthetics are being formed, and classical concepts like embodiment, form, information, art, and beauty are being transformed.

Works Cited

“Cogito ergo sum.” Wikipedia, The Free Encyclopedia. 25 Jun 2009, 05:27 UTC. Accessed 25 Jun 2009.

Bush, Vannevar. “As We May Think.” The New Media Reader. Noah Wardrip-Fruin and Nick Montfort, eds. Cambridge, MA: MIT Press, 2003. 35–47.

Fishwick, Paul. “An Introduction to Aesthetic Computing.”

Krueger, Myron W. “Responsive Environments.” The New Media Reader. Noah Wardrip-Fruin and Nick Montfort, eds. Cambridge, MA: MIT Press, 2003. 379–389.

Licklider, J.C.R. “Man-Computer Symbiosis.” The New Media Reader. Noah Wardrip-Fruin and Nick Montfort, eds. Cambridge, MA: MIT Press, 2003. 73–82.

Manovich, Lev. “Database as Symbolic Form.” Database Aesthetics: Art in the Age of Information Overflow. Victoria Vesna, ed. Minneapolis: University of Minnesota Press, 2007.

Munster, Anna. Materializing New Media: Embodiment in Information Aesthetics. Hanover, NH: Dartmouth College Press, 2006.

Peters, John Durham. “Information: Notes Towards a Critical History.” Journal of Communication Inquiry 12 (1988): 9–23.

Shneiderman, Ben. “Direct Manipulation: A Step Beyond Programming Languages.” The New Media Reader. Noah Wardrip-Fruin and Nick Montfort, eds. Cambridge, MA: MIT Press, 2003. 485–498.

Vesna, Victoria. “Seeing the World in a Grain of Sand.” Database Aesthetics: Art in the Age of Information Overflow. Victoria Vesna, ed. Minneapolis: University of Minnesota Press, 2007.

As designed by Jeremy Bentham in 1791

If one examines the modern popular press’ position on the effects of current technology on knowledge (or intelligence), one will find an ontological struggle occurring between two dominant systems of thinking – that of the technocrats, technofundamentalists, or cyber-utopianists, whose belief systems rely primarily on scientific and mathematical inquiry, and that of post-Marxist critical analysts, who examine the historical, economic, and political implications of technology.  Most recently this debate was popularized in an article in The Atlantic Monthly, “Is Google Making us Stupid?”, in which writer Nicholas Carr bemoaned the positivist thinking that a search engine like Google puts knowledge at our fingertips.  In reaction to this popular article, WIRED magazine released a response bemoaning the bemoaners as “reflexive anti-intellectualism,” “cancerous irrationalism,” and “moronic” (Wolman, ¶10).  Clearly, Carr’s article struck a chord with the self-described ‘egghead’ rag, and it opens a great debate not over who is right, but over how each position reflects a metanarrative about knowledge, power, and their relation to technology, as examined by writers like Lyotard, Foucault, Hayles, Benjamin, and so on. The debate over the Google search engine’s long-term effects on society is also a debate over who wields power over knowledge and what that power is beginning to look like in a post-Industrial Internet society.

According to Carr, the Internet alters the ways we read and think. The economics and architecture of the Internet mean we are constantly performing quick scans rather than in-depth readings.  He cites psychologist Maryanne Wolf, who says, ‘we are not only what we read… we are how we read’, and who is concerned about the Internet’s preference for quick and efficient information gathering at the cost of deep textual analysis and attention span. Carr cites a neuroscientist who claims our brains can adapt and ‘reprogram’ themselves quickly, so that they come to think like our technologies – what sociologist Daniel Bell calls ‘intellectual technologies’.   According to Lyotard, “technical devices originated as prosthetic aids for the human organs or as physiological systems whose function it is to receive data or condition the context” (44).   Hayles states in How We Became Posthuman, “We become the codes we punch… the computer molds the human even as the human builds the computer” (46 – 7).   Carr echoes this prosthetic-brain question when quoting the founders of Google in a 2004 interview: ‘certainly if you had all the world’s information directly attached to your brain, or an artificial brain that was smarter than your brain, you’d be better off’ (¶27).  Clearly, such a belief system is endemic to Internet positivists like Google’s founders and at the heart of Carr’s ontological query.

But what of the effects of this prosthetic brain on our own brains and thought processes, Carr asks.  In what ways does surrendering personal inquiry and exploration to the modern conveniences of the Internet’s vast network of search engines surrender our own forms of autonomy?  According to Walter Benjamin, “the representation of human beings by means of an apparatus has made possible a highly productive use of the human being’s self-alienation” (32).  Such shifts in the way we think are not a new phenomenon, says Carr.  When humans began using the clock on a wide scale, we began to change our internal habits of eating and sleeping around the times of the clock (¶15).  Lewis Mumford states in Technics and Civilization that the clock under monastic obligation gave “human enterprise the regular collective beat and rhythm of the machine; for the clock is not merely a means of keeping track of the hours, but of synchronizing the actions of men” (14).  Carr echoes McLuhan’s “the medium is the message”, but not in a positivist “global village” light.  Rather, Carr sees the Internet medium as an absorber of most media, recast in the likeness of its own image: full of distracting hyperlinks, ad banners, and “other digital gewgaws” that dilute our attention and concentration (¶19).   These distractions, argues Carr, inhibit our personal performance for the benefit of the machine’s performance, and are inherent in the long history of industrial manufacturing.  In the case of the Internet and Google’s search engines, it is our minds that are up for grabs on the auction block.

Carr sees a direct relationship between Frederick Winslow Taylor’s principles of scientific management – which Taylor believed could “bring about a restructuring not only of industry but of society, creating a utopia of perfect efficiency” – and the current methodologies around Internet technology (¶23).  Such models of efficiency in production and industry are well documented and theorized.  Lyotard examines these models of “optimal performance”, where one is “maximizing output (the information or modifications obtained) and minimizing input (the energy expended in the process)” (44).  Within Lyotard’s framework of “language games” (40), in which many ways of denoting and connoting “proof” emerge, truth is replaced by the provision of efficient methods; in such a shift, technology becomes a performative measure and is therefore deemed ‘good’ (44).  When optimal speed of performance is deemed positive and is historically entrenched in modern industrial society, it is easy to see how such an argument supports Wolman’s thesis in WIRED’s “The Critics Need a Reboot”.

David Wolman is a regular contributing writer for WIRED, a magazine dedicated to popular technological and computer innovation.  In his response to Carr’s article, Wolman’s vitriolic disdain for Carr’s concerns clearly draws the line in the sand between those who praise and those who question the Internet’s benefits. By arguing for a “collective brainpower” inherent in wikis and the like, Wolman engages in what Lyotard calls “sociopolitical legitimacy,” in which the people’s opinion creates consensus and eventually “proofs” (¶6, 30).  Clearly favoring scientific and mathematical inquiry over other kinds, Wolman concludes, “we need… a renewed commitment to reason and scientific rigor so that people can distinguish knowledge from garbage” (¶7).  Assuming the “garbage” Wolman refers to is Carr’s essay, or the essay’s concern about knowledge, the ontological seeds have been planted: knowledge and what it entails remains up for debate.  It also leaves open the implications of the “collective brainpower” that Wolman invokes.  Do these systems of knowledge transmission reflect systems of power and control, or is the Internet as we know it truly a semiotic democracy?

Foucault examines how the classical age brought forth the objectification of the body as a target of power (136).  Carr’s argument raises the question of whether the Internet Age’s target of power and objectification is the mind.   As an object of control, the mind is then reduced to an economy, “an efficiency of movements” (Foucault 137).  Carr points to this tension between the economics of the mind and that of the Internet.  He argues, “The faster we surf across the Web — the more links we click and pages we view — the more opportunities Google and other companies gain to collect information about us and to feed us advertisements” (¶31).

Lyotard echoes this economic shift in learning systems when he prophetically claims that a computerized society will affect the ways we learn just as modern innovations in transportation and media technologies did (4).   Knowledge, as an economic commodity system, “has become the principal force of production” and “will continue to be, a major – perhaps the major – stake in the worldwide competition for power” (5).   The relationship between knowledge and power is then “indispensable” in relation to systems of production.  Google’s ability to increase the production of knowledge through scientific and mathematical experimentation and algorithms creates a system of power, control, and knowledge in the image of its creators.   Carr, rather than asking “does Google make us stupid?”, may want to ask a version of what Lyotard asks: does Google “decide what knowledge is and who knows what needs to be decided?” (8).

If one sees the Internet as a mechanics of control and power relationships, then Foucault’s examination of the disciplinary and disempowering effects of isolation on humans within a panoptic institution applies.  Arguably, if Foucault were alive today, he would likely claim the Internet is a system of control and discipline perfected.  As a perfect disciplinary tool, the Internet – with Google’s control over much of its content – is a hyperextension of the industrial factory: a partitioned and analytical space, cellular, functional, and coded, one which “might isolate and map” individuals (143 – 4).  Its machinery’s aim is “to establish presences and absences, to know where and how to locate individuals, to set up useful communications, to interrupt others, to be able at each moment to supervise the conduct of each individual, to assess it, to judge it, to calculate its qualities” (143).  Such a prescient thought is clearly exemplified in today’s Internet, where Google’s behavioral algorithms and GPS and ISP tracking of one’s location follow our every search move while we privately “work” in our homes on our personal computers.

While the computer has been portrayed as a tool of leisure, the ability of companies like Google to track our every move when we use their browser essentially lends our movements as free labor input into their system of behavior analysis, which in turn shapes our spaces. Google has the capacity to reduce the complexity of the vast network of information through its search filters while improving upon itself by better understanding “the adaptation of individual aspirations to its own ends”: the mutual benefit of increased power for Google’s searching capacities and the individual’s increased speed in accessing the information they seek, which increases performativity and reduces the time it takes Google to gain access to our browsing information. The speed at which the Internet operates “is a power component of the system” (Lyotard 61).

“It was more the desire for wealth than the desire for knowledge that initially forced upon technology the imperative of performance improvement and product realization” (Lyotard 45).

“Power is not only good performativity, but also effective verification and good verdicts. It is self-legitimating, in the same way a system organized around performance maximization seems to be… The performativity of an utterance… increases proportionally to the amount of information about its referent one has at one’s disposal.  Thus the growth of power, and its self-legitimation, are now taking the route of data storage and accessibility, and the operativity of information” (47).

Google’s paid advertisements invoke a preferential system based on the economics of wealth and control.  So, while the scientific algorithms attempt to distribute information fairly via complex scientific and mathematical systems, the arguments in their defense become more of a “game of scientific language… of the rich”, in which “an equation between wealth, efficiency, and truth is thus established” (Lyotard 45).  Furthermore, Google’s science and its corporate control through paid ad space benefit each other when “a portion of the sale is recycled into a research fund dedicated to further performance improvement.  It is at this precise moment that science becomes a force of production, in other words, a moment in the circulation of capital” (45). In turn, as the production of thought and ideas becomes more entrenched in Google’s research and experimentation in artificial intelligence through behavioral algorithms, a permanent link is created between our thought and the political-economic agendas of those who wield financial and informational control over our computing systems.

“The perfect disciplinary apparatus would make it possible for a single gaze to see everything constantly” – Foucault, Discipline & Punish, p. 173

The Internet then becomes a perfect training tool.  Whether we like it or not, it has absorbed all media, as Carr has explained, and in many instances we are forced to rely on its functions.  In using the Internet as a primary tool, we need to train ourselves in its mechanics and all its details.  As Carr and many media theorists have stated, tools of thought do not come naturally to humans.  Dating as far back as writing, reading, and typing, humans have had to learn how to interpret symbols into distinct thought processes and forms.  The same learning is necessary in order to navigate the Internet.

In our training, we engage in a complex panoptic institution of hierarchical observation, one with “eyes that must see without being seen; using techniques of subjection and methods of exploitation” and, in effect, “a new knowledge of man” (Foucault 171).  The architecture of the Internet “render[s] visible those who are inside it… transform[s] individuals…, to carry the effects of power right to them, to make it possible to know them, to alter them” (Foucault 172).  Through complex networks of information and socialization, and the capacity of the companies that provide these systems to monitor this behavior, we can be altered and, in effect, conditioned.   In Foucault’s perspective, such disciplinary tools have traditionally resulted in obedient and moral citizens – “a political utopia” (174).  As subjects of the Internet, knowing that we are constantly monitored by these systems in turn regulates our behavior and subjects us to certain norms (Foucault 187).  Such power, though, is anonymous rather than literal.  It is hard to argue concretely, without sounding in some way anti-technological, that Google’s intentions are purely antagonistic; but nor is it hard to argue that Google is, by sheer numbers, “the Internet’s high church” (Carr ¶25) and therefore must be examined as an institution of immense power.  In its ability to anonymously survey our browsing habits, “those on whom it is exercised tend to be more strongly individualized” (Foucault 193).   Such individualization is clear when we examine the vast economies of personal computers and social networks whose very titles carry layers of meaning: “MySpace,” “YouTube,” “iTunes,” “Wii,” and so on. According to Lyotard, such “administrative procedures should make individuals ‘want’ what the system needs in order to perform well” (62).  By giving us what we “want” through individualization via isolated and controlled environments, we are supporting the performance and economies of the Internet.  We are choice-laden actors within these environments, and in choosing them, we “assume responsibility for the constraints of power, inscribe in [ourselves] the power relation in which [we] simultaneously play both roles; [we] become the principle of [our] own subjection” (Foucault 203).

This is the nature of the Panopticon, of which the Internet is the perfect contemporary example, because it reduces the number of those who watch and increases the number of those who are watched, leading to “‘power of mind over mind’” (Foucault 206).  Within a panoptic Internet system, our autonomy is both prescribed to us and simultaneously taken away and abstracted, enacting what Lyotard calls “a vanguard machine dragging humanity after it, dehumanizing it in order to rehumanize it at a different level of normative capacity” (63).

Lyotard’s postmodern reflection on knowledge provides a silver lining to Foucault’s historical and structural analysis of institutional systems of power and control.  His accounts of rehumanization, logical positivism, entropy, metanarratives, language games, and paralogy provide perspectives on how computerized systems can function to humanize and support knowledge within a postindustrial society.   Negentropy is the idea that performance is stable and predictable if all variables are known and follow patterns of logic, physics, and mechanics – an argument thus far supported by Foucault and Carr.  However, Brillouin argues that “perfect control over a system, which is supposed to improve its performance, is inconsistent with respect to the law of contradiction: it in fact lowers the performance level it claims to raise” (55).  This can mean two things in relation to the Internet and Google: 1) supporting Carr’s argument, the high performativity of the Internet lowers the performativity of that which it claims to raise (human intelligence); or 2) supporting Wolman’s argument, the Internet will never perfectly control knowledge or intelligence, will not entirely replace current media of knowledge like books, but could perhaps (in a contradictory fashion – think DIY) renew old systems of media.

Says Lyotard in conclusion:

“The line to follow for computerization to take…is quite simple: give the public free access to the memory and data banks.  Language games would then be games of perfect information at any given moment.  But they would also be non-zero-sum games, and by virtue of that fact discussion would never risk fixating in a position of minimax equilibrium because it had exhausted its stakes.  For the stakes would be knowledge (or information)… and the reserve of knowledge – language’s reserve of possible utterances – is inexhaustible” (67).

If the Internet is a disciplinary tool of perfected observation of which we are clearly aware, then the mind is undergoing, in a sense, self-inflicted containment and imprisonment, subject to the panopticism of normalized technologies – the Internet, personal computers, Google’s search engines, social networks, and so on.  Rather than arguing whether such technologies’ effects on our intelligence are “good” or “bad”, Lyotard and Foucault allow us to examine the nature and history of normalized machines meant to support the functions of everyday life, the economies of thought production, and the discourses of institutions and their subjects’ and audiences’ relations to them.

Rather than questioning his own intellectual freedom or autonomy in challenging the positivist claims about the “good” the Internet brings us, Carr appeals to grand narratives and key players in the history of thought and technology – Aristotle, Socrates, Nietzsche, Gutenberg – to legitimate his argument.  By accepting the “good” of Google’s artificial intelligence, as Wolman does, whereby we allow the machine to think for us in order to gain more individualized attention via algorithms, search surveillance, social networks, and the like, we must ask ourselves not “Does Google make us stupid?” but whether Google renegotiates the right to free human inquiry for the purposes of scientific efficiency and the general good of the state – in effect, “Does Google (re)Make us?”


Benjamin, Walter.  The Work of Art in the Age of Its Technological Reproducibility, and Other Writings on Media. Cambridge, MA: Harvard University Press, 2008.

Carr, Nicholas. “Is Google Making Us Stupid?” The Atlantic Monthly Jul/Aug 2008: v. 302, no. 1, 56-8, 60, 62-3.

Hayles, N. Katherine. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago, IL: University of Chicago Press, ??. (from Course Packet)

Foucault, Michel. Discipline & Punish: The Birth of the Prison.  New York: Vintage Books, 1979.

Lyotard, Jean-Francois.  The Postmodern Condition: A Report on Knowledge. Minneapolis, MN: University of Minnesota Press, 1979.

Wolman, David. “The Critics Need a Reboot. The Internet Hasn’t Led Us into the New Dark Age.” WIRED Magazine, 18 Aug. 2008. Accessed 15 Dec. 2008.