Finding A Place For Thought

Yesterday, I wrote briefly about how difficult it can be to find a place for thought when our attention, in both its mental and emotional dimensions, is set aimlessly adrift on the currents of digital media. Digital media, in fact, amounts to an environment that is inhospitable and, indeed, overtly hostile to thought.

Many within the tech industry are coming to a belated sense of responsibility for this world they helped fashion. A recent article in the Guardian tells their story. They include Justin Rosenstein, who helped design the “Like” button for Facebook but now realizes that it is common “for humans to develop things with the best of intentions and for them to have unintended, negative consequences” and James Williams, who worked on analytics for Google but who experienced an epiphany “when he noticed he was surrounded by technology that was inhibiting him from concentrating on the things he wanted to focus on.”

Better late than never, one might say; or perhaps it is too late. As usual, there is a bit of ancient wisdom that speaks to the situation; in this case, the story of Pandora’s Box comes to mind. Nonetheless, when so many in the industry seem bent on evading responsibility for the consequences of their work, it is mildly refreshing to read about some who are at least willing to own those consequences and who are even striving to somehow make amends.

It is telling, though, that, as the article observes, “These refuseniks are rarely founders or chief executives, who have little incentive to deviate from the mantra that their companies are making the world a better place. Instead, they tend to have worked a rung or two down the corporate ladder: designers, engineers and product managers who, like Rosenstein, several years ago put in place the building blocks of a digital world from which they are now trying to disentangle themselves.”

Tristan Harris, formerly at Google, has been especially pointed in his criticism of the tech industry’s penchant for addictive design. Perhaps the most instructive part of Harris’s story is how he experienced a promotion to an ethics position within Google as, in effect, a marginalization and silencing.

(It is also edifying to consider the steady drumbeat of stories about how tech executives stringently monitor and limit the access their own children have to devices and the Internet, and about why they send their children to expensive low-tech schools.)

Informed as my own thinking has been by the work of Hannah Arendt, I see this hostility to thought as a serious threat to our society. Arendt believed that thinking was somehow intimately related to our moral judgment and an inability to think a gateway to grave evils. Of course, it was a particular kind of thinking that Arendt had in mind–thinking, one might say, for thinking’s sake. Or, thinking that was devoid of instrumentality.

Writing in Aeon recently, Jennifer Stitt drew on Arendt to argue for the importance of solitude for thought and thought for conscience and conscience for politics. As Stitt notes, Arendt believed that “living together with others begins with living together with oneself.” Here is Stitt’s concluding paragraph:

But, Arendt reminds us, if we lose our capacity for solitude, our ability to be alone with ourselves, then we lose our very ability to think. We risk getting caught up in the crowd. We risk being ‘swept away’, as she put it, ‘by what everybody else does and believes in’ – no longer able, in the cage of thoughtless conformity, to distinguish ‘right from wrong, beautiful from ugly’. Solitude is not only a state of mind essential to the development of an individual’s consciousness – and conscience – but also a practice that prepares one for participation in social and political life.

Solitude, then, is at least one practice that can help create a place for thought.

Paradoxically, in a connected world it is challenging to find either solitude or companionship. If we submit to a regime of constant connectivity, we end up with hybrid versions of both, versions which fail to yield their full satisfactions.

Additionally, as someone who works one and a half jobs and is also raising a toddler and an infant, I understand how hard it can be to find anything approaching solitude. In a real sense it is a luxury, but it is a necessary luxury, and if the world won’t offer it freely, then we must fight for it as best we can.

There was one thing left in Pandora’s Box after all the evils had flown irreversibly into the world: it was hope.

Resisting the Habits of the Algorithmic Mind

Algorithms, we are told, “rule our world.” They are ubiquitous. They lurk in the shadows, shaping our lives without our consent. They may revoke your driver’s license, determine whether you get your next job, or cause the stock market to crash. More worrisome still, they can also be the arbiters of lethal violence. No wonder one scholar has dubbed 2015 “the year we get creeped out by algorithms.” While some worry about the power of algorithms, others think we are in danger of overstating their significance or misunderstanding their nature. Some have even complained that we are treating algorithms like gods whose fickle, inscrutable wills control our destinies.

Clearly, it’s important that we grapple with the power of algorithms, real and imagined, but where do we start? It might help to disambiguate a few related concepts that tend to get lumped together when the word algorithm (or the phrase “Big Data”) functions more as a master metaphor than a concrete noun. I would suggest that we distinguish at least three realities: data, algorithms, and devices. Through the use of our devices we generate massive amounts of data, which would be useless were it not for analytical tools, algorithms prominent among them. It may be useful to consider each of these separately; at the very least, we should be mindful of the distinctions.

We should also pay some attention to the language we use to identify and understand algorithms. As Ian Bogost has forcefully argued, we should certainly avoid implicitly deifying algorithms by how we talk about them. But even some of our more mundane metaphors are not without their own difficulties. In a series of posts at The Infernal Machine, Kevin Hamilton considers the implications of the popular “black box” metaphor and how it encourages us to think about and respond to algorithms.

The black box metaphor tries to get at the opacity of algorithmic processes. Inputs are transformed into outputs, but most of us have no idea how the transformation was effected. More concretely, you may have been denied a loan or job based on the determinations of a program running an algorithm, but how exactly that determination was made remains a mystery.
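To make the metaphor concrete, here is a toy sketch, in Python, of how such a determination might look from the applicant’s side. Every detail of it, the feature names, the weights, the threshold, is invented for illustration; the point is only that the person supplies inputs and receives an output while the transformation in between remains hidden from them.

```python
# A toy illustration of the "black box" metaphor: the applicant sees only
# the input (their data) and the output (a yes/no decision). The weights,
# caps, and threshold inside the function stand in for the hidden internals.

def opaque_loan_decision(applicant):
    # Hidden internals: the applicant never sees any of this.
    weights = {"income": 0.4, "credit_score": 0.5, "years_employed": 0.1}
    caps = {"income": 200_000, "credit_score": 850, "years_employed": 40}
    threshold = 0.6
    # Normalize each feature against an arbitrary hidden cap, then
    # combine into a single weighted score.
    score = sum(
        weights[k] * min(applicant[k] / caps[k], 1.0) for k in weights
    )
    return "approved" if score >= threshold else "denied"

# From the outside, only this input-to-output mapping is visible:
print(opaque_loan_decision(
    {"income": 85_000, "credit_score": 710, "years_employed": 3}
))  # prints "denied"
```

Nothing in the output tells the applicant which feature sank them or how close they came to the threshold, which is precisely the opacity the metaphor tries to name.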

In his discussion of the black box metaphor, Hamilton invites us to consider the following scenario:

“Let’s imagine a Facebook user who is not yet aware of the algorithm at work in her social media platform. The process by which her content appears in others’ feeds, or by which others’ material appears in her own, is opaque to her. Approaching that process as a black box, might well situate our naive user as akin to the Taylorist laborer of the pre-computer, pre-war era. Prior to awareness, she blindly accepts input and provides output in the manufacture of Facebook’s product. Upon learning of the algorithm, she experiences the platform’s process as newly mediated. Like the post-war user, she now imagines herself outside the system, or strives to be so. She tweaks settings, probes to see what she has missed, alters activity to test effectiveness. She grasps at a newly-found potential to stand outside this system, to command it. We have a tendency to declare this a discovery of agency—a revelation even.”

But how effective is this new way of approaching her engagement with Facebook, now informed by the black box metaphor? Hamilton thinks “this grasp toward agency is also the beginning of a new system.” “Tweaking to account for black-boxed algorithmic processes,” Hamilton suggests, “could become a new form of labor, one that might then inevitably find description by some as its own black box, and one to escape.” Ultimately, Hamilton concludes, “most of us are stuck in an ‘opt-in or opt-out’ scenario that never goes anywhere.”

If I read him correctly, Hamilton is describing an escalating, never-ending battle to achieve a variety of desired outcomes in relation to the algorithmic system, all of which involve securing some kind of independence from the system, which we now understand as something standing apart and against us. One of those outcomes may be understood as the state Evan Selinger and Woodrow Hartzog have called obscurity, “the idea that when information is hard to obtain or understand, it is, to some degree, safe.” “Obscurity,” in their view, “is a protective state that can further a number of goals, such as autonomy, self-fulfillment, socialization, and relative freedom from the abuse of power.”

Another desired outcome that fuels resistance to black box algorithms involves what we might sum up as the quest for authenticity. Whatever relative success algorithms achieve in predicting our likes and dislikes, our actions, our desires–such successes are often experienced as an affront to our individuality and autonomy. Ironically, the resulting battle against the algorithm often secures its relative victory by fostering what Frank Pasquale has called the algorithmic self, constantly modulating itself in response/reaction to the algorithms it encounters.

More recently, Quinn Norton expressed similar concerns from a slightly different angle: “Your internet experience isn’t the main result of algorithms built on surveillance data; you are. Humans are beautifully plastic, endlessly adaptable, and over time advertisers can use that fact to make you into whatever they were hired to make you be.”

Algorithms and the Banality of Evil

These concerns about privacy or obscurity on the one hand and agency or authenticity on the other are far from insignificant. Moving forward, though, I will propose another approach to the challenges posed by algorithmic culture, and I’ll do so with a little help from Joseph Conrad and Hannah Arendt.

In Conrad’s Heart of Darkness, as the narrator, Marlow, makes his way down the western coast of Africa toward the mouth of the Congo River in the service of a Belgian trading company, he spots a warship anchored not far from shore: “There wasn’t even a shed there,” he remembers, “and she was shelling the bush.”

“In the empty immensity of earth, sky, and water,” he goes on, “there she was, incomprehensible, firing into a continent …. and nothing happened. Nothing could happen.” “There was a touch of insanity in the proceeding,” he concluded. This curious and disturbing sight is the first of three such cases encountered by Marlow in quick succession.

Not long after he arrived at the Company’s station, Marlow heard a loud horn and then saw natives scurry away just before witnessing an explosion on the mountainside: “No change appeared on the face of the rock. They were building a railway. The cliff was not in the way of anything; but this objectless blasting was all the work that was going on.”

These two instances of seemingly absurd, arbitrary action are followed by a third. Walking along the station’s grounds, Marlow “avoided a vast artificial hole somebody had been digging on the slope, the purpose of which I found it impossible to divine.” As they say: two is a coincidence; three’s a pattern.

Nestled among these cases of mindless, meaningless action, we encounter as well another kind of related thoughtlessness. The seemingly aimless shelling he witnessed at sea, Marlow is assured, targeted an unseen camp of natives. Registering the incongruity, Marlow exclaims, “he called them enemies!” Later, Marlow recalls the shelling off the coastline when he observed the natives scampering clear of each blast on the mountainside: “but these men could by no stretch of the imagination be called enemies. They were called criminals, and the outraged law, like the bursting shells, had come to them, an insoluble mystery from the sea.”

Taken together these incidents convey a principle: thoughtlessness couples with ideology to abet violent oppression. We’ll come back to that principle in a moment, but, before doing so, consider two more passages from the novel. Just before that third case of mindless action, Marlow reflected on the peculiar nature of the evil he was encountering:

“I’ve seen the devil of violence, and the devil of greed, and the devil of hot desire; but, by all the stars! these were strong, lusty, red-eyed devils, that swayed and drove men–men, I tell you. But as I stood on this hillside, I foresaw that in the blinding sunshine of that land I would become acquainted with a flabby, pretending, weak-eyed devil of rapacious and pitiless folly.”

Finally, although more illustrations could be adduced, after an exchange with an insipid, chatty company functionary, who is also an acolyte of Mr. Kurtz, Marlow had this to say: “I let him run on, the papier-mâché Mephistopheles, and it seemed to me that if I tried I could poke my forefinger through him, and would find nothing inside but a little loose dirt, maybe.”

That sentence, to my mind, most readily explains why T.S. Eliot chose as an epigraph for his 1925 poem, “The Hollow Men,” a line from Heart of Darkness: “Mistah Kurtz – he dead.” This is likely an idiosyncratic reading, so take it with the requisite grain of salt, but I take Conrad’s papier-mâché Mephistopheles to be of a piece with Eliot’s hollow men, who having died are remembered:

“Not as lost
Violent souls, but only
As the hollow men
The stuffed men.”

For his part, Conrad understood that these hollow men, these flabby devils were still capable of immense mischief. Within the world as it is administered by the Company, there is a great deal of doing but very little thinking or understanding. Under these circumstances, men are characterized by a thoroughgoing superficiality that renders them willing, if not altogether motivated participants in the Company’s depredations. Conrad, in fact, seems to have intuited the peculiar dangers posed by bureaucratic anomie and anticipated something like what Hannah Arendt later sought to capture in her (in)famous formulation, “the banality of evil.”

If you are familiar with the concept of the banality of evil, you know that Arendt conceived of it as a way of characterizing the kind of evil embodied by Adolf Eichmann, a leading architect of the Holocaust, and you may now be wondering if I’m preparing to argue that algorithms will somehow facilitate another mass extermination of human beings.

Not exactly. I am circumspectly suggesting that the habits of the algorithmic mind are not altogether unlike the habits of the bureaucratic mind. (Adam Elkus makes a similar correlation here, but I think I’m aiming at a slightly different target.) Both are characterized by an unthinking automaticity, a narrowness of focus, and a refusal of responsibility that yields the superficiality or hollowness Conrad, Eliot, and Arendt all seem to be describing, each in their own way. And this superficiality or hollowness is too easily filled with mischief and cruelty.

While Eichmann in Jerusalem is mostly remembered for that one phrase (and also for the controversy the book engendered), “the banality of evil” appears, by my count, only once in the book. Arendt later regretted using the phrase, and it has been widely misunderstood. Nonetheless, I think there is some value to it, or at least to the condition that it sought to elucidate. Happily, Arendt returned to the theme in a later, unfinished work, The Life of the Mind.

Eichmann’s trial continued to haunt Arendt. In the Introduction, Arendt explained that the impetus for the lectures that would become The Life of the Mind stemmed from the Eichmann trial. She admits that in referring to the banality of evil she “held no thesis or doctrine,” but she now returns to the nature of evil embodied by Eichmann in a renewed attempt to understand it: “The deeds were monstrous, but the doer … was quite ordinary, commonplace, and neither demonic nor monstrous.” She might have added: “… if I tried I could poke my forefinger through him, and would find nothing inside but a little loose dirt, maybe.”

There was only one “notable characteristic” that stood out to Arendt: “it was not stupidity but thoughtlessness.” Arendt’s close friend, Mary McCarthy, felt that this word choice was unfortunate. “Inability to think” rather than thoughtlessness, McCarthy believed, was closer to the sense of the German word Gedankenlosigkeit.

Later in the Introduction, Arendt insisted “absence of thought is not stupidity; it can be found in highly intelligent people, and a wicked heart is not its cause; it is probably the other way round, that wickedness may be caused by absence of thought.”

Arendt explained that it was this “absence of thinking–which is so ordinary an experience in our everyday life, where we have hardly the time, let alone the inclination, to stop and think–that awakened my interest.” And it posed a series of related questions that Arendt sought to address:

“Is evil-doing (the sins of omission, as well as the sins of commission) possible in default of not just ‘base motives’ (as the law calls them) but of any motives whatever, of any particular prompting of interest or volition?”

“Might the problem of good and evil, our faculty for telling right from wrong, be connected with our faculty of thought?”

All told, Arendt arrived at this final formulation of the question that drove her inquiry: “Could the activity of thinking as such, the habit of examining whatever happens to come to pass or to attract attention, regardless of results and specific content, could this activity be among the conditions that make men abstain from evil-doing or even actually ‘condition’ them against it?”

It is with these questions in mind–questions, mind you, not answers–that I want to return to the subject with which we began, algorithms.

Outsourcing the Life of the Mind

Momentarily considered apart from data collection and the devices that enable it, algorithms are principally problem-solving tools. They solve problems that ordinarily require cognitive labor–thought, decision making, judgment. It is these very activities–thinking, willing, and judging–that structure Arendt’s work in The Life of the Mind. So, to borrow the language that Evan Selinger has deployed so effectively in his critique of contemporary technology, we might say that algorithms outsource the life of the mind. And, if Arendt is right, this outsourcing of the life of the mind is morally consequential.

The outsourcing problem is at the root of much of our unease with contemporary technology. Machines have always done things for us, and they are increasingly doing things for us and without us. Increasingly, the human element is displaced in favor of faster, more efficient, more durable, cheaper technology. And, increasingly, the displaced human element is the thinking, willing, judging mind. Of course, the party of the concerned is most likely the minority party. Advocates and enthusiasts rejoice at the marginalization or eradication of human labor in its physical, mental, emotional, and moral manifestations. They believe that the elimination of all of this labor will yield freedom, prosperity, and a golden age of leisure. Critics, meanwhile, and I count myself among them, struggle to articulate a compelling and reasonable critique of this scramble to outsource various dimensions of the human experience.

But perhaps we have ignored another dimension of the problem, one that the outsourcing critique itself might, possibly, encourage. Consider this: to say that algorithms are displacing the life of the mind is to unwittingly endorse a terribly impoverished account of the life of the mind. For instance, if I were to argue that the ability to “Google” whatever bit of information we happen to need when we need it leads to an unfortunate “outsourcing” of our memory, it may be that I am already giving up the game because I am implicitly granting that a real equivalence exists between all that is entailed by human memory and the ability to digitally store and access information. A moment’s reflection, of course, will reveal that human remembering involves considerably more than the mere retrieval of discrete bits of data. The outsourcing critique, then, valuable as it is, must also challenge the assumption that the outsourcing occurs without remainder.

Viewed in this light, the problem with outsourcing the life of the mind is that it encourages an impoverished conception of what constitutes the life of the mind in the first place. Outsourcing, then, threatens our ability to think not only because some of our “thinking” will be done for us; it will do so because, if we are not careful, we will be habituated into conceiving of the life of the mind on the model of the problem-solving algorithm. We would thereby surrender the kind of thinking that Arendt sought to describe and defend, thinking that might “condition” us against the varieties of evil that transpire in environments of pervasive thoughtlessness.

In our responses to the concerns raised by algorithmic culture, we tend to ask, What can we do? Perhaps, this is already to miss the point by conceiving of the matter as a problem to be solved by something like a technical solution. Perhaps the most important and powerful response is not an action we take but rather an increased devotion to the life of the mind. The phrase sounds quaint, or, worse, elitist. As Arendt meant it, it was neither. Indeed, Arendt was convinced that if thinking was somehow essential to moral action, it must be accessible to all: “If […] the ability to tell right from wrong should turn out to have anything to do with the ability to think, then we must be able to ‘demand’ its exercise from every sane person, no matter how erudite or ignorant, intelligent or stupid, he may happen to be.”

And how might we pursue the life of the mind? Perhaps the first, modest step in that direction is simply the cultivation of times and spaces for thinking, and perhaps also resisting the urge to check if there is an app for that.


What Do We Think We Are Doing When We Are Thinking?

Over the past few weeks, I’ve drafted about half a dozen posts in my mind that, sadly, I’ve not had the time to write. Among those mental drafts in progress is a response to Evgeny Morozov’s latest essay. The piece is ostensibly a review of Nick Carr’s The Glass Cage, but it’s really a broadside against the whole enterprise of tech criticism (as Morozov sees it). I’m not sure about the other mental drafts, but that is one I’m determined to see through. Look for it in the next few days … maybe.

In the meantime, here’s a quick reaction to a post by Steve Coast that has been making the rounds today.

In “The World Will Only Get Weirder,” Coast opens with some interesting observations about aviation safety. Taking the recent spate of bizarre aviation incidents as his point of departure, Coast argues that rules as a means of managing safety will only get you so far.

The history of aviation safety is the history of rule-making and checklists. Over time, this approach successfully addressed the vast majority of aviation safety issues. Eventually, however, you hit peak rules, as it were, and you enter a byzantine phase of rule making. Here’s the heart of the piece:

“We’ve reached the end of the useful life of that strategy and have hit severely diminishing returns. As illustration, we created rules to make sure people can’t get in to cockpits to kill the pilots and fly the plane in to buildings. That looked like a good rule. But, it’s created the downside that pilots can now lock out their colleagues and fly it in to a mountain instead.

It used to be that rules really helped. Checklists on average were extremely helpful and have saved possibly millions of lives. But with aircraft we’ve reached the point where rules may backfire, like locking cockpit doors. We don’t know how many people have been saved without locking doors since we can’t go back in time and run the experiment again. But we do know we’ve lost 150 people with them.

And so we add more rules, like requiring two people in the cockpit from now on. Who knows what the mental capacity is of the flight attendant that’s now allowed in there with one pilot, or what their motives are. At some point, if we wait long enough, a flight attendant is going to take over an airplane having only to incapacitate one, not two, pilots. And so we’ll add more rules about the type of flight attendant allowed in the cockpit and on and on.”

This struck me as a rather sensible take on the limits of a rule-oriented, essentially bureaucratic approach to problem solving, which is to say the limits of technocracy or technocratic rationality. Limits, incidentally, that apply as well to our increasing dependence on algorithmic automation.

Of course, this is not to say that rule-oriented, bureaucratic reason is useless. Far from it. As a mode of thinking it is, in fact, capable of solving a great number of problems. It is eminently useful, if also profoundly limited.

Problems arise, however, when this one mode of thought crowds out all others, when we can’t even conceive of an alternative.

This dynamic is, I think, illustrated by a curious feature of Coast’s piece. The engaging argument that characterizes the first half or so of the post gives way to a far less cogent and, frankly, troubling attempt at a solution:

“The primary way we as a society deal with this mess is by creating rule-free zones. Free trade zones for economics. Black budgets for military. The internet for intellectual property. Testing areas for drones. Then after all the objectors have died off, integrate the new things in to society.”

So, it would seem, Coast would have us address the limits of rule-oriented, bureaucratic reason by throwing out all rules, at least within certain contexts, until everyone gets on board or dies off. This stark opposition is plausible only if you can’t imagine an alternative mode of thought that might direct your actions. The unspoken premise seems to be that we have only one way of thinking. Given that premise, once that mode of thinking fails, there’s nothing left to do but discard thinking altogether.

As I was working on this post I came across a story on NPR that also illustrates our unfortunately myopic understanding of what counts as thought. The story discusses a recent study that identifies a tendency the researchers labeled “algorithm aversion”:

“In a paper just published in the Journal of Experimental Psychology: General, researchers from the University of Pennsylvania’s Wharton School of Business presented people with decisions like these. Across five experiments, they found that people often chose a human — themselves or someone else — over a model when it came to making predictions, especially after seeing the model make some mistakes. In fact, they did so even when the model made far fewer mistakes than the human. The researchers call the phenomenon ‘algorithm aversion,’ where ‘algorithm’ is intended broadly, to encompass — as they write — ‘any evidence-based forecasting formula or rule.'”

After considering what might account for algorithm aversion, the author, psychology professor Tania Lombrozo, closes with this:

“I’m left wondering how people are thinking of their own decision process if not in algorithmic terms — that is, as some evidence-based forecasting formula or rule. Perhaps the aversion — if it is that — is not to algorithms per se, but to the idea that the outcomes of complex, human processes can be predicted deterministically. Or perhaps people assume that human ‘algorithms’ have access to additional information that they (mistakenly) believe will aid predictions, such as cultural background knowledge about the sorts of people who select different majors, or about the conditions under which someone might do well versus poorly on the GMAT. People may simply think they’re implementing better algorithms than the computer-based alternatives.

So, here’s what I want to know. If this research reflects a preference for ‘human algorithms’ over ‘nonhuman algorithms,’ what is it that makes an algorithm human? And if we don’t conceptualize our own decisions as evidence-based rules of some sort, what exactly do we think they are?”

Maybe it’s just me, but it seems Lombrozo can’t quite imagine how people might understand their own thinking if they are not understanding it on the model of an algorithm.

These two pieces raise a series of questions for me, and I’ll leave you with them:

What is thinking? What do we think we are doing when we are thinking? Can we imagine thinking as something more and other than rule-oriented problem solving or cost/benefit analysis? Have we surrendered our thinking to the controlling power of one master metaphor, the algorithm?

(Spoiler alert: I think the work of Hannah Arendt is of immense help in these matters.)

Reframing Technological Phenomena

I’d not ever heard of Michael Heim until I stumbled upon his 1987 book, Electric Language: A Philosophical Study of Word Processing, at a used book store a few days ago; but, after reading the Introduction, I’m already impressed by the concerns and methodology that inform his analysis.

Yesterday, I passed along his defense of philosophizing about a technology at the time of its appearance. It is at this juncture, he explains, before the technology has been rendered an ordinary feature of our everyday experience, that it is uniquely available to our thinking. And it is with our ability to think about technology that Heim is chiefly concerned in his Introduction. Without too much additional comment on my part, I want to pass along a handful of excerpts that I found especially valuable.

Here is Heim’s discussion of reclaiming phenomena for philosophy. By this I take it that he means learning to think about cultural phenomena, in this case technology, without leaning on the conventional framings of the problem. It is a matter of learning to see the phenomenon for what it is by first unseeing a variety of habitual perspectives.

“By taking over pregiven problems, an illusion is created that cultural phenomena are understood philosophically, while in fact certain narrow conventional assumptions are made about what the problem is and what alternate solutions to it might be. Philosophy is then confused with policy, and the illumination of phenomena is exchanged for argumentation and debate [….] Reclaiming the phenomena for philosophy today means not assuming that a phenomenon has been perceived philosophically unless it has first been transformed thoroughly by reflection; we cannot presume to perceive a phenomenon philosophically if it is merely taken up ready-made as the subject of public debate. We must first transform it thoroughly by a reflection that is remote from partisan political debate and from the controlled rhetoric of electronic media. Nor can we assume we have grasped a phenomenon by merely locating its relationship to our everyday scientific mastery of the world. The impact of cultural phenomena must be taken up and reshaped by speculative theory.”

At one point, Heim offered some rather prescient anticipations of the future of writing and computer technology:

“Writing will increasingly be freed from the constraints of paper-print technology; texts will be stored electronically, and vast amounts of information, including further texts, will be accessible immediately below the electronic surface of a piece of writing. The electronically expanding text will no longer be constrained by paper as the telephone and the microcomputer become more intimately conjoined and even begin to merge. The optical character reader will scan and digitize hard-copy printed texts; the entire tradition of books will be converted into information on disk files that can be accessed instantly by computers. By connecting a small computer to a phone, a professional will be able to read ‘books’ whose footnotes can be expanded into further ‘books’ which in turn open out onto a vast sea of data bases systemizing all of human cognition. The networking of written language will erode the line between private and public writings.”

And a little later on, Heim discusses the manner in which we ordinarily (fail to) apprehend the technologies we rely on to make our way in the world:

“We denizens of the late twentieth century are seldom aware of our being embedded in systematic mechanisms of survival. The instruments providing us with technological power seldom appear directly as we carry out the personal tasks of daily life. Quotidian survival brings us not so much to fear autonomous technological systems as to feel a need to acquire and use them. During most of our lives our tools are not problematic–save that we might at a particular point feel need for or lack of a particular technical solution to solve a specific human problem. Having become part of our daily needs, technological systems seem transparent, opening up a world where we can do more, see more, and achieve more.

Yet on occasion we do transcend this immersion in the technical systems of daily life. When a technological system threatens our physical life or threatens the conditions of planetary life, we then turn to regard the potential agents of harm or hazard. We begin to sense that the mechanisms which previously provided, innocently as it were, the conditions of survival are in fact quasi-autonomous mechanisms possessing their own agency, an agency that can drift from its provenance in human meanings and intentions.”

In these last two excerpts, Heim describes two polarities that tend to frame our thinking about technology.

“In a position above the present, we glimpse hopefully into the future and glance longingly at the past. We see how the world has been transformed by our creative inventions, sensing–more suspecting than certain–that it is we who are changed by the things we make. The ambivalence is resolved when we revert to one or another of two simplistic attitudes: enthusiastic depiction of technological progress or wholesale distress about the effects of a mythical technology.”

And,

“Our relationship to technological innovations tends to be so close that we either identify totally with the new extensions of ourselves–and then remain without the concepts and terms for noticing what we risk in our adaption to a technology–or we react so suspiciously toward the technology that we are later engulfed by the changes without having developed critical countermeasures by which to compensate for the subsequent losses in the life of the psyche.”

Heim practices what he preaches. His book is divided into three major sections: Approaching the Phenomenon, Describing the Phenomenon, and Evaluating the Phenomenon. The three chapters of the first section are “designed to gain some distance,” to shake loose the ready-made assumptions so as to perceive clearly the technological phenomenon in question. And this he does by framing word processing within longstanding trajectories of historical and philosophical inquiry. Only then can the work of description and analysis begin. Finally, this analysis grounds our evaluations. That, it seems to me, is a useful model for our thinking about technology.

(P.S. Frankenstein blogging should resume tomorrow.)

A Thought About Thinking

Several posts in the last few months have touched on the idea of thinking, mostly with reference to the work of Hannah Arendt. “Thinking what we are doing” was a recurring theme in her writing, and it could very easily serve as a slogan, along with the line from McLuhan below the blog’s title, for what I am trying to do here.

Thinking, though, is one of those things that we do naturally, or so we believe, and so it is one of those things for which we have a hard time imagining an alternative mode. Let me try putting that another way. The more “natural” a fact about the world seems to us, the harder it is for us to imagine that it could be otherwise. What’s more, thinking about our own thinking is something like trying to jump over our own shadow, although, unlike that feat, it is not finally impossible.

We all think, if by “thinking” we simply mean our stream of consciousness, our unending internal monologue. But having thoughts does not necessarily amount to thinking. That’s neither a terribly profound observation nor a controversial one. But what, then, does constitute thinking?

Here’s one line of thought in partial response. It’s tempting to associate thinking with “problem solving.” Thinking in these cases takes as its point of departure some problem that needs to be solved. Our thinking then sets out to understand the problem, perhaps by identifying its causes, before proceeding to propose solutions, solutions which usually involve the weighing of pros and cons.

This is the sort of thinking that we tend to prize, and for obvious reasons. When there are problems, we want solutions. We might call this sort of thinking technocratic thinking, or thinking on the model of engineering. By calling it this I don’t intend to disparage it. We need this sort of thinking, no doubt. But if this is the only sort of thinking we do, then we’ve impoverished the category.

But what’s the alternative?

The technocratic mode of thinking makes the assumption that all problems have solutions and all questions have answers. Or, what’s worse, that the only problems worth thinking about are those we can solve and the only questions worth asking are those that we can definitively answer. The corollary temptation is that we begin to look at life merely as a series of problems in search of a solution. We might call this the engineered life.

All of this further assumes that thinking itself is not inherently valuable; it is valuable only as a means to an end: in this case, either the solution or the answer.

We need, instead, to insist on the value of thinking as an end in itself. We might make a start by distinguishing between questions we answer and questions we live with–that is, questions we may never fully answer, but whose contemplation enriches our lives. We may further distinguish between problems we solve and problems we simply inhabit as a condition of being human.

This needs to be further elaborated, but I’ll leave that to your own thinking. I’ll also leave you with another line that has meant a lot to me over the years. It’s taken from a poem by Wendell Berry:

“We live the given life, not the planned.”