Evaluating the Promise of Technological Outsourcing

“It is crucial for a resilient democracy that we better understand how these powerful, ubiquitous websites are changing the way we think, interact and behave.” The websites in question are chiefly Google and Facebook. The admonition to better understand their impact on our thinking and civic deliberations comes from an article in The Guardian by Evan Selinger and Brett Frischmann, “Why it’s dangerous to outsource our critical thinking to computers.”

Selinger and Frischmann are the authors of one of the forthcoming books I am most eagerly anticipating, Being Human in the 21st Century, to be published by Cambridge University Press. I’ve frequently cited Selinger’s outsourcing critique of digital technology (e.g., here and here), which the authors will be expanding and deepening in this study. In short, Selinger has explored how a variety of apps and devices outsource labor that is essential or fundamental to our humanity. It’s an approach that immediately resonated with me, primed as I had been for it by Albert Borgmann’s work. (You can read about Borgmann in the latter link above and here.)

In this case, the crux of Selinger and Frischmann’s critique can be found in these two key paragraphs:

Facebook is now trying to solve a problem it helped create. Yet instead of using its vast resources to promote media literacy, or encouraging users to think critically and identify potential problems with what they read and share, Facebook is relying on developing algorithmic solutions that can rate the trustworthiness of content.

This approach could have detrimental, long-term social consequences. The scale and power with which Facebook operates means the site would effectively be training users to outsource their judgment to a computerised alternative. And it gives even less opportunity to encourage the kind of 21st-century digital skills – such as reflective judgment about how technology is shaping our beliefs and relationships – that we now see to be perilously lacking.

Their concern, then, is that we may be encouraged to outsource an essential skill to a device or application that promises to do the work for us. In this case, the skill we are tempted to outsource is a critical component of a healthy citizenry. As they put it, “Democracies don’t simply depend on well-informed citizens – they require citizens to be capable of exerting thoughtful, independent judgment.”

As I’m sure Selinger and Frischmann would agree, this outsourcing dynamic is one of the dominant features of the emerging techno-social landscape, and we should work hard to understand its consequences.

As some of you may remember, I’m fond of questions. They are excellent tools for thinking, including thinking about the ethical implications of technology. “Questioning is the piety of thought,” Heidegger once claimed in a famous essay about technology. With that in mind I’ll work my way to a few questions we can ask of outsourcing technologies.

My approach will take its point of departure from Marshall McLuhan’s Laws of Media, sometimes called the Four Effects or McLuhan’s tetrad. These four effects were offered by McLuhan as a complement to Aristotle’s Four Causes, and they were presented as a paradigm by which we might evaluate the consequences of both intellectual and material things, ideas and tools.

The four effects were Retrieval, Reversal, Obsolescence, and Enhancement. Here is a series of questions McLuhan and his son, Eric McLuhan, offered to unpack these four effects:

A. “What recurrence or RETRIEVAL of earlier actions and services is brought into play simultaneously by the new form? What older, previously obsolesced ground is brought back and inheres in the new form?”

B. “When pushed to the limits of its potential, the new form will tend to reverse what had been its original characteristics. What is the REVERSAL potential of the new form?”

C. “If some aspect of a situation is enlarged or enhanced, simultaneously the old condition or un-enhanced situation is displaced thereby. What is pushed aside or OBSOLESCED by the new ‘organ’?”

D. “What does the artefact ENHANCE or intensify or make possible or accelerate? This can be asked concerning a wastebasket, a painting, a steamroller, or a zipper, as well as about a proposition in Euclid or a law of physics. It can be asked about any word or phrase in any language.”

These are all useful questions, but for our purposes the focus will be on the third effect, Obsolescence. It’s in this class of effects that I think we can locate what Selinger calls digital outsourcing. I began by introducing all four, however, so that we wouldn’t be tempted to think that displacement or outsourcing is the only dynamic to which we should give our attention.

When McLuhan invites us to ask what a new technology renders obsolete, we may immediately imagine older technologies that are set aside in favor of the new. Following Borgmann, however, we can also frame the question as a matter of human labor or involvement. In other words, it is not only about older tools that we set aside but also about human faculties, skills, and subjective engagement with the world–these, too, can be displaced or outsourced by new tools. The point, of course, is not to avoid every form of technological displacement; this would be impossible and undesirable. Rather, what we need is a better way of thinking about and evaluating these displacements so that we might, when possible, make wise choices about our use of technology.

So we can begin to elaborate McLuhan’s third effect with this question:

1. What kind of labor does the tool/device/app displace? 

This question yields at least five possible responses:

a. Physical labor, the work of the body
b. Cognitive labor, the work of the mind
c. Emotional labor, the work of the heart
d. Ethical labor, the work of the conscience
e. Volitional labor, the work of the will

The schema implied by these five categories is, of course, like all such schemas, too neat. Take it as a heuristic device.

Other questions follow that help clarify the stakes. After all, what we’re after is not only a taxonomy but also a framework for evaluation.

2. What is the specific end or goal at which the displaced labor is aimed?

In other words, what am I trying to accomplish by the use of the technology in question? But the explicit objective I set out to achieve may not be the only effect worth considering; there are implicit effects as well. Some of these implicit effects may be subjective and others may be social; in either case they are not always evident and may, in fact, be difficult to perceive. For example, in using GPS, navigating from Point A to Point B is the explicit objective. However, the use of GPS may also impact my subjective experience of place, and this may carry political implications. So we should also consider a corollary question:

2a. Are there implicit effects associated with the displaced labor?

Consider the work of learning: If the work of learning is ultimately subordinate to becoming a certain kind of person, then it matters very much how we go about learning. This is because the manner in which we go about acquiring knowledge constitutes a kind of practice that, over the long haul, shapes our character and disposition in non-trivial ways. Acquiring knowledge through apprenticeship, for example, shapes people in a certain way; acquiring knowledge through extensive print reading shapes them in another, and through web-based learning in still another. The practice that constitutes our learning, if we are to learn by it, will instill certain habits, virtues, and, potentially, vices: it will shape the kind of person we are becoming.

3. Is the labor we are displacing essential or accidental to the achievement of that goal?

As I’ve written before, when we think of ethical and emotional labor, it’s hard to separate the labor itself from the good that is sought or the end that is pursued. For example, someone who pays another person to perform acts of charity on their behalf has undermined part of what might make such acts virtuous. An objective outcome may have been achieved, but at the expense of the subjective experience that would constitute the action as ethically virtuous.

A related question arises when we remember the implicit effects we discussed above:

3a. Is the labor essential or accidental to the implicit effects associated with the displaced labor?

4. What skills are sustained by the labor being displaced? 

4a. Are these skills valuable for their own sake and/or transferable to other domains?

These two questions seem more straightforward, so I will say less about them. The key point is essentially the one made by Selinger and Frischmann in the article with which we began: the kind of critical thinking that democracies require of their citizens should be actively cultivated. Outsourcing that work to an algorithm may, in fact, weaken the very skill it seeks to support.
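
Since these questions function as a checklist, it may help some readers to see them rendered as one. What follows is a minimal sketch in Python, offered strictly as a heuristic device of my own; the class, field, and flag names are illustrative inventions, not anything McLuhan, Selinger, or Frischmann proposed.

    from dataclasses import dataclass
    from enum import Enum

    class Labor(Enum):
        """The five kinds of labor a technology might displace (question 1)."""
        PHYSICAL = "body"
        COGNITIVE = "mind"
        EMOTIONAL = "heart"
        ETHICAL = "conscience"
        VOLITIONAL = "will"

    @dataclass
    class DisplacementAudit:
        """One pass through questions 1 through 4a for a given tool."""
        tool: str
        labor_displaced: set                # question 1
        explicit_goal: str                  # question 2
        implicit_effects: list              # question 2a
        labor_essential_to_goal: bool       # question 3
        labor_essential_to_implicit: bool   # question 3a
        skills_sustained: list              # question 4
        skills_transferable: bool           # question 4a

        def flags(self):
            """Collect the worries, if any, that the audit surfaces."""
            warnings = []
            if self.labor_essential_to_goal:
                warnings.append("displaced labor is essential to the explicit goal")
            if self.labor_essential_to_implicit:
                warnings.append("displaced labor is essential to the implicit effects")
            if self.skills_sustained and not self.skills_transferable:
                warnings.append("skills lost here are not regained elsewhere")
            return warnings

    # Example: the GPS case discussed under question 2.
    gps = DisplacementAudit(
        tool="GPS navigation",
        labor_displaced={Labor.COGNITIVE},
        explicit_goal="navigate from Point A to Point B",
        implicit_effects=["subjective experience of place"],
        labor_essential_to_goal=False,
        labor_essential_to_implicit=True,
        skills_sustained=["wayfinding", "spatial memory"],
        skills_transferable=True,
    )
    print(gps.flags())

The value of the exercise lies in what the flags method makes explicit: the audit becomes evaluative only when we ask whether the displaced labor was essential to the ends, explicit and implicit, that we were pursuing.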

These questions should help us think more clearly about the promise of technological outsourcing. They may also help us to think more clearly about what we have been doing all along. After all, new technologies often cast old experiences in new light. Even when we are wary or critical of the technologies in question, we may still find that their presence illuminates aspects of our experience by inviting us to think about what we had previously taken for granted.

More on Mechanization, Automation, and Animation

As I follow the train of thought that took the dream of a smart home as a point of departure, I’ve come to a fork in the tracks. Down one path, I’ll continue thinking about the distinctions among Mechanization, Automation, and Animation. Down the other, I’ll pursue the technological enchantment thesis that arose incidentally in my mind as a way of either explaining or imaginatively characterizing the evolution of technology along those three stages.

Separating these two tracks is a pragmatic move. It’s easier for me at this juncture to consider them separately, particularly to weigh the merits of the latter. It may be that the two tracks will later converge, or it may be that one or both are dead ends. We’ll see. Right now I’ll get back to the three stages.

In his comment on my last post, Evan Selinger noted that my schema was Borgmannesque in its approach, and indeed it was. If you’ve been reading along for a while, you know that I think highly of Albert Borgmann’s work. I’ve drawn on it a time or two of late. Borgmann looked for a pattern that might characterize the development of technology, and he came up with what he called the device paradigm. Succinctly put, the device paradigm described the tendency of machines to become simultaneously more commodious and more opaque, or, to put it another way, easier to use and harder to understand.

In my last post, I used heating as an example to walk through the distinctions among mechanization, automation, and animation. Borgmann also uses heating to illustrate the device paradigm: lighting and sustaining a fire is one thing, flipping a switch to turn on the furnace is another. Food and music also serve as recurring illustrations for Borgmann. Preparing a meal from scratch is one thing, popping a TV dinner in the microwave is another. Playing the piano is one thing, listening to an iPod is another. In each case a device made access to the end product–heat, food, music–easier, instantaneous, safer, more efficient. In each case, though, the workings of the device beneath the commodious surface became more complex and opaque. (Note that in the case of food preparation, both the microwave and the TV dinner are devices.) Ease of use also came at the expense of physical engagement, which, in Borgmann’s view, results in an impoverishment of experience and a rearrangement of the social world.

Keep that dynamic in mind as we move forward. The device paradigm does a good job, I think, of helping us think about the transition to mechanization and from mechanization to automation and animation, chiefly by asking us to consider what we’re sacrificing in exchange for the commodiousness offered to us.

Ultimately, we want to avoid the impulse to automate for automation’s sake. As Nick Carr, whose forthcoming book, The Glass Cage: Automation and Us, will be an excellent guide in these matters, recently put it, “What should be automated is not what can be automated but what should be automated.”

That principle came at the end of a short post reflecting on comments made by Google’s “Android guru,” Sundar Pichai. Pichai offered a glimpse at how Google envisions the future when he described how useful it would be if your car could sense that your child was now inside and automatically change the music playlists accordingly. Here’s part of Carr’s response:

“With this offhand example, Pichai gives voice to Silicon Valley’s reigning assumption, which can be boiled down to this: Anything that can be automated should be automated. If it’s possible to program a computer to do something a person can do, then the computer should do it. That way, the person will be ‘freed up’ to do something ‘more valuable.’ Completely absent from this view is any sense of what it actually means to be a human being. Pichai doesn’t seem able to comprehend that the essence, and the joy, of parenting may actually lie in all the small, trivial gestures that parents make on behalf of or in concert with their kids — like picking out a song to play in the car. Intimacy is redefined as inefficiency.”

But how do we come to know what should be automated? I’m not sure there’s a short answer to that question, but it’s safe to say that we’re going to need to think carefully about what we do and why we do it. Again, this is why I think Hannah Arendt was ahead of her time when she undertook the intellectual project that resulted in The Human Condition and the unfinished The Life of the Mind. In the first she set out to understand our doing and in the second, our thinking. And all of this in light of the challenges presented by emerging technological systems.

One of the upshots of new technologies, if we accept the challenge, is that they lead us to look again at what we might have otherwise taken for granted or failed to notice altogether. New communication technologies encourage us to think again about the nature of human communication. New medical technologies encourage us to think again about the nature of health. New transportation technologies encourage us to think again about the nature of place. And so on.

I had originally used the word “forced” where I settled for the word “encourage” above. I changed the wording because, in fact, new technologies don’t force us to think again about the realms of life they impact. It is quite easy, too easy perhaps, not to think at all: simply to embrace and adopt the new technology without considering its consequences. Or, what amounts to the same thing, it is just as easy to reject new technologies out of hand because they are new. In neither case would we be thinking at all. If we accept the challenge to think again about the world as new technologies cast aspects of it in a new light, we might even begin to see this development as a great gift, one that leads us to value, appreciate, and even love what had previously gone unnoticed.

Returning to the animation schema, we might make a start at thinking by simply asking ourselves what exactly is displaced at each transition. When it comes to mechanization, it seems fairly straightforward. Mechanization, as I’m defining it, ordinarily displaces physical labor.

Capturing what exactly is displaced when it comes to automation is a bit more challenging. In part, this is because the distinctions I’m making between mechanization and automation on the one hand and automation and animation on the other are admittedly fuzzy. In fact, all three are often simply grouped together under the category of automation. This is a simpler move, but I’m concerned that we might not get a good grasp of the complex ways in which technologies interact with human action if we don’t parse things a bit more finely.

So let’s start by suggesting that automation, the stage at which machines operate without the need for constant human input and direction, displaces attention. When something is automated, I can pay much less attention to it, or perhaps, no attention at all. We might also say that automation displaces will or volition. When a process is automated, I don’t have to will its action.

Finally, animation–the stage at which machines not only act without direct human intervention but also “learn” and begin to “make decisions” for themselves–displaces agency and judgment.

By noting what is displaced we can then ask whether the displaced element was an essential or inessential aspect of the good or end sought by the means, and so we might begin to arrive at some more humane conclusions about what ought to be automated.

I’ll leave things there for now, but more will be forthcoming. Right now I’ll leave you with a couple of questions I’ll be thinking about.

First, Borgmann distinguished between things and devices (see here or here). Once we move from automation to animation, do we need a new category?

Also, coming back to Arendt: she laid out two sets of three categories that overlap in interesting ways with the three stages as I’m thinking of them. In her discussion of human doing, she identifies labor, work, and action. In her discussion of human thinking, she identifies thought, will, and judgment. How can her theorizing of these categories help us understand what’s at stake in the drive to automate and animate?

It’s Alive, It’s Alive!

Your home, that is. It soon may be, anyway.

Earlier this week at the Worldwide Developers Conference, Apple introduced HomeKit, an iOS 8 framework that will integrate the various devices and apps which together transform an ordinary home into a “smart home.”

The “smart home,” like the flying car, has long been a much anticipated component of “the future.” The Jetsons had one, and, more recently, the Iron Man films turned Tony Stark’s butler, Edwin Jarvis, into JARVIS, an AI system that powers Stark’s very smart home. Note, in passing, the subtle tale of technological unemployment.

But the “smart home” is a more plausible element of our future than the flying car. Already in 1990, the Unity System offered a rather rudimentary iteration. And, as early as 1999, in the pages of Newsweek, Steven Levy was announcing the imminent arrival of what is now commonly referred to as the Internet of Things, the apotheosis of which would be the “smart home.” Levy didn’t call it the “smart home,” although he did refer to the “smart toilet,” but a “smart home” is what he was describing:

“Your home, for instance, will probably have one or more items directly hot-wired to the Internet: a set-top television box, a game console, a server sitting in the basement, maybe even a traditional PC. These would be the jumping-off points for a tiny radio-frequency net that broadcasts throughout the house. That way the Internet would be, literally, in the air. Stuff inside the house would inhale the relevant bits. Your automatic coffee maker will have access to your online schedule, so if you’re out of town it’ll withhold the brew. Your alarm clock might ring later than usual if it logs on to find out that you don’t have to get the kids ready for school–snow day! And that Internet dishwasher? No, it won’t be bidding on flatware at eBay auctions. Like virtually every other major appliance in your home, its Internet connection will be used to contact the manufacturer if something goes wrong.”

Envisioning this “galaxy” of digitally networked things, Levy already hints at the challenge of getting everything to work together in efficient and seamless fashion. That’s exactly where Apple is hoping to step in with HomeKit. At WWDC, Apple’s VP humbly suggested that his company could “bring some rationality to this space.” Of course, as Megan Garber puts it, “You could see it as Apple’s attempt to turn the physical world into a kind of App Store: yet another platform. Another area whose gates Apple keeps.”

When news broke about HomeKit, I was reminded of an interview the philosopher of technology Albert Borgmann gave several years ago. It was that interview, in fact, that led me to the piece by Levy. Borgmann was less than impressed with the breathless anticipation of the “smart home.”

“In the perfectly smart home,” Borgmann quipped, “you don’t do anything.”

Writing in the Wall Street Journal, Geoffrey Fowler gave one example of what Apple projected HomeKit could do: “Users would be able to tell their Siri virtual assistant that they are ‘going to bed’ and their phone would dim the lights, lock your doors and set the thermostat, among other tasks.”
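
It is worth pausing to notice how little machinery such a scene involves, at least in outline. Here is a minimal sketch of the trigger-and-bundle logic being described; the class, device names, and methods are hypothetical stand-ins of my own, not Apple’s actual HomeKit API.

    class Device:
        """A stand-in for any network-controllable appliance."""
        def __init__(self, name):
            self.name = name

        def send(self, command, **settings):
            # A real system would route this over the home network.
            print(f"{self.name}: {command} {settings}")

    def going_to_bed(lights, door, thermostat):
        """The 'going to bed' scene: one utterance, several commands."""
        lights.send("dim", level=10)
        door.send("lock")
        thermostat.send("set", temperature=65)

    # A single phrase spoken to the assistant triggers the whole bundle.
    going_to_bed(Device("bedroom lights"), Device("front door"), Device("thermostat"))

What the sketch makes plain is that the “intelligence” of the smart home consists largely in grouping commands under a single trigger; what remains for the inhabitant to do is, as Borgmann quipped, very little.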

There’s apparently something alluring and enchanting about such a scenario. I’m going to casually suggest that the allure might be conceived as arising from a latent desire to re-enchant the world. According to a widely accepted socio-historical account, the advent of modernity disenchanted the pre-modern world. Gone were the spirits and spiritual forces at work in the world. Gone were the angels and witches and fairies. Gone was the mysticism that inspired both fear and wonder. All that remained was the sterile world of lifeless matter … and human beings alone in a vast universe that took no notice of them.

Technologies that make the environment responsive to our commands and our presence, tools that would be, presumably, alert to our desires and needs, even those we’ve not yet become aware of–such technologies promise to re-enchant the world, to make us feel less alone perhaps. They are the environmental equivalent of the robots that promise to become our emotional partners.

Borgmann, however, is probably right about technologies of this sort: “After a week you don’t notice them anymore. They mold into the inconspicuous normalcy of the background we now take for granted. These are not things that sustain us.”

Christopher Mims landed even nearer to the mark when he recently tweeted, “Just think how revolutionary the light switch would seem if until now we’d all been forced to control our homes through smartphones.”

Finally, in his WSJ story, Fowler wrote, “[Apple] is hoping it can become a hub of connected devices that, on their own, don’t do a very good job of helping you control a home.”

That last phrase is arresting. Existing products don’t do a very good job of helping you control your home. Interestingly though, I’ve never really thought of my home as something I needed to control. The language of control suggests that a “smart home” is an active technological system that requires maintenance and regulation. It’s a house come alive. Of course, it’s worth remembering that the pursuit of control is always paired with varying degrees of anxiety.

Why a Life Made Easier by Technology May Not Necessarily Be Happier

Tim Wu, of the Columbia Law School, has been writing a series of reflections on technological evolution for Elements, the New Yorker’s science and technology blog. In the first of these, “If a Time Traveller Saw a Smartphone,” Wu offers what he calls a modified Turing test as a way of thinking about the debate between advocates and critics of digital technology (perhaps, though, it’s more like Searle’s Chinese room).

Imagine a time-traveller from 1914 (a fateful year) encountering a woman behind a veil. This woman answers all sorts of questions about history and literature, understands a number of languages, performs mathematical calculations with amazing rapidity, etc. To the time-traveller, the woman seems to possess a nearly divine intelligence. Of course, as you’ve already figured out, she is simply consulting a smartphone with an Internet connection.

Wu uses this hypothetical anecdote to conclude, “The time-traveller scenario demonstrates that how you answer the question of whether we are getting smarter depends on how you classify ‘we.’ This is why [Clive] Thompson and [Nicholas] Carr reach different results: Thompson is judging the cyborg, while Carr is judging the man underneath.” And that’s not a bad way of characterizing the debate.

Wu closes his first piece by suggesting that our technological augmentation has not been secured without incurring certain costs. In the second post in the series, Wu gives us a rather drastic case study of the kind of costs that sometimes come with technological augmentation. He tells the story of the Oji-Cree people, who until recently lived a rugged, austere life in northern Canada … then modern technologies showed up:

“Since the arrival of new technologies, the population has suffered a massive increase in morbid obesity, heart disease, and Type 2 diabetes. Social problems are rampant: idleness, alcoholism, drug addiction, and suicide have reached some of the highest levels on earth. Diabetes, in particular, has become so common (affecting forty per cent of the population) that researchers think that many children, after exposure in the womb, are born with an increased predisposition to the disease. Childhood obesity is widespread, and ten-year-olds sometimes appear middle-aged. Recently, the Chief of a small Oji-Cree community estimated that half of his adult population was addicted to OxyContin or other painkillers.

Technology is not the only cause of these changes, but scientists have made clear that it is a driving factor.”

Wu understands that this is an extreme case. Some may find that cause to dismiss the Oji-Cree as outliers whose experience tells us very little about the way societies ordinarily adapt to the evolution of technology. On the other hand, the story of the Oji-Cree may be like a time-lapse video which reveals aspects of reality ordinarily veiled by their gradual unfolding. In any case, Wu takes the story as a warning about the nature of technological evolution.

“Technological evolution” is, of course, a metaphor based on the processes of biological evolution. Not everyone, however, sees it as a metaphor. Kevin Kelly, whom Wu cites in this second post, argues that technological evolution is not a metaphor at all. Technology, in Kelly’s view, evolves precisely as organisms do. Wu rightly recognizes that there are important differences between the two, however:

“Technological evolution has a different motive force. It is self-evolution, and it is therefore driven by what we want as opposed to what is adaptive. In a market economy, it is even more complex: for most of us, our technological identities are determined by what companies decide to sell based on what they believe we, as consumers, will pay for.”

And this leads Wu to conclude, “Our will-to-comfort, combined with our technological powers, creates a stark possibility.” That possibility is a “future defined not by an evolution toward superintelligence but by the absence of discomforts.” A future, Wu notes, that was neatly captured by the animated film WALL•E.


Wu’s conclusion echoes some of the concerns I raised in an earlier post about the future envisioned by the transhumanist project. It also anticipates the third post in the series, “The Problem With Easy Technology.” In this latest post, Wu suggests that “the use of demanding technologies may actually be important to the future of the human race.”

Wu goes on to draw a distinction between demanding technologies and technologies of convenience. Demanding technologies are characterized by the following: “technology that takes time to master, whose usage is highly occupying, and whose operation includes some real risk of failure.” Convenience technologies, on the other hand, “require little concentrated effort and yield predictable results.”

Of course, convenience technologies don’t even deliver on their fundamental promise. Channelling Ruth Cowan’s More Work for Mother: The Ironies of Household Technology from the Open Hearth to the Microwave, Wu writes,

“The problem is that, as every individual task becomes easier, we demand much more of both ourselves and others. Instead of fewer difficult tasks (writing several long letters) we are left with a larger volume of small tasks (writing hundreds of e-mails). We have become plagued by a tyranny of tiny tasks, individually simple but collectively oppressive.”

But, more importantly, Wu worries that technologies of convenience may rob our action “of the satisfaction we hoped it might contain.” Toward the end of his post, he urges readers to “take seriously our biological need to be challenged, or face the danger of evolving into creatures whose lives are more productive but also less satisfying.”

I trust that I’ve done a decent job of faithfully capturing the crux of Wu’s argument in these three pieces, but I encourage you to read all three in their entirety.

I also encourage you to read the work of Albert Borgmann. I’m not sure if Wu has read Borgmann or not, but his discussion of demanding technologies was anticipated by Borgmann nearly 30 years ago in Technology and the Character of Contemporary Life: A Philosophical Inquiry. What Wu calls demanding technology, Borgmann called focal things, and these entailed accompanying focal practices. Wu’s technologies of convenience are instances of what Borgmann called the device paradigm.

In his work, Borgmann sought to reveal the underlying pattern that modern technologies exhibited–Borgmann is thinking of technologies dating back roughly to the Industrial Revolution. The device paradigm was his name for the pattern that he discerned.

Borgmann arrived at the device paradigm by first formulating the notion of availability. Availability is a characteristic of technology which answers to technology’s promise of liberation and enrichment. Something is technologically available, Borgmann explains, “if it has been rendered instantaneous, ubiquitous, safe, and easy.” At the heart of the device paradigm is the promise of increasing availability.

Borgmann goes on to distinguish between things and devices. While devices tend toward technological availability, what things provide tends not to be instantaneous, ubiquitous, safe, or easy. The difference between a thing and a device is a matter of the sort of engagement that is required of the user. The difference is such that user might not even be the best word to describe the person who interacts with a thing. In another context, I’ve suggested that practitioner might be a better way of putting it, but that does not always yield elegant phrasing.

A thing, Borgmann writes, “is inseparable from its context, namely, its world, and from our commerce with the thing and its world, namely, engagement.” And immediately thereafter, Borgmann adds, “The experience of a thing is always and also a bodily and social engagement with the thing’s world.” Bodily and social engagement–we’ll come back to that point later. But first, a concrete example to help us better understand the distinctions and categories Borgmann is employing.

Borgmann invites us to consider how warmth might be made available to a home. Before central heating, warmth might be provided by a stove or fireplace. This older way of providing warmth, Borgmann reminds us, “was not instantaneous because in the morning a fire first had to be built in the stove or fireplace. And before it could be built, trees had to be felled, logs had to be sawed and split, the wood had to be hauled and stacked.” Borgmann continues:

“Warmth was not ubiquitous because some rooms remained unheated, and none was heated evenly …. It was not entirely safe because one could get burned or set the house on fire. It was not easy because work, some skills, and attention were constantly required to build and sustain a fire.”

The contrasts at each of these points with central heating are obvious. Central heating illustrates the device paradigm by the manner in which it secures the technological availability of warmth. It conceals the machinery, the means we might say, while perfecting what Borgmann calls the commodity, the end. Commodity is Borgmann’s word for “what a device is there for”; it is the end that the means are intended to secure.

The device paradigm, remember, is a pattern that Borgmann sees unfolding across the modern technological spectrum. The evolution of modern technology is characterized by the progressive concealment of the machinery and the increasingly available commodity. “A commodity is truly available,” Borgmann writes, “when it can be enjoyed as a mere end, unencumbered by means.” Flipping a switch on a thermostat clearly illustrates this sort of commodious availability, particularly when contrasted with earlier methods of providing warmth.

It’s important to note, too, what Borgmann is not doing. He is not distinguishing between the technological and the natural. Things can be technological. The stove is a kind of technology, after all, as is the fireplace. Borgmann is distinguishing among technologies of various sorts, their operational logic, and the sort of engagement that they require or invite. Nor, while we’re at it, is Borgmann suggesting that modern technology has not improved the quality of life. There can be no human flourishing where people are starving or dying of disease.

But, like Tim Wu, Borgmann does believe that the greater comfort and ease promised by technology does not necessarily translate into greater satisfaction or happiness. There is a point at which the gains made by technology stop yielding meaningful satisfaction. Wu believes this is so because of “our biological need to be challenged.” There’s certainly something to that. I made a similar argument some time ago in opposing the idea of a frictionless life. Borgmann’s analysis, however, adds two more important considerations: bodily and social engagement.

“Physical engagement is not simply physical contact,” Borgmann explains, “but the experience of the world through the manifold sensibility of the body.” He then adds, “sensibility is sharpened and strengthened in skill … Skill, in turn, is bound up with social engagement.”

Consider again the example of the wood-burning stove or fireplace as a means of warmth. The more intense physical engagement may be obvious, but Borgmann invites us to consider the social dimensions as well:

“It was a focus, a hearth, a place that gathered the work and leisure of a family and gave the house its center. Its coldness marked the morning, and the spreading of its warmth the beginning of the day. It assigned to the different family members tasks that defined their place in the household. The mother built the fire, the children kept the firebox filled, and the father cut the firewood. It provided for the entire family a regular and bodily engagement with the rhythm of the seasons that was woven together of the threat of cold and the solace of warmth, the smell of wood smoke, the exertion of sawing and of carrying, the teaching of skills, and the fidelity to daily tasks.”

Borgmann’s vision of a richer, more fulfilling life secures its greater depth by taking seriously both our embodied and our social status. This vision goes against the grain of modernity’s account of the human person, which is grounded in a Cartesian dismissal of the body and a Lockean conception of autonomous individuality. To the degree that this is an inadequate account of the human person, a technological order premised upon it will always undermine the possibility of human flourishing.

Wu and Borgmann have drawn our attention to what may be an important source of our discontent with the regime of contemporary technology. As Wu points out in his third piece, the answer is not necessarily an embrace of all things that are hard and arduous or a refusal of all the advantages that modern technology has secured for us. Borgmann, too, is concerned with distinguishing between different kinds of troubles: those that we rightly seek to ameliorate in practice and in principle and those we do well to accept in practice and in principle. Making that distinction will help us recognize and appreciate what may be gained by engaging with what Borgmann has called the commanding presence of focal things and what Wu calls demanding technologies.

Admittedly, that can be a challenging distinction to make, but learning to make it may be the better part of wisdom given the technological contours of contemporary life, at least for those who have been privileged to enjoy the benefits of modern technology in affluent societies. And I’m of the opinion that the work of Albert Borgmann is one of the more valuable resources available to us as we seek to make sense of the challenges posed by the character of contemporary technology.

_______________________________________________________

For more on Borgmann, take a look at the following posts:

Low-tech Practice and Identity
Troubles We Must Not Refuse
Resisting Disposable Reality 

Troubles We Must Not Refuse

If you’re not paying attention to Evan Selinger’s work, you’re missing out on some of the best available commentary on the ethical implications of contemporary technology. Last week I pointed you to his recent essay, “The Outsourced Lover,” on a morally questionable app designed to automate romantic messages to your significant other. In a more recent editorial at Wired, “Today’s Apps Are Turning Us Into Sociopaths,” Selinger provides another incisive critique of an app that similarly automates aspects of interpersonal relationships.

Selinger approached his piece by interviewing the app designers in order to understand the rationale behind their product. This leads into an interesting and broad discussion about technological determinism, technology’s relationship to society, and ethics.

I was particularly intrigued by how assumptions of technological inevitability were deployed. Take the following, for example:

“Embracing this inevitability, the makers of BroApp argue that ‘The pace of technological change is past the point where it’s possible for us to reject it!’”

And:

“’If there is a niche to be filled: i.e. automated relationship helpers, then entrepreneurs will act to fill that niche. The combinatorial explosion of millions of entrepreneurs working with accessible technologies ensures this outcome. Regardless of moral ambiguity or societal push-back, if people find a technology useful, it will be developed and adopted.’”

It seems that these designers have a pretty bad case of the Borg Complex, my name for the rhetoric of technological determinism. Recourse to the language of inevitability is the defining symptom of a Borg Complex, but it is not the only one exhibited in this case.

According to Selinger, they also deploy another recurring trope: the dismissal of what are derisively called “moral panics” based on the conclusion that they amount to so many cases of Chicken Little, and the sky never falls. This is an example of another Borg Complex symptom: “Refers to historical antecedents solely to dismiss present concerns.” You can read my thoughts on that sort of reasoning here.

Do read the whole of Selinger’s essay. He’s identified an important area of concern, the increasing ease with which we may outsource ethical and emotional labor to our digital devices, and he is helping us think clearly and wisely about it.

About a year ago, Evgeny Morozov raised related concerns that prompted me to write about the inhumanity of smart technology. A touch of hyperbole, perhaps, but I do think the stakes are high. I’ll leave you with two points drawn from that older post.

The first:

“Out of the crooked timber of humanity no straight thing was ever made,” Kant observed. Corollary to keep in mind: If a straight thing is made, it will be because humanity has been stripped out of it.

The second relates to a distinction Albert Borgmann drew some time ago between troubles we accept in practice and those we accept in principle. Those we accept in practice are troubles we need to cope with but which we should seek to eradicate; take cancer, for instance. Troubles we accept in principle are those that we should not, even if we were able, seek to abolish. These troubles are somehow essential to the full experience of our humanity, and they are an irreducible component of those practices which bring us deep joy and satisfaction.

That’s a very short summary of a very substantial theory. You can read more about it in that earlier post and in this one as well. I think Borgmann’s point is critical. It applies neatly to the apps Selinger has been analyzing. It also speaks to the temptations of smart technology highlighted by Morozov, who rightly noted,

“There are many contexts in which smart technologies are unambiguously useful and even lifesaving. Smart belts that monitor the balance of the elderly and smart carpets that detect falls seem to fall in this category. The problem with many smart technologies is that their designers, in the quest to root out the imperfections of the human condition, seldom stop to ask how much frustration, failure and regret is required for happiness and achievement to retain any meaning.”

From another angle, we can understand the problem as a misconstrual of the relationship between means and ends. Technology, when it becomes something more than an assortment of tools, when it becomes a way of looking at the world, technique in Jacques Ellul’s sense, fixates on means at the expense of ends. Technology is about how things get done, not what ought to get done or why. Consequently, we are tempted to misconstrue means as ends in themselves, and we are also encouraged to think of means as essentially interchangeable. We simply pursue the most efficient, effective means. Period.

But means are not always interchangeable. Some means are integrally related to the ends that they aim for. Altering the means undermines the end. The apps under consideration, and many of our digital tools more generally, proceed on the assumption that means are, in fact, interchangeable. It doesn’t matter whether you took the time to write out a message to your loved one or whether it was an automated app that only presents itself as you. So long as the end of getting your loved one a message is accomplished, the means matter not.

This logic is flawed precisely because it mistakes a means for an end and sees means as interchangeable. The real end, of course, in this case anyway, is a loving relationship, not simply getting a message that fosters the appearance of a loving relationship. And the means toward that end are not easily interchangeable. The labor, or, to use Borgmann’s phrasing, the trouble required by the fitting means cannot be outsourced or eliminated without fatally undermining the goal of a loving relationship.

That same logic plays out across countless cases where a device promises to save us or unburden us from moral and emotional troubles. It is a dehumanizing logic.