Hello, Blue! If you missed last week's edition – why we fall in love, the psychology of why frustration is essential for satisfaction, how relationships affect our immune system, what motivates us to work, and more – you can read it right here. And if you're enjoying this, please consider supporting it with a modest donation – every little bit helps and is enormously appreciated.
“Live as if you were living already for the second time,” Viktor Frankl wrote in his 1946 masterwork on the human search for meaning, “and as if you had acted the first time as wrongly as you are about to act now!” And yet we only live once, with no rehearsal or reprise – a fact at once so oppressive and so full of possibility that it renders us, in the sublime words of Polish poet Wislawa Szymborska, “ill-prepared for the privilege of living.” All the while, we walk forward accompanied by the specters of the versions of ourselves we failed to or chose not to become. “Our lived lives,” wrote psychoanalyst Adam Phillips in his magnificent manifesto for missing out, “might become a protracted mourning for, or an endless tantrum about, the lives we were unable to live. But the exemptions we suffer, whether forced or chosen, make us who we are.” We perform this existential dance of yeses and nos to the siren song of one immutable question: How do we know what we want, what to want?
Art by Ralph Steadman from a rare edition of Alice's Adventures in Wonderland
Czech-French writer Milan Kundera examines our ambivalent amble through life with unparalleled grace and poetic precision in his 1984 novel The Unbearable Lightness of Being (public library) – one of the most beloved and enduringly rewarding books of the past century.
Because love heightens all of our senses and amplifies our existing preoccupations, it is perhaps in love that life's central ambivalences grow most disorienting – something the novel's protagonist, Tomáš, tussles with as he finds himself consumed with the idea of a lover he barely knows:
He had come to feel an inexplicable love for this all but complete stranger.
But was it love? ... Was it simply the hysteria of a man who, aware deep down of his inaptitude for love, felt the self-deluding need to simulate it? ... Looking out over the courtyard at the dirty walls, he realized he had no idea whether it was hysteria or love.
The woman eventually becomes Tomáš's wife, which only further affirms that even the rightest choice can present itself to us shrouded in uncertainty and doubt at the outset, its rightness only crystallized in the clarity of hindsight. Kundera captures the universal predicament undergirding Tomáš's particular perplexity:
We can never know what to want, because, living only one life, we can neither compare it with our previous lives nor perfect it in our lives to come.
There is no means of testing which decision is better, because there is no basis for comparison. We live everything as it comes, without warning, like an actor going on cold. And what can life be worth if the first rehearsal for life is life itself? That is why life is always like a sketch. No, "sketch" is not quite the word, because a sketch is an outline of something, the groundwork for a picture, whereas the sketch that is our life is a sketch for nothing, an outline with no picture.
The Unbearable Lightness of Being, it bears repeating, is one of the most life-magnifying books one could ever read. Complement this particular point of inflection with Donald Barthelme on the art of not-knowing and Adam Phillips on the rewards of the unlived life.
"Pay no attention to appearing," young André Gide wrote in his rules of moral conduct in 1889. "Being is alone important." But even for the most idealistic among us, real life – the act of moving as an embodied being through a world of appearances – makes the two increasingly difficult to disentwine.
That's what the great German-American political theorist Hannah Arendt (October 14, 1906–December 4, 1975), one of the clearest and most transcendent thinkers of the twentieth century, explores in a section of The Life of the Mind (public library) – the immensely mind-stretching book based on her 1973 Gifford Lectures, which rendered her the first woman to speak at the prestigious event. Established in 1888 in an effort "to promote and diffuse the study of natural theology in the widest sense of the term" by bringing together influential thinkers across science, philosophy, and spirituality, the series had previously hosted such luminaries as William James, Werner Heisenberg, and Niels Bohr, and later gave us Carl Sagan's Varieties of Scientific Experience.
Arendt considers the notion of appearing as central to our experience of being:
Nothing could appear, the word “appearance” would make no sense, if recipients of appearances did not exist – living creatures able to acknowledge, recognize, and react to – in flight or desire, approval or disapproval, blame or praise – what is not merely there but appears to them and is meant for their perception. In this world which we enter, appearing from a nowhere, and from which we disappear into a nowhere, Being and Appearing coincide... Nothing and nobody exists in this world whose very being does not presuppose a spectator. In other words, nothing that is, insofar as it appears, exists in the singular; everything that is is meant to be perceived by somebody... Plurality is the law of the earth.
Virginia Woolf called this world of appearances "the cotton wool" and argued that behind it is hidden the true pattern of being. But, for Arendt, the cotton wool and the pattern are inseparable from one another:
Since sentient beings – [humans] and animals, to whom things appear and who as recipients guarantee their reality – are themselves also appearances, meant and able both to see and be seen, hear and be heard, touch and be touched, they are never mere subjects and can never be understood as such; they are no less “objective” than stone and bridge. The worldliness of living things means that there is no subject that is not also an object and appears as such to somebody else, who guarantees its “objective” reality. What we usually call “consciousness,” the fact that I am aware of myself and therefore in a sense can appear to myself, would never suffice to guarantee reality... Seen from the perspective of the world, every creature born into it arrives well equipped to deal with a world in which Being and Appearing coincide; they are fit for worldly existence.
Illustration by Mimmo Paladino for a rare edition of James Joyce's Ulysses
This interplay of Being and Appearing, she argues, is what frames our very existence. In a sentiment that calls to mind Alan Lightman on why we yearn for immortality in a universe defined by impermanence and echoes Virginia Woolf's observation of the elasticity of time, Arendt writes:
To be alive means to live in a world that preceded one’s own arrival and will survive one’s own departure. On this level of sheer being alive, appearance and disappearance, as they follow upon each other, are the primordial events, which as such mark out time, the time span between birth and death. The finite life span allotted to each living creature determines not merely its life expectancy but also its time experience; it provides the secret prototype for all time measurements no matter how far these then may transcend the allotted life span into past and future. Thus, the lived experience of the length of a year changes radically throughout our life. A year that to a five-year-old constitutes a full fifth of his existence must seem much longer than when it will constitute a mere twentieth or thirtieth of his time on earth. We all know how the years revolve quicker and quicker as we get older, until, with the approach of old age, they slow down again because we begin to measure them against the psychologically and somatically anticipated date of our departure.
Art by Lisbeth Zwerger for a special edition of Alice in Wonderland
She returns to this notion of spectatorship as affirmation of existence – Appearing as proof of Being:
To appear always means to seem to others, and this seeming varies according to the standpoint and the perspective of the spectators. In other words, every appearing thing acquires, by virtue of its appearingness, a kind of disguise that may indeed – but does not have to – hide or disfigure it. Seeming corresponds to the fact that every appearance, its identity notwithstanding, is perceived by a plurality of spectators.
The urge toward self-display – to respond by showing to the overwhelming effect of being shown – seems to be common to [humans] and animals. And just as the actor depends upon stage, fellow-actors, and spectators, to make his entrance, every living thing depends upon a world that solidly appears as the location for its own appearance, on fellow-creatures to play with, and on spectators to acknowledge and recognize its existence. Seen from the viewpoint of the spectators to whom it appears and from whose view it finally disappears, each individual life, its growth and decline, is a developmental process in which an entity unfolds itself in an upward movement until all its properties are fully exposed; this phase is followed by a period of standstill – its bloom or epiphany, as it were – which in turn is succeeded by the downward movement of disintegration that is terminated by complete disappearance. There are many perspectives in which this process can be seen, examined, and understood, but our criterion for what a living thing essentially is remains the same: in everyday life as well as in scientific study, it is determined by the relatively short time span of its full appearance, its epiphany.
We, too, are appearances by virtue of arriving and departing, of appearing and disappearing; and while we come from a nowhere, we arrive well equipped to deal with whatever appears to us and to take part in the play of the world.
The Life of the Mind, which also gave us Arendt on what free will really means and the crucial difference between thinking and knowing, is a spectacular read in its entirety. Complement this particular portion with an animated synthesis of Plato's Allegory of the Cave, which remains history's most influential figurative inquiry into the interplay of Being and Appearing, and Mary Oliver on how to "peek under the veil of all appearances."
"Never be hard upon people who are in your power," Charles Dickens counseled in a letter of advice to his young son. And yet power has a way of calling forth the hardest and most unhandsome edges of human nature – something John F. Kennedy observed in his spectacular eulogy to Robert Frost, lamenting that power "leads men towards arrogance" and "narrows the areas of man’s concern." Redemption, he argued, is only possible when we recognize that "what counts is the way power is used – whether with swagger and contempt, or with prudence, discipline and magnanimity."
It's a difficult lesson to impart even to the most intelligent and receptive of grownups, and one especially crucial in planting the seeds of good personhood in childhood, when our first brushes with power dynamics are so real and raw that they can imprint us for life.
That's what French illustrator Olivier Tallec accomplishes with extraordinary humor, sensitivity, and warmth in Louis I, King of the Sheep (public library) – one of the loveliest children's books I've ever encountered.
Inspired by watching children tussle with power on the playground, it tells the story of a humble sheep named Louis who becomes self-appointed king after a fickle gust of wind deposits a royal crown at his feet.
As Louis I rises to power by nothing more than chance, he gradually transmogrifies into an entitled and arrogant tyrant – a woefully familiar behavioral pattern calling to mind the legendary Stanford Prison Experiment, that cornerstone of social psychology in which students were randomly assigned to be either prison guards or prisoners in a mock jail, and the "guards" proceeded to exploit their randomly assigned power to a point of devastating inhumanity.
Intoxicated with his newfound authority, Louis I goes on to find himself a throne "from which to hand down justice," begins addressing the people, and embarks upon such royal activities as hunting – even for lions.
He receives the world's greatest artists at his palace, and esteemed ambassadors from distant lands come to bow at his feet.
Eventually, he becomes so drunk on power that he decides he must bring order to his dominion by driving out all sheep who don't resemble him – perhaps Tallec's subtle invitation to parents to teach kids about the Holocaust, that darkest of episodes in the history of human nature, undergirded by the very same atrocious impulses.
And then, just like that, another fickle gust of wind takes the crown away.
The story is at heart an imaginative and intelligent parable of the inherent responsibility that comes with power. Embedded in it is also a reminder that we are separated from those less fortunate than us by little more than unmerited cosmic odds, even if it's more flattering to believe otherwise.
Complement this immeasurably delightful gem with Adrienne Rich on what power really means and JFK on poetry and power, then revisit Tallec's playful and poignant illustrated allegory of why we fight.
Louis I, King of the Sheep comes from Enchanted Lion Books, the independent Brooklyn-based powerhouse behind such uncommonly wonderful picture-books as The Lion and the Bird, Beastly Verse, Little Boy Brown, and the illustrated biography of E.E. Cummings.
When Ada Lovelace and Charles Babbage conceived the world's first computer, their "Analytical Engine" became the evolutionary progenitor of a new class of human extensions – machines that think. A century later, Alan Turing picked up where they left off and, in laying the foundations of artificial intelligence with his Turing Test, famously posed the techno-philosophical question of whether a computer could ever enjoy strawberries and cream or compel you to fall in love with it.
From its very outset, this new branch of human-machine evolution made it clear that any answer to these questions would invariably alter our answers to the most fundamental question of what it means to be human.
That's what Edge founder John Brockman explores in the 2015 edition of his annual question, inviting 192 of today's most prominent thinkers to tussle with these core questions of artificial intelligence and its undergirding human dilemmas. The answers, collected in What to Think About Machines That Think: Today's Leading Thinkers on the Age of Machine Intelligence (public library), come from such diverse contributors as physicist and mathematician Freeman Dyson, music pioneer Brian Eno, biological anthropologist Helen Fisher, Positive Psychology founding father Martin Seligman, computer scientist and inventor Danny Hillis, TED curator Chris Anderson, neuroscientist Sam Harris, legendary curator Hans Ulrich Obrist, media theorist Douglas Rushkoff, cognitive scientist and linguist Steven Pinker, and yours truly.
Illustration by Sydney Padua from The Thrilling Adventures of Lovelace and Babbage
The answers are threaded with a handful of common themes, a major one being the idea that artificial intelligence isn't some futuristic abstraction but a palpably present reality with which we're already living.
Beloved musician and prolific reader Brian Eno looks at the many elements of his day, from cooking porridge to switching on the radio, that work seamlessly thanks to an invisible mesh of connected human intelligence – a Rube Goldberg machine of micro-expertise that makes it possible for the energy in a distant oil field to power the stove built in a foreign factory out of components made by scattered manufacturers, and ultimately cook his porridge. In a sentiment that calls to mind I, Pencil – that magnificent vintage allegory of how everything is connected – Eno explains why he sees artificial intelligence not as a protagonist in a techno-dystopian future but as an indelible and fruitful part of our past and present:
My untroubled attitude results from my almost absolute faith in the reliability of the vast supercomputer I’m permanently plugged into. It was built with the intelligence of thousands of generations of human minds, and they’re still working at it now. All that human intelligence remains alive, in the form of the supercomputer of tools, theories, technologies, crafts, sciences, disciplines, customs, rituals, rules of thumb, arts, systems of belief, superstitions, work-arounds, and observations that we call Global Civilization.
Global Civilization is something we humans created, though none of us really know how. It’s out of the individual control of any of us – a seething synergy of embodied intelligence that we’re all plugged into. None of us understands more than a tiny sliver of it, but by and large we aren’t paralyzed or terrorized by that fact – we still live in it and make use of it. We feed it problems – such as “I want some porridge” – and it miraculously offers us solutions that we don’t really understand.
We’ve been living happily with artificial intelligence for thousands of years.
Art by Laura Carlin for The Iron Giant by Ted Hughes
In one of the volume's most optimistic essays, TED curator Chris Anderson, who belongs to the increasingly endangered tribe of public idealists, considers how this "hive mind" of semi-artificial intelligence could provide a counterpoint to some of our worst human tendencies and amplify our collective potential for good:
We all know how flawed humans are. How greedy, irrational, and limited in our ability to act collectively for the common good. We’re in danger of wrecking the planet. Does anyone thoughtful really want humanity to be evolution’s final word?
Intelligence doesn’t reach its full power in small units. Every additional connection and resource can help expand its power. A person can be smart, but a society can be smarter still...
By that logic, intelligent machines of the future wouldn’t destroy humans. Instead, they would tap into the unique contributions that humans make. The future would be one of ever richer intermingling of human and machine capabilities. I’ll take that route. It’s the best of those available.
Together we’re semiunconsciously creating a hive mind of vastly greater power than this planet has ever seen – and vastly less power than it will soon see.
“Us versus the machines” is the wrong mental model. There’s only one machine that really counts. Like it or not, we’re all – us and our machines – becoming part of it: an immense connected brain. Once we had neurons. Now we’re becoming the neurons.
Art from Neurocomic, a graphic novel about how the brain works
Astrophysicist and philosopher Marcelo Gleiser, who has written beautifully about how to live with mystery in a culture obsessed with knowledge, echoes this idea by pointing out the myriad mundane ways in which "machines that think" already permeate our daily lives:
We define ourselves through our techno-gadgets, create fictitious personas with weird names, doctor pictures to appear better or at least different in Facebook pages, create a different self to interact with others. We exist on an information cloud, digitized, remote, and omnipresent. We have titanium implants in our joints, pacemakers and hearing aids, devices that redefine and extend our minds and bodies. If you’re a handicapped athlete, your carbon-fiber legs can propel you forward with ease. If you’re a scientist, computers can help you extend your brainpower to create well beyond what was possible a few decades back. New problems that once were impossible to contemplate, or even formulate, come around every day. The pace of scientific progress is a direct correlate of our alliance with digital machines.
We’re reinventing the human race right now.
Another common thread running across a number of the answers is the question of what constitutes "artificial" intelligence in the first place and how we draw the line between machine thought and human thought. Caltech theoretical physicist and cosmologist Sean Carroll performs elegant semantic acrobatics to invert the question:
We are all machines that think, and the distinction between different types of machines is eroding.
We pay a lot of attention these days, with good reason, to “artificial” machines and intelligences – ones constructed by human ingenuity. But the “natural” ones that have evolved through natural selection, like you and me, are still around. And one of the most exciting frontiers in technology and cognition is the increasingly permeable boundary between the two categories.
Art from Alice in Quantumland by Robert Gilmore, an allegory of quantum physics inspired by Alice in Wonderland
Developmental psychologist Alison Gopnik, who has revolutionized our understanding of how babies think, considers the question from a complementary angle:
Computers have become highly skilled at making inferences from structured hypotheses, especially probabilistic inferences. But the really hard problem is deciding which hypotheses, out of all the many possibilities, are worth testing. Even preschoolers are remarkably good at creating brand-new, out-of-the-box concepts and hypotheses in a creative way. Somehow they combine rationality and irrationality, systematicity and randomness, to do this, in a way we haven’t even begun to understand. Young children’s thoughts and actions often do seem random, even crazy – just join in a three-year-old pretend game sometime... But they also have an uncanny capacity to zero in on the right sort of weird hypothesis; in fact, they can be substantially better at this than grown-ups.
Of course, the whole idea of computation is that once we have a complete step-by-step account of any process, we can program it on a computer. And after all, we know there are intelligent physical systems that can do all these things. In fact, most of us have actually created such systems and enjoyed doing it, too (well, at least in the earliest stages). We call them our kids. Computation is still the best – indeed, the only – scientific explanation we have of how a physical object like a brain can act intelligently. But at least for now, we have almost no idea at all how the sort of creativity we see in children is possible. Until we do, the largest and most powerful computers will still be no match for the smallest and weakest humans.
Art by Ben Newman from Professor Astro Cat’s Frontiers of Space by computer scientist Dominic Walliman
In my own contribution to the volume, I consider the question of "thinking machines" from the standpoint of what thought itself is and how our human solipsism is limiting our ability to envision and recognize other species of thinking:
Thinking isn’t mere computation – it’s also cognition and contemplation, which inevitably lead to imagination. Imagination is how we elevate the real toward the ideal, and this requires a moral framework of what is ideal. Morality is predicated on consciousness and on having a self-conscious inner life rich enough to contemplate the question of what is ideal. The famous aphorism attributed to Einstein – “Imagination is more important than knowledge” – is interesting only because it exposes the real question worth contemplating: not that of artificial intelligence but of artificial imagination.
Of course, imagination is always “artificial,” in the sense of being concerned with the unreal or trans-real – of transcending reality to envision alternatives to it – and this requires a capacity for accepting uncertainty. But the algorithms driving machine computation thrive on goal-oriented executions in which there’s no room for uncertainty. “If this, then that” is the antithesis of imagination, which lives in the unanswered, and often vitally unanswerable, realm of “What if?” As Hannah Arendt once wrote, losing our capacity for asking such unanswerable questions would be to “lose not only the ability to produce those thought-things that we call works of art but also the capacity to ask all the unanswerable questions upon which every civilization is founded.”
Will machines ever be moral, imaginative? It’s likely that if and when they reach that point, theirs will be a consciousness that isn’t beholden to human standards. Their ideals will not be our ideals, but they will be ideals nonetheless. Whether or not we recognize those processes as thinking will be determined by the limitations of human thought in understanding different – perhaps wildly, unimaginably different – modalities of thought itself.
Futurist and Wired founding editor Kevin Kelly takes a similar approach:
The most important thing about making machines that can think is that they will think differently.
Because of a quirk in our evolutionary history, we are cruising as if we were the only sentient species on our planet, leaving us with the incorrect idea that human intelligence is singular. It is not. Our intelligence is a society of intelligences, and this suite occupies only a small corner of the many types of intelligences and consciousnesses possible in the universe. We like to call our human intelligence “general purpose,” because, compared with other kinds of minds we’ve met, it can solve more kinds of problems, but as we continue to build synthetic minds, we’ll come to realize that human thinking isn’t general at all but only one species of thinking.
The kind of thinking done by today’s emerging AIs is not like human thinking.
AI could just as well stand for Alien Intelligence. We cannot be certain that we’ll contact extraterrestrial beings from one of the billion Earthlike planets in the sky in the next 200 years, but we can be almost 100 percent certain that we’ll have manufactured an alien intelligence by then. When we face those synthetic aliens, we’ll encounter the same benefits and challenges we expect from contact with ET. They’ll force us to reevaluate our roles, our beliefs, our goals, our identity. What are humans for? I believe our first answer will be that humans are for inventing new kinds of intelligences that biology couldn’t evolve. Our job is to make machines that think differently – to create alien intelligences. Call them artificial aliens.
Art from a vintage children's-book adaptation of Voltaire's Micromégas, a seminal work of science fiction and an allegory of what it means to be human
Linguist and anthropologist Mary Catherine Bateson – whose mother happens to be none other than Margaret Mead – directly questions how the emergence of artificial intelligence will interact with our basic humanity:
Will humor and awe, kindness and grace, be increasingly sidelined, or will their value be recognized in new ways? Will we be better or worse off if wishful thinking is eliminated and, perhaps along with it, hope?
This, indeed, is another of the common threads – the question of moral responsibility implicit in the future of artificial intelligence. Philosopher Daniel Dennett, who has pondered the flaws of our intuition, redirects our misplaced fears about artificial intelligence toward the more appropriate focus of our concern:
After centuries of hard-won understanding of nature that now permits us, for the first time in history, to control many aspects of our destinies, we’re on the verge of abdicating this control to artificial agents that can’t think, prematurely putting civilization on autopilot. The process is insidious, because each step of it makes good local sense, is an offer you can’t refuse. You’d be a fool today to do large arithmetical calculations with pencil and paper when a hand calculator is much faster and almost perfectly reliable (don’t forget about round-off error), and why memorize train timetables when they’re instantly available on your smartphone? Leave the map reading and navigation to your GPS; it isn’t conscious, it can’t think in any meaningful sense, but it’s much better than you are at keeping track of where you are and where you want to go.
But by outsourcing the drudgery of thought to machines, Dennett argues, we are rendering ourselves at once obsolete and helplessly dependent:
What’s wrong with turning over the drudgery of thought to such high-tech marvels? Nothing, so long as (1) we don’t delude ourselves, and (2) we somehow manage to keep our own cognitive skills from atrophying.
He drives the point home with a simple, discomfiting thought experiment:
As we become ever more dependent on these cognitive prostheses, we risk becoming helpless if they ever shut down. The Internet is not an intelligent agent (well, in some ways it is), but we have nevertheless become so dependent on it that were it to crash, panic would set in and we could destroy society in a few days. That’s an event we should bend our efforts to averting now, because it could happen any day.
The real danger, then, is not machines that are more intelligent than we are usurping our role as captains of our destinies. The real danger is basically clueless machines being ceded authority far beyond their competence.
Art from Alice in Quantumland by Robert Gilmore, an allegory of quantum physics inspired by Alice in Wonderland
Computer scientist and inventor Danny Hillis similarly urges prudent progress:
Machines that think will think for themselves. It’s in the nature of intelligence to grow, to expand like knowledge itself.
Like us, the thinking machines we make will be ambitious, hungry for power – both physical and computational – but nuanced with the shadows of evolution. Our thinking machines will be smarter than we are, and the machines they make will be smarter still. But what does that mean? How has it worked so far? We’ve been building ambitious semi-autonomous constructions for a long time – governments and corporations, NGOs. We designed them all to serve us and to serve the common good, but we aren’t perfect designers and they’ve developed goals of their own. Over time, the goals of the organization are never exactly aligned with the intentions of the designers.
He calls the notion of smart machines capable of building even smarter machines "the most important design problem of all time" and adds:
Like our biological children, our thinking machines will live beyond us. They need to surpass us too, and that requires designing into them the values that make us human. It’s a hard design problem, and it’s important that we get it right.
In the collection's pithiest contribution, Freeman Dyson, he of great wisdom on the future of science, answers with a brilliant reverse Turing Test of sorts:
I do not believe that machines that think exist, or that they are likely to exist in the foreseeable future. If I am wrong, as I often am, any thoughts I might have about the question are irrelevant. If I am right, then the whole question is irrelevant.