BOOK EXCERPT

Divine genius does not exist: Hard work, not magical inspiration, is the essence of creativity

Mozart knew it as well as anyone: Creativity is not magic -- it's work

Published February 1, 2015 9:00PM (EST)


Excerpted from "How to Fly a Horse: The Secret History of Creation, Invention, and Discovery"

In 1815, Germany’s General Music Journal published a letter in which Mozart described his creative process:

When I am, as it were, completely myself, entirely alone, and of good cheer; say traveling in a carriage, or walking after a good meal, or during the night when I cannot sleep; it is on such occasions that my ideas flow best and most abundantly. All this fires my soul, and provided I am not disturbed, my subject enlarges itself, becomes methodized and defined, and the whole, though it be long, stands almost finished and complete in my mind, so that I can survey it, like a fine picture or a beautiful statue, at a glance. Nor do I hear in my imagination the parts successively, but I hear them, as it were, all at once. When I proceed to write down my ideas the committing to paper is done quickly enough, for everything is, as I said before, already finished; and it rarely differs on paper from what it was in my imagination.

In other words, Mozart’s greatest symphonies, concertos, and operas came to him complete when he was alone and in a good mood. He needed no tools to compose them. Once he had finished imagining his masterpieces, all he had to do was write them down.

This letter has been used to explain creation many times. Parts of it appear in The Mathematician’s Mind, written by Jacques Hadamard in 1945, in Creativity: Selected Readings, edited by Philip Vernon in 1976, in Roger Penrose’s award-winning 1989 book, The Emperor’s New Mind, and it is alluded to in Jonah Lehrer’s 2012 bestseller Imagine. It influenced the poets Pushkin and Goethe and the playwright Peter Shaffer. Directly and indirectly, it helped shape common beliefs about creating.

But there is a problem. Mozart did not write this letter. It is a forgery. This was first shown in 1856 by Mozart’s biographer Otto Jahn and has been confirmed by other scholars since.

Mozart’s real letters—to his father, to his sister, and to others—reveal his true creative process. He was exceptionally talented, but he did not write by magic. He sketched his compositions, revised them, and sometimes got stuck. He could not work without a piano or harpsichord. He would set work aside and return to it later. He considered theory and craft while writing, and he thought a lot about rhythm, melody, and harmony. Even though his talent and a lifetime of practice made him fast and fluent, his work was exactly that: work. Masterpieces did not come to him complete in uninterrupted streams of imagination, nor without an instrument, nor did he write them down whole and unchanged. The letter is not only forged, it is false.

It lives on because it appeals to romantic prejudices about invention. There is a myth about how something new comes to be. Geniuses have dramatic moments of insight where great things and thoughts are born whole. Poems are written in dreams. Symphonies are composed complete. Science is accomplished with eureka shrieks. Businesses are built by magic touch. Something is not, then is. We do not see the road from nothing to new, and maybe we do not want to. Artistry must be misty magic, not sweat and grind. It dulls the luster to think that every elegant equation, beautiful painting, and brilliant machine is born of effort and error, the progeny of false starts and failures, and that each maker is as flawed, small, and mortal as the rest of us. It is seductive to conclude that great innovation is delivered to us by miracle via genius. And so the myth.

The myth has shaped how we think about creating for as long as creating has been thought about. In ancient civilizations, people believed that things could be discovered but not created. For them, everything had already been created; they shared the perspective of Carl Sagan’s joke on this topic: “If you want to make an apple pie from scratch, you must first invent the universe.” In the Middle Ages, creation was possible but was reserved for divinity and those with divine inspiration. In the Renaissance, humans were finally thought capable of creation, but they had to be great men—Leonardo, Michelangelo, Botticelli, and the like. As the nineteenth century turned into the twentieth, creating became a subject for philosophical, then psychological investigation. The question being investigated was “How do the great men do it?” and the answer had the residue of medieval divine intervention. A lot of the meat of the myth was added at this time, with the same few anecdotes about epiphanies and genius—including hoaxes like Mozart’s letter—being circulated and recirculated. In 1926, Alfred North Whitehead made a noun from a verb and gave the myth its name: creativity.

The creativity myth implies that few people can be creative, that any successful creator will experience dramatic flashes of insight, and that creating is more like magic than work. A rare few have what it takes, and for them it comes easy. Anybody else’s creative efforts are doomed.

How to Fly a Horse is about why the myth is wrong. I believed the myth until 1999. My early career—at London University’s student newspaper, at a Bloomsbury noodle start-up called Wagamama, and at a soap and paper company called Procter & Gamble—suggested that I was not good at creating. I struggled to execute my ideas. When I tried, people got angry. When I succeeded, they forgot that the idea was mine. I read every book I could find about creation, and each one said the same thing: ideas come magically, people greet them warmly, and creators are winners. My ideas came gradually, people greeted them with heat instead of warmth, and I felt like a loser. My performance reviews were bad. I was always in danger of being fired. I could not understand why my creative experiences were not like the ones in the books.

It first occurred to me that the books might be wrong in 1997, when I was trying to solve an apparently boring problem that turned out to be interesting. I could not keep a popular shade of Procter & Gamble lipstick on store shelves. Half of all stores were out of stock at any given time. After much research, I discovered that the cause of the problem was insufficient information. The only way to see what was on a shelf at any moment was to go look. This was a fundamental limit of twentieth-century information technology. Almost all the data entered into computers in the 1900s came from people typing on keyboards or, sometimes, scanning bar codes. Store workers did not have time to stare at shelves all day, then enter data about what they saw, so every store’s computer system was blind. Shopkeepers did not discover that my lipstick was out of stock; shoppers did. The shoppers shrugged and picked a different one, in which case I probably lost the sale, or they did not buy lipstick at all, in which case the store lost the sale, too. The missing lipstick was one of the world’s smallest problems, but it was a symptom of one of the world’s biggest problems: computers were brains without senses.

This was so obvious that few people noticed it. Computers were fifty years old in 1997. Most people had grown up with them and had grown used to how they worked. Computers processed data that people entered. As their name confirmed, computers were regarded as thinking machines, not sensing machines.

But this is not how intelligent machines were originally conceived. In 1950, Alan Turing, computing’s inventor, wrote, “Machines will eventually compete with men in all purely intellectual fields. But which are the best ones to start with? Many people think that a very abstract activity, like the playing of chess, would be best. It can also be maintained that it is best to provide the machine with the best sense organs that money can buy. Both approaches should be tried.”

Yet few people tried that second approach. In the twentieth century, computers got faster and smaller and were connected together, but they did not get “the best sense organs that money can buy.” They did not get any “sense organs” at all. And so in May 1997, a computer called Deep Blue could beat the reigning human chess world champion, Garry Kasparov, for the first time ever, but there was no way a computer could see if a lipstick was on a shelf. This was the problem I wanted to solve.

I put a tiny radio microchip into a lipstick and an antenna into a shelf; this, under the catchall name “Storage System,” became my first patented invention. The microchip saved money and memory by connecting to the Internet, newly public in the 1990s, and saving its data there. To help Procter & Gamble executives understand this system for connecting things like lipstick—and diapers, laundry detergent, potato chips, or any other object—to the Internet, I gave it a short and ungrammatical name: “the Internet of Things.” To help make it real, I started working with Sanjay Sarma, David Brock, and Sunny Siu at the Massachusetts Institute of Technology. In 1999, we cofounded a research center, and I emigrated from England to the United States to become its executive director.

In 2003, our research center had 103 corporate sponsors, plus additional labs at universities in Australia, China, England, Japan, and Switzerland, and the Massachusetts Institute of Technology signed a lucrative license deal to make our technology commercially available.

In 2013, my phrase “Internet of Things” was added to the Oxford Dictionaries, which defined it as “a proposed development of the Internet in which everyday objects have network connectivity, allowing them to send and receive data.”

Nothing about this experience resembled the stories in the “creativity” books I had read. There was no magic, and there had been few flashes of inspiration—just tens of thousands of hours of work. Building the Internet of Things was slow and hard, fraught with politics, infested with mistakes, unconnected to grand plans or strategies. I learned to succeed by learning to fail. I learned to expect conflict. I learned not to be surprised by adversity but to prepare for it.

I used what I discovered to help build technology businesses. One was named one of the ten “Most Innovative Companies in the Internet of Things” in 2014, and two were sold to bigger companies—one less than a year after I started it.

I also gave talks about my experiences of creating. My most popular talk attracted so many people with so many questions that, each time I gave it, I had to plan to stay for at least an hour afterward to answer them. That talk is the foundation of this book. Each chapter tells the true story of a creative person; each story comes from a different place, time, and creative field and highlights an important insight about creating. There are tales within the tales, and departures into science, history, and philosophy.

Taken together, the stories reveal a pattern for how humans make new things, one that is both encouraging and challenging. The encouraging part is that everyone can create, and we can show that fairly conclusively. The challenging part is that there is no magic moment of creation. Creators spend almost all their time creating, persevering despite doubt, failure, ridicule, and rejection until they succeed in making something new and useful. There are no tricks, shortcuts, or get-creative-quick schemes. The process is ordinary, even if the outcome is not.

Creating is not magic but work.

From the book "How to Fly a Horse: The Secret History of Creation, Invention, and Discovery" by Kevin Ashton. Copyright © 2015 by Kevin Ashton. Published by arrangement with Doubleday, an imprint of The Knopf Doubleday Group, a division of Random House LLC.


By Kevin Ashton

Kevin Ashton led pioneering work on RFID (radio frequency identification) networks, for which he coined the term “the Internet of Things,” and cofounded the Auto-ID Center at MIT. His writing about innovation and technology has appeared in Quartz, Medium, The Atlantic, and The New York Times.


