Sunday, December 6, 2009

True Creativity

Some months ago I was asked to write a chapter on dyslexia and creativity for a new book to be published in the United Kingdom next year. The chapter is done. But in doing it I was made aware once again how central creativity is to my own investigations -- that is, real creativity -- not mindless reverence for the merely different or the shocking.

In my view, creativity is one of the highest forms of human accomplishment -- and often involves a perfect solution to a major problem or question -- whether in art or design or science or politics. Yet, the term and the idea are often misunderstood and frequently abused.

Many years ago, I attended a small conference on creativity organized by the Smithsonian Institution here in Washington, DC. The speakers were a Harvard psychologist, a local woman known for her witty books on ordinary life, a sculptor and a self-satisfied chair of the panel.

Many years later I happened to touch on the subject in a conversation with a friend of a friend. We did not know each other then but we had both attended the same small Smithsonian conference that day -- and we had both come (strikingly) to the same conclusion.

We had both felt that no one on the panel knew anything about real creativity (as we had experienced it) -- except, possibly, the sculptor. But it seemed that he had limited means of articulating his experience.

Other than the sculptor, the panelists seemed mainly pleased with themselves -- full of ego and arrogance -- delighting in their own cleverness and wit.

In contrast, we both felt that we had been privileged to experience real creativity -- which was as rare as it was wonderful.

In both cases we had felt that we were taking dictation -- from some other place. Ego had no part in it. I can recall trying to write as fast as I could, fearing to lose some of the wonderful strings of words that fit together so beautifully and expressed exactly what needed to be said. Of course, in some sense it must have come from me. But that is not how it was experienced. I was full of gratitude, as was my new friend.

We both saw that a true test of real creativity might very well be the unexpected perfection of the product or of the solution to the problem. There is no sense of “how clever am I.” Rather, there is deep gratitude for the gift one has been given.

In a brief prose essay, Robert Frost seems to have captured something of this sense of what it means to be involved in a genuine creative process --

“Scholars and artists thrown together are annoyed at the puzzle of where they differ. Both work from knowledge; but I suspect they differ most importantly in the way their knowledge is come by. Scholars get theirs with conscientious thoroughness along projected lines of logic; poets theirs cavalierly and as it happens in and out of books.

“They stick to nothing deliberately, but let what will stick to them like burrs where they walk in the fields. . . . The artist must value himself as he snatches a thing from some previous order with not so much as a ligature clinging to it of the old place where it was organic. . . .

“No tears in the writer, no tears in the reader. No surprise for the writer, no surprise for the reader.

“For me the initial delight is in the surprise of remembering something I didn't know I knew. I am in a place, in a situation, as if I had materialized from a cloud or risen out of the ground. There is a glad recognition of the long lost and the rest follows. Step by step the wonder of unexpected supply keeps growing. . . .

“It must be a revelation, or series of revelations, as much for the poet as for the reader. For it to be that there must have been the greatest freedom of the material to move about in it and to establish relations in it regardless of time and space, previous relation, and everything but affinity.”

Quotation from Robert Frost, "The Figure a Poem Makes," Complete Poems, 1967.

Tuesday, October 27, 2009

The Dangers of Learning to Read

In our modern culture, reading is seen as an unmitigated good -- the source of accumulated knowledge as well as social and economic advancement. However, there appears to be an unexpected dark side. Observing world events, I am often reminded of this other side of reading. In a historical context that may be relevant to our times, we can see that literacy, especially new literacy in a formerly backward group, may be linked to intolerance, radicalism and the worst kinds of violence. Author Leonard Shlain tells us of the original “iconoclasts,” long ago:

“In the eighth century, a sect arose from within the ranks of its highly literate clergy that so despised images that its members declared an all-out war against statues and paintings. . . . At first, they sought out only religious images to smash. Church mosaics, painted icons, and stained-glass artistry fell to their savage assaults. Later their targets also included painters, sculptors and craftsmen. They even murdered those whose crime it was to love art. Monks who resisted were blinded and had their tongues torn out. The iconoclasts beheaded the Patriarch of the Eastern Church in 767 for refusing to support their cause. The iconoclast movement never spread to illiterate Western Europe; its madness consumed only the segment of Christendom that boasted the highest literacy rate. Artists fled for their lives from Byzantium, heading for the western court of Charlemagne whose largely illiterate courtiers welcomed them with open arms.”

Re-consecrated Shrines

When we are trying to understand something fundamental about human beings and the human brain, it seems wise to look, as much as possible, to other ages and other cultures to see the full range of what we need to consider. This is effectively what has been provided by Leonard Shlain in his book The Alphabet Versus the Goddess--The Conflict Between Word and Image.

Shlain, a surgeon from Mill Valley, California, spent seven years drawing together elements from many cultures and thousands of years of history to weave a narrative and an argument about the sometimes catastrophic interplay of image, alphabetic writing, religion, gender relationships and human history. Given the vast sweep of the topic, Shlain's achievement is astonishing -- although it is not always entirely convincing. One does not have to accept all of Shlain's argument, however, to be persuaded that he is dealing with a topic that is well worth our attention. His view is bold and he delivers new insights and information that substantially enlarge our understanding of important historical dynamics -- as well as helping us, strangely perhaps, to develop a better understanding of some of the main issues of our own time.

While on a tour of Mediterranean archaeological sites years ago, Shlain was told that many shrines had originally been consecrated to a female deity. Then, later, “for unknown reasons, unknown persons reconsecrated” the shrines to a male deity. After some consideration, Shlain “was struck by the thought that the demise of the Goddess, the plunge in women's status, and the advent of harsh patriarchy and misogyny occurred around the time that people were learning how to read and write.”

He wondered whether “there was something in the way people acquired this new skill that changed the brain's actual structure.” Shlain points out that in the developing brain, “differing kinds of learning will strengthen some neuronal pathways and weaken others.” Applying what is known of the individual brain to that of a whole culture, Shlain “hypothesized that when a critical mass of people within a society acquire literacy, especially alphabet literacy, left hemispheric modes of thought are reinforced at the expense of right hemispheric ones. . . .” This change resulted, he proposed, in “a decline in the status of images, women's rights, and goddess worship.”

Using Both Sides

In developing this approach, Shlain points out that his own occupation as a surgeon (and as an associate professor of surgery) probably has contributed in significant ways. It is often observed that, by selection, training and daily work, surgeons must move constantly back and forth between right-hemisphere and left-hemisphere modes of thought. Accordingly, Shlain observes that his “unique perspective led [him] to propose a neuroanatomical hypothesis to explain why goddesses and priestesses disappeared from Western religions.”

The experience of surgeons is thus unlike that of many scholars and historians. The latter are expected to rely mainly on one side -- the left side of the brain, the world (generally) of words, grammar, logic and highly specialized analysis. Less weight is given to the pictures, images and the large-scale, global view so characteristic of the right side of the brain. It is widely recognized in some circles that there is often a trade-off between verbal and visual skills. This trade-off is recognized in the half-serious joke sometimes told by neuroscientists: “Never trust a surgeon who can spell.” If you are too good with the mechanics of writing, perhaps you may not be good enough with the mechanics of visualizing, locating and removing a dangerous tumor. Unlike many other professionals, surgeons cannot avoid being both “bookish” and “hands on.”

Two Hemispheres Through History

Years ago, when I was researching my earlier book, In the Mind's Eye, I found that always in the background, behind and under every story and every neurological observation, was my own awareness of the larger implications of the dual nature of the two hemispheres of the human brain. I was aware that this then relatively new understanding of the brain provided the larger context for most of the things I was writing. (While we have since learned that the roles of the two hemispheres are more complex than previously thought, the contrasting functions are still useful ways of thinking about the brain and cognitive processes.)

Along with this awareness, however, came a quiet but persistent series of questions. If we are now, in fact, moving from a present world largely based on words to an emerging new world increasingly based on images, has this happened before in other periods of history and how did it happen? In the past, were there whole societies and cultures largely based on right-hemisphere kinds of knowledge -- as ours seems to be based largely on left-hemisphere forms of knowledge and understanding? What would be the main consequences of following one approach over the other? What is gained and what is lost in each direction? And what happens to various factions and power groups within these societies when one group takes over from the other and there is a substantial change in one direction or another?

I wondered why certain religions and certain cultures seem to revere the written word and the book so very highly (two relatively new technologies in the long history of the human race)--and seem so ready, from time to time, to explode with a destructive force full of fear and hatred for images and everything linked to them. And what might all this mean for us today if we are, in fact, beginning to go through such a major change once again? I knew just enough of history to suspect that there was a major story to be told. But these questions were outside the scope of my own research--and I had no time to look into them further.

Years later, Shlain's wide-ranging analysis has provided a rich and thought-provoking series of possible answers to these questions. His observations show some of the wonderful possibilities, but also some of the frightening prospects. It is the kind of book that holds your attention long after you have put it down -- turning the evidence and arguments over in your mind, returning to passages, trying to see whether or not the pattern holds -- and trying to sort out what it might mean for our own times. It is a very different picture from what we are usually given. It is full of ideas that many will find very hard to accept. Sometimes he seems to push his material too hard to make it fit his thesis. However, in the end, his perspective may prove to be far more perceptive and pertinent than many more conventional interpretations by specialists and professionals.

In a series of 35 tightly constructed chapters Shlain surveys an enormously broad territory--“Image/Word,” “Hunters/Gatherers,” “Right Brain/Left Brain,” “Hieroglyphs/Isis,” “Abraham/Moses,” “Athens/Sparta,” “Taoism/Confucianism,” “Jesus/Christ,” “Muslim Veils/Muslim Words,” “Mystic/Scholastic,” “Protestant/Catholic,” “Sorcery/Science,” “Page/Screen.” With example after example, he attempts to show that, in general, the old goddess-linked, polytheistic religions are more concerned with the cycles of life, more tolerant, less given to religious warfare and tend to exhibit the values and perspectives of the right hemisphere. The newer, literacy-linked, monotheistic religions, on the other hand, are more given to single-minded pursuit of narrow group goals, are often intolerant and self-righteous in the extreme, can be extraordinarily savage in extended religious warfare (in spite of the peaceful religious teachings they pretend to follow) and tend to exhibit the values and narrow perspectives typical of the left hemisphere of the brain.

Shlain argues that these changes were brought about, remarkably, by learning to use alphabetic writing systems. “Aside from obvious benefits that derived from their ease of use, alphabets produced a subtle change in cognition that redirected human thinking. . . . Alphabets reinforced only half of the dual strategy that humans had evolved to survive. . . .” Each part of this “duality perceived and reacted to the world in a different way; a unified response emerged only when both complementary halves were used.” “All forms of writing increase the left brain's dominance over the right.” Learning to read and write “supplants all-at-once gestalt perception with a new, unnatural, highly abstract one-at-a-time cognition.”

New Thoughts About The New World

Consequently, according to Shlain, the rapid spread of literacy and inexpensive printed materials with Gutenberg's press in 1454 had mixed results. “The rapid rise of literacy rates wrought by the printing press was a boon to European science, literature, poetry, and philosophy. And yet it seemed no country could escape the terrible religious upheaval that inevitably followed the march of the metal letters.” Shlain provides detailed descriptions of the religious wars of this period.

The possibilities inherent in one predisposition versus another are probably most clear in Shlain's speculations about the discovery of the New World. If the Old World discoverers had been more tolerant and less single-minded, he argues, this sad period of history might have been very different. “Had the discovery and invasion of the New World been undertaken by a culture other than sixteenth-century Europeans driven mad by the printing press, a different scenario might have ensued. In the fourth century B.C., Alexander the Great made peace treaties with Dravidian tribes in India and Scythians in Thrace; people as exotic as any he would have encountered in America.

Unencumbered by the intolerance that comes with alphabet monotheism, Alexander did not feel compelled to eradicate the local religions and enslave the native populations.” Alternatively, “If Julius Caesar had discovered the New World, would he have destroyed the local population, stolen their lands, and rooted out their culture? Likely not. This wise pagan would have forged alliances, fostered trade, and treated the people with respect.” This should be expected, according to Shlain, because this is the policy he actually pursued with the “blue-painted Celts and Picts.”

It is noteworthy that in Shlain's view, the most dangerous historical times appear to be soon after the growth and establishment of widespread literacy. The more people learned to read, the more likely they were to find good and authoritative reasons to begin slaughtering each other.

It is doubtful whether this will be a popular view among the sponsors of the growing number of well-intentioned literacy programs. However, perhaps we can be grateful that in the US and other advanced economies we are now mostly working on the last few percentage points -- rather than the first burst of widespread literacy, as in other parts of the world, especially certain developing countries. For the advanced economies, the dangerous period has largely passed; for the newly developing, the dangerous period has just begun (giving us a new and troubling perspective on the rising militancy of fundamentalist religions in a number of countries). Not being aware of this pattern, our leaders and their advisors are puzzled by world events as they unfold.

Hoping For A New Balance

Shlain gives us an unsettling picture of what can happen with the rapid spread and deep effects of a powerful technology -- reading, writing and the book. In his Epilogue, however, he apologizes for his criticism of the books he loves so dearly. “Throughout, as a writer, as an avid reader, and as a scientist, I had the uneasy feeling that I was turning on one of my best friends.” However, he felt that he had to point out the “pernicious side effect” of literacy that “has gone essentially unnoticed.”

What is most important is finding a new balance once again. He notes that “even when we become aware that literacy has a downside, no reasonable person would . . . recommend that people not become literate. Instead, we seek a renewed respect for iconic information, which in conjunction with the ability to read, can bring our two hemispheres into greater equilibrium and allow both individuals and cultures to become more balanced.”

The promise of this new balance leads Shlain to foresee a brighter future. “I am convinced,” he asserts, “we are entering a new Golden Age -- one in which the right-hemispheric values of tolerance, caring, and respect for nature will begin to ameliorate the conditions that have prevailed for the too-long periods during which left-hemispheric values were dominant. Images, of any kind, are the balm bringing about this worldwide healing.”

(It is worth noting, parenthetically, a possible alternative variation of Shlain’s theory. As we have seen, Shlain argues that the new development of literacy has a strong tendency to change people’s brains, emphasizing left-hemisphere modes of thought. Such effects may be subject to debate. However, there is an alternative dynamic that Shlain does not mention, one implicit in his argument and perhaps an important contributing factor. Just as individual brains might be changed, so whole populations might be changed as well. Perhaps it is not so much that the brains of individuals are changed (so quickly) but that with the spread of reading those individuals (and factions) with a certain skill and talent mix suddenly achieve, because of the new importance of reading, a status and power that they never had before -- bringing along their mainly left-hemisphere (one-dimensional and single-minded) view of the world. In other words, in a new reading-based culture and power structure, those with natural inclinations toward reading proficiency come to prosper, rising quickly to the top ranks. As a consequence, left-hemisphere values and views of the world become an increasingly dominant part of this new culture. At the same time, right-hemisphere values of balance and tolerance are overwhelmed, at least for a time. Thus, alternatively, it may not be that all brains are quickly changed, but rather that the whole population comes to be dominated by those with a certain kind of brain -- people who gain a power they never had before through the broad ascendancy of a new technology -- that is, reading, writing and the making of books that speak with magical and unassailable authority.)

As a group of people interested in the image in its many forms, we may hope that Shlain is correct in his future expectation of a new balance. However, we may also hope that we will not see a revival of those who are single-minded in their love only for the written word, smashing images on every side in their passionate intensity.

Passion Over “The Passion”

A relatively short time ago we might have wondered whether the actions of historically distant Christian reformers or Islamic fundamentalists would ever bear on our near interests today. However, it has become increasingly clear that these issues are becoming more relevant with each passing month. Indeed, I sometimes wonder whether we may be going through one of those portentous periods where world events and mass media will be dramatically shaped once again by the age-old battle between the image haters and the image lovers.

It is clear that images still stir deep passions--now, however, with a curious reverse twist of which many seem to be unaware. Some years ago, in an article on Mel Gibson’s then new film, “The Passion,” art critic Paul Richard pointed out (“Irony in Passion”) that the film depends heavily on the literal and bloody depictions of the crucifixion of Christ characteristic of the Counter-Reformation art from within the Catholic Church.

Indeed, Richard observes that the great irony here is that the avowed target audience for the film, evangelical Christians, seems to be attracted to the same literal and bloody depictions that were used as a weapon against their own theological ancestors long ago. Such images were hated by the early Reformation, yet their theological descendants have come to embrace them with enthusiasm.

How did all this come to be? In Richard’s words, “Martin Luther’s Reformation was a theological rebellion. No longer would the rebels accept the pope in Rome, or the hierarchy he led, or the Latin of the Mass and of the Vulgate Bible, which most of them could neither read nor understand.” They wanted their own Bible, in their own language, so they could understand and interpret the scriptures for themselves. “They didn’t need the pope, they didn’t need his saints, they didn’t need his priests, and -- as some began insisting -- they didn’t need his art.” [1] They realized that the art of the Catholic Church, and especially the art of the Counter-Reformation, was a counterattack on their own call for an end to all image making (as they believed was required by their reading of scripture) and for extreme simplicity in all things. As Richard notes, this desire for simplicity is still evident in American Protestant buildings. “That plainness is still seen in the clean, white clapboard churches scattered through New England, in the Quaker meeting houses of Pennsylvania, all the way to the Crystal Cathedral in Orange County, Calif. No Catholic paintings taint these sanctuaries.”

Smashing of Images and Restoration

Reminding his readers of the historical events, Richard gives some detail about the Reformation’s role in destroying many works of art through a hatred of images of all kinds. “On Aug. 10, 1566, at Steenvoorde in Flanders, a Calvinist preacher named Sebastian Matte told his listeners to go and smash the art of the Catholic churches. Ten days afterward, the cathedral at Antwerp was methodically trashed.” Although Richard does point out that “later, under Catholic rule, Rubens was commissioned to re-do [the cathedral’s] splendor,” the fate of most churches and cathedrals in Protestant areas was grim indeed. “Such spasms of enthusiastic image-breaking erupted in the British Isles for most of the next century. ‘Lord, what work was here!’ lamented the Bishop of Norwich in 1647. ‘What clattering of glasses! What beating down of walls!’ ”

Eventually, after years of civil war, the image haters came to be in full control of England and in time found reason to chop off the head of their king, Charles I. Later, after years of puritanical and repressive rule by Oliver Cromwell and his supporters, the English people had had enough of it. They then brought back the king’s son and restored him to the throne -- releasing a rebirth of creativity and vitality rarely seen before or since. As art historian Kenneth Clark observed (Civilization), the restoration of Charles II in 1660 “ended the isolation and austerity which had afflicted England for almost fifteen years. As so often happens, a new freedom of movement led to an outburst of pent-up energy. There are usually men of genius waiting for these moments of expansion, like ships waiting for high tide. . . .” So came a rebirth of English accomplishment in art, architecture and science. Sometimes such extreme swings end in a new restoration of balance and a new burst of creativity.

A More Ancient and Kindly Islam

The surprisingly central role of image haters in current world events is strikingly evident in a recent book about problems of democracy by Fareed Zakaria (Future of Freedom). Zakaria argues that “If there is one great cause of the rise of Islamic fundamentalism, it is the total failure of political institutions in the Arab world. Islamic fundamentalism got a tremendous boost in 1979 when Ayatollah Ruhollah Khomeini toppled the staunchly pro-American shah of Iran. The Iranian Revolution demonstrated that a powerful ruler could be taken on by groups within society. It also revealed how in a developing society even seemingly benign forces of progress -- for example, education -- can add to the turmoil.”

Zakaria observes that over past centuries Islam was more adaptable and flexible than what we see today. “Until the 1970s most Muslims in the Middle East were illiterate and lived in villages and towns. They practiced a kind of village Islam that had adapted itself to local cultures and to normal human desires. Pluralistic and tolerant, these villagers often worshiped saints, went to shrines, sang religious hymns, and cherished art -- all technically disallowed in Islam.”

All this was changed by more recent historical forces (of course, in some measure not unlike the Protestant Reformation in the West hundreds of years before): “By the 1970s, however, these societies were being urbanized. People had begun moving out of their villages to search for jobs in towns and cities. Their religious experience was no longer rooted in a specific place with local customs and traditions. At the same time they were learning to read and they discovered that a new Islam was being preached by a new generation of writers, preachers, and teachers. This was an abstract faith not rooted in historical experience but literal and puritanical -- Islam of the high church as opposed to Islam of the street fair.”

It is striking how well this brief aside in Zakaria’s book seems to fit Shlain’s main argument. (For emphasis, I have added the italics.) It is fair to assume that Zakaria knows little or nothing about Shlain’s book and argument. Yet, amazingly, there is a persuasive convergence. Whether Taliban or Al Qaeda, Islam’s puritanical fundamentalists are intent upon destroying images in all forms, just as they are intent upon destroying all tolerant and progressive institutions -- in a manner strikingly similar to the puritanical Protestant Christian fundamentalists of long ago. It is remarkable how this passage reveals how much these patterns still dominate our times and how modern political commentators, however well informed, seem to be unaware of a larger pattern of which their current concerns are, apparently, but the most recent manifestation.

We might hope that over the longer term, unfolding conditions might be more favorable to image lovers, as well as greater tolerance in general. However, in the short run, it would appear that the image haters and image smashers will be shaping world events in the familiar age-old pattern. And we can wonder how long it will go on before people will have had enough of it -- and will want to restore a former balance once again.

Adapted from chapter 6, Thinking Like Einstein by Thomas G. West (Prometheus Books). This book is based on a series of columns (“Images and Reversals”) written over several years for the quarterly journal Computer Graphics, a publication of ACM SIGGRAPH, the international professional association for computer graphic artists and technologists. During these years, the journal editor was Gordon Cameron who now makes software tools at Pixar Animation Studios in Emeryville, California.


Friday, October 23, 2009

Making All Things Make Themselves

I have been reading the new book by Iain McCalman, Darwin's Armada -- Four Voyages and the Battle for the Theory of Evolution. This book has put me in mind of an article I wrote for SIGGRAPH some years back, a version of which was included in Thinking Like Einstein. I think the issues and conceptions presented are still very much alive today -- and are given a new aspect with links to the newest visual technologies.

An article on science and religion in Science magazine noted: "The reaction to the Darwinian theory was . . . diverse when it first exploded onto the Victorian scene. . . . For Charles Kingsley, a deity who could make all things make themselves was far wiser than one who simply made all things."

Computer graphics has always been about making things, whether making 3D images of real objects, making images of imagined visions or making models of the “unseen” in visualizations of scientific data.

During the Renaissance, the goal of art was to imitate nature, to make true and accurate images of the real world. Even in the poetry of the period, the “prime aim … was to make an imitation” in order to “grasp the essential meaning and value.”

Although this goal fell away some time ago, we can now see that our newest tools make it possible to imitate nature once again -- but this time at much deeper levels. The imitation can encompass not only images of surfaces, but also nature’s growth, its physical motion, its processes, its inner workings, its unfolding instructions according to a simple code -- imitating the immense complexity generated by variation of a simple code within a context of meaningful selection. And it is already becoming clear that part of this imitation is developing some understanding of things that “make themselves.”

Virtual Creatures

Years ago, when I first saw the short silent movies of Karl Sims’ block creatures -- walking, swimming, seeking light and guarding their food -- I was thunderstruck. I felt immediately that something very powerful indeed was afoot. And I imagined that what I was seeing was really only the very beginning.

Of course, Sims’ work is in many ways a highly sophisticated extension of Richard Dawkins’ “Blind Watchmaker” program developed for the early Macintosh. With this program (included with Dawkins’ book of the same name), two-dimensional, static, monochrome stick figures varied their form in random ways. This allowed the player to select among an array of mutations for different kinds of shapes and traits, play after play, generation after generation -- until a new and wonderful centipede, starfish or complex crystal was produced.
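For readers who like to see the mechanism spelled out, here is a minimal sketch of this mutate-and-select loop. It is purely my own illustration, not Dawkins’ actual code: the nine-gene genome echoes his biomorphs, but the scoring function is an invented stand-in for the human player’s eye, so that the example can run unattended.

import random

GENOME_LENGTH = 9    # Dawkins' biomorphs used nine "genes"
LITTER_SIZE = 8      # offspring offered to the "player" each generation

def mutate(genome):
    # Copy the parent and nudge one randomly chosen gene up or down.
    child = list(genome)
    i = random.randrange(len(child))
    child[i] += random.choice([-1, 1])
    return child

def preference(genome):
    # Stand-in for the player's eye: here we simply favor bolder shapes.
    return sum(abs(gene) for gene in genome)

parent = [0] * GENOME_LENGTH
for generation in range(20):
    litter = [mutate(parent) for _ in range(LITTER_SIZE)]
    parent = max(litter, key=preference)  # the "player" picks a favorite
    print(generation, parent)

The essential point is that selection is cumulative: each generation begins from the survivor of the last, so small random changes pile up into forms no one designed in advance.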

This piece of software showed the potential of a modest interactive game to reveal a deeper concept, a fundamental process of nature. However, for me, although the stick figures were fascinating, it took the more life-like motions and behavior of Sims’ virtual creatures to drive the point home.

What Works

Accordingly, I think our new tools and our new toys are bringing us face to face with a deeper understanding of how nature’s system really works -- how truly dazzling adaptations can be generated by a relatively simple mechanism constantly interacting with a particular and changing environment.

This deeper understanding also belies the idea of fixed superiority, since so much depends upon one form of environmental context for any particular form of superiority. The system may evolve an amazingly superior swimmer, but this does not make for a superior walker. (Or, as a friend recently pointed out, a good basketball player would probably make a poor jockey.)

With such models, we can see what an effective mechanism evolution is, partly because of this intense and constant mutual interaction between genetic code and outside environment. Indeed, we might reflect that even if there were overt intention by a maker, this might not be a good thing.

We humans are constantly designing things by intention. But often we come up with the “unintended consequences” that are so often observed and lamented. It becomes clear that making things that make themselves may be a much more practical strategy in an ever-changing world.

But there is a rub here -- for many people, an enormous change of metaphor. We now have to view the maker’s role not as a craftsman or designer or engineer, but as a maker of things over which the maker has only limited control. Over time, the vast complexity generates a freedom of form and function well beyond the power of the maker. And often, I would argue, this is a good thing.

Early Discoveries

One of Sims’ creatures in particular stands out in my mind. In his design specifications, “walking” was defined as forward motion. Most creatures found ways to “walk,” more or less as expected. However, to the surprise of the programmer, one creature found that by simply building itself very tall (with an appropriately placed top-heavy weight), it could fall and tumble heel over head so that it could generate ample forward motion and thus satisfy the original selection criteria. Sims observed that this was a strategy that had never occurred to him -- the programmer, the arms-length designer.

What is no less remarkable is that Sims also found that other creatures took advantage of the bugs and programmer’s mistakes in specifying the physical world. Indeed, Sims noted that such a process was so reliable that it might be seen as a “lazy” way of finding such bugs.

Thus he argued that “it is important that the physical simulation be reasonably accurate when optimizing for creatures that can move within it. Any bugs that allow energy leaks from non-conservation, or even round-off errors, will inevitably be discovered and exploited by the evolving creatures. Although this can be a lazy and often amusing approach for debugging a physical modeling system, it is not necessarily the most practical.”
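To make the dynamic concrete, here is a toy sketch of the kind of evolutionary loop Sims describes, with fitness defined simply as net forward motion inside a crude one-dimensional physics simulation. It is my own illustration, not Sims’ code (his creatures were three-dimensional and vastly more elaborate), but it shows why the honesty of the simulation matters: get the drag or the integration wrong and the evolving genomes will find and exploit the leak, just as he warns.

import random

def distance_traveled(genome, dt=0.1, steps=200):
    # Toy 1-D "walker": the genome is a repeating cycle of push forces.
    # Fitness is simply how far the creature gets -- "forward motion."
    x, v = 0.0, 0.0
    for step in range(steps):
        v += genome[step % len(genome)] * dt  # apply the next force
        v *= 0.98                             # drag; an error here leaks energy
        x += v * dt
    return x

def evolve(pop_size=30, genes=10, generations=50):
    population = [[random.uniform(-1.0, 1.0) for _ in range(genes)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by fitness, keep the top half, refill with mutated copies.
        population.sort(key=distance_traveled, reverse=True)
        survivors = population[:pop_size // 2]
        population = survivors + [
            [g + random.gauss(0.0, 0.1) for g in random.choice(survivors)]
            for _ in range(pop_size - len(survivors))]
    return population[0]

best = evolve()
print("best distance:", distance_traveled(best))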

Unexpected innovation and relentless exploitation of tiny areas of possible advantage -- these are impressive indeed and show the amazing power of nature’s engine of adaptation. If these striking innovations are so easy to observe in a synthetic universe, in only a handful of generations, how much more should we expect in the real world of nature?

Of course, when viewed in this way, ample evidence can be seen in every direction. A recent article in Science News, for example, noted that a graduate student had just found a kind of bacteria living on a certain kind of ant that proved to be highly beneficial to the ants. The ants “farm” a certain kind of fungus for their own food.

However, this fungus would be easily killed by another pest fungus -- if it were not for the antibiotic secretions of the ants’ bacterial ride-along buddies. Thus the graduate student “who once mused about funny-looking patches” on the ants found that the insects have “a microscopic partner species overlooked despite about a century of study.”

“What’s interesting from an evolutionary perspective is that once again the ants hit on something before we did. Ants beat humans in developing agriculture by some 50 million years. Now, says [one scientist], it looks as if the same ants came in ahead on bacterial antibiotics by millions of years.”

Learning from the Lowly

Such stories take us to a place where we should be prepared to have much higher respect for apparently ordinary and humble creatures. What we are not sufficiently aware of is that these creatures are the beneficiaries of a system capable of innovation far beyond our own small observations or imaginings.

Perhaps it is time (especially now in an era of climate change) to reconsider centuries of human self-congratulation on our own cleverness and begin to look at the cleverness -- indeed the wisdom -- of humble creatures.

These creatures have used an elegant process not only to produce clever innovations, but to produce innovations that are proven to work, again and again, over millions of years. Such accomplishments should teach us a great deal about the possibilities for “sustainable development.”

We have built an elaborate maze of justifications and explanations. Modern culture has sophisticated arguments to explain why human beings are at the top of the pecking order and why human language is a supreme achievement. Accordingly, it is very hard for us to fully appreciate the accomplishments of the humble ant, the unsavory fungus and the insignificant microbe.

Yet, our newest technologies -- as the microscope and telescope did centuries ago -- are slowly opening up these worlds so that we may see clearly once again what we should have known all along.

However, learning from such lowly creatures is not easy. Our whole culture, for most of us, has trained us not to think such thoughts. Certainly there is very little in the modern Western tradition to cultivate such an appreciation.

Generally, there has been virtually no respect for the intelligence of animals. Intelligence is not seen as manifest in nature. Rather, it comes in spoken words and in writing. Respecting the intelligence of animals and creatures that make themselves would seem to be more consistent with the worldview of our very distant ancestors.

It would indeed be curious if our new visualization technologies and computer simulations were to take us back in time to a place where we can admire -- once again, as our distant ancestors did -- the high intelligence of lowly creatures.

These creatures learn and innovate not from words and logic but from relentless experimentation and selection over long periods of time, thus showing the deep wisdom of making “all things make themselves.”

Based on chapter 12 of Thinking Like Einstein. References: Brooke, John. “Science and Religion: Lessons from History?” Science, Dec. 11, 1998, pp. 1985-1986. Dawkins, Richard. The Blind Watchmaker -- Why the Evidence of Evolution Reveals a Universe Without Design, W.W. Norton, 1987. Dean, Leonard. Renaissance Poetry, Prentice-Hall, 1960, p. 1. Milius, S. “Farmer Ants Have Bacterial Farmhands,” Science News, vol. 155, April 24, 1999, p. 261. Sims, Karl. “Evolving Virtual Creatures,” Computer Graphics Proceedings, Annual Conference Series, 1994, pp. 15-22. More information about Karl Sims’ work can be found at: http://www.karlsims.com/

Sunday, October 11, 2009

Thinking Like Einstein on the Hokule’a

The following observations are based on sections of a chapter from my book Thinking Like Einstein. Here I was trying to make the point that some of the most creative thinking in science and other fields is based on very different ways of gaining knowledge than conventional academic models. I think this has been true for most of human history and is still true today--although this is almost never recognized by modern conventional educational systems.

“We have Hawaiian names for the houses of the stars--the places where they come out of the ocean and go back into the ocean. If you can identify the stars, and if you have memorized where they come up and go down, you can find your direction. The star compass is also used to read the flight path of birds and the direction of waves. It does everything. It is a mental construct to help you memorize what you need to know to navigate. . . .

“We use the best clues that we have. We use the sun when it is low on the horizon. . . . When the sun gets too high you cannot tell where it has risen. You have to use other clues. Sunrise is the most important part of the day. At sunrise you start to look at the shape of the ocean--the character of the sea. You memorize where the wind is coming from. The wind generates the swells. You determine the direction of the swells, and when the sun gets too high, you steer by them. And then at sunset we repeat the observations. . . . At night we use the stars. . . .

“When I came back from my first voyage as a student navigator from Tahiti to Hawai’i the night before he went home, [my teacher] . . . said ‘I am very proud of my student. You have done well for yourself and your people.’ He was very happy that he was going home. He said, ‘Everything you need to see is in the ocean but it will take you twenty more years to see it.’ That was after I had just sailed 7000 miles.” (Navigator Nainoa Thompson of the Polynesian Voyaging Society, having sailed on the traditional Polynesian canoe, the Hokule’a.)

Using the Best Cues We Have

In this passage from Navigator Nainoa Thompson, we see a wonderful description of using well the best of what is ready at hand to do a most important job--one on which rests the survival of a whole people. Indeed, during recent years, the successful voyages of the Hokule’a and other long-distance canoes have become cultural milestones and Nainoa Thompson has become a major hero among Polynesians. These voyages and revived navigation skills have much to teach us all. We are shown a highly refined example of the observation and visual thinking skills needed to navigate across the Pacific. If we had not known better, most of us would have thought that it was not possible to do.

Perhaps we are just now mature enough in our modern culture to fully appreciate what these navigators accomplished in an earlier culture with the simplest of tools and the most sophisticated use of their brains--and to see that such feats rank with the highest accomplishments of human beings, in any field, at any time. We can now see that it is not a matter of developing complex mathematics or the most modern tools and technologies. Rather, it is a matter of using well what is available in the particular situation--developing techniques to train the brain and the senses through close observation, long practice and sensitive teaching--making the best use of what is at hand, using “the best clues that we have.”

Feats such as these draw heavily on visual and spatial abilities and “intelligences” that have been generally under-appreciated in modern culture. But all this is changing and the newest technologies are taking us back to some of our oldest and most essential abilities--teaching us that in some fields, the further forward we proceed, the more we reconnect with our ancient roots.

Thinking Like Einstein

Visual thinking and visual knowledge are a continuing puzzle. They seem to come up more and more these days--but few seem to understand their deep roots and larger implications. The human brain is indeed wonderful--the way it permits us to use all forms of natural systems and subtle information to do rather unbelievable things--and still survive; or rather, in the long run, to do such things precisely in order to survive.

To navigate across thousands of miles of open ocean while feeling the long-distance swells (not sailing past tiny unseen islands just over the horizon), to adapt to amazing extremes of heat and cold--using with great sophistication only those tools and resources that are readily at hand--all without modern technologies or distant supports and hidden subsidies (always a major advantage for modern travelers). All this is accomplished without a book of written instructions. And without full scientific knowledge. But with knowledge enough to hunt food, find home, build shelter, fend off enemies, cooperate with a group and raise a family--for thousands and thousands of years.

Strangely, the further ahead we go, the more our future is seen to be like our distant past. Sometimes, the more we look into our future, the more it is like the very, very old. The more modern we become, really, the more we come to appreciate (belatedly) the long-earned wisdom of traditional cultures.

The more we understand the brain’s deep resources for creativity and pattern recognition, the more we come to respect the accomplishments of our distant ancestors--and appreciate the problems they solved--the solutions that have secured our survival and allowed us to be. The more we move into unfamiliar territory, without map or guidebook, the more we admire traditional knowledge, long discounted by bookish education.

This is not mere romanticism but level-headed respect. The more our technologies change (and also change us), the more we can see that the newest computer data visualization technologies draw on some of our oldest neurological resources--more like those of the hunter-gatherers than those of the scribes, schoolmen and scholars of more recent times. Albert Einstein tells us, as we have seen previously, that all his really important and productive thinking was done by playing with images in his head, in his imagination. Only in a secondary stage did he translate--with great effort, he says--these images into words and mathematics that could be understood by others. We now have technologies that can deal with the images directly--so the laborious translation may often not be necessary or even desirable.

The Rise of Visual Technologies

Some believe that visualization technologies are already in evidence everywhere. They think the battle is over. Others, myself among them, think that visualization technologies have a very, very long way to go and that they have hardly begun to have substantial impact in the full range of fields they will transform over time. We think that, with the exception of a few specialists, the process of deep change has not yet really begun. We think that gaining insight and new understanding through the sophisticated use of visualization technologies and techniques should, in time, be as pervasive as reading and writing.

But there can be little debate that we already have tools that can help us think the way Einstein thought--and it is striking how old and traditional these ways of thinking were. In many ways, Einstein thought and worked more like a craftsman than a scholar. And, indeed, the more proficient he became at the sophisticated science and mathematics of his peers, the less visual he became--and, more important, the less creative and innovative he became.

It would seem that the traditional Polynesian navigators were drawing on some of the same neurological resources that were so very useful to Einstein when he was a young man--before, as we are told by other scientists, he became corrupted by excessive familiarity with sophisticated mathematics. As he became more expert as a scientist and mathematician, he accomplished less. He had abandoned the modes of thought that had given him his best and most original insights.

It is notable that two physicists, Richard Feynman and Abraham Pais, both observed independently that as Einstein grew older, he became less visual in his approach and more adept at conventional mathematics. They both noted that this process seemed to make Einstein much less creative, original and productive in his work. Feynman believed Einstein became much less productive when “he stopped thinking in concrete physical images and became a manipulator of equations.”

Abraham Pais, the author of a scientific biography of Einstein, also noted that Einstein’s increasing dependence on mathematics in later life involved a reduced reliance on the visual and intuitive approaches he had used so heavily and so productively in his earlier work. Pais observes that it is “dangerous and can be fatal to rely on formal [mathematical] arguments,” a danger from which Einstein did not escape. “The emphasis on mathematics is so different from the way the young Einstein used to proceed.”

Traditional Visual Cultures

When I was giving talks to teachers and school heads in Fairbanks, Alaska, several years ago, I was told that the Athabaskan Indian students in the villages along the Yukon River were natural visual thinkers and natural scientists. This should make us think.

We may well wonder whether they would have substantial advantages if they were to be educated in the visual world of Einstein’s imagination and modern computer graphics--rather than the old academic world of facts and dates, words and numbers. Shortly afterward, when I had given talks in Honolulu, others there told me that traditional Polynesian culture quite naturally promotes highly visual and hands-on approaches over verbal approaches to the communication of knowledge.

If we were to fully and deeply understand the roots of knowledge in our own new world, we might see that Einstein’s way of thinking is far more like those used in resurrected traditional cultures than it is like the academic conventions of the distant and recent past. We might see, again, that Einstein’s way of thinking is more like that of the artisan or craftsman or traditional navigator and less like that of the conventionally trained scholar or mathematician or scientist.

Smashing Images

There seems always to have been tension and conflict between the world of the word and the world of the image. In March of 2001, a whirlwind fanned the fires of this ageless battle between the word and the image. As the world watched in horror and disbelief (foreshadowing greater horrors the following September and thereafter), leaders of the Taliban government in Afghanistan decided that it was time to finally destroy all images in their country.

No more playing on international sympathies or bargaining for foreign funds and support. They just declared that the statues were idolatrous and gave the order, and the giant sixth-century Buddha statues of Bamiyan were blasted to rubble. “It took us four days to finish the big statue. He was very strong,” said one soldier.

Thus here we have the two extremes of a global and philosophical continuum. On one side, the image haters, those who would destroy all art in all forms out of a strict obedience to an ancient and narrow prescription. And, on the other side, those who would have art harnessed to shape and serve larger interests.

For a long time (it is not often observed), many Christian denominations have embodied this split as well. Some churches have always used the image to teach church doctrine and Bible stories--especially to those who were not able to learn them through written text. Other Christian groups have tried to avoid all images and decoration--indeed, some of their Protestant forebears smashed the stained glass windows and pulled off the heads of the stone saints just as the Taliban have done in recent times.

In some parts of England, even today, they are still discovering stained glass window sections that were buried long ago to save the Christian images from the enraged, puritanical Christian destroyers of images. Indeed, it is sobering to observe that some of these same Puritan Christians sailed to America on the Mayflower--and, being less able navigators than the ancient Polynesians, ended up in New England rather than Virginia, their intended destination.

Based on sections from Thinking Like Einstein. See also: Nainoa Thompson, “Voyage Into the New Millennium,” Hana Hou! magazine, February/March 2000, pp. 41 ff. Harriet Witt-Miller, “The Soft, Warm, Wet Technology of Native Oceania,” Whole Earth Review, Fall 1991, pp. 64-69. Dennis Kawaharada, “Wayfinding, or Non-Instrument Navigation,” The Polynesian Voyaging Society, http://leahi.kcc.Hawaii.edu/org/pvs/, May 2002.

Sunday, October 4, 2009

Without "Big Science"

I have just crossed the country by car and have been thinking along the way of a chapter on creativity and dyslexia that I have been asked to write for a new book to be published in Britain. In so doing my thoughts have returned repeatedly to a bit of research I did on the Wright Brothers--a story that is far more interesting and timely than one might suppose. The following is based on an early version of this story:

Today, it is pretty much a foregone conclusion that virtually all science of consequence is "big science." There seems to be a nearly universal belief that nothing of importance can be done without a large research staff, expensive equipment and a massive budget. It may be true that much of modern science and technology requires such investment, but not, perhaps, in every case. There are a few counter examples to be considered.

One example comes from the historical pantheon of American technological heroes. Wilbur and Orville Wright provide an illustration of what common sense, basic capabilities and determined effort may accomplish where experts with conventional research programs or gentlemen enthusiasts with grandstand performances had failed repeatedly.

Periodically, there seems to come a time when all the tools and techniques become available for relatively ordinary people to draw together what they need in some novel way. Once these circumstances converge, then the major additional needed ingredient is mainly effort--determined, focused, passionate and unrelenting--along with a willingness to risk many small failures in order to inch forward into unknown territory through a good deal of trial and error.

The Wrights had only their bicycle business and such tools, skills and income as that business made available to them. Yet, with these modest resources, they took what was essentially an off-season hobby and turned it into a dramatically successful achievement, one that had previously eluded all the professional efforts.

(There is an aspect of evolutionary selection here as well. It matters most of all, perhaps, that at a particular time and place all the resources to do a job are widely available. Many will fail. A few will succeed. But with widely distributed capabilities, the whole enterprise moves ahead at a much more rapid rate. During the late nineteenth century, we are told, the total number of inventions and patents increased dramatically. The number of foolish inventions also increased greatly. But what is more important is that the number of really good inventions increased greatly as well.)

Neither Wilbur nor Orville Wright ever finished high school, although other members of the family were college educated. (There is no evidence of learning disabilities in either brother.) Their father was an educated religious leader and administrator, a traveling Bishop of the United Brethren in Christ. Their sister eventually attended Oberlin College to earn her teaching credentials.

The brothers initially went into business as job printers and publishers of several small local newspapers, but they eventually stumbled into the suddenly popular and modestly profitable bicycle business that started to flourish in the early 1890s. The business involved bicycle sales (with weekly-installment payments), rental and repair in the summer with cottage-industry manufacturing in the winter. The manufacturing operation was easily run by the two brothers, who were observed to "combine mechanical ability with intelligence in about equal amounts." An old school friend who had originally joined their printing operation did the final bicycle assembly. Orville operated the enameling oven while Wilbur did the brazing using a brazier that they had designed themselves.

There seemed to be a relatively aimless quality about the young adult years of the two brothers in Dayton. Their printing business was in part a continuation of Orville's summer job in high school years. Off and on Wilbur had considered going to college. But in high school a sporting injury and a "vaguely defined" heart ailment caused Wilbur to drop this plan and he stayed at home for several years, partly taking care of his semi-invalid mother until her death in 1889--while his father's work continued to require extensive travel and long absences from home.

Wilbur's slow recovery and apparent lack of drive during these years exasperated his brother Reuchlin who had married and moved to Kansas City. He could not understand how it was that Wilbur stayed at home for so long just reading and taking care of his ailing mother. Reuchlin wrote their sister Katharine: "What does Will do? . . . He ought to do something. Is he still cook and chambermaid?" This is not exactly the expected beginning for the senior member of the two originators of the dashing and daring field of early aviation.

Without formal training or special resources, the self-taught Wrights seemed to have all they needed. Each necessary task was dealt with in a straightforward way with the resources commonly available and readily at hand. When they needed a windy site for their earliest glider experiments, they simply wrote to the U.S. Weather Service. Their early experiments were essentially large-scale kite experiments, but these yielded important information on lift, more efficient wing shapes and novel devices for control in all three dimensions.

Unlike many other early experimenters in aircraft design, they knew that they needed to teach themselves how to control the aircraft first. Consequently, well before employing an engine, they took many short gliding practice flights down a large sand dune to develop their own reflexes and the essential skills -- to give themselves some time to learn by trial and error the "feel" of the machine and controls.

They found that the tables on wing shape and lift published by a German engineer and professor were wrong, so they devised their own, drawing partly on measurements from their own wind tunnel, itself made from simple parts to their own design. Indeed, in time, they came to distrust any data they had not tested themselves.

They could find no automobile company willing to make the light and powerful engine they required, so they made their own engine to their own design, largely in their own shop, with the help of the machinist who worked in their bicycle business. At the time, it was wrongly believed that air propellers worked like water propellers. Consequently, the brothers found that they had to devise their own theory as well as develop their own design for the propellers needed to drive their machine. And throughout their flying experiments, they took photographs of everything they did so they could assess their progress and make a permanent record of their achievements.

Those who are familiar with the way such a project would be staffed today -- with teams of highly paid experts in dozens of fields -- can easily see how rapidly budgets and development schedules for such an undertaking would expand.

Of course, a very great deal more is known now in many fields than was known then. But at the edge of the new, the situation (now as then) may not be as different as it might appear at first. Small, determined groups with fresh ideas (although less well educated and less experienced) may still move more rapidly and effectively than large ones. Today's commercial developers of powerful software for personal computers, for example, find that errors, communication problems and developmental delays increase dramatically if they employ more than a few programmers on a given project (or module of a larger project) -- a pattern suggested by the simple arithmetic sketched below.
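To make that arithmetic concrete, here is a minimal sketch (my own illustration, not anything from the chapter): the number of pairwise communication channels in a group grows quadratically with its size, an observation often associated with Fred Brooks's "The Mythical Man-Month."

```python
# A back-of-the-envelope illustration (hypothetical, for this post):
# in a group of n people, the number of distinct pairs who may need
# to coordinate with each other is n * (n - 1) / 2.

def communication_channels(team_size: int) -> int:
    """Distinct pairs of people who must keep each other informed."""
    return team_size * (team_size - 1) // 2

for n in (2, 3, 5, 10, 30):
    print(f"{n:2d} people -> {communication_channels(n):3d} channels")

# Prints: 2 -> 1, 3 -> 3, 5 -> 10, 10 -> 45, 30 -> 435.
```

Hands grow linearly while coordination channels grow quadratically -- which may be part of why two brothers in a bicycle shop could move faster than a fully staffed laboratory.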

In this connection, it is worth recalling that the first real personal computer was developed not by a large, well-established computer company, but by two counter-culture adolescents in their now-famous garage, largely using parts available to hobbyists. Sometimes it is good not to be an expert, not to know too much. Sometimes it is far more important to have a vision and to be willing to learn by taking risks and making mistakes that experts would not make. Sometimes it is good not to know beforehand all the reasons why it will never work and why it will never sell.

In the time of the Wrights, the head of the Smithsonian Institution, Samuel Langley, a respected engineer and scientist, obtained a sizable grant of $50,000 from the War Department to develop a heavier-than-air aircraft, but his machine sank in the Potomac, amid "howls of derision from all quarters." Wilbur and Orville financed their experiments entirely out of their own modest personal resources and at a small fraction of the cost of the fully funded professional effort.

But, we might ask, are not the efforts of the Wrights, more than a hundred years ago, really quite irrelevant to the realities of our times? In some ways yes; in some ways no. Sometimes the most effective barrier is, as usual, the simple belief that it cannot be done.

Perhaps we need now to consider whether we might be at the beginning of a period in some important ways comparable to the time of the Wrights. In their own time, the Wrights had in their hands all the mechanical tools and skills needed for their novel task. Today, as more and more powerful electronic tools and skills become available in the hands of ordinary people (hobbyists and hackers in their garages, basements and bedrooms), one wonders what determined individuals and small groups might accomplish where the great companies and laboratories have so far failed -- or what they might discover that would never have occurred to fully qualified workers in well-funded research laboratories (where there may be much greater pressure to take fewer risks and to fail less often).

In the time of the Wrights, sophisticated mechanical capabilities had become widely available at modest cost to comparatively ordinary people -- without special education or facilities. Today, highly sophisticated electronic capabilities are becoming widely available at modest cost to these same comparatively ordinary people.

On reflection, perhaps it is only a matter of time before we should expect modern-day electronic Wrights to devise some truly new things -- perhaps things as yet unimagined by the professionals, or in areas where highly trained teams from "big science" and "big technology" have repeatedly failed before.

They may not have the expensive equipment or the big budgets, but if they are really original, they may find a totally new way of doing the job -- and with far more modest resources. Sometimes, perhaps even now, what is needed is not so much big budgets as big visions and a willingness to fail.

Based on Chapter 8, In the Mind’s Eye.

Sunday, September 20, 2009

Recruit Autistics

Just received: the October issue of Wired magazine (pp. 98-99) briefly tells of a company in Denmark called Specialisterne, which hires out workers with autism and Asperger's syndrome to do detailed work in software debugging and the like. Similar companies have opened in several other European countries. The company founder, Thorkil Sonne, says: "This is not cheap labor, and it's not occupational therapy. . . . We simply do a better job." (Reported by Drake Bennett.) Using distinctive talents in this way is an idea that could apply in many areas. For example, bright dyslexics are often very good at visual thinking and creative problem solving. However, most professionals in the field never take seriously the special talents of dyslexics -- and continue to focus narrowly on fixing problems by teaching reading -- when they should be giving at least equal time and effort to the distinctive talents that are neither well studied nor well understood. They should take a lesson from Thorkil Sonne and his associates. It is about time.

Saturday, September 12, 2009

YouTube video clips on dyslexia

You are invited to have a look at the video clips put up in recent days on YouTube by Chris Smart, the maker of the UK video called "Dyslexia, An Unwrapped Gift." If you click on the YouTube address (below), you can select the first and second parts on the same page (about 9 and 10 minutes, respectively). Although I have a role in the film, I have no reservations about saying that I think this is one of the best documentaries ever made on dyslexia.

I am delighted that Chris Smart has now made parts of the film widely available on YouTube. It especially appeals to teens. Chris and his team selected a great setting for words (the "Chained Library") and images (the "Mappa Mundi") at Hereford Cathedral, and drew great stories from the young dyslexic teenagers. Bravo, Chris and the "Unwrapped Gift" team!

http://www.youtube.com/watch?v=ngl_II8TtGk

Following are parts of Chris Smart's commentary on his own film:

This was the first film ever made by Silva Productions, back in 1999, but it is still popular today.

The film asks the question: is dyslexia a disability or an ability? It goes on to highlight research suggesting that dyslexics will be the intellectual elite in the digital, visual, picture-packed world of tomorrow.

"An Unwrapped Gift" features Tom West, author of In the Minds Eye which examines the role of visual-spatial strengths in the lives of historical people who were dyslexic, including Albert Einstein, Winston Churchill and William Butler Yeats

The intimate and thought-provoking video diaries of young people in the film have proved to be a very positive factor for families with teenagers coming to terms with their own dyslexia. Shown extensively throughout the UK and America, the film's narrative explains how we can learn from the distinctive strengths of dyslexics, rather than just focusing on their weaknesses and failures.

Testimonials:
People with dyslexia are given a voice in "An Unwrapped Gift." It is not a video about how to treat dyslexia; it is a video celebrating the dyslexic difference.
Jo Todd, Key 4 Learning

"An Unwrapped Gift" is a high quality film that makes a positive statement about a group of people who have historically been put through the mill by virtue of misunderstanding.
Stephanie Zinser, Freelance Journalist (who writes regularly on health for the Daily Mail)

Sunday, August 30, 2009

Thinking Like a Child

It is often observed that one of the essential characteristics of creativity is the "childlike" view of the world, full of freshness and plasticity. As they grow older, most children gradually lose this view. Most children appear to shift their thinking to a more rigid left-hemisphere dominance at a certain age, as expected. But it seems that some children cannot shift to the usual one-sided dominance so readily; they are delayed in the maturing process; they grow up using both sides of their brain, or mature with a greater facility with the right hemisphere than is usual. This may lead to some degree of confusion, ambivalence, and awkwardness, but the intellectual resource may be profoundly richer thereby -- and that makes all the difference.

Maturity is a key concept here. Maturity suggests responsibility, conventional education, having children, understanding the adult world and finding a place in it -- making one's way or doing one's duty. A small child cares little for these things. He or she is too busy discovering the world, examining things closely, seeing how they behave, trying to figure out how things work, how people respond when you do different things -- touch, sounds, smells, tastes, images -- and all of this starts well before words or numbers. All of this is play -- learning and discovering. While maturity is, of course, necessary to make one's way in the adult world, we are aware that it is good to preserve something of the child, especially if we desire the freshness of view that seems to promote real creativity. This is generally known and understood. However, what is not generally known is that it may be a good thing when the maturing process takes a little longer than usual.

Parents are usually pleased when their children mature quickly, becoming more independent, more organized and more self-directed in advance of their peers. What is not generally known is that late maturation can serve a useful function, although it seems to contradict conventional belief. The neurological evidence indicates that the onset of puberty stops further neurological development. That is, neurological development is not speeded up by early puberty. Rather, early puberty appears to arrest neurological development at an earlier and less fully developed stage. One neurologist notes: "The studies show that on the average . . . quick development means you sort of 'gel' earlier and you don't develop as fully. It is not just true for brain development; it is true for growth also. People who grow slowly tend to grow taller."

Accordingly, the early developer may be good at what he or she can do, but may be able to do less than the child or adult who has developed over a longer period. Thus, later maturity may be seen as desirable in at least three ways. First, the plastic, absorbent world of the child may be experienced longer, giving the adolescent and adult a deeper store of real seeing and feeling experience of the world to draw on -- and build intuition on -- before the adult world of fixed, literate, learned knowledge takes over. Second, there is a real possibility of significantly increased neurological capacity, at least in some cases and in certain areas, which may more than compensate for earlier awkwardness and some lingering areas of relative disability. And third, the later developer may be able to retain some aspects of the child's view throughout life -- such as a sense of wonder, or a comparative freshness and lack of preconception -- making the expression of real creativity much more probable. Although the clock of maturation follows its own beat, it is good to know that a slower pace may have, under the right circumstances, notably positive consequences.

With respect to creativity, the freshness of the child's view is not to be underestimated. When the world of the small child is properly understood, then perhaps it is no surprise that Einstein said he was led to his discoveries by asking questions that "only children ask." This view of himself is clearly evident in the following curious passage: "I sometimes ask myself . . . how did it come that I was the one to develop the theory of relativity. The reason, I think, is that a normal adult never stops to think about problems of space and time. These are things he has thought of as a child. But my intellectual development was retarded, as a result of which I began to wonder about space and time only when I had already grown up. Naturally, I could go deeper into the problem than a child with normal abilities." Is it possible for us to think of Einstein as "retarded"? But, of course, this paradox helps us to gain deeper understanding, we are told. Indeed, relative to other mammals, all human beings are "retarded" -- more helpless longer, safe inside social structures, allowed to build greater brain capability with a broader knowledge base.

Also, Einstein was willing to continue to play. If delayed development is acknowledged as one major factor, then the child-like playfulness of this strong visual thinker may have been another. Einstein referred to the source of his ideas as "playing" with "images." When he describes the process in his own words, the fresh, childlike plasticity of the ideas and the interplay of the two hemispheres and two modes of thought seems clearly evident: "The words or the language, as they are written or spoken, do not seem to play any role in my mechanism of thought. The psychical entities which seem to serve as elements in thought are certain signs and more or less clear images which can be 'voluntarily' reproduced and combined. There is, of course, a certain connection between those elements and relevant logical concepts. It is also clear that the desire to arrive finally at logically connected concepts is the emotional basis of this rather vague play with the above mentioned elements. But taken from a psychological viewpoint, this combinatory play seems to be the essential feature in productive thought -- before there is any connection with logical construction in words or other kinds of signs which can be communicated to others. The above mentioned elements are, in my case, of visual and some of muscular type. Conventional words or other signs have to be sought for laboriously only in a secondary stage, when the mentioned associative play is sufficiently established and can be reproduced at will."

It is of no small significance that Einstein's words so clearly describe a two-mode process that corresponds so closely with the findings of those who have been investigating the roles of the two hemispheres. He first "plays" with "images" in the visual right hemisphere mode, the apparent source of new ideas or perceptions of order, possibly relatively independent of conventional thought, current scientific understanding and education. He plays until he arrives at the desired result. And then, "only in a secondary stage" does he have to seek "laboriously" for the right words and mathematical symbols to express the ideas in terms of the verbal left hemisphere mode, in terms of the world, in terms that fit within the structure of scientific thinking, in terms that "can be communicated to others."

It should be pointed out that these observations are not entirely unusual, nor should they be expected to be. Such observations as Einstein's occur frequently in the literature of creativity. Also, the concept of two modes of thinking has been cropping up in the medical literature, in one way or another, for a century or two, particularly with reference to artists, musicians and composers. What was new in the 1960s, 1970s and 1980s was that research on the two hemispheres of the brain yielded such substantial evidence that serious investigators were forced to reverse major trends of the time: not only to recognize, once again, the concept of consciousness, but also to entertain the idea that there are not one but two major modes of consciousness, each fundamentally different from the other -- one that we knew a little about, the other that we knew almost nothing about.

Based on excerpt from chapter one, In the Mind's Eye; second edition released September 4, 2009.

Friday, August 28, 2009

News on Second Edition

Today, I received a message from my publisher, Prometheus Books, saying that they had received, this morning, the printer’s shipment of the second edition of In the Mind’s Eye.

They are shipping out now (Friday afternoon), and the books should be available at bookstores and online by the end of next week, Friday, September 4 -- this includes, according to their previously noted list, Amazon, Barnes & Noble, Borders, Books-A-Million and others. Some booksellers are still taking advance orders at significant discounts.

I was also delighted to learn that seven publishers or agents from overseas had expressed interest in obtaining rights for translations of the revised book -- three from Europe, one from the Middle East and three from Asia, so far. At this point, one cannot be sure what will happen, but I am really pleased with this kind of interest in the second edition of In the Mind's Eye -- and I hope that the ideas set forth in the book may yet have an impact in many additional languages and cultures.