Creativity requires letting go

“To be fully free to create, we must first find the courage and willingness to let go:

  • Let go of the strategies that have worked for us in the past…
  • Let go of our biases, the foundation of our illusions…
  • Let go of our grievances, the root source of our victimhood…
  • Let go of our so-often-denied fear of being found unlovable.

You will find that it is not a one-shot deal, this letting go. You must do it again and again and again. It’s kind of like breathing. You can’t breathe just once. Try it: Breathe just once. You’ll pass out.

If you stop letting go, your creative spirit will pass out.

Now when I say let go, I do not mean reject. Because when you let go of something, it will still be there for you when you need it. But because you have stopped clinging, you will have freed yourself up to tap into other possibilities — possibilities that can help you deal with this world of accelerating change.”

-Gordon MacKenzie, Orbiting the Giant Hairball

Slivers of reality

“Being infinite, the whole of reality is too much for the conscious human mind to grasp. The best any one of us can do is to take the biggest slice of Infinite Reality that we can hold — intellectually, spiritually and emotionally — and make that slice our personal sense of what is real. But no matter how broad it is, any human perception of reality can be no more than a tiny sliver of Infinite Reality.”

-Gordon MacKenzie, Orbiting the Giant Hairball

Wrist communicator

John Gruber, “Apple Watch: Initial Thoughts and Observations”:

The most intriguing and notable thing about Apple Watch’s design, to me, is the dedicated communication button below the digital crown. […] Apple is notorious for minimizing the number of hardware buttons on its devices… The only explanation is that Apple believes that the communication features triggered by that button are vitally important to how we’ll use the device.

I had that same thought while watching the Apple Watch unveiling and noticing the unusual dedicated button: Apple must consider those communication features vitally important.

It took me a little while to get used to the idea, but it now seems quite natural to virtually tap loved ones on the wrist and send them little drawings and heartbeats. Perhaps in five years we’ll be wondering how we ever got by without that capability.

Information Architecture

I use the term information architecture a lot but have found that its meaning is often unclear. The reason I chose that term is simply that I haven’t found or coined a better one yet. To try to describe what I want it to mean, I started by listing close synonyms and related words:

  • Conceptual Structure
  • Taxonomy
  • Object Model
  • Model
  • Framework
  • Classification
  • Categorization
  • Hierarchy
  • Typology
  • Decomposition
  • Characterization
  • Understanding

In the physical world, most objects are distinguished by physical independence. A pencil, a desk, and a chair are self-contained objects — we can use them and talk about them independently from each other. They also relate to each other — a desk is a helpful aid for using a pencil. But we are not confused about the identity of the pieces (pencil and desk). In other words, the “object model” is concretely in front of us.

We create an analogous world in our minds that consists of concepts and theories. Here, our determination of independent pieces is far more subjective. For example, writing and drawing are usually considered separate concepts, but is calligraphy a form of writing, or is it drawing? Or both? Or is it something else entirely? We are free to create whatever concepts are useful to us as we interact with the world and think about what is happening.

When I say information architecture, I’m referring to an instance of this conceptual world. What are the definitions of each component part, and how do they fit together?

In software design, the component parts are things we call “features”, “pages”, “commands”, etc. The designer’s choices about the definitions of these components are as subjective as the concepts in our minds — but they must be understandable by millions of people who use the software! Though most people rarely talk about it or think about it, the information architecture influences everything about the way we learn and interact with a tool.
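
To make that concrete, here is a minimal sketch in Swift of how the component parts of a hypothetical note-taking app might be defined (the type names are mine, invented for illustration). Every definition below is a subjective design choice; a different designer could carve up the same territory differently.

    // Hypothetical component parts of a note-taking app's information architecture.
    // Each type is one "concept"; its definition and relationships are design choices.
    struct Tag {
        let name: String
    }

    struct Note {
        var title: String
        var body: String
        var tags: [Tag]      // is a tag its own concept, or just a property of a note?
    }

    enum Command {           // are these "commands", "actions", or "menu items"?
        case createNote
        case deleteNote(Note)
        case search(query: String)
    }

Whether search is a command, a page of its own, or a property of a note is exactly the kind of choice that shapes how people learn and think about the tool.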

And just as we can create new concepts like “democracy” with far-reaching effects, we can create new software concepts like “windows”, “hyperlinks”, and “text messages” which transform the way technologies are used.

Data is only available about the past

“Data is only available about the past.”

-Clayton Christensen

This is an obvious but fundamental limitation that we should not lose sight of! Despite the fact that we typically want to predict the future, the hard data all comes from the past.

One way to deal with this is to assume that what was true in the past will still be true in the future. For example: “If the weather is hot today, it will likely be hot tomorrow.” Of course, the farther into the future we go, the more likely it is that something will change. But the continuity assumption allows us to pretend that the past data also reflects the future. And in many cases, this turns out to be accurate.

The other way to deal with this fundamental limitation is to use the data to form theories of correlation and causality. This is what the scientific method is all about. It allows us to generalize from the specific data and say “any time this happens, this other thing will happen.” For example: “this configuration of high pressure systems will cause the temperature tomorrow to fall.”

In the former approach, the data analyst is very interested in specifics, such as outliers and numeric values.

  • “What is the temperature?”
  • “Are there any problems we need to fix?”
  • “Where are our best successes?”

In the latter approach, the analyst is more interested in correlations, patterns, differences and trends.

  • “What time of year is it usually hot?”
  • “What causes our successes?”
  • “What leads to problems?”

It seems likely that different data analysis tools are optimal for these different types of questions.
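
As a rough sketch of the difference, here are both kinds of questions asked of a tiny, made-up set of readings in Swift (the data and field names are invented for illustration):

    struct Reading {
        let month: String
        let day: Int
        let temperature: Double
    }

    // Invented sample data.
    let readings = [
        Reading(month: "June", day: 18, temperature: 92),
        Reading(month: "June", day: 19, temperature: 95),
        Reading(month: "July", day: 2,  temperature: 99),
    ]

    // Former approach: a specific fact from the past.
    // "What was the temperature on June 19?"
    let june19 = readings.first { $0.month == "June" && $0.day == 19 }?.temperature

    // Latter approach: a generalization we hope will also hold in the future.
    // "What time of year is it usually hot?" (average temperature per month)
    let averageByMonth = Dictionary(grouping: readings, by: { $0.month })
        .mapValues { group in
            group.map { $0.temperature }.reduce(0, +) / Double(group.count)
        }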

Racism is just a theory

Why is it so easy to jump to the conclusion that racists are bad people?

Isn’t that conclusion almost as narrow-sighted as racism itself?

I just listened to an episode of This American Life about a white supremacist speech writer who later changed his whole persona and published a best-selling book about tolerance for others and respect for nature. The radio show asserts that this writer “pulled a 180” – completely changed course.

I don’t see it that way at all. As truly terrible as racism is, the evidence is that racists tend to have intentions just as good as everyone else’s. The white supremacist speech writer believed that blacks and Jews were the cause of many social and political problems. It followed that the way to improve society and work for good was to promote segregation and white supremacy. The theory turned out to be wrong, but the underlying goal was simply to fix society’s problems, not to cause harm.

An old friend of this writer, who is also a white supremacist and southern conservative, was quoted on the show saying the book was not a change of course at all. To him, the book is all about the problems with big government and the importance of honoring the natural order of things.

The way I see it, the speech writer did eventually realize that the white supremacy theory was wrong. But this wasn’t a change to his underlying values. It was merely a change to one of the multitude of theories he held – such as “things fall when you drop them” and “people enjoy receiving gifts”. However, there was so much tied up in this supremacy theory, socially and politically, that he felt the need to pretend to be a new person entirely.

Why is it so hard to believe that people can update their theories? If you have any doubt, just listen to the Silver Dollar episode of Love+Radio, where a black man befriends dozens of Ku Klux Klan members and gently, lovingly disproves their theory that black people are the problem. Through this process, many Klan leaders updated their theories, and as a result, entire branches of the Klan were quietly dismantled.

No one wants to be wrong! And very few want to be a bad person. If you treat people with the assumption that they are good, they will tend to prove you right. You just need to provide a graceful way to be wrong, so that everyone has the chance to reconsider and update their theories.

Metadata Visualization

There are at least two ways of interpreting a table of data.

Date       Temperature   Humidity (%)
June 18    92            57
June 19    95            NULL
June 20    84            51

The first interpretation treats the table as a collection of facts about the world. For example, on June 18 the temperature was 92 degrees and the humidity was 57%. On June 19, the temperature was 95 degrees and humidity was unknown.

The second interpretation treats the table as a literal list of data points. For example, on June 18, someone recorded the temperature at 92 degrees and the humidity at 57%. On June 19, the humidity sensor was broken. The data is stored in a table with three columns. Before June 18, the data was being recorded in a different table.

In other words, we can focus on what the data says about the world, or we can focus on the data itself.

We can think of the data ephemerally as information, or we can think of it as a physical thing that exists in and of itself.

This is analogous to written language: a sentence or paragraph generally means something, but it also exists as physical letters and punctuation on the page.

The second interpretation deals with what is often called metadata: data about the data. How was it collected, by whom, for what purpose, and where and how is it stored? How accurate is it likely to be?
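
One way to keep both interpretations in view is to carry the metadata alongside each value. A minimal Swift sketch, with invented field names:

    import Foundation

    // A single cell from the table, plus data about that data.
    struct Measurement {
        let value: Double?       // nil plays the role of NULL (the June 19 humidity)
        let recordedBy: String   // who or what produced it, e.g. a particular sensor
        let recordedAt: Date     // when it was actually written down
        let sourceTable: String  // where it physically lives
    }

    // First interpretation: value is the humidity on June 19.
    // Second interpretation: recordedBy, recordedAt, and sourceTable describe the
    // data itself, and a nil value tells us the sensor produced nothing that day.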

If we are very confident about the accuracy and relevance of the data, we can summarize and visualize it cleanly. We could show a line chart of temperature over time and start to draw conclusions about what the temperature trend means.

But if the accuracy and relevance are unknown, we need to take steps to better understand the metadata. How much data is there? Which parts are missing, or appear to be duplicated? Where did it come from? What metrics are most relevant?

Suppose the default behavior of a data analysis tool is to ingest your data and take you directly to a clean line chart. Is that convenient or misleading? Does that clean line chart imply that you are looking at truth, when in fact you may just be looking at data?

Can we assume that the line chart is about temperature, or should we emphasize that it shows data about temperature? What is the best way to communicate that distinction?

Swift

Apple announced a new programming language called Swift earlier this week at WWDC 2014. The focus during the keynote was ease of use, and indeed the language is incredibly exciting as a learning tool. But this is not a simplistic language. It is extremely powerful, extremely well crafted, and designed to replace Objective-C for professional software development. In many ways it feels like the next evolution in the line of C, C++, Obj-C, C#… but they ran out of “C” modifiers and instead called it Swift.

Swift can be easily adopted by software companies because it is interoperable with most existing code written in C, C++, and Objective-C. You don’t have to rewrite your app from scratch just to get started.

The developer tools team is also shipping a live coding environment inspired by Bret Victor. This is truly exciting to see, and I suspect they are only just getting started. This environment is not only useful for beginners; it will also change the way professional programming is done: instead of building and debugging entire apps, developers can prototype, explore, and debug individual modules interactively in the “playground”. The documentation also lives in this environment, so you can play with example code and see the results in real time.
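
For example, here is the sort of trivial snippet you might type into a playground and watch evaluate as you edit (my own sketch, not from Apple’s documentation):

    // In a playground, each line's result appears immediately in the sidebar,
    // so you can explore a small piece of logic without building a whole app.
    let temperatures = [92, 95, 84]
    let average = temperatures.reduce(0, +) / temperatures.count    // 90
    let labels = temperatures.map { $0 > 90 ? "hot" : "warm" }      // ["hot", "hot", "warm"]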

I have a lot more to learn about Swift, but my initial impressions are that it has achieved the high praise of “obvious only in retrospect.” I suspect it will significantly influence the software community.

The World is Continuous, but the Mind is Categorical

I’m going to contend that the physical world, at least at human scale, is continuous. For example, when objects move through space, they visit every perceivable intermediate position along the way. When you heat a room, the temperature passes through all intermediate temperatures. Colors, sounds, materials, emotions… even objects that we see as discrete entities, such as dogs and cats, have all sorts of continuous dimensions like size and weight, ear length and paw length, hairs per square centimeter, etc.

Yet we humans are constantly categorizing everything into discrete buckets. Hot, cold, warm, lukewarm… dog, cat, fish, zebra… Democrat, Republican, Whig, Tory… introvert, extrovert, intuitive, logical… Using labels, names, groupings, and sub-groupings, essentially all of human language is an exercise in chopping the world up into manageable chunks.

This categorization allows us to communicate with each other, remember things, and reason logically. In fact, there is evidence that the part of the brain that most distinguishes humans from other species (our large neocortex) is structured specifically to support the storage of hierarchical, discrete concepts.

Cognitive psychology has shown that people typically define categories by using one or more canonical examples of each. For example, to determine whether a red-orange swatch is red or orange, we mentally compare it to our memories of canonical red and canonical orange and decide which is closer. Similarly, to decide whether something is a cat or a dog, we compare the specimen to our mental representation of a canonical cat and a canonical dog.
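
A toy sketch of that prototype-comparison idea in Swift, treating color as a single hue number (the canonical values here are made up):

    // Classify a hue (in degrees) by its distance to made-up canonical examples.
    let canonicalHues = ["red": 0.0, "orange": 30.0, "yellow": 60.0]

    func nearestCategory(forHue hue: Double) -> String {
        // Pick whichever canonical example is closest to the observed value.
        return canonicalHues.min { abs($0.value - hue) < abs($1.value - hue) }!.key
    }

    nearestCategory(forHue: 18)   // "orange": 18 is closer to 30 than to 0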

The less common, alternative method of distinguishing between categories is to define their boundaries (instead of their centroids). For example, some jurisdictions define a blood alcohol level of .08 as the boundary above which you are not allowed to drive. Notice that the precision is arbitrary: the cutoff could just as well have been .085 or .08222 repeating. The precision is there to make legal decisions easier. But in the real world, the boundary between “safe driver” and “unsafe driver” is fuzzy.
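
The boundary approach is even simpler to express; a single arbitrary cutoff does all the work (again just a sketch, using the .08 figure from the example above):

    // Boundary-based categorization: the category is defined by its edge, not its center.
    func mayLegallyDrive(bloodAlcohol: Double) -> Bool {
        return bloodAlcohol < 0.08
    }

    mayLegallyDrive(bloodAlcohol: 0.079)   // true
    mayLegallyDrive(bloodAlcohol: 0.081)   // false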

And indeed, the pesky, continuous real world interferes with even the most sophisticated attempts to categorize it. For example, biologists might agree that the way to distinguish between two species of fish is that one has a dorsal fin and the other doesn’t. It sounds black and white. But then you find a specimen that has sort of a partially formed dorsal-fin-like appendage that might just be a bump. The biologists get squirmy, and end up categorizing the specimen the intuitive way, based on similarity to their mental canonical examples of each species.

Even the distinction between “continuous” and “discrete” is itself fuzzy! Consider that human perception is inherently discrete because it operates using individual nerve cells; similarly, computers deal only with ones and zeroes, so they are discrete by definition. Yet the high resolution of both perception and computer displays gives the illusion of a continuous process, and indeed it is often most useful to think of these systems as continuous.

So the great advantage of using categories is that they allow us to convert the infinitely complex world into finite pieces that we can gain familiarity with and reason about. The great disadvantage is that categories are fuzzy, subjective. They form a simplified model of reality that is subject to interpretation, especially around the edges.

When you think about it, it’s astonishing how smoothly humans can navigate this very rough interface between models and reality. Every time we do almost anything, we have to first perceive the continuous world, translate it into categorical thinking, make a discrete decision, and then translate that back into a continuous motor action. All of this happens innately, below the threshold of consciousness.

I started pursuing this Interesting Thought because data analysis systems distinguish between continuous (numerical) fields that can be summed and averaged, and categorical (string) fields that can only be compared or filtered. But the deeper I got, the more it started to feel like a fundamental underpinning of Life, The Universe, and Everything.

For example, cultures get dragged down by divisive categorizations that form stereotypes. Religions suffer from rigid definitions of good and bad. Lawyers make their case by arguing that their client’s actions are best seen as an example of some discrete law or case history. And scientists and other professionals are often limited — or inspired — by arbitrary boundaries between companies, departments, and fields of study.

At its core, the search for a unified theory of physics can be seen as an effort to eliminate all distinctions when describing the physical world. Now I wonder: would we even be able to comprehend such a theory, given that our minds fundamentally think in categories?

Are there ways to get around this limitation? Zen? Math? Computation? How have people coped with this through history? Is this the next step of evolution? Or just a philosophical insight? It’s probably somewhere in between.