When I first noticed, almost seven years ago, that there was no good software for sketching Economics graphs, I never imagined that this small observation would grow into such a major project. After several independent studies, conference papers, a master’s thesis, and an acquisition, today my little project has reached a new milestone: the official release of OmniGraphSketcher 1.0.
This software application has always proven somewhat tricky to describe. I think this is partly because it is the first product of its kind, and partly because the word “graph” has such a wide range of meanings. I completely reframed the description at least five or ten times while writing my master’s thesis. But my new favorite summary is by Omni’s documentation and UI master, Bill:
OmniGraphSketcher is a quick, straightforward tool for creating graphs, whether you have specific data to visualize or you just have a concept to explicate. And it doesn’t assume that you have gobs of free time and attention to learn how to use it.
You can find out more about OmniGraphSketcher in Linda’s entertaining Omni blog post and on the official product page.
As for me, I want to acknowledge again all the friends, mentors, usability testers, beta testers, and users who have continued to give feedback, find bugs, and suggest further improvements. The software would never have come this far if they had not repeatedly convinced me that it was actually useful.
So thank you, and enjoy OmniGraphSketcher! It’s a different kind of graphing program. I know there are many possibilities for expanding its functionality, and I look forward to exploring them. Even after seven years, this is version 1.0 – just the beginning.
If you have any doubt that electric cars are the future of transportation, watch or listen to Shai Agassi’s TED talk.
Among his interesting points:
“If we do not make this modern transition away from fossil fuels quickly and decisively, we will lose our economy right after we lose our morality.”
This article caught my eye because it was in the MIT alumni publication yet was written by a Williams College professor — a rare collision of my two alma maters.
That was my excuse… but then it turned out to also be a really interesting article. Author Morgan McGuire writes, “modern video games… are arguably the most complex systems engineered in any discipline, ever.” That had never occurred to me before. As one example, he points out that the US federal tax code is about a third the length of a standard video game’s source code (not to mention the graphics, textures, maps, etc. that accompany it).
Unlike most engineering disciplines (including software engineering) where the goal is to make the solution as simple as possible, in game design complexity is often desirable because it makes the game more interesting to play. Often, amazing complexity can be achieved with just a few interacting rules. I remember reading that the game designer behind the Sim series (SimCity, The Sims, etc.) was always looking for simple yet powerful sets of rules that put the user in control of an essentially infinite number of options — consider the limitless number of possible cities that can be built in SimCity.
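That idea is easy to demonstrate. Here’s a minimal sketch (my own illustration, not anything from McGuire’s article or the Sim games) of Wolfram’s Rule 110, a cellular automaton whose entire rulebook is eight cases, yet whose output keeps generating intricate, non-repeating structure:

```python
# A few interacting rules producing complex behavior: elementary
# cellular automaton Rule 110. Each cell's next state depends only
# on itself and its two neighbors, so the whole "game" is 8 cases.

RULE = 110  # the 8-bit rule number encodes all eight cases

def step(cells):
    """Apply the rule to every cell (edges wrap around)."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and watch structure emerge.
cells = [0] * 63 + [1]
for _ in range(30):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Run it and a cascade of interlocking triangles scrolls by, none of which is spelled out anywhere in those eight cases.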
McGuire’s thesis is that game design strategies should be better formalized so that they can be applied to designing or improving complex systems in the real world such as government policy, economic regulation, social and technical networks, etc. We need to be careful with this analogy, though, because the goal in most of these disciplines is still to simplify if possible. So the hope is that by analyzing complex games, we’ll be better able to understand the complexities that inevitably arise despite our best efforts in real-world systems.
I recently read an article in Communications of the ACM about making computer science in the classroom more socially relevant. The author, Michael Buckley of U. Buffalo, points out that “there isn’t a textbook out of the 60 I have on my shelf that makes me see computing as socially relevant…. If I was a student, beginning these important four years, and I was taught programming via doughnut machines [, pet stores, and games — the types of examples he finds in the textbooks], I would quit and go do something important. Major in some field that had an impact.”
The author argues that when intro computer science is taught using silly, simple examples, students just don’t see the potential power and relevance of the tool. “Even… pure mathematics has me counting and measuring planets and populations.” He’s created an alternative set of examples that are much more socially relevant, involving, for example, voting systems (counting), pollution in the Great Lakes (2-d arrays), disaster evacuation (optimal paths), and drug interactions (databases).
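To give a flavor of the difference, here is a hypothetical intro-level exercise in the spirit of his voting-systems example (my own sketch, not one of Buckley’s actual assignments):

```python
# A socially relevant twist on the usual count-the-widgets loop:
# tally ballots in a plurality election.

from collections import Counter

def tally(ballots):
    """Return the winner and the vote counts for a list of ballots."""
    counts = Counter(ballots)
    winner, _ = counts.most_common(1)[0]
    return winner, dict(counts)

ballots = ["Alice", "Bob", "Alice", "Carol", "Alice", "Bob"]
winner, counts = tally(ballots)
print(winner, counts)  # Alice {'Alice': 3, 'Bob': 2, 'Carol': 1}
```

The loop is no harder than counting doughnuts, but the question it answers actually matters.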
These observations struck me as stunningly accurate. I think a big part of why I was drawn to computer science was that I had a strong sense of the power of programming long before I took an actual CS class. I think I saw very early on that you could learn how to do math, and then you could program the computer to do that math a billion times in a second. It could do all these things for you. It just seemed like the ultimate tool.
Also, I remember thinking that the coolest part about taking Statistics 201 in college was getting to use all sorts of real-world data sets. Historical SAT scores come to mind. We ended the course by doing a project where we had to gather some data in the real world and analyze it with statistics. My team looked at whether people’s close friends have the same sibling status (i.e. only child, older sibling, younger sibling). Not rocket science, but certainly socially relevant!
Conversely, I remember spotting contrived examples from miles away. In algebra and pre-calculus, you spend a fair amount of time learning how to do vector math, and most of the examples involve things like canoes in a fast-flowing stream. It always seemed bizarre to me that we were spending so much time on something with so little real-world applicability. Finally, I got to calculus and realized that the real reason we had to learn all that vector math was that it was vital for calculus, which allowed us to model physics, economics, biology, on and on. My thought was “why didn’t you just tell us about calculus, instead of boring us with canoe examples?”
Mr. Buckley has gone even further by setting up a lab where more advanced students work on “socially relevant” problems, including educational tools and devices for the disabled. That’s fine, but it seems to me that the critical insight here is about how students are introduced to the field. By the time you’re advanced enough to work on real problems, you have hopefully long since figured out why computer science is interesting and relevant.
I share his outrage that none of the textbooks are up to the task. Let’s get moving.
It’s not what the software does. It’s what the user does.
I just read an Interactions magazine article that was a bit muddled but pointed out:
The Industrial Revolution did not occur when we built steam engines, it occurred when we used steam engines to build steam engines. The true information and computing revolution will not occur until we use software to build software; until we really use executable knowledge to create executable knowledge.
It’s hard to know specifically what this will look like. Will it be more machine-learning based, probabilistic like the human brain? Analogy-centric? Evolutionary? Based on internet-scale knowledge? In any case, I don’t doubt that the essential “meta” argument is true. Without software that can build software, computers will remain pretty dumb.
Here’s what I’m thinking.
Science is all about gradually figuring things out by trial and error — making predictions and seeing whether they hold up. You hope to be surprised. Engineering is all about clever hacks — breaking the rules while following the real rules. Design is about constraints — tradeoffs, compromises, and innovative solutions that manage to support all of the constraints. Art is about free association — thinking outside the box at all costs and doing something simply because it is new.
What is liberal arts? I think liberal arts is all about critical thinking — pushing an argument to its full logical conclusion. This is important in all of the above areas. For example: a scientist pushes a prediction to its testable conclusions; an engineer follows an assumption through the whole system to see where the real rules bend; a designer traces each constraint to its consequences before committing to a tradeoff; an artist carries a new idea to its extreme just to see where it leads.
Do you see what I mean? It’s just another vindication of the liberal arts. I kind of got sidetracked from my original point, though, which was just new thinking about the essence of science and engineering.
I went to a talk given by Ray Kurzweil today. This is a man who helped shape the way I think, because I read one of his books at age 15 or so, and it was startling. As soon as he started talking today I knew I had seen him before… it must have been a similar talk at MIT last year, or maybe a recording I watched online. The funny thing about him is that he talks about these crazy things that will happen in the future, in a totally droning voice like he’s so bored with all these obvious predictions. Also, you can’t argue with the guy. His numerical evidence is just way too strong. You have no grounds whatsoever to disagree. The only way you can beg to differ is by going outside the game — finding what he’s not talking about.
There are a couple of his points that I wanted to touch on here. An audience member said that a century ago people predicted that they would have more leisure time in the future; why isn’t that the case? And Ray basically pointed out that it is the case — most people work a lot because they want to, not because they have to in order to survive. Their jobs are a big part of who they are, what gives them gratification. So, in many senses, that’s leisure. It struck me as slightly profound. Not working is boring. And if no one depends on you, what is the meaning of your life?
Another interesting aspect of Kurzweil is that he talks about all these exponential trends as if they are completely inevitable. Computer power doubling every year, gene sequencing doubling every year, brain imaging resolution doubling every year. And I agree with him 100% — when you look at those numbers, it does seem inevitable. But you also can’t forget that it only happens because real people do it!
He talked a fair bit about renewable energy. Apparently the amount of electricity in the world coming from solar power has been doubling every 2-3 years. Right now it accounts for around 1% of our electricity. If you follow the exponential trend, this means that in 15-20 years, just about all our energy will come from solar. I think this prediction has an excellent chance of coming true. But it’s interesting to compare Kurzweil with Ted Nordhaus and Michael Shellenberger, who argue a very similar thing in their recent book: climate change will be solved by massive investment in technology. The difference is that N&S are closer to the ground, advocating more research funds towards renewable energy technology. To Kurzweil, it will just happen because people like N&S and all the scientists will do their thing and make it so. It’s kind of amazing how he’s lifted himself out of the big complex mess of actually doing it. But if everything is preordained, what’s the meaning of life?
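As an aside, the arithmetic behind that solar projection does check out: going from 1% to 100% takes about seven doublings, and at 2 to 3 years per doubling that is roughly 14 to 21 years. A quick back-of-the-envelope sketch, using the figures from the talk:

```python
# Kurzweil's solar projection, checked: starting at ~1% of electricity
# and doubling every 2-3 years, how long until solar's share passes 100%?

share = 0.01   # solar's current share of electricity (figure from the talk)
doublings = 0
while share < 1.0:
    share *= 2
    doublings += 1

print(doublings)                                     # 7 doublings
print(2 * doublings, "to", 3 * doublings, "years")   # 14 to 21 years
```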
If I’m coming off as giving Kurzweil a hard time, I don’t mean to. Not only is he brilliant, he is in fact working hard with many companies on new research and advocating for research funding from his positions on government panels. I should mention that he became famous partly because he invented the first realistic music synthesizer, the first flatbed scanner, and the first robust optical character recognition system… among others. (And he wrote some damn good books — because, as he says, he can’t work on systems that are 15 years away, so he can only write about them.)
Based on current exponential-growth information technology trends and the estimated computational capacity of the human brain, Ray estimates that a computer will pass the Turing test (be able to simulate a human) around the year 2029. Many people argue that such a thing could never happen, but I see no good reason why not. I will be 45 in 2029. Who knows what the world will be like when extra human intelligence is cheap. Ray was quick to point out that we will use these technologies to extend ourselves — as we have with all past technologies — not to build artificial intelligence robots that take over the world — as is popular in sci-fi.
One last thing. It occurred to me that regardless of all the technological progress that has taken place, all the “social networking” that goes on online, I still just really want to cuddle with real humans. As the world becomes more connected and more overwhelming, we need to figure out how to make sure people feel loved and involved in their real world communities, and interact with real physical people. Yes, maybe in 50 years we will have realistic physical virtual reality, but that is too far away to worry about (I will be 74). I wonder what I could do in the meantime to help create more loving, physical, local communities.
There was a couple sitting next to me, about my age, and they were bored and “passing notes” by typing them on a cell phone and passing it back and forth. It seemed vaguely ironic… here was Ray Kurzweil, telling us that 10 years ago only a few people even owned cell phones. It had never even occurred to me to pass notes on a cell phone. It makes me wonder if I will read this 10 years from now and think, typing on cell phones, how old-fashioned…
I attended a talk some weeks ago that I’ve been meaning to write about. It was given by one of the chief something-or-others of Continuum, a design firm in the Boston area. He used the analogy of the light spectrum to describe a spectrum of design — “ultraviolet” at one end, referring to impractical but beautiful artsy design, and “infrared” at the other, referring to “design that’s almost invisible, but it makes you feel warm, so you go back and buy more.” This “infrared” end is what Continuum is all about.
For me, the most interesting part of his talk was his point that his favorite design problems are very constrained, with very little wiggle room to change anything. He pointed out that when this is the case, the little details can make all the difference. For example, his team worked on disposable diapers (for Pampers), and had a huge impact on their market share by carefully adjusting the smell of the diapers and by introducing slightly different diaper shapes for different developmental stages of the baby (because parents love to talk about the development of their baby).
The reason this is interesting is that as a product becomes commoditized (a process only intensified by globalization), the design settles, the room for change shrinks, and the details increasingly make the difference. This means that software interfaces will improve as applications like word processing become commodities, because competitive advantage will be won on “little things” like the details of the user experience — beyond the feature set to how good users feel when they use the product. As the Continuum guy put it: as technological differentiation between products decreases, the value of customer experience differentiation increases.
Apple has always been very good at the user experience aspects of product design, so I think they will do well in the future. Indeed, Apple tends to develop not new paradigms per se but crucial tweaks that enhance the usability, the feel, and the efficiency of a product. They do well in consumer markets (as opposed to corporate) because that is where products are sold on the whole experience rather than on some price/feature tradeoff decision made by managers.