Zoom, zoom, zoom!
Interesting article in yesterday’s Washington Post about Ben Shneiderman, the man who literally wrote the book about user interfaces. I saw him speak at CHI a few years ago. He talks in the article about how he believes (based on research, of course) that voice will never be the dominant method by which users interact with computers. He thinks visual methods are more suited to the way people’s brains work. I’ve used voice to control my computer going back to the days when an SE/30 was a hot new Macintosh. It was interesting, but I have to agree that it’s a niche thing at best. I think an office full of people talking to their computers would be one of the outer rings of hell, for example.
It looks like a lot of his work these days is on visualization of large data sets. Interesting stuff. I was going to say that I wasn’t sure most people need to visualize large data sets, but then I thought about what you do when you go to Amazon or another such web site. I know I would like to have some truly visionary tools to look at server logs visually. I talked to a few people at Bell Labs about this when I worked there, and at one of the spin-off ventures, Visual Insights, but there was never a really useful tool that came out of it.
The article in the Post talks about some of the work the U. Md. HCI lab is doing on visualization, using PhotoMesa, a photo browser that Ben Bederson wrote, as an example. It sounds just like what iPhoto does on Mac OS X. PhotoMesa works on Windows, UNIX, and OS X, and anything else that includes Java 2 version 1.4 (who came up with that versioning scheme?). I think that leaves Mac OS 9 out. Oh well. When Shneiderman gave the keynote at CHI, he used a program based on HCIL’s work with zoomable user interfaces (ZUIs) as his presentation tool. It was whizzy and everything, but I’m not sure how much it added over using PowerPoint to do the same thing. It seemed in that case that it was done because it could be, not because it was compelling. PhotoMesa looks like it might be a more appropriate application of a ZUI.
Shneiderman, Bederson, and Allison Druin, another very interesting speaker I saw at CHI lo! these many years ago, had a live chat on the Washington Post web site yesterday. I like Bederson’s reply about whether gesture recognition systems will ever become available: "Users wouldn’t be good at producing just the right gesture every time. And, how would you distinguish an explicit gesture vs. a sneeze?" Of course, that just refers to something like eye tracking or such. Gesture recognition via a tool (like a stylus) works pretty well. I’ve been using it for years, first on my Newton and now on my Palm.
Anyway, it’s very interesting to see academic research on user interfaces actually get some exposure in the mainstream press. I don’t remember the last time I saw that.
Posted at 3:39 PM