We have had personal computers for over a quarter century, and they still suck. Yeah, I know, they can play music and video now, and that's cool, but think about all the little steps you have to go through to get a computer to do what you want. Point here, click once there, double-click that, drag down this menu which, by the way, is completely different from what it was before you did that other click...
What you are doing is navigating the large but finite internal space of states that the computer can be in, so that you can finally get to that last command that triggers the desired response. You do it using that part of your brain that lets you navigate through familiar and unfamiliar real spaces, like shopping malls, and lets you find your car in the parking lot when you're done shopping. If you have a stroke in your right vertebral artery, which keeps this region alive, you may be able to walk and talk, but you won't be able to use your computer. At least not until after a long period of retraining.
Which makes me wonder: why after all these years are we still navigating the computer's state space? Don't computers now have the processing power to start navigating our state space? After all, there are only a finite number of commands you can give a computer. Therefore, there are only a finite number of commands you can want to give a computer - at least from the computer programmer's point of view.
Why do we have to remember where to find the obscure command that formats the margins the way we want, or go through three click-and-drags to get a Greek letter? Computers now have keyboards, mice, microphones, and cameras. It's time for operating-system programmers to make computers that understand typing, pointing, clicks, scrolls, and some words and gestures - and that use those inputs to figure out what we want them to do.
It's time for the computational burden of learning and remembering the shared computer-user state space to shift from the user to the computer.
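To make that concrete, here is a minimal sketch of what "navigating the user's state space" might look like in code: the computer keeps its finite catalogue of commands and scores each one against whatever the user just typed, clicked, or said, instead of making the user hunt through menus for the right one. Everything here - the command list, the keyword matching, the input format - is a hypothetical illustration, not any real operating system's API.

```python
from dataclasses import dataclass

@dataclass
class Command:
    name: str        # what the computer would actually do
    keywords: set    # words or gestures that hint at this command
    uses: int = 0    # how often this user has wanted it before

# A toy version of the finite space of commands a user might want.
COMMANDS = [
    Command("format_margins", {"margin", "margins", "format", "page"}),
    Command("insert_greek_letter", {"greek", "letter", "alpha", "lambda", "symbol"}),
    Command("play_music", {"play", "music", "song"}),
]

def guess_intent(observed_inputs, commands=COMMANDS):
    """Score every known command against what the user just did or said,
    and return the plausible matches best-first, so the likely answer is
    offered up front instead of buried three menus deep."""
    words = {w.lower() for w in observed_inputs}

    def score(cmd):
        overlap = len(words & cmd.keywords)   # how well the inputs match
        return (overlap, cmd.uses)            # break ties by past habit

    ranked = sorted(commands, key=score, reverse=True)
    return [c.name for c in ranked if score(c)[0] > 0]

if __name__ == "__main__":
    # The user supplies a few words; the computer does the searching.
    print(guess_intent(["I", "want", "a", "Greek", "letter"]))
    # -> ['insert_greek_letter']
```

A real system would of course use something richer than keyword overlap - probabilistic models over typing, speech, gaze, and past behavior - but the shape of the idea is the same: the command space is finite, so let the machine search it.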