Future Computers

I have the feeling that future computers will closely resemble the slow-to-take-off ultra mobile PC (UMPC).
I've used Dragon Systems' NaturallySpeaking since version 1 in 1998. It has some things that it does well, and after a while you learn to work around its idiosyncrasies, like a car you have had for years. The best thing about it is that it learns from its mistakes (from your corrections, that is) and has some sense of context.
So since 1998, I've had a vision of future computers whose primary means of accepting human input is speech, and where the keyboard -- with its quirky arrangement (wordplay intentional) of rows of buttons you press one at a time to append the next letter to whatever word, sentence, or command you are trying to convey -- is remembered the way old-timers remember typewriters with their hundreds of interconnected moving parts: as something inefficient and quaintly humorous.
Will it make mistakes? Will it misunderstand what you say to it? Of course. But don't we sometimes mishear one another? I'm sure it will be nearly effortless to correct any misunderstandings. I hope they teach future computers to say "huh?"
With the keyboard eliminated, and with discrete functionality continuing to be miniaturized into single chips, I expect low-cost, low-power, connected personal devices that you can throw in a pocket and easily recycle or replace if lost (with the information they contained reconstituted automatically).
For NaturallySpeaking Users

If you ever use NaturallySpeaking, here's a tip that helps me turn 10 or 20 minutes' worth of speaking into text.
Several times I have recorded minutes' worth of explanation on a digital device that I wanted to turn into a document. Because I don't buy the entry-level edition of NaturallySpeaking, my copy is capable of listening to an audio file and turning the sound into text. Or maybe I just play my recording into the microphone connected to the computer; I don't remember.
Anyway, when I proof what it thinks it heard, there are invariably nonsensical passages that are so obscure I can't remember what my original meaning was. Fortunately, I can select a range of text in its scratchpad, right-click and choose "Play That Back". It will replay my spoken words that it used as the basis for the text it generated.
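NaturallySpeaking does all of that through its own interface, but just to illustrate the general idea of turning a recorded file into text, here is a rough Python sketch using the speech_recognition package with the offline PocketSphinx engine. The file name and the choice of engine are only assumptions for the example; this is not how NaturallySpeaking itself works, and an untrained engine will make far more mistakes.

    # Rough sketch: transcribe a recorded memo to text.
    # Assumes the speech_recognition and pocketsphinx packages are installed,
    # and that the recording was saved as a WAV file named "memo.wav" (hypothetical).
    import speech_recognition as sr

    recognizer = sr.Recognizer()

    # Load the whole recording from disk.
    with sr.AudioFile("memo.wav") as source:
        audio = recognizer.record(source)

    try:
        # Offline recognition via PocketSphinx; no per-user training, so expect rough results.
        print(recognizer.recognize_sphinx(audio))
    except sr.UnknownValueError:
        print("Could not make sense of the recording.")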
The next proofing feature I use once I have finished dictating and making corrections. Any version except the entry-level one offers text-to-speech capability. I save my document, turn off dictation, select everything, then right-click and choose "Read That" from the context menu.
It's surprising how understandable the synthesized voice is when reading the random arrangement of words I've written (guided in its inflection by the placement of punctuation), and how well it "sounds out" people's names, product names, and technologies it has never encountered before.
In fact, sometimes I will copy and paste a news article or online essay into the scratchpad and have the software read it to me through the computer speakers. I think that is a capability we will value in our digital devices and small portable computers in the future.
You really need to hear it aloud because it might have dropped a little word like "of" or "a", or turned an "if" into an "in". [Hey, they should add a grammar and sentence-structure checker to catch these errors, the kind you can read right past again and again because you know what you were trying to say, and it's all spelled correctly so nothing jumps out at you.]
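If you want the same read-it-back-to-me trick outside of NaturallySpeaking, here is a minimal sketch using the pyttsx3 package, which drives the operating system's built-in voices. The draft file name and the speaking rate are only assumptions for the example, not anything NaturallySpeaking provides.

    # Minimal sketch: have the computer read a draft aloud for proofing.
    # Assumes the pyttsx3 package is installed and the draft is saved as "draft.txt" (hypothetical).
    import pyttsx3

    engine = pyttsx3.init()
    engine.setProperty("rate", 160)  # a slightly slower pace makes dropped little words easier to catch

    with open("draft.txt", encoding="utf-8") as f:
        engine.say(f.read())

    engine.runAndWait()  # blocks until the whole draft has been read aloud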
Sidenote: As of the last time I checked, NaturallySpeaking doesn't work on Vista. Windows Speech Recognition is usable, not tedious to train, and convenient. But it gives me the sense that it expects me to conform to what it thinks a word should sound like, rather than learning to recognize how I pronounce it.