What Kurzweil Brings to Google

A few years ago I mentioned one of my favorite movies and its vision of the future. Until the End of the World (Bis ans Ende der Welt) by Wim Wenders was released in 1991 with only limited distribution in America. I was fortunate to be introduced to the film by a Kiwi I met in Dublin in 1994 after I finished my degree and contemplated how to get hired into a tech company in the Commonwealth (e.g. DEC in Ireland, Unisys in New Zealand…).

The film’s opening scenes involve a car giving real-time traffic information and directions. The movie basically had GPS navigation, Internet search engines, voice interfaces, laptops, mobile tracking, video phones and so many other technologies that today seem like uncanny predictions. All that in 1991!

What it did not have, however, was the self-driving car so often found in science fiction (Blade Runner, Total Recall, The Jetsons).

What does this have to do with today? I read in the news that Kurzweil, a famous futurist, is joining Google. I have also read many times that people are unsure why he would join Google, even though it seems to me he spells it out clearly on his website:

“I’ve been interested in technology, and machine learning in particular, for a long time: when I was 14, I designed software that wrote original music, and later went on to invent the first print-to-speech reading machine for the blind, among other inventions. I’ve always worked to create practical systems that will make a difference in people’s lives, which is what excites me as an inventor.

“In 1999, I said that in about a decade we would see technologies such as self-driving cars and mobile phones that could answer your questions, and people criticized these predictions as unrealistic. Fast forward a decade — Google has demonstrated self-driving cars, and people are indeed asking questions of their Android phones.”

I don’t know why anyone would criticize those ten-year predictions in 1999. Had he made them in the early 1990s or earlier, perhaps… but by 1999 there was plenty of evidence that voice interfaces were working and automated vehicles were within reach.

Here’s my take on what Kurzweil was talking about: when I arrived at LSE in 1993 I volunteered to partner with disabled students. Technology and computers were the skills I listed on the form at the office. My assignment came quickly: I was to help a blind Philosophy PhD student named Subbu with a new OCR system. The OCR system may even have been one of Kurzweil’s; I don’t remember. Once a week I would meet Subbu in his cold and drafty office, heated only by the lamp of his Xerox scanner, to gather text files on a floppy disk.

The system, I was told, cost the school more than $50K, yet it often made systematic errors: a 5 would be read as an S, an i could sometimes come out as a t, and so forth. Subbu needed someone to fix the integrity of the text so his computer could read it to him. He also needed me to add page breaks. While I understood the obvious problem of the mistakes, the concept of page breaks was eye-opening (pun not intended) for me.

Subbu and I started spending lunches and more time together debating the differences between sighted and blind user interfaces. He emphasized how alien the concept of a page is to someone who has never been able to see one. He could feel a physical page and its edge, he said, but it remained an odd concept to him. Why would an idea stop because there was no more room to write? To him the unbroken thought was essential to philosophy, and the page break was an unfortunate interruption.

And so I not only wrote WordPerfect scripts to clean the text automatically (he tended to scan many books a week, pushing me to become more efficient) but also added page-break marks to his text files. While he studied the scans without page breaks, he needed them in order to make references for people who lived in the seeing world, a visual space defined by page numbers. Incidentally, I did the same for my own thesis. My Apple Duo 230 had built-in speech software (System 7 on the Macintosh came with free text-to-speech extensions), so I would type and then have it read my writing back to me as I paced around the room with my eyes closed.
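For the curious, here is a minimal sketch of that kind of cleanup in modern Python. The originals were WordPerfect macros, so everything below, including the specific rules, the marker format and the function names, is my illustration of the idea rather than the actual scripts: known systematic confusions get corrected only where the context makes the fix unambiguous, and explicit page markers get inserted so references to the print edition survive.

```python
import re

# Illustrative rules for the systematic confusions described above;
# each fix is limited to a context where it is unambiguous.
RULES = [
    (re.compile(r"(?<=\d)S(?=\d)"), "5"),  # a printed 5 read as S: "19S5" -> "1955"
    (re.compile(r"(?<=\d)O(?=\d)"), "0"),  # a printed 0 read as O (hypothetical rule)
]

def clean_page(text: str) -> str:
    """Apply each substitution rule to the raw OCR text of one page."""
    for pattern, replacement in RULES:
        text = pattern.sub(replacement, text)
    return text

def assemble(pages: list[str]) -> str:
    """Clean every page and join them with explicit page markers so the
    listener can still cite page numbers from the print edition."""
    parts = []
    for number, page in enumerate(pages, start=1):
        parts.append(f"[page {number}]")
        parts.append(clean_page(page))
    return "\n".join(parts)

if __name__ == "__main__":
    scanned = ["Published in 19S5 in a run of 1O0 copies.", "See page 2S0."]
    print(assemble(scanned))
```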

About three years later a similar thing happened. While working on voice recognition software for a hospital, I took some time to visit a local Goodwill center in Iowa that offered computer skills training to the disabled. Their equipment was amazing to me; from a laser-pointer headband (an on-screen keyboard for people without the use of their limbs) to the latest OCR and voice recognition for the blind, I could see things were advancing quickly.

Seeing the new interfaces brought back memories of Subbu and his productivity. He could read and write quickly without ever having seen a screen or a keyboard. Being “disabled” really started to sound backwards to me. I was the one disabled by a QWERTY keyboard, asked to sit in a box hunched over an uncomfortable chair. While I contorted myself to use an awful interface, the blind could listen to text in any position and speak from any position. Their interaction with technology, rather than being disadvantaged, made more sense than mine!

When I finished graduate school I searched for jobs where I could expand my experience with voice input as well as UNIX/Apple, TCP/IP and the web. All of that has come to pass, but even on tiny mobile devices the concepts of a keyboard and a screen still haunt us.

And that is what Kurzweil brings to Google: interface innovations. Just as a clean search page revolutionized the web, Google is shooting for another big transformation in how we access information, and Kurzweil is clearly a thought leader in this space. I learned from him that we should not think of the blind as needing special instruments. It is the other way around. Kurzweil figured out how to remove a limitation the rest of us were taking for granted. We should not have to see to use a computer. The keyboard was a strange standard, and now we must move on to better, less restrictive options.

Think about the most annoying thing about driving. Seems to me it’s the time wasted manipulating a steering wheel and pedals just to go from point A to B. Never mind the “thrill,” I’m talking about being forced to drive when you could be doing something else with that time, especially in places like Los Angeles. Google is moving to provide the benefits of an affordable dedicated driver (e.g. limo, bus, train) without the drawbacks those options usually come with (e.g. shared destinations).

One last thought. Recently I watched a Google employee present their vision of the future with big data. Their interface seemed trusted to the point of naive vulnerability. It made me think that the Apple Maps debacle was not having the impact it should; it was a warning not only about the usability of big data products but also about the risk of trusting big data.

My work on OCR integrity issues may seem dated now, but the principle of testing systems for failure remains sound. What are the 5s and Ss of the new automation systems, and who is on the hook to validate that data before millions of users with natural interfaces depend on the outcomes? Kurzweil will have some interesting ideas for sure, and hopefully his experience will change the course of Google. I certainly hope not to see any more ads like the following.

This Google “One Day” video is a sickly saccharine, even utopian, view of the future that is impossible for me to get behind. It’s devoid of the obvious and necessary realities of trust and safety. Wim Wenders presented us with a much more human story laced with risk, which could be why today it seems so close to what has really happened. Some of his predictions were over the top, such as a nuclear explosion in space. If only he had mentioned self-driving cars…
