On Wearable Computing

If you're anything like me (and most of the people in my generation), it's rare to find you anywhere without your smartphone. Along with my wallet and keys, my iPhone goes into my pocket when I get dressed in the morning, and it rarely leaves my side. Some call it an addiction; I call it convenience. Having a mobile computer like this in my pocket at all times was unthinkable when I started using a computer, way back in the good old days of Windows 95. My dad got a BlackBerry after that, and it fascinated me. It could read email, probably get on the internet, and it had a full keyboard (and, most importantly, BrickBreaker). And it went in your pocket. Look how far we've come.

If you haven't heard of Google's X Lab, I'd advise you to hit your favorite search engine and look them up. They're the people giving you self-driving cars, teaching computers to recognize cats, and now this: Google Glass, a head-mounted computer that X has been working on for what we can assume is a long time. Allegedly, it's heading out to developers this month.

I'm excited to see what people at Google can hack up, but I'm more excited for what this means for wearable computing as a whole. Back in the day, wearable computing meant wires all over you, strange visors, and huge battery packs. With the so-called "smartphone revolution," those bits are no longer necessary. I'd love to see third parties and other cell phone manufacturers start planning their answer to Glass. You already have the entire internet in your pocket. Hook your phone up to your display over Bluetooth and you're golden. Create an API that works on every smartphone operating system and you're even better off. My fear is that Google will keep Glass to themselves and restrict it to Android. That would, in my opinion, be a huge mistake.

I do have a few things that I think could be improved with Glass. First, input. Right now, it seems that you input data into Glass by speaking out loud, and you move around by tilting your head. That's all well and good, but voice is not the way to go, at least not yet. I talk to myself all the time (it's how I think), but I don't want other people to hear me do it. The answer already exists: chorded, handheld keyboards. With 5 keys, each either pressed or not, you get 2^5 = 32 states; throw out the one where nothing is pressed and that's 31 different combinations (trust me, I checked my math). That's enough for all 26 letters of the English alphabet, with a few combinations left over for keys like shift or enter. Better yet, turn some of the combinations into common bigrams (pairs of letters) like "th" or "in". Make the keyboards Bluetooth and you've changed the way people interact with computers as well as with their new wearable computer.
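To make that concrete, here's a minimal sketch of what a chord decoder could look like. The layout below is invented on the spot purely for illustration; a real chorded keyboard would define its own mapping.

```python
# A minimal sketch of a five-key chord decoder. The layout is invented
# for illustration; a real chorded keyboard would ship its own.
# Each chord is a 5-bit number: bit i set means finger i is down.

CHORD_MAP = {
    0b00001: "a",
    0b00010: "e",
    0b00100: "i",
    0b00011: "t",
    0b00101: "th",    # common bigrams get their own chords
    0b00110: "in",
    0b11111: "\n",    # all five keys down: enter
    # ... the remaining chords fill out the rest of the alphabet
}

def decode(keys_down):
    """Turn a set of pressed key indices (0-4) into text."""
    chord = 0
    for key in keys_down:
        chord |= 1 << key
    return CHORD_MAP.get(chord, "")

print(2 ** 5 - 1)      # 31 usable chords (the empty chord just means "idle")
print(decode({0, 2}))  # keys 0 and 2 down -> 0b00101 -> "th"
```

Giving whole bigrams their own chords is the same trick stenotype machines use, just scaled down: spend your spare combinations on the sequences people type most.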

Second, privacy. For some people, privacy weighs heavily on their minds. The thought of sending an army of people out into the world with cameras mounted on their heads sends shivers down the spines of the privacy-conscious. And they have a valid point. I don't want people to know everything about me just by looking. Don't even get me started on the unflattering videos and pictures of me that are bound to wind up on the internet. But to me the solution is simple. What's the one thing we do every single day when we meet someone new? We shake hands (unless you're a germophobe, in which case maybe you bump elbows, I'm not sure). Use image recognition to see whether the user is shaking hands, and send a request to the other person's device. Access is limited at first: maybe they can see where you work, or get your email address. But now these two computers know about each other, so when you're together, they can get to know each other better, just like you are. Become friends on Facebook, and you can see their latest status update the next time you meet. Follow each other on Twitter, same thing. Privacy is easily managed, as long as you're willing to help the user out.
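Here's a rough sketch of how that handshake-triggered exchange might work. Every name in it is hypothetical (Glass exposes no such API, and whether a headset could reliably spot a handshake is an open question); this is just the shape of the idea.

```python
# Hypothetical sketch of handshake-triggered pairing. The class names,
# permission tiers, and handshake detection are all invented here.

from dataclasses import dataclass

@dataclass
class Profile:
    name: str
    employer: str
    email: str
    latest_status: str

class Headset:
    def __init__(self, profile):
        self.profile = profile
        self.contacts = {}  # other person's email -> permission tier

    def on_handshake_detected(self, other):
        """Image recognition spotted a handshake: ask to exchange basics."""
        if other.accept_pairing(self):
            # Access is limited at first: where you work, how to email you.
            self.contacts[other.profile.email] = "acquaintance"
            other.contacts[self.profile.email] = "acquaintance"

    def accept_pairing(self, requester):
        # A real device would prompt the wearer; here we just say yes.
        return True

    def visible_info(self, other):
        """What the wearer sees when they look at someone they've met."""
        tier = self.contacts.get(other.profile.email)
        if tier is None:
            return {}  # strangers reveal nothing
        info = {"employer": other.profile.employer,
                "email": other.profile.email}
        if tier == "friend":  # e.g. after connecting on Facebook or Twitter
            info["latest_status"] = other.profile.latest_status
        return info

alice = Headset(Profile("Alice", "Acme", "alice@example.com", "At the park"))
bob = Headset(Profile("Bob", "Initech", "bob@example.com", "New blog post!"))
alice.on_handshake_detected(bob)
print(alice.visible_info(bob))  # employer and email only, until they're friends
```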

One more point and then I'll be done: safety, particularly when driving. The answer is, again, image recognition. You just got behind the wheel of a car. Your computer knows that because it knows what a steering wheel looks like, so you can no longer hit up Facebook and Twitter to see what's happening. You can't send text messages, and your display is very limited. Maybe you only see the map of where you're going when there are no cars in front of you. People worry that because you have to focus on a screen right in front of your eye, your awareness of the real world will go down. It will, but we can mitigate the risks. The possibilities are endless. Just wait and see what developers come up with when they get their hands on this.
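As a sketch of what that gating could look like: the two booleans below stand in for the image-recognition models (is a steering wheel in view? is the road ahead clear?), which are the hard part I'm waving my hands over.

```python
# Hypothetical driving-mode feature gate. In reality the two booleans
# would come from image recognition; here they're plain inputs.

ALL_FEATURES = {"facebook", "twitter", "sms", "navigation_map", "music"}
BLOCKED_WHILE_DRIVING = {"facebook", "twitter", "sms"}

def allowed_features(driving: bool, road_clear: bool) -> set:
    if not driving:
        return set(ALL_FEATURES)
    allowed = ALL_FEATURES - BLOCKED_WHILE_DRIVING
    if not road_clear:
        # Even the map disappears when there's a car in front of you.
        allowed.discard("navigation_map")
    return allowed

print(allowed_features(driving=True, road_clear=False))  # {'music'}
```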