Below is a chronicle of my first day:
My morning started with a technical issue: I was unable to join my wireless network after doing a device reset. After searching the community forums, I learned my Glass had auto-upgraded overnight to XE12, but the MyGlass app / website insisted on presenting XE11 QR codes (Glass uses QR codes to join WiFi networks). I managed to work around the issue by setting up a MyGlass account a second time.
After saying goodbye to my family in Chinese (“Ok glass, google: how do I say ‘Goodbye’ in Chinese”), I backed my car out of the driveway, and asked Glass for directions to my office (“Ok glass, directions to: 280 Summer Street Boston”). Glass responded with a heads up display showing me a map and my next turn. To my surprise, it did not annotate the map with traffic information. All I can say is: it was a good thing it was the day after Christmas.
Before heading into Boston I stopped at a gas station. While pumping gas, one of my contact lenses bothered me, and I instinctively winked several times. Since wink detection was enabled, a few seconds later, I was the proud owner of several photographs of the gas station and the pump. ;)
Driving To Work
I am not sure if wearing Google Glass is legal in Massachusetts. I can say the distraction was minimal, though, since the directions were voiced into my earpiece, and the map only appeared on the heads up display shortly before the next turn. Also, while the display is always present in your field of vision, its transparency allows your brain to mostly tune it out. But since the directions feature lacked traffic information, it made for a poor GPS anyway.
After parking in Boston, I walked the three blocks to the office, seeing a few incoming emails from work colleagues appear on my display. I also verbally sent a message to my tech geek son, who is watching my Glass experience with interest. The speech-to-text transcription worked surprisingly well. Seconds later my son’s picture and the word “Cool” appeared on my display. He is a man of few words.
Periodically while at work, the Glass display would light up and show “cards”. The sources varied, and included Twitter, incoming emails, the New York Times, Google Chat, and local information (e.g. about nearby restaurants). The information did not contribute much to my day, though, so the cards were primarily a novelty. But they did make me think about the information I wished was appearing on the display, which I will discuss in a future blog post.
I went for lunch with a couple colleagues. Before going I tried to check the food truck schedule (“Ok glass, google: Boston food truck schedule”). This brought back a series of cards that became irrelevant as one of my colleagues interjected: “I already checked. No food trucks today”. Humans: 1, Cyborgs: 0.
I noticed three types of looks while wearing Glass in public during the day: curiosity, confusion, or disdain. Cashier at lunch: curious; Dunkin Donuts worker: confused; person on elevator: disdain. Until there is more widespread use of technology like this, I suspect there will be more confusion and disdain. Fortunately, though, no one assaulted me.
While walking to the car, I snapped a picture of the Boston skyline from Fort Point, which I verbally posted on Twitter. The wink detection didn’t work in the dark, so I had to take it the old-fashioned way (“Ok glass: take a picture”). As I neared the car, an email from one of my engineers appeared on the display, which I asked Glass to read. The read-aloud feature worked surprisingly well. I’ve watched this feature over the years, but until Glass, had not seen much advancement since the text-to-speech synthesizer of my Apple IIe in 1983 (okay, slight exaggeration - but mostly true).
After taking Glass off in the evening, I noticed a mirage of the transparent display in the upper right corner of my vision. While this disappeared in a few minutes, it revealed how quickly the human brain adjusts to its new cyborg part.
I wore Glass while writing this post later in the evening, periodically scrolling back in the timeline to see what happened that day. The Glass timeline is one of the more innovative features that comes out of the box. It keeps a record of the cards that appeared over the day, which you can scroll through by swiping your finger against the side of Glass. I could see the exact time I said goodbye to my family in Chinese, my request for directions, the picture of the gas station, the incoming emails while walking to the office, and so on.
After wearing Glass for the day, I’m at a loss to understand the privacy issue. When you take a picture with Glass, the picture appears on your display, which is visible as a white light to anyone looking at you. By contrast, a picture can be taken with an iPhone without any external indication. So I’m not sure Glass alters anyone’s privacy more or less than a smartphone. I suspect the people complaining about privacy must not have seen a smartphone yet. ;)
Warning: It’s Beta
It’s good Google calls this the Explorer program, because you definitely feel like one. Google Glass is very much a beta product. Some examples:
- Incompatibility between Glass and MyGlass app / website after XE12 upgrade.
- A limit of 10 contacts in your contact list, which must be added and managed manually.
- Inability to change the contact method per message (e.g. if you set a contact to Hangout, it will always use Hangout for new messages).
- The supply of available apps (a.k.a. Glassware) is very limited, and the apps are very much works in progress.
- While the iPhone support is much better in XE12, it still lacks some features available to Android users.
- You can browse the web, but cannot fill out forms.
As an early user of BlackBerry, I remember the “aha” moment that occurred on reading email on my first smartphone. In that instant, I understood the value of the device, and how it could be integrated into my life. There was no such feeling with Google Glass. In many ways, Glass in 2013 is like having the original BlackBerry with all the prerequisites for that killer product - the operating system, keyboard and internet connectivity - but without the email software.
But it’s hard not to see the potential of Glass. Google has delivered a platform that has the potential to enable incredible innovation. All the features for the killer apps are there - e.g. high quality heads up display, excellent speech recognition, head tracking, wireless / bluetooth support, text to speech, developer SDK. All that is lacking is the killer apps.
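To make the platform point concrete, here is a minimal sketch of what pushing a “card” to the Glass timeline looks like through the Mirror API. This is my own illustration, not code from the post: the endpoint URL and the exact card fields are assumptions based on the public Glass developer documentation, and a real call would also need OAuth credentials, which I omit here.

```python
import json

# Assumed Mirror API endpoint for timeline cards (for illustration only;
# a real request would need an OAuth-authorized HTTP client).
MIRROR_TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"

def make_card(text, speakable=True):
    """Build the JSON body for a simple text card.

    Adding a READ_ALOUD menu item lets the wearer ask Glass to read the
    card aloud, like the email read-aloud feature described above.
    """
    card = {"text": text}
    if speakable:
        card["menuItems"] = [{"action": "READ_ALOUD"}]
    return json.dumps(card)

# The card an app might push after checking the food truck schedule for me.
body = make_card("No food trucks today")
print(body)
```

An app built this way lives server-side and pushes cards to the device, which is why even a simple service could have beaten my colleague to the food truck answer.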
We know how to solve that, right?