During my exchange semester at WPI, I attended a course on Augmented Reality that was actually meant for Master's students. In the second lecture, our professor entered the room wearing a Google Glass. It turned out he was one of the few select people outside of Google doing real-world beta testing.
Wearing an early version of Google Glass
There are plenty of videos out there showing the vision behind Google Glass, but I'm sure you have already heard about it in some way.
Our professor was kind enough to let every student in the course try Google Glass for a few minutes. Here are my first impressions in brief:
- Compared to what Google's videos suggest, it is not a see-through overlay but rather a small extra display that shows contextual information. However, you have to actively look towards the top-right corner to see it.
- The availability of apps is still very limited.
- Google Glass has a touch control on one side. However, I personally found it looked very odd that our professor was constantly swiping along the side of his glasses.
- The voice control of the glasses is triggered with the words “Hello Glass!”, followed by the action. To take a picture, for example, you say “Hello Glass, take a picture.” This voice activation would be a bit troublesome, however, when more than one person nearby is wearing Google Glass, since there is no voice identification of the user yet.
- The audio output of the glasses works via bone conduction: tiny vibrations are transmitted directly to your inner ear. That worked surprisingly well.
- According to our professor, the battery charges within 45 minutes and lasts about a day.