Earlier this month, Google introduced the Beta, Android-based version of a new and, for some, startling photo-based search feature they’re calling Google Goggles. Its premise is as simple as its promise is mind-boggling. Take a photo with your smart phone and, with the touch of a button, it’s analyzed and matched to image, data, and text links from Google’s vast databases. Photograph a book cover or a product’s bar code, for example, and Goggles will tell you where and how to buy it. Photograph a famous monument or tourist site and Goggles will deliver results that explain its history, cultural meaning, and the nearest public transportation stop. Photograph the front of a restaurant and, on the spot, see its menu and recent reviews by patrons. Cool enough.
But while this new feature is only in its infancy, more than a few online commentators are speculating on some of the scarier places this new application may take us. In a post at The Industry Standard, Ian Lamont suggests that while this visual search capability definitely has its upside—think how easy comparison shopping is going to be—what happens when a photo’s associated data includes negative or incorrect information? What happens, as Barbara Krasnoff wonders in a blog post, when you can point your smart phone at a person in the street and know, within seconds, that person’s name, history, and particulars? What happens, for example—once the object- and facial-recognition limitations and kinks currently being reported get worked out—when that snapshot of your new neighbor and the mugshot from his or her DUI arrest a couple of years ago share space on your smart phone screen?
With Google Goggles promising to link any single image to archives of facts and other images we can barely imagine now, that over-used saying, “A picture is worth a thousand words,” is going to sound quaint very soon.