Alphabet’s Google has unveiled its latest breakthrough in mobile technology, rolling out Google Lens to all Android devices in a phased manner. Those science-fiction scenes where characters view the world through a high-tech device that identifies objects in real time are now science fact, a reality made possible by Lens.
Google Lens uses visual analysis to present relevant information on your camera screen based on what you’re aiming your lens at. Using visual AI coupled with the same engine that drives Google Search, Google Lens is set to capture more than just people’s imagination. Previously available only on flagship Google Pixel smartphones, Google Lens will soon be within reach of Android users worldwide, provided they have Google Photos installed.
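Google hasn’t published how Lens works internally, but visual search engines of this kind generally map an image to a numeric embedding and then look up the nearest labelled entry in an index. The toy sketch below illustrates that idea; the hand-made 3-D vectors stand in for real neural-network embeddings, and the labels and numbers are invented purely for illustration.

```python
import math

# Toy "image embeddings": in a real system these would be high-dimensional
# vectors produced by a deep network; here they are hand-made 3-D vectors.
INDEX = {
    "rhododendron": [0.9, 0.1, 0.2],
    "himalayan ibex": [0.1, 0.8, 0.3],
    "movie poster": [0.2, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def visual_search(query_embedding):
    """Return the indexed label whose embedding is closest to the query."""
    return max(INDEX, key=lambda label: cosine(INDEX[label], query_embedding))

# A query vector that lies close to the "rhododendron" entry:
print(visual_search([0.85, 0.15, 0.25]))  # → rhododendron
```

At production scale the linear scan over `INDEX` would be replaced by an approximate nearest-neighbour structure, but the matching principle is the same.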
With this piece of intelligent software, Google stands to achieve two significant milestones:
1. Retiring Google Glass and any other hard-to-implement, high-tech hardware previously required to make this work in real life. Lens achieves the same goal in software, combining AI and machine learning on a widely used platform, the in-house Android, running on hardware people already own: their phones.
2. Reintroducing Google Photos as the enabler for Google Lens. If you don’t have the Photos app, you can’t use Lens. By giving unlimited cloud storage for Photos to the billions of Android users capturing these objects and places, Google gains a practically unlimited stream of imagery to analyze and learn from, an advantage that goes well beyond conventional big data.
Today, it may be easy to underestimate the impact these two things will have on the internet of tomorrow, but they will fundamentally change how we see the world, how we interact with it, how that information is processed, and, of course, what results from those interactions. Google Lens seems poised to set an example for the creative uses of artificial intelligence.
Let’s imagine a scenario to understand Lens’ features and get a glimpse of the world through the Google Lens:
Say you’re hiking in the Himalayas and you spot a beautiful flower you’ve never seen before in your life. Wouldn’t you like to know more about it? Just whip out your phone, open Google Photos, tap the Lens icon, and all the information you need, including the flower’s species, rarity and even its medicinal uses, appears on your camera screen.
That’s one example of how Google Lens will work for us when we don’t know what to type into the Google Search bar. The object is right there; you just don’t know what it is. In many cases, typed search may become irrelevant: if we can see something, we can learn about it almost instantly. Fast-moving objects will be harder to capture and identify, so people may well start using Lens as a default shortcut whenever the phone comes out of the pocket.
Continuing the Himalayan hiking story: being the nature-loving cowboy that you are on your trips, you take some quick-draw photos of a goat that Google Lens identifies as a Himalayan ibex. Now you’re back from your trek with great photos, memories and a load of new information. As you approach a building with movie posters on its outside walls, you notice that the text on the posters has been torn off. You point Google Lens at the remaining poster and it returns relevant visual search results: the movie’s name, its cast, and showtimes at nearby theatres.
You decide to book a ticket for tomorrow and keep walking down the road. A river flows alongside it, and on the opposite bank you notice a handsome building that looks like a hotel. Its name isn’t legible at this distance. Once again, you can simply point Google Lens at the building and it will try to identify the hotel. If successful, it will give you not just the name but also relevant search results: ratings, interior photos, booking options and room rates.
After crossing a footbridge half a mile away, you reach the opposite bank of the river and head to the nearest shop to buy something to eat and drink. But the products are all unfamiliar, and you don’t know whether you should buy them. Once again, you can use Google Lens to learn more about the products before buying them.
Beyond these ‘basic’ features (if you can call them that), Google Lens also lets you scan textual information and hand it off to the relevant application. For example, point your camera at a Wi-Fi label with login details and Google Lens can connect you to that network automatically. Similarly, if you have a business card with someone’s name and phone number, scanning it with Google Lens will add the contact to your phone’s directory.
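Lens itself is closed-source, but Wi-Fi login labels are commonly printed as QR codes using the widely adopted `WIFI:` payload format (e.g. `WIFI:T:WPA;S:MyNet;P:secret;;`), so a reader can recover the network name, password and security type from the scanned text. The sketch below parses that format; it deliberately ignores the escaping rules for special characters, and the network name and password are made up for illustration.

```python
def parse_wifi_payload(payload):
    """Parse a 'WIFI:T:WPA;S:MyNet;P:secret;;' style string into a dict.

    Simplified sketch: real payloads may backslash-escape ';' and ':'
    inside values, which this parser does not handle.
    """
    if not payload.startswith("WIFI:"):
        raise ValueError("not a Wi-Fi payload")
    fields = {}
    for part in payload[len("WIFI:"):].split(";"):
        if ":" in part:
            key, _, value = part.partition(":")
            fields[key] = value
    return {
        "ssid": fields.get("S", ""),
        "password": fields.get("P", ""),
        "security": fields.get("T", "nopass"),
    }

# Hypothetical label text (SSID and password invented):
print(parse_wifi_payload("WIFI:T:WPA;S:CafeHimalaya;P:ibex1234;;"))
```

Business cards work the same way in principle: once the text is recognized, a structured format (such as vCard) carries the name and phone number into the contacts app.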
These are just a few of the ways I believe Google Lens will change information technology. Since it uses artificial neural networks and deep learning to identify objects, we can hope to see amazing things in the future, perhaps something innovative in facial recognition, brand identity, and intellectual property.
Like maybe in 2038, celebrities will automatically get paid when they take a picture with one of their fans. Who knows? Anything is possible with Google Lens. It’s all about the perspective.
On the other hand, should we be concerned about our privacy, given that everything we see or capture is uploaded to the Google Photos cloud instantly? And is it possible that Google Lens will usher in an age of even lazier human beings who point their phones at each other instead of having a conversation, making mobile phone addiction even worse?
We would be delighted to hear your point of view on this. Will Google Lens become the next big thing that makes our lives easier and more efficient? Will it successfully search and annotate every object we point it at, or will we end up frustrated when it doesn’t work?