Google once again enticed users at its annual I/O developer conference with a new beta version of Google Lens. The original Google Lens is integrated into the Google Photos app, where users can currently only apply it to photos they have already taken. The new beta repositions Google Lens inside the camera app, letting users access its features on both saved images and the live camera view without switching to another app.
Google’s integrated AI will enable you to access more information about an array of items and objects. For example, animals, famous landmarks, paintings, furniture and even people can be identified, alongside samples of text from menus, street signs or books. Utilising its powerful Google search functionality, Google Lens will unearth a world of knowledge for its users wherever they are.
Google Lens will eventually integrate with other Google services: Google Maps to help you navigate with Street View, YouTube to show you recipes similar to dishes on restaurant menus, and Style Match to help you decorate your home or decide on an outfit.
Google is not only boosting its technologically-advanced ego with Google Lens, it’s showing users (and its competitors) how AI can be utilised in everyday, real-life situations. By aiding users on-the-go, Google is proving that AI has important, real-world uses. For example, in theory, travellers and explorers can quickly identify key tourist destinations and translate road signs and menus, whilst shoppers can identify items of clothing or pieces of furniture and find out where they could buy alternatives online.
After just one year, Google has already jumped ahead with the development of Google Lens. We can’t wait to test it out for ourselves!