Google held its 2017 developer conference this week, where it teased some of the brand-new features coming to its products and services.
What we saw on stage was undeniably impressive, but one of the demonstrations in particular was frighteningly so.
Google Lens will give the company greater insight into our daily lives than ever before.
It was one of the first things revealed at the conference, and few expected it to steal the show, as it ended up doing.
Lens isn’t available to consumers yet, but when it does arrive, it should prove seriously useful and, thus, incredibly popular.
It uses machine learning to identify real-world objects through your phone’s camera, but that’s just the start of the story.
It can also analyse everything it sees, understand the context, work out where you are, and figure out what you want to do.
As Google demonstrated, Lens can use optical character recognition to read a network name and password from the label on a Wi-Fi router, and instantly connect your phone to that network.
It can also bring up restaurant reviews and details, using GPS location data to instantly work out which branch you’re considering going to.
All you need to do is point your camera in the right direction.
“In an AI-first world, we are rethinking all our products,” said Google CEO Sundar Pichai, who announced the company’s plans to use machine learning to improve everything it does.
Google says Lens is coming to both Photos and Assistant.
The former, incidentally, will use machine learning to analyse your pictures more thoroughly than ever. As well as editing them and recognising the people in them, it will prompt you to send the right photos to the right people, and invite your contacts to share their pictures of you with you.
Assistant, meanwhile, has just arrived on Apple’s App Store; Photos is already available on iOS.
The company is quietly…