Google AR Translation Glasses: Futuristic Tech

At the end of its I/O presentation on Wednesday, Google pulled out a “one more thing”-type surprise. In a short video, Google showed off a pair of augmented reality glasses that display translations of spoken language directly in front of your eyeballs.

Google product manager Max Spear called the capability of this prototype “subtitles for the world,” and the video shows family members communicating with each other for the first time.

Now hold on for just a second. Like many people, we’ve used Google Translate before and mostly think of it as an impressive tool that happens to make a lot of embarrassing misfires. But while we might trust it to get us directions to the bus, that’s nowhere near the same as trusting it to correctly interpret and relay our parents’ childhood stories. And hasn’t Google said it was finally breaking down the language barrier before?

In 2017, Google pitched real-time translation as a marquee feature of its original Pixel Buds. Our former colleague Sean O’Kane described the experience as “a laudable idea with a lamentable execution” and reported that some of the people he tried it with said it sounded like a five-year-old. That’s not quite what Google showed off in its video.

Also, we don’t want to skim past the fact that Google’s promising that this translation will happen inside a pair of AR glasses. Not to hit a sore spot, but the reality of augmented reality hasn’t even caught up to Google’s concept video from a decade ago. You know, the one that served as a predecessor to the much-maligned and embarrassing-to-wear Google Glass?

Google’s AR translation glasses seem much more focused than what Glass was trying to accomplish. From what Google showed, they’re meant to do one thing — display translated text — not act as an ambient computing experience that could replace a smartphone. But even then, making AR glasses isn’t easy. Even a moderate amount of ambient light can make viewing text on see-through screens very difficult. It’s hard enough to read subtitles on a TV with some glare from the sun through a window; now imagine that experience, but strapped to your face (and with the added pressure of holding up a conversation with someone you can’t understand on your own).

Technology moves quickly, and Google may be able to overcome a hurdle that has stymied its competitors. That wouldn’t change the fact that Google Translate is not a magic bullet for cross-language conversation. If you’ve ever tried holding an actual conversation through a translation app, you probably know that you must speak slowly. And methodically. And clearly. Unless you want to risk a garbled translation, one slip of the tongue and you might just be done.

People don’t converse in a vacuum or like machines do. Just as we code-switch when speaking to voice assistants like Alexa, Siri, or Google Assistant, we know we have to use much simpler sentences when dealing with machine translation. And even when we do speak correctly, the translation can still come out awkward and misconstrued. For example, some of our colleagues who are fluent in Korean pointed out that Google’s pre-roll countdown for I/O displayed an honorific version of “Welcome” in Korean that nobody actually uses.

That mildly embarrassing flub pales in comparison to the fact that, according to tweets from Rami Ismail and Sam Ettinger, Google displayed over half a dozen backwards, broken, or otherwise incorrect scripts on a slide during its Translate presentation. (Android Police reports that a Google employee has acknowledged the mistake and that it’s been corrected in the YouTube version of the keynote.) To be clear, it’s not that we expect perfection, but Google’s trying to tell us that it’s close to cracking real-time translation, and those kinds of mistakes make that seem extremely unlikely.

Google is trying to crack an immensely complicated problem. Translating words is easy; figuring out grammar is hard but possible. But language and communication are far more complex than just those two things. As a relatively simple example, Antonio’s mother speaks three languages (Italian, Spanish, and English). She’ll sometimes switch between them mid-sentence, including her regional Italian dialect (which is like a fourth language). That type of thing is relatively easy for a person to parse, but could Google’s prototype glasses deal with it? Never mind the messier parts of conversation, like unclear references, incomplete thoughts, or innuendo.

It’s not that Google’s goal isn’t admirable. We absolutely want to live in a world where everyone gets to experience what the research participants in the video do, gazing with wide-eyed amazement as they see their loved ones’ words appear before them. Breaking down language barriers and understanding each other in ways we couldn’t before is something the world needs way more of; it’s just that there’s a long way to go before we reach that future. Machine translation is here and has been for a long time. But despite the plethora of languages it can handle, it doesn’t speak human yet.