In Google’s vision, devices work seamlessly with services and AI, so help is anywhere you want it, and it’s fluid. The computer is no longer a thing in your pocket at all. It is all around you. It is everything.
The technology disappears into the background when you don’t need it, so the gadgets aren’t the center of the system; you are. The idea of “ambient computing” echoes a concept that has floated around Amazon, Apple, and other companies over the last few years.
One easy way to think about ambient computing is as voice assistants and smart gadgets: put Google Assistant in everything, shout at your appliances, done and done. But that’s only the very beginning of the idea. The ambient computer Google envisions is more like a guardian angel or a super-sentient Star Wars droid. It’s a system that understands you completely and follows you around, sorting through and solving all the stuff in your life.
Which is a problem for Google. The company is decentralized and non-hierarchical, and it can sometimes seem like every engineer on staff is given the green light to ship whatever they built that week. Since that day in 2019, Google has continued to do what Google always does: build an incredible amount of stuff, often without any real strategy or plan. It’s not that Google didn’t have a bigger picture; it’s just that no one seemed interested in doing the connective work required for the all-encompassing, perfectly connected future Osterloh had envisioned. Google was becoming a warehouse full of cool stuff rather than an ecosystem.
But over the last couple of years, Google has started to change to meet this challenge. Osterloh’s devices team, for example, has completely reset its relationship with the Android team. For a long time, the company proudly kept a wall between Pixel and Android, treating its internal hardware crew like any other manufacturer. Now, Google treats Pixel as the tip of the spear: it’s meant to be both a flagship device and a platform through which Google can build features it then shares with the rest of the ecosystem.
Around the company, these various teams and products are beginning to come closer together. They’re building on shared tech, like Google’s custom Tensor processors, and on common ideas like conversational AI.
As a result, Google I/O feels unusually… coherent this year. Google is trying, perhaps harder than it ever has, to build products that work well and work well together. Search is becoming a multisensory, multi-device proposition that understands who’s searching and what they’re looking for. It’s also expanding the search experience far beyond just questions and answers. It’s making Android more context- and content-aware so that your phone changes to match the things you do on it. It’s emphasizing natural interactions so you can get information without learning a rigid set of commands. And it’s building the hardware ecosystem required to make all that work everywhere, and the software to match.
Now, let’s be very clear: Google’s work is only just beginning. It has to win market share in device categories it has failed for years to capture. It has to build new experiences into new and old devices alike. It has to figure out how to solve Android fragmentation between its own devices and the market-leading devices from companies like Samsung, which might be the hardest part of all. And it has to become more present in users’ lives and extract more data from them, all without screwing up the search-ads business, upsetting regulators, or violating users’ privacy. The ambient computer was never going to arrive quickly, and Google has only made its own job harder over the years.

Osterloh repeated the usual Google pitch for ambient computing, but this time it came with a bit of a twist on the standard line. The long-term vision is still an always-there version of Google that works everywhere with everything, but right now? Right now, it’s all about the ultra-fast processor in your pocket.
When Google set out to build the Pixel 6A (essentially a cost-cutting exercise, turning a $600 phone into a still-credible $449 one), one expensive part survived the cut. “Pixel’s goal is about having an awesome user experience that keeps getting better over time,” Osterloh said. “And with that as the core, what you find is that the thing that’s essential to have across these devices is Tensor.”
The Google-designed Tensor processor was the key component introduced alongside the Pixel 6, largely to improve its on-device AI capabilities for speech recognition and more. And now, it seems, it will be a staple of the line: Osterloh said all the forthcoming Pixel phones, and even the Android-powered tablet the team is working on for release next year, will run on Tensor processors.
The smartphone is the center of the universe for now, but you can already start to see how that might change. The new Pixel Buds Pro, for example, are both a capable set of noise-canceling earbuds and a hands-free interface to the wirelessly connected computer in your pocket.
The new Pixel Watch is a phone accessory, delivering notifications and the like to your wrist and offering another interface to the same power in your pocket. But Google’s also selling an LTE version, so you’ll be able to access Assistant or pay with Google Wallet without needing your phone nearby. And that tablet, whenever it arrives, will pack all the same Pixel credentials into a bigger shell.
The Pixel and Android teams have recently adopted a mantra: Better Together. This year, much of what’s new in Android 13 isn’t whizbang new features but minor tweaks meant to make the ecosystem a little more seamless. With an update to the Chrome OS Phone Hub feature, you’ll be able to use all your messaging apps on a Chromebook just as you would on your phone. Support for the new Matter smart home standard now comes built into Android, making it much easier to set up and control new devices. Google is expanding support for its Cast protocols for sending audio and video to other devices and improving its Fast Pair service to make connecting Bluetooth devices easier. It has been talking about these features since CES in January and has signed up an impressive list of partners.
It sounds like Google finally watched an Apple ad and realized that making hardware and software together really does help. Who knew! But Google’s position is genuinely tricky here. Google’s ad business relies on mind-bendingly colossal scale, which it gets primarily thanks to other companies building Android products. That means Google has to keep all those partners happy and feeling like they’re on a level playing field with the Pixel team. And it simply can’t control its ecosystem the way Apple can. It is forever fretting about backward compatibility and how features will work on devices of every size, price, and power level. It has to drum up support to make significant changes, where Apple can simply brute-force the future.

But Google has become increasingly aggressive in pushing ahead with the Pixel brand. It can afford to, because Pixel is hardly a real sales threat to Samsung and others. But it also has to, because Google only succeeds if the ecosystem buys in, and Pixel is Google’s best chance to model what the whole Android ecosystem should look like. That’s what Osterloh sees as his job and, in large part, his team’s reason for being.
Another way of putting it: the only way Google can get to its ambient computing dreams is to make sure Google is everywhere. Like, everywhere. Google is investing in products for seemingly every square inch of life, from the TV to the thermostat to the car to the wrist to the ears. The ambient-computing future may be one computer to rule them all, but it needs near-infinite user interfaces.
The second step to making ambient computing work is making it easy to use. Google is relentlessly trying to carve away every bit of friction involved in accessing its services, particularly the Assistant. If you own a Nest Hub Max, for instance, you’ll soon be able to talk to it just by looking into its camera, and you’ll be able to set timers or turn off the lights without saying “Hey Google” at all.
That has forced Google to reinvent the search input, relying on voice and images as much as the text box, and to reinvent the output along with it. Google built one hell of a text box, but it’s not enough anymore.
The most obvious outgrowth of that work is multisearch. Using the Google app, you can take a photo of a dress (in Google’s demos, it’s always a dress) and then type “green” to search for that same dress, but in green. That’s the kind of thing you could never do in a text box.
And at I/O, Google also showed off a tool for running multisearch on an image with multiple things in it: take a picture of the peanut butter aisle, type “nut-free,” and Google will tell you which jar to buy. Search used to be one thing, Lens was another, and voice was a third, but when you combine them, new things become possible.
But the real challenge for Google is that it’s much more than a question-and-answer engine now. Shopping, for instance, has become essential to Google, but there’s no single right answer to “best t-shirt.” Plus, Google is using search more and more to keep you inside its own ecosystem; the search box is increasingly just a launcher for various Google things.
So rather than just trying to understand the internet, Google has to work to understand its users better than ever. Does it help that Google has a gigantic store of first-party data, accumulated over the last few decades on billions of people worldwide? Of course it does! But even that isn’t enough to get Google where it’s going.

About the ads: don’t forget that even in a world beyond the search box, Google is still an advertising business. Just as Amazon’s ambient computing vision always comes back to selling you things, Google’s will always come back to showing you ads. And the catch in Google’s whole vision is that it means a company that already knows a lot about you and seems to follow you everywhere will learn even more about you and follow you to even more places.
Google seems to be going out of its way to make users comfortable with its presence: it’s moving more AI onto devices themselves instead of processing and storing everything in the cloud, it’s moving toward new systems of data collection that don’t so clearly identify an individual, and it’s giving users more ways to control their privacy and security settings. But the ambient-computing life requires a privacy tradeoff, and Google is desperate to make the upside good enough to be worth it. That’s a high bar, and it’s getting higher all the time.
This whole project is full of high bars for Google. If it wants to build an ambient computer that can be all things to all people, it’s going to need to create a sweeping ecosystem of hugely popular devices that all run compatible software and services while meshing seamlessly with a robust global ecosystem of other gadgets, including those made by its direct competitors. And that’s just to build the interface. Google still has to turn the Assistant into something genuinely pleasant to interact with all day and make its services flex to every need and workflow of users around the globe. None of that will be easy.
But if you squint a little, you can see what it would look like. And that’s what has been frustrating about Google’s strategy in recent years: it feels like all the puzzle pieces of the future are sitting right there in Mountain View, spread around campus with no one paying attention. Now, though, Google finally seems to be starting to put them together.