Apple’s mixed reality headset has been shrouded in rumors for months, and a new report from Bloomberg’s Mark Gurman suggests that it could come with Apple’s flagship M2 processor.
The most recent version of the device, which is reportedly capable of delivering both augmented reality (AR) and virtual reality (VR) experiences, includes a base M2 chip and 16GB of RAM.
That differs from supply chain analyst Ming-Chi Kuo’s earlier prediction that Apple’s headset would have one processor with the capabilities of an M1 chip, along with a lower-end processor dedicated to handling data from the device’s sensors.
While Gurman doesn’t mention the purported secondary chip in this report, a multi-chip setup has also been rumored in an earlier report from The Information.
If Apple initially planned to use the M1 as the headset’s primary processor, it only makes sense for the company to swap it out for the most recent iteration of its in-house chip. Apple announced the new M2 chip at its Worldwide Developers Conference (WWDC) earlier this month and said it offers an 18 percent faster CPU and a 35 percent faster GPU than the older M1 chip.
Rumors about the device’s standalone design have also started to trickle out. Gurman’s prediction that it will contain 16GB of RAM suggests potentially more robust performance than the all-in-one Meta Quest 2. Meta’s VR headset ships with 6GB of RAM and the Snapdragon XR2 platform, so it will be interesting to see how Apple’s headset compares when it’s eventually released.
There have been signs that Apple’s headset is getting closer to its rumored January 2023 launch window. Apple’s board of directors reportedly got the chance to try out the headset in May. RealityOS, the operating system the headset will reportedly run, has also surfaced in Apple’s code and in a trademark application likely filed by the company.
Last week, Apple CEO Tim Cook all but confirmed the headset’s existence, telling an interviewer at China Daily to “stay tuned, and you will see what we have to offer” in the mixed reality space.
Mixed reality is the next wave in computing, following mainframes, PCs, and smartphones. It’s going mainstream for consumers and businesses alike, liberating us from screen-bound experiences by offering instinctual interactions with data in our living spaces and with our friends. Hundreds of millions of online explorers worldwide have already experienced mixed reality through their handheld devices.
Mobile AR offers the most mainstream mixed reality solutions today on social media. People may not even realize that the AR filters they use on Instagram are mixed reality experiences. Windows Mixed Reality takes these user experiences to the next level with stunning holographic representations of people, high-fidelity 3D models, and the real world around them.
Mixed reality combines physical and digital worlds, unlocking natural and intuitive 3D human, computer, and environmental interactions. This new reality is based on advancements in computer vision, graphical processing, display technologies, input systems, and cloud computing.
The term “mixed reality” was introduced in a 1994 paper by Paul Milgram and Fumio Kishino, “A Taxonomy of Mixed Reality Visual Displays.” Their paper explored the concept of a virtuality continuum and the taxonomy of visual displays. Since then, the application of mixed reality has expanded beyond displays to include:
- Environmental understanding: spatial mapping and anchors (see the sketch after this list).
- Human experience: hand-tracking, eye-tracking, and speech input.
- Spatial sound.
- Locations and positioning in both physical and virtual spaces.
- Collaboration on 3D assets in mixed reality spaces.
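To make “spatial mapping and anchors” concrete, here is a minimal C++/WinRT sketch using the public Windows.Perception.Spatial APIs. It’s a sketch under simplifying assumptions, not a complete app: it assumes a Windows Mixed Reality device and an app manifest with the spatial perception capability, and it omits rendering and error handling.

```cpp
#include <winrt/Windows.Perception.Spatial.h>

using namespace winrt;
using namespace Windows::Perception::Spatial;

int main()
{
    init_apartment();

    // The default locator tracks the headset's own position in the room.
    SpatialLocator locator = SpatialLocator::GetDefault();

    // Fix a stationary frame of reference at the device's current location;
    // its coordinate system stays put as the user moves around.
    auto frame = locator.CreateStationaryFrameOfReferenceAtCurrentLocation();
    SpatialCoordinateSystem coords = frame.CoordinateSystem();

    // Create an anchor at the origin of that coordinate system. Content
    // placed relative to the anchor stays pinned to this real-world spot
    // even as the system refines its map of the environment.
    SpatialAnchor anchor = SpatialAnchor::TryCreateRelativeTo(coords);
    if (anchor)
    {
        // Holograms rendered in the anchor's coordinate system remain
        // world-locked to the anchored point.
        SpatialCoordinateSystem anchored = anchor.CoordinateSystem();
        (void)anchored;
    }
}
```

The design point worth noticing is that an anchor’s coordinate system, not a raw world position, is the stable place to attach holograms: as the device refines its understanding of the room, the system adjusts the anchor rather than letting content drift.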
Environmental Input and Perception
In recent decades, the relationship between human input and computer input has continued to evolve, and a widely studied discipline has grown up around it: human-computer interaction, or “HCI.” Human input can include keyboards, mice, touch, ink, voice, and Kinect skeletal tracking.
Advancements in sensors and processing power are giving computers new perceptions of environments based on advanced input methods; this is why the Windows API names that reveal environmental information are called the perception APIs. Environmental inputs (illustrated in the sketch after this list) can capture:
- a person’s body position in the physical world (head tracking)
- objects, surfaces, and boundaries (spatial mapping and scene understanding)
- ambient lighting and sound
- object recognition
- physical locations
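As a hedged illustration of the first of these inputs, the following C++/WinRT sketch reads the device’s head position through the perception APIs (the Windows.Perception and Windows.Perception.Spatial namespaces). It assumes a Windows Mixed Reality headset and skips error handling; a real app would query the pose every frame rather than once.

```cpp
#include <winrt/Windows.Foundation.Numerics.h>
#include <winrt/Windows.Perception.h>
#include <winrt/Windows.Perception.Spatial.h>

using namespace winrt;
using namespace Windows::Perception;
using namespace Windows::Perception::Spatial;

int main()
{
    init_apartment();

    // Head tracking: the default locator follows the headset itself.
    SpatialLocator locator = SpatialLocator::GetDefault();

    // A stationary frame of reference pins a coordinate system in the room.
    auto frame = locator.CreateStationaryFrameOfReferenceAtCurrentLocation();
    SpatialCoordinateSystem coords = frame.CoordinateSystem();

    // Ask where the device is right now, relative to that coordinate system.
    auto timestamp =
        PerceptionTimestampHelper::FromHistoricalTargetTime(clock::now());
    SpatialLocation location = locator.TryLocateAtTimestamp(timestamp, coords);

    if (location)
    {
        // Position is a float3 in meters; Orientation() would give the
        // head's rotation as a quaternion.
        Windows::Foundation::Numerics::float3 head = location.Position();
        (void)head; // e.g. head.x, head.y, head.z
    }
}
```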
A combination of three essential elements sets the stage for creating authentic mixed reality experiences:
- Computer processing powered by the cloud
- Advanced input methods
- Environmental perceptions
Our movements are mapped in a digital reality as we move through the physical world. Physical boundaries influence mixed reality experiences such as games or task-based guidance in manufacturing facilities. With environmental input and perceptions, experiences start to blend between physical and digital realities.
Mixed reality blends both physical and digital worlds. These two realities mark the polar ends of a spectrum known as the virtuality continuum, which we refer to as the mixed reality spectrum. On one end of the spectrum is the physical reality in which we humans exist; on the other end is the corresponding digital reality.