By MaryGrace Lerin

Meta Aims to Improve Virtual Experiences, Works on New AR and VR Spatial Audio Tools

Next-generation digital experiences, such as AR and VR tools, focus primarily on visual components. However, audio also plays a crucial part in enabling fully immersive engagement, with the sounds you hear helping to bring virtual settings closer to reality.


This is where Meta's latest research comes in: the company is developing new spatial audio tools that respond to the settings depicted in visuals, enabling more realistic AR and VR experiences.


As seen in the above video overview, Meta's work in this area focuses on the acoustic characteristics people expect to hear in particular spaces, and on how those qualities can be translated into digital environments.


According to Meta:


“Whether it’s mingling at a party in the metaverse or watching a home movie in your living room through augmented reality (AR) glasses, acoustics play a role in how these moments will be experienced […] We envision a future where people can put on AR glasses and relive a holographic memory that looks and sounds the exact way they experienced it from their vantage point, or feel immersed by not just the graphics but also the sounds as they play games in a virtual world.”


That could make the coming metaverse more immersive, and contribute to it in ways you might not anticipate.

To some extent, Meta has already explored this with the first-generation version of its Ray-Ban Stories glasses, which incorporate open-air speakers that transmit sound directly to your ears.


It is a rather stylish feature: the placement of the speakers allows for fully immersive audio without the use of headphones. And although it seems like it shouldn't work, it does, and it may already be a key selling point for the device.

To advance its immersive audio efforts, Meta is now making three new models for audio-visual understanding available to developers.


“These models, which focus on human speech and sounds in video, are designed to push us toward a more immersive reality at a faster rate.”


As demonstrated in the video clip, Meta has already created its own self-supervised visual-acoustic matching model. By involving other developers and audio professionals in this research, however, Meta may be able to build even more accurate audio translation tools on top of that earlier work.


According to Mark Zuckerberg, CEO of Meta:


“Getting spatial audio right will be one of the things that delivers that ‘wow’ factor in what we’re building for the metaverse. Excited to see how this develops.”


That "wow" factor, much like the audio components in Ray-Ban Stories, may well be what encourages more people to purchase VR headsets, which could in turn help usher in the next stage of digital connectivity that Meta is working toward.


As a result, this could prove to be a significant advancement, and it will be interesting to see how Meta develops its spatial audio capabilities to improve its VR and AR systems.
