Collaboration Opportunity

Real-time Sonic Reality 3D spatial audio

Posted by Garry Haywood

Creative Industries
1 reply

Kinicho are the developers of Sympan, the next-generation Sonic Reality Engine. We conceived Sonic Reality to advance the idea that natural sound can be replicated in headphones and earbuds.

We are looking to collaborate on projects where real-time natural sound replication is part of the use case. We have a new processing technique that enhances naturalism in 3D spatial audio, improving localisation, audio quality and performance. The technique currently works in desktop and high-power computing environments, and on mobile devices where it is limited by processing capacity, so we would like to explore it in cloud/edge computing scenarios aligned with 5G.

Because we can deliver ultra-high-definition spatial audio with minimal latency, the technique is well suited to 5G use cases. We envisage scenarios where cloud processing is parameterised by orientation and positional data from mobile devices, and hi-fi, audiophile-quality binaural spatial audio is returned to the listener.
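The round trip described above can be sketched in a few lines. This is a minimal illustration, not Kinicho's actual pipeline: the function name and the constant-power panning law stand in for whatever the real cloud-side spatialiser does, and the orientation is reduced to a single yaw angle.

```python
import math

def cloud_binaural_render(yaw_deg, mono_frame):
    """Stand-in for the cloud-side spatialiser: pan a mono frame to a
    stereo (binaural-style) pair using a simple constant-power law
    driven by the listener's head yaw. Illustrative only."""
    yaw = math.radians(yaw_deg % 360.0)
    left_gain = abs(math.cos(yaw / 2.0))
    right_gain = abs(math.sin(yaw / 2.0))
    left = [left_gain * s for s in mono_frame]
    right = [right_gain * s for s in mono_frame]
    return left, right

# Client loop: orientation data sampled on the device parameterises
# each rendered frame, and a stereo pair comes back to the listener.
frame = [0.0, 0.5, 1.0, 0.5]              # toy mono audio frame
for yaw_deg in (0.0, 90.0, 180.0):
    left, right = cloud_binaural_render(yaw_deg, frame)
```

In the real scenario the orientation sample and the returned frames would travel over a 5G link, which is where low network latency matters.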

This obviously covers use cases across immersive VR/XR/AR applications, but it also extends to projects using conventional media forms, such as cinematic surround sound for movies, streamed live performances, enhanced gaming and so on.

It is also applicable in use cases such as IoT sensors for health and safety, security, and other areas where the sonic reality of an environment is relayed to a remote listener.

If you have a potential use case but you're not sure of its viability, we're also amenable to having an ideation conversation with you. 

More information is available in my profile.





  • My thought is, could you use n x smartphones to deliver a 7.1 audio experience?


    • Yes, potentially.

      However, there are three issues.

      1. If you use an on-device convolution engine to produce the binaural signal, it will have more latency than our proposed cloud-based spatialisation solution.
      2. If you want a richer experience than 7.1 - something like Atmos, or maybe even a live-rendered sound field - you need more compute power.
      3. If you want multiple users to have a synchronised experience, you can't do it on devices because of clock drift (and because of points 1 and 2).

      Hope that helps.
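      To put a rough number on the drift in point 3: audio clocks on consumer phones are only nominally identical. The figures below are assumptions for illustration (a 48 kHz sample clock and a 50 ppm mismatch between two devices, both plausible but not measured values), not Kinicho data.

      ```python
      NOMINAL_RATE = 48_000   # samples per second (assumed)
      DRIFT_PPM = 50          # assumed clock mismatch between two phones

      def drift_after(seconds, ppm=DRIFT_PPM, rate=NOMINAL_RATE):
          """Offset in milliseconds between two free-running devices
          after `seconds` of playback, given a ppm clock mismatch."""
          sample_offset = seconds * rate * ppm / 1_000_000
          return 1000.0 * sample_offset / rate

      # After ten minutes the two devices are 30 ms apart,
      # which is clearly audible as echo/smearing between speakers.
      print(round(drift_after(600), 1))  # 30.0
      ```

      Rendering centrally and streaming timestamped frames sidesteps this, since there is a single master clock.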

