Highest Rated Comments


Lewinga · 1 karma

I led my school's VR club in 360 Film and Animation. This is a topic of particular interest to me, so tons of questions incoming:

  1. What are some papers you'd recommend reading to understand how spatialization of audio works?
  • If 360 cameras record images in all directions, what is the equivalent in sound?
  • Assuming a rig of six mono microphones in a cube format, how do you reconstruct the sound from real life to virtual? Are you interpolating as the listener's head position changes? What's going on there?
  • Is a stereo recording sufficient to extrapolate 360 sound? Or is it easiest to transform a single mono sound?
  • Using cameras as an analogy: are more microphones better, or are two enough to differentiate objects in space?
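For reference (this is general background, not the AMA host's answer): the usual "360-camera equivalent in sound" is ambisonics, which stores a directional sound field in a fixed set of channels rather than per-microphone feeds. A minimal sketch of encoding one mono sample into traditional first-order B-format (FuMa convention); the function name and arguments are my own:

```python
import math

def encode_b_format(sample, azimuth, elevation):
    """Encode one mono sample into first-order ambisonic B-format (W, X, Y, Z).

    azimuth/elevation give the source direction in radians
    (azimuth 0 = front, positive = counterclockwise/left).
    """
    w = sample * (1.0 / math.sqrt(2))                      # omnidirectional component
    x = sample * math.cos(azimuth) * math.cos(elevation)   # front-back figure-eight
    y = sample * math.sin(azimuth) * math.cos(elevation)   # left-right figure-eight
    z = sample * math.sin(elevation)                       # up-down figure-eight
    return w, x, y, z
```

Because the field is stored as these four channels, rotating the listener's head is just a rotation of the (X, Y, Z) components, which is why head tracking is cheap once the encoding exists.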

  2. How does your API know which waves to transform?
  • Your website cites (1) shifting time delay and (2) adjusting frequencies as part of the transformation process. Assuming a static position, (1) seems straightforward: add a "shift" amount to the two audio channels to simulate the delay between the ears. (2) seems to imply that sound is transformed to simulate passing through the anatomical structure of the ear. Is this correct?
  • For (1), how do you add the ability to move your head in XYZ space? Suppose two instruments with different timbres: relative to a listener facing them, a cello on the left plays A440 and a trumpet on the right plays A440. How does the transformation function know how much of the 440 Hz content to "tweak" for the cello vs. the trumpet as the listener moves their head? How does this principle extend across the entire audible spectrum?
  • For (2), if my understanding is correct, what process do you use to "reconstruct" the absorbent/reflective profile of the ear? Did you have to measure the absorbent/reflective profile per wavelength to create a model?
  • Also for (2), I imagine the angle of incidence is important for hearing the source clearly (e.g., same XYZ position, different pitch/yaw). How do you model hearing a sound filtered through the ear (as if it were behind you, parallel to the source) vs. hearing it directly from one ear only (and less from the other, perpendicular to the source) as the listener's position changes relative to the source? I think I'm generally confused about how spatial audio is able to reconstruct 360 audio without an object-oriented representation of the world.
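For background on part (1) of this question (general technique, not the AMA host's actual implementation): the interaural time delay can be approximated with Woodworth's spherical-head formula, and a crude panner then delays and attenuates the far ear. The head radius and the -3 dB far-ear shadow below are assumed illustrative values:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s
HEAD_RADIUS = 0.0875    # m; an assumed average head radius

def itd_seconds(azimuth_rad):
    """Woodworth's spherical-head approximation of interaural time delay."""
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (azimuth_rad + math.sin(azimuth_rad))

def pan_mono(samples, azimuth_rad, sample_rate=48000):
    """Crude ITD+ILD panner: delay and attenuate the far-ear channel.

    Positive azimuth = source to the listener's right. Returns (left, right).
    """
    delay = round(itd_seconds(abs(azimuth_rad)) * sample_rate)
    near = list(samples) + [0.0] * delay            # pad so channels match in length
    far = [0.0] * delay + [0.7 * s for s in samples]  # assumed far-ear level drop
    if azimuth_rad >= 0:   # source on the right: right ear is the near ear
        return far, near
    return near, far
```

Note the delay here is frequency-independent; a real HRTF also shapes the spectrum per direction, which is what part (2) of the question is about.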

  3. What is the simplest rig/way to capture 360 sound?

  4. In your eyes, what are the biggest limitations of 360 sound recording right now, and what would it take for it to become commonly adopted?
  • What are the biggest technical challenges you've had to overcome, and what are you still facing?

  5. Have you gone to any Oculus Connect conventions (or any other VR conventions)? Can I connect my university's VR club to you?

Lewinga · 1 karma

Wow, thanks for the answers! I would love to learn more. I'll ask them to reach out to you via a message!