Adobe previewed a concept it is working on that would make it easier for creators working on VR videos to place and align sound.
Producing high-quality 360-degree video content has traditionally been a difficult affair at all stages of production, from capture to delivery. However, a constant stream of new cameras, editing tools and streaming techniques is on the way to make the process easier.
Ricoh has revealed a brand new version of its Theta 360-degree camera: the Theta V, which upgrades video capture to 4K resolution and adds live streaming support and immersive “surround” sound audio recording. It takes over as the flagship Ricoh 360 camera, adding some much-appreciated modern touches to one of the oldest and best-loved lines of consumer 360 cameras around.
The Ricoh Theta V also has a new high-speed wireless radio, boosting data transfer to 2.5 times the speed of the existing model. It also offers improved exposure and white balance accuracy and boosted dynamic range, which Ricoh says should result in far better image quality in all lighting situations. The improved imaging tech was borrowed from the Pentax line of DSLRs, the company says.
Oculus Connect 4, the company’s fourth annual developer conference, is set for October 11th and 12th in San Jose, California. There, Oculus will share with developers some of its latest research and developments, including what’s coming to the company’s VR Audio SDK.
Spatial audio is hugely important for creating convincing virtual reality worlds. Traditional stereo audio often sounds like it emanates from within your head.
From Beyoncé’s Lemonade to VR games, virtual reality is becoming a fixture of the entertainment industry. It has given rise to the era of 360-degree videos, on YouTube and elsewhere. These videos immerse you into the recording and make you feel a part of it. Beyond the video though, the audio side of the equation is just as important, and 360-degree sound recording is a growing trend for this reason.
As VR content continues to grow, one concern for VR and 360 content creators is recording 360-degree audio that can fully immerse viewers in an environment. With developments in technology, audio brands are now focusing on products that can capture immersive sound as the human ear would perceive it in a natural environment.
Your brain responds to sound faster than any other sense, so sound and music cues direct attention, trigger emotions, and guide participants in very powerful ways – organising all your other senses together. Today’s consumers are accustomed to capturing incredibly realistic videos, yet as mainstream technology makes immersive visual experiences ever more accessible, the power and emotion of this footage is too often let down by the quality of sound that these devices can capture. Thus it is safe to say that 3D audio adds to the feeling of presence that we strive so hard to achieve with just visuals in VR.
The Spatial Connect workflow allows 3D audio to be controlled from within a VR environment.
The Spatial Connect workflow allows users to export data as object-based audio directly to the Unity engine for both VR and 360-degree video production. Dear Reality is accepting applications for beta testers; further information can be found on the Dear Reality website.
Spatial Connect works with almost any Digital Audio Workstation (DAW) connected to an HTC Vive or Oculus Rift, allowing any sound designer, audio engineer or musician to create 3D audio content for VR.
A team at NPR is the winner of a grant to develop virtual-reality stories that will transport listeners to audio-rich soundscapes.
The NPR project is among 11 winners of the Journalism 360 Challenge awards announced Tuesday. Presented by the Knight Foundation, Google News Lab and the Online News Association, the grants of $15,000–$30,000 support the use of immersive storytelling in news.
Other winners include efforts to make immersive storytelling more accessible to community and ethnic media and to help journalists and others create location-based data visualizations in a virtual-reality format.
The world around us is vibrant and exciting simply because it’s dynamic and three-dimensional. Sometimes, capturing 2D images and videos with your smartphone or DSLR camera just doesn’t do the scene any justice. But that’s all about to change, as the virtual reality camera SONICAM enables users to capture both 2D and 3D videos and images in full 360 degrees. It’s the world’s first affordable, high-quality VR camera.
Capture vivid moments
SONICAM is a professional, spherical VR camera with 9 fish-eye cameras, 64 microphones, 4K HD resolution, and a 360-degree field of view. The combination of these features in a single device means users can film any scene vividly, without blind spots or image distortion.
Of course, you’ve heard about 360 cameras by now – capable of capturing video or stills of an entire scene. The category has been emerging over the past couple of years, and the market really feels ready to explode. But if you truly want a 360/VR experience with your goggles, the visuals are only part of an important equation: you need 360 audio in order to complete the experience.
Here at the NABShow, The Digital Circuit spent some time Monday poking around the world of Ambisonic audio.
Now you might think, given that 360 video is a recent phenomenon, that this whole idea of ambisonic (kind of a mashup of ambient and sonic) audio must also be a relatively new deal. That would be incorrect.
The reality is that this concept (and even the word) has been around since the analogue days. Wikipedia describes it as a “full sphere surround-sound technique: In addition to the horizontal plane, it covers sound sources above and below the listener.”
Though it was cool at the time, it never really took off. Some audiophiles loved it and there were niche recordings, but it just wasn’t a mainstream hit.
That was then. With the arrival of digital, home theatre and 5.1-, 7.1- and 9.1-channel audio, interest in what has popularly become known as “surround sound” enjoyed an obvious resurgence. But true ambisonic audio differs from these discrete-channel formats, because the sound source can move just as you move in a virtual space.
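As a concrete illustration of the difference, first-order ambisonics stores the sound field in four channels (W, X, Y, Z) that describe direction, rather than as fixed speaker feeds. The minimal Python sketch below encodes a mono sample arriving from a given direction; the traditional Furse-Malham weighting (W attenuated by 1/√2) is an assumption here, and other conventions such as AmbiX normalise differently:

```python
import math

def encode_first_order(sample, azimuth_deg, elevation_deg):
    """Encode a mono sample into first-order B-format (W, X, Y, Z).

    Assumes the traditional Furse-Malham convention: the
    omnidirectional W channel is attenuated by 1/sqrt(2).
    Azimuth is measured counter-clockwise from straight ahead;
    elevation is positive upward.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = sample / math.sqrt(2)                 # omnidirectional component
    x = sample * math.cos(az) * math.cos(el)  # front-back axis
    y = sample * math.sin(az) * math.cos(el)  # left-right axis
    z = sample * math.sin(el)                 # up-down axis
    return w, x, y, z

# A sound directly in front of the listener lands entirely in W and X:
w, x, y, z = encode_first_order(1.0, azimuth_deg=0, elevation_deg=0)
```

Because direction lives in the channel weights rather than in speaker assignments, the same recording can later be decoded for headphones, a speaker ring, or a head-tracked VR renderer.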
Hallelujah is a new experience by VR film studio Within that’s captured using Lytro’s latest Immerge light-field camera which captures volumetric footage that makes for a much more immersive experience than traditional 360 video. Hallelujah is a performance of Leonard Cohen’s 1984 song of the same name, and mixes the latest in VR film capture technology with superb spatial audio to form a stunning experience.
Lytro’s Immerge camera is unlike any 360 camera you’ve seen before. Instead of shooting individual ‘flat’ frames, the Immerge has a huge array of cameras that gather many views of the same scene, data which is crunched by special software to recreate the actual shape of the environment around the camera. The big benefit is that playback puts the viewer in a virtual capture of the space, allowing a limited amount of movement within the scene, whereas traditional 360 video only captures a static viewpoint that is essentially stuck to your head. The Immerge also provides true stereo and outputs a much higher playback quality. The result is a much richer and more immersive VR film experience than traditional 360 video shoots can offer.
Dr. Henney Oh, co-founder and CEO of spatial audio specialist G’Audio Lab talks us through the processes of capturing, mixing and rendering sound for virtual reality and 360-degree video applications.
The premise of VR and 360-degree video is to simulate an alternate reality. For this to be truly immersive, it needs cogent sound to match the visuals. Humans rely heavily on sound cues to inform us of our environment, which is why immersive graphics need equally immersive 3D audio that replicates the natural listening experience. The challenge becomes how to draw the viewer’s attention to a specific point when there is continuous imagery in every direction, and sound cues can help with that.
The key to creating realistic audio for this is to synchronise sounds with the user’s head orientation and view in real time. This replicates the way human hearing actually works, which makes the listening experience more realistic. Producing truly immersive sound requires several steps: first you must capture the audio signals, then mix the signals, and finally render the sound for the listener.
To replicate the natural listening experience, the use of two audio signals – Ambisonics and object – is essential.
Ambisonics is a technique that employs a spherical microphone to capture a sound field in all directions, including above and below the listener. This requires placing a soundfield microphone (also known as an Ambisonics or 360 microphone) near the position where you intend the listener to be. Keep in mind that these microphones record a full sphere of sound at the position of the microphone, so be strategic about where you place them. It’s also important that the mic is not visible in the scene, so we encourage placing the microphone directly below the 360 camera.
In addition to capturing audio from a soundfield microphone, content creators also need to acquire sounds from each individual object as a mono source. This enables you to attach higher fidelity sounds to objects as they move through the scene for added control and flexibility. With this object-based audio technique, you can control the sound attributed to each object in the scene and adjust those sounds depending on the user’s view.
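A bare-bones illustration of the object-based technique: each object carries a mono signal and a position, and the renderer computes gain and direction relative to the listener on every frame. The function below is a hypothetical sketch with simple inverse-distance rolloff in a 2D plane; production engines use full 3D positions, HRTF filtering and more elaborate distance models:

```python
import math

def render_object(sample, obj_pos, listener_pos, ref_dist=1.0):
    """Render one mono object source relative to the listener.

    Returns the distance-attenuated sample and the azimuth (degrees,
    counter-clockwise from the listener's forward +x axis) at which a
    spatialiser would place it. Inverse-distance rolloff is clamped at
    ref_dist so sources very close to the listener don't blow up.
    """
    dx = obj_pos[0] - listener_pos[0]
    dy = obj_pos[1] - listener_pos[1]
    dist = math.hypot(dx, dy)
    gain = ref_dist / max(dist, ref_dist)   # inverse-distance attenuation
    azimuth_deg = math.degrees(math.atan2(dy, dx))
    return sample * gain, azimuth_deg

# An object two metres dead ahead arrives at half gain, azimuth 0:
out, az = render_object(1.0, obj_pos=(2.0, 0.0), listener_pos=(0.0, 0.0))
```

Because each object is rendered independently, its level and position can be adjusted per object in the mix and re-evaluated whenever the user's view changes, which is exactly the control and flexibility the object-based approach promises.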
Capturing mono sound can also be tricky because the traditional use of a boom microphone to capture mono does not work in VR. In synchronised 360 sound recording, there is no space to place the boom microphone, so it is helpful to place a lavalier microphone directly on the individual (hidden underneath apparel).