Adobe’s Project SonicScape Visualizes Audio For Easier 360 Editing, by Ian Hamilton

Adobe previewed a concept it is working on that would make it easier for creators working on VR videos to place and align sound.

Producing high-quality 360-degree video content has traditionally been a difficult affair at all stages of production, from capture to delivery. However, a constant stream of new cameras, editing tools and streaming techniques is on the way to make the process easier.

Read More:

Ricoh’s new Theta V 360 camera offers 4K, spatial audio and more, by Darrell Etherington

Ricoh has revealed a brand-new version of its Theta 360-degree camera: the Theta V, which upgrades video capture to 4K resolution and adds live streaming support and immersive “surround” sound recording. It takes over as the flagship Ricoh 360 camera, adding some much-appreciated modern touches to one of the oldest and best-loved lines of consumer 360 cameras around.

The Ricoh Theta V also has a new high-speed wireless radio that boosts data transfer to up to 2.5 times the speed of the existing version. It also offers improved exposure and white-balance accuracy and greater dynamic range, which Ricoh says should result in far better image quality in all lighting situations. The improved imaging tech was borrowed from the Pentax line of DSLRs, the company says.

Read More:

Oculus to Talk “Breakthroughs in Spatial Audio Technologies” at Connect Conference, by Ben Lang

Oculus Connect 4, the company’s fourth annual developer conference, is set for October 11th and 12th in San Jose, California. There, Oculus will share with developers some of its latest research and developments, including what’s coming to the company’s VR Audio SDK.

Spatial audio is hugely important for creating convincing virtual reality worlds. Traditional stereo audio often sounds like it emanates from within your head.

Read More:

For Audio Companies, VR Is the New Frontier, by Vipin Pungalia

From Beyoncé’s Lemonade to VR games, virtual reality is becoming a fixture of the entertainment industry. It has given rise to the era of 360-degree videos, on YouTube and elsewhere. These videos immerse you in the recording and make you feel part of it. Beyond the video, though, the audio side of the equation is just as important, and 360-degree sound recording is a growing trend for this reason.

As VR content continues to grow, one concern for VR and 360 content creators is recording 360-degree audio that can fully immerse viewers in an environment. With developments in technology, audio brands are therefore focusing on products that can capture immersive sound as the human ear would perceive it in a natural environment.

Your brain responds to sound faster than to any other sense, so sound and music cues direct attention, trigger emotions and guide participants in very powerful ways, organising all your other senses together. Today’s consumers are accustomed to capturing incredibly realistic video, yet as mainstream technology makes immersive visual experiences ever more accessible, the power and emotion of that footage is too often let down by the quality of sound these devices can capture. It is therefore safe to say that 3D audio adds to the feeling of presence that we strive so hard to achieve with visuals alone in VR.

Read More:

Dear Reality Release New Workflow For Spatial Audio, by Rebecca Hills-Duty

The Spatial Connect workflow allows 3D audio to be controlled from within a VR environment.

The Spatial Connect workflow allows users to export data as object-based audio directly to the Unity engine for both VR and 360-degree video production. Dear Reality are accepting applications from users to join the beta test. Further information can be found on the Dear Reality website.

Spatial Connect works with almost any Digital Audio Workstation (DAW) connected to an HTC Vive or Oculus Rift, allowing any sound designer, audio engineer or musician to create 3D audio content for VR.

Read More:

NPR wins award to experiment with virtual-reality audio, by April Simpson

A team at NPR is the winner of a grant to develop virtual-reality stories that will transport listeners to audio-rich soundscapes.

The NPR project is among 11 winners of the Journalism 360 Challenge awards announced Tuesday. Presented by the Knight Foundation, Google News Lab and the Online News Association, the grants of $15,000–$30,000 support the use of immersive storytelling in news.

Other winners include efforts to make immersive storytelling more accessible to community and ethnic media and to help journalists and others create location-based data visualizations in a virtual-reality format.

Read More:

Meet SONICAM: The World’s First Affordable 3D Virtual Reality Camera, by Kathleen Villaluz

The world around us is vibrant and exciting simply because it’s dynamic and three-dimensional. Sometimes, capturing 2D images and videos with your smartphone or DSLR camera just doesn’t do the scene justice. But that’s all about to change, as a virtual reality camera called SONICAM enables users to capture both 2D and 3D videos and images in full 360 degrees. It’s the world’s first affordable, high-quality VR camera.

Capture vivid moments

SONICAM is a professional spherical VR camera with nine fish-eye cameras, 64 microphones, 4K HD resolution and a 360-degree field of view. The combination of these features in a single device means that users can film any scene vividly, without blind spots or image distortion.

Read More:

360 Audio is The New Black: The Sennheiser Ambeo VR Mic, by Scott Simmie

Ambisonic microphone from Sennheiser

Of course, you’ve heard about 360 cameras by now – capable of capturing video or stills of the entire scene. The technology has been emerging over the past couple of years, and it really feels as if the market is ready to explode. But if you truly want a 360/VR experience with your goggles, the visuals are only part of an important equation: you need 360 audio to complete the experience.

Here at the NAB Show, The Digital Circuit spent some time Monday poking around the world of Ambisonic audio.

Now you might think, given that 360 video is a recent phenomenon, that this whole idea of ambisonic (kind of a mashup of ambient and sonic) audio must also be a relatively new deal. That would be incorrect.

The reality is that this concept (and even the word) has been around since the analogue days. Wikipedia describes it as a “full sphere surround-sound technique: In addition to the horizontal plane, it covers sound sources above and below the listener.”

“Ambisonics was developed in the UK in the 1970s under the auspices of the British National Research Development Corporation,” continues the entry.

Though it was cool at the time, it never really took off. Some audiophiles loved it and there were niche recordings, but it just wasn’t a mainstream hit.

That was then. Enter digital, home theatre and 5.1, 7.1 and 9.1-channel audio, and interest in what has popularly become known as “surround sound” enjoyed an obvious resurgence. But true ambisonic audio is different from these discrete channels, because the sound sources can move relative to you just as you move in a virtual space.

Read More:

Shot on Lytro’s Light-field Camera, ‘Hallelujah’ Is a Stunning Mix of Volumetric Film and Audio, by Ben Lang

Photo courtesy Lytro

Hallelujah is a new experience from VR film studio Within, captured with Lytro’s latest Immerge light-field camera, whose volumetric footage makes for a much more immersive experience than traditional 360 video. Hallelujah is a performance of Leonard Cohen’s 1984 song of the same name, and it mixes the latest in VR film capture technology with superb spatial audio to form a stunning experience.

Lytro’s Immerge camera is unlike any 360 camera you’ve seen before. Instead of shooting individual ‘flat’ frames, the Immerge has a huge array of cameras that gather many views of the same scene, and that data is crunched by special software to recreate the actual shape of the environment around the camera. The big benefit is that playback puts the viewer in a virtual capture of the space, allowing a limited amount of movement within the scene, whereas traditional 360 video only captures a static viewpoint that is essentially stuck to your head. The Immerge also provides true stereo and outputs a much higher playback quality. The result is a much richer and more immersive VR film experience than what you’ve seen from traditional 360 video shoots.

Read More:

How To: Producing immersive audio for VR, by Dr. Henney Oh

Dr. Henney Oh, co-founder and CEO of spatial audio specialist G’Audio Lab, talks us through the processes of capturing, mixing and rendering sound for virtual reality and 360-degree video applications.

The premise of VR and 360-degree video is to simulate an alternate reality. For this to be truly immersive, it needs convincing sound to match the visuals. We rely heavily on sound cues to make sense of our environment, which is why immersive graphics need equally immersive 3D audio that replicates the natural listening experience. The challenge becomes how to draw the viewer’s attention to a specific point when there is continuous imagery in every direction, and sound cues can help with that.

The key to creating realistic audio for this is to synchronise sounds with the user’s head orientation and view in real time. This replicates the way human hearing actually works, which makes the listening experience more realistic. Producing truly immersive sound requires several steps: first you capture the audio signals, then you mix them, and finally you render the sound for the listener.
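
As a rough illustration of that head-orientation step, here is a minimal sketch (not part of G’Audio Lab’s toolchain) that counter-rotates a first-order Ambisonics (B-format) frame by the listener’s yaw, so that sources stay fixed in the virtual world as the head turns. The W/X/Y/Z channel layout, the NumPy framing and the function name are assumptions for the example; a full renderer would also handle pitch and roll and convolve the result with HRTFs for headphone playback.

```python
import numpy as np

def rotate_foa_yaw(bformat, head_yaw_deg):
    """Counter-rotate a first-order B-format frame (W, X, Y, Z) by the
    listener's head yaw so sources keep their place in the virtual world.

    bformat: array of shape (4, n_samples) holding W, X, Y, Z.
    head_yaw_deg: positive when the listener turns their head to the left.
    """
    w, x, y, z = bformat
    theta = np.radians(head_yaw_deg)

    # Rotate only the horizontal components; W (omni) and Z (height) are unchanged.
    x_rot = x * np.cos(theta) + y * np.sin(theta)
    y_rot = -x * np.sin(theta) + y * np.cos(theta)
    return np.stack([w, x_rot, y_rot, z])
```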

Capture

To replicate the natural listening experience, the use of two types of audio signal – Ambisonics and object-based – is essential.

Ambisonics is a technique that uses a spherical microphone to capture a sound field in all directions, including above and below the listener. This requires placing a soundfield microphone (also known as an Ambisonics or 360 microphone) near the position from which you intend to listen. Keep in mind that these microphones record a full sphere of sound at the position of the microphone, so be strategic about where you place them. It’s also important that the mic isn’t visible in the shot, so we encourage placing the microphone directly below the 360 camera.
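
To give a feel for what a soundfield microphone delivers, here is a minimal sketch of the equivalent first-order Ambisonics encoding: a mono signal placed at a chosen azimuth and elevation in a four-channel B-format bed. The FuMa-style gains (W scaled by 1/sqrt(2)) and the NumPy framing are illustrative assumptions; in practice you should match whatever convention (FuMa or AmbiX) your renderer expects.

```python
import numpy as np

def encode_foa(mono, azimuth_deg, elevation_deg):
    """Encode a mono signal into first-order Ambisonics B-format (W, X, Y, Z)."""
    az = np.radians(azimuth_deg)   # 0 = straight ahead, positive = to the left
    el = np.radians(elevation_deg)

    w = mono / np.sqrt(2.0)                 # omnidirectional component
    x = mono * np.cos(az) * np.cos(el)      # front/back figure-of-eight
    y = mono * np.sin(az) * np.cos(el)      # left/right figure-of-eight
    z = mono * np.sin(el)                   # up/down figure-of-eight
    return np.stack([w, x, y, z])

# Example: a 1 kHz tone placed 90 degrees to the listener's left, at ear height.
sr = 48000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 1000 * t)
bed = encode_foa(tone, azimuth_deg=90, elevation_deg=0)   # shape (4, 48000)
```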

In addition to capturing audio from a soundfield microphone, content creators also need to acquire sounds from each individual object as a mono source. This enables you to attach higher fidelity sounds to objects as they move through the scene for added control and flexibility. With this object-based audio technique, you can control the sound attributed to each object in the scene and adjust those sounds depending on the user’s view.
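
As a toy illustration of that view-dependent control, the sketch below attenuates one mono object source by distance and pans it to stereo according to its azimuth relative to the listener’s head. It is a deliberately simplified stand-in for a real object renderer (which would use HRTFs for binaural output); the coordinate convention and function name are assumptions for the example.

```python
import numpy as np

def render_object_stereo(mono, obj_pos, listener_pos, head_yaw_deg, ref_dist=1.0):
    """Render one object source to stereo based on the listener's position and view.

    Coordinates are assumed to be (x = forward, y = left, z = up);
    head_yaw_deg is positive when the listener turns to the left.
    """
    offset = np.asarray(obj_pos, float) - np.asarray(listener_pos, float)
    dist = max(float(np.linalg.norm(offset)), 1e-6)

    # Object azimuth in the world, then relative to where the head is pointing.
    world_az = np.degrees(np.arctan2(offset[1], offset[0]))
    rel_az = np.radians(world_az - head_yaw_deg)

    gain = min(ref_dist / dist, 1.0)               # simple 1/distance attenuation
    pan = np.clip(np.sin(rel_az), -1.0, 1.0)       # -1 = fully right, +1 = fully left
    left = mono * gain * np.sqrt((1.0 + pan) / 2.0)    # constant-power panning
    right = mono * gain * np.sqrt((1.0 - pan) / 2.0)
    return np.stack([left, right])
```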

Capturing mono sound can also be tricky, because the traditional approach of capturing it with a boom microphone does not work in VR. In a synchronised 360 shoot there is nowhere out of frame to place a boom microphone, so it is helpful to put a lavalier microphone directly on the individual (hidden underneath apparel).

Read More: