- Please click the "PLAY" icon for music while you settle in and read -
Update: March 12th, 2021
Speaking in generalities on the topic of sound, there are four specific sound types that I deal with exclusively and employ when creating, producing, re-mastering, and remixing: Standard Stereo, 5.1 Audio, 8D Audio, and 12D Audio. Then there are the audio files themselves: their types, sample rates, and the quality of the original source file.
Each of these sound types is excellent and has its proper applications, and the same goes for the file formats that audio may arrive in. But do not get it confused: just because 12D produces a superior sound does not mean it is suitable or the best fit for every song or music genre, and the same can be said about 8D or 5.1.
So how can you tell what sound type to use, or what file format shortcomings will present themselves when creating and producing? In tackling this complicated question, it is important for you, the creator or producer, to have some background knowledge of these sound types and file formats. By understanding them and their principles, you will be better equipped to select the sound type that best fits your production, while also knowing the obstacles that come into play with the different formats. So let's start by looking at the types of sounds you will be exposed to when creating and producing, and then take a look at some pitfalls and common mistakes made by creators and the software they may use.
DISCLAIMER: The information provided herein does not constitute any consent of an artist, producer, or record label, and is not to be taken as an informal or formal authorization by SilkenKitty.com to modify, produce, or release any music (outside of limited personal use) without said controlling individual or entity's sole and expressed consent and approval. For official consent and authorization, it is the responsibility of the individual and not SilkenKitty.com to obtain all required Digital Millennium Copyright Act ("DMCA") and licensing authorizations prior to any official releases and distributions of materials not of their original creation but under the expressed rights and control of another individual artist or corporation. All information that is discussed as part of SilkenKitty.com has been provided for educational purposes only.
Standard Stereo
Stereo is a method of sound reproduction that creates an illusion of multi-directional audible perspective. This is usually achieved by using two or more independent audio channels through a configuration of two or more loudspeakers (or stereo headphones) in such a way as to create the impression of sound heard from various directions, as in natural hearing. Mainly this is accomplished through left and right channel audio signals that possess slight variations, creating that individual or isolated listening experience.
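For readers who like to see the mechanics, the left/right balance that creates this directional perspective can be sketched in a few lines of Python. This is a minimal, illustrative example of a standard constant-power pan law; the function name and values are my own, not taken from any particular audio suite:

```python
import math

def constant_power_pan(pan):
    """Split a mono signal into left/right gains.

    pan ranges from -1.0 (hard left) to +1.0 (hard right);
    the cos/sin pair keeps total power roughly constant as
    the sound moves across the stereo field.
    """
    angle = (pan + 1.0) * math.pi / 4.0   # map [-1, 1] -> [0, pi/2]
    return math.cos(angle), math.sin(angle)

# Centered: both channels sit near 0.707 (-3 dB), preserving unit power.
left, right = constant_power_pan(0.0)
```

Applying these two gains to the same mono source gives the "slight variations" between channels that your ears read as position.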
Figure 1: Traditional Stereo Signal (Demonstrated in Adobe Audition)
When working with Stereo, things are rather easy because you are typically only dealing with a left and a right channel, making any audio adjustments simple. Many of my original products are Stereo, because at the time of creation I was not yet skilled nor equipped to handle complex, multi-channel audio signals. I often refer to standard stereo as the audio workhorse; it is extremely powerful and a solid sound to fall back on. It not only sounds great and supports every music genre, but is also simple to work with and supported by many free software packages that can be found online.
My recommendation when starting out on your music cutting and creation endeavors: stick with stereo in the beginning. Why? Because you can quickly create products with minimal investment that sound great while studying up for the larger projects. Other than using stereo sound for music, I also recommend it when creating sound effects, voices, and other nonessential sounds, not because stereo is simple and easy to work with, but because these ancillary sounds only play for seconds at a time and are not part of the core piece, only a secondary support component. There is no need to exert hours of extra effort on a 12 to 18 second sound that may only play once in a piece when your time and attention are better spent on the primary component: the song.
Below are some links to several freeware audio programs that I have found to pass the litmus test when it comes down to quality and functionality. Please feel free to check them out and select the one that best fits your needs and skill level.
5.1 & 5.1E Surround Sound
5.1 or 5.1E (Enhanced) is a six-channel surround sound that is most common to cinematic and theater events. Unlike standard stereo's two-channel Left and Right signal, it uses five full-range audio channels (Center, Front Left, Front Right, Rear Left, and Rear Right). The sixth channel is dedicated to low-frequency signals (typically 125Hz and below) that cover the other five channels and are spatially and dimensionally processed.
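As a rough illustration of how that sixth, low-frequency feed can be derived from full-range material, here is a crude one-pole low-pass filter in Python. Real LFE processing in a commercial suite is far more sophisticated; this sketch and its function name are purely illustrative:

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate):
    """Crude one-pole low-pass, e.g. to keep only content below
    ~125 Hz when deriving an LFE-style channel from a full-range one."""
    # Coefficient from the standard discretized RC-filter formula.
    dt = 1.0 / sample_rate
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    alpha = dt / (rc + dt)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)   # smooth toward the input: highs get averaged away
        out.append(y)
    return out
```

Feed it a bass-heavy passage and a cymbal-heavy one at a 125 Hz cutoff and the difference in what survives is immediately audible.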
Figure 2: 5.1 / 5.1E Signal Imagery Processing (Demonstrated in Adobe Audition)
You may know 5.1 by one of its many other labels, such as Dolby Digital, Dolby Pro Logic, DTS, SDDS, or THX. 5.1 comes in two forms, standard 5.1 and 5.1E. A 5.1 or 5.1E signal sounds almost identical to a regular stereo signal on a two-speaker system or headphone set, so to fully enjoy the multi-channel experience of 5.1 you require an audio system or headphones that support 5.1, THX, or DTS audio signal processing. When encountering a 5.1 signal that carries the "E" suffix ("Enhanced"), just know that the only difference between it and a standard 5.1 signal is that the six individual channels have undergone additional analysis and signal separation processing; other than that, they are the same.
5.1 or 5.1E cannot be easily worked on or converted in any freeware, because it is a multi-channel signal and most freeware compresses anything it exports down to stereo. So you may start with a 5.1 or 5.1E track in a freeware program, but at the point of export the software prompts you and states that it will be saving a stereo file. This is a severe shortcoming when using freeware.
To ensure that your track starts and stops as 5.1 or 5.1E, you require a more robust commercial audio suite such as Adobe Audition, iZotope, or Reason. Personally, I am a very large fan of 5.1 / 5.1E, because even though it is multi-channel, it is rather easy to work with, and the end product is not only as powerful as standard stereo but also possesses the added benefit of the "THX" or "DTS" effect. Many people do not understand the difference, but I suggest you watch a movie with just two speakers, and then in THX with six; you will understand shortly thereafter what I am talking about. (See, Figure 3).
Figure 3: Signal Imagery as Simulated on Physical Perception
I personally have made the investment in commercial sound software because I desired the ability to create and modify 5.1 / 5.1E sound, and this could only happen by encoding a song using such programs as Adobe Audition, iZotope, and/or Reason. I recommend these software suites to anyone who is serious about delivering quality, because these products do not disappoint and deliver nothing short of a top-notch sound that towers over your competition. Granted, and unlike the freeware, they are complicated to use at the start, but given time and the abundance of support available, learning to use these packages is easier than you may believe. Once you do learn and understand them, the potential and creative capabilities they offer are incredible, increasing your product offerings ten-fold over what is produced in any freeware program.
The links to purchase any of these commercial software packages can be found here:
8D Sound "Traveling Sound"
8D audio is essentially an effect applied to a stereo track, where songs have been edited with spatial reverb (or the Doppler Effect) and mixing to make it seem like the audio is moving in a circle around your head in a panning-type motion. The effect is to simulate a "binaural" type recording (defined as a dual effect that influences both ears, bombarding them with uniquely different sounds that create the sensation of movement). 8D audio is often called "Traveling Sound" because it gives the listener the impression that the sound is moving around them, or that there is a sense of movement and they are passing through the sound. 8D is best enjoyed using a pair of high-end headphones or speakers (but headphones are more favorable). In the creation process and in deciding whether to use 8D, I can say it is a solid fit when creating music that is symphonic, trance, or ambient in nature; it does not work so well with hardstyle, rock, and deathstep.
Examining the sound characteristics of the 8D phenomenon, it can be explained by studying the composition and structure of the left and right channelized waveforms and the frequency changes that differ between them. Now I know what you are thinking: this seems like a rather complicated description, but the concept becomes clear by comparing a standard stereo waveform with that of an 8D waveform. (Compare, Figure 1 with Figure 4). Upon a simple review and comparison, it makes sense and is a much easier way to grasp the 8D concept.
Figure 4: 8D Audio Waveform (Demonstrated in Adobe Audition)
From a creation perspective, true 8D is developed by applying a Doppler Effect (specific to frequency, not volume) to the original waveform, resulting in the left ear reaching a peak in frequency while the right is at a low, and vice versa. This up/down cyclic shifting of frequencies is determined by calculating a song's beat count and frequency at specific instances within the source piece, resulting in the feeling of euphoric movement we have come to call 8D.
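To make that frequency-versus-volume distinction concrete, here is a toy Python sketch of two tones whose instantaneous frequencies cycle in opposition between the ears: when the left channel is at its frequency peak, the right is at its low. This is only a simplified model of the idea described above, not how any commercial plug-in actually generates 8D:

```python
import math

def opposing_fm_channels(base_hz, depth_hz, lfo_hz, sample_rate, seconds):
    """Generate left/right tones whose frequencies swing in opposition,
    a toy model of frequency-based (not volume-based) 8D motion."""
    left, right, phase_l, phase_r = [], [], 0.0, 0.0
    for i in range(int(sample_rate * seconds)):
        t = i / sample_rate
        lfo = math.sin(2.0 * math.pi * lfo_hz * t)
        f_l = base_hz + depth_hz * lfo        # left channel rises...
        f_r = base_hz - depth_hz * lfo        # ...while the right falls
        phase_l += 2.0 * math.pi * f_l / sample_rate
        phase_r += 2.0 * math.pi * f_r / sample_rate
        left.append(math.sin(phase_l))
        right.append(math.sin(phase_r))
    return left, right
```

Note that both channels stay at full amplitude throughout; only their frequencies move, which is exactly the property the volume-envelope trick discussed later cannot reproduce.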
When creating 8D, I prefer to use Adobe Audition (though the end goal may also be accomplished in iZotope), but let's just stick with Adobe Audition. Adobe created a remarkable product when they released Audition, because it has a session plug-in that performs all the frequency analysis and beat counting, automatically suggesting a perfectly timed 8D track. The end result is that the software does all the work, producing a properly constructed 8D waveform and song that possesses the Doppler Effect without any attenuation of the original frequency ranges.
However, many people tend to use Audacity to create what they call 8D. Let it be known, this is not 8D; it is a shifted sound based on volume and gain reductions, not frequency as intended by 8D. This "Pseudo-8D" may present the characteristics of real 8D, but there are some shortcomings in the sound that are very noticeable when put side-by-side with a true 8D track. I will identify these shortcomings, if only for the sake of every wayward person who attempts to create 8D in Audacity, swears they have found the "Holy Grail" for free, and goes flogging this fake as the real deal.
The problem with Audacity, and with many freeware programs, is that they do not perform any complex analytic calculations such as the Doppler Shift calculation, the very basis of what is required to make 8D sound. This lack of product capability leaves the Audacity user with the cumbersome task of manually adjusting the track's sound envelope according to volume and gain. What comes out of this "Frankenstein process" is a pseudo "binaural effect", if you even want to call it that. There are no frequency calculations performed, nor any frequency adjustments made, during this entire Audacity process, not one. Instead of addressing any frequency-level adjustments and peaks specific to side and time, the Audacity method calls on the creator to simply slide the Left/Right balance on the two stereo channels: (1) 100% to the left for the left ear; and (2) 100% to the right for the right ear. This means that any peaks specific to the right or left side that were defined in the opposing channels are now lost, and all that remains is a mono signal in each ear. (See, Figure 5). This poses some seriously concerning acoustic problems if you truly care about creating a true 8D sound.
- PROBLEM NUMBER 1: As stated above, once you slide one channel to the left and one to the right, you lose 100% of the song's original stereo effect and are now left with a "dual-channel" mono signal, clearly not stereo.
- PROBLEM NUMBER 2: The timing of the shift that the Audacity user elects to use is random, or a guess. How can they determine the required rise and fall times to align perfectly with a song's beat count? The answer is quite simple: they cannot. Not to underplay freeware, but it does not support complex acoustic algorithmic analysis or any proper method of establishing beat timing or counting. These manual adjustments are only a guess, and guessing at the envelope's VOLUME or GAIN adjustment does not create 8D. Then there is the fact that you are not even adjusting FREQUENCY when moving the sound envelope, as intended by the Doppler Effect, but VOLUME or GAIN. The laws of physics make it clear: the Doppler Effect is based on FREQUENCY shifts and not VOLUME or GAIN shifts, which leads into problem number three.
Figure 5: Volume Envelope Adjustments (Demonstrated in Audacity)
- PROBLEM NUMBER 3: Because you are adjusting the volume and gain levels at an unknown time and rate, there are severe impacts to the file's frequency integrity. This so-called manual adjustment, or volume manipulation, is not timed precisely with the beat and frequencies within the song; instead it is a hard, non-dynamic timing based on a guess, or influenced by what may have worked or sounded good before. This "best-guess" approach results in a volume and gain rise or fall that should have been a fall or rise. So what you will notice when such an instance occurs is that the track goes strangely silent at a point where it should be loud, or loud at a point where it should be low. This imperfection significantly impacts the frequency response of the song, because it is an attenuation caused by the miscalculated rises and falls of the manual adjustments to VOLUME and GAIN.
- PROBLEM NUMBER 4: As discussed prior, the Audacity method adjusts the sound envelope's peaks according to VOLUME and GAIN, not FREQUENCY. These adjustments to VOLUME and GAIN cause the homemade track to lose certain frequencies (typically noticeable in the high- and low-end sounds throughout the newly constructed waveform). The missing frequencies are lost because they are attenuated by the manual collapsing and rising of the sound envelope's VOLUME and GAIN (and not the required FREQUENCY) at an incorrectly guessed interval. And this shortcoming does not even account for the fact that it is VOLUME or GAIN being altered to begin with, and not FREQUENCY as required by 8D. Granted, the user may hear a shifting sound volume that is similar to 8D, but it may seem underpowered, shallow and wanting, or loud but then weak when effects are added that seem to overpower the track. This is a result of the added effect(s) filling in the homemade track's low-point openings or gaps, creating what seems to be a louder effect, but is essentially the effect's sound wave occupying the open region of the homemade waveform while also cancelling out other frequencies that fall within the modified envelope's low VOLUME and GAIN region. If you choose to use Audacity and encounter these circumstances, there is no overcoming this problem. You are playing with VOLUME and GAIN, not FREQUENCY. And do not think that amplifying the signal will make the problem vanish, because you cannot amplify a waveform to compensate for frequencies that are simply not there. VOLUME + MORE VOLUME or GAIN + MORE GAIN does not equal FREQUENCY RECOVERY; instead it results in amplification of frequencies that are already present, clipping, distortion, and problematic escalations within the track's noise floor. Put simply, "You cannot rob Peter to pay Paul"; you can no more substitute the loss of a frequency with an increase in volume than a louder sub-woofer can produce treble.
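For contrast with the frequency-based approach, the volume-only trick criticized in the problems above can be sketched in a few lines. Note that this sketch deliberately reproduces the flawed method: only VOLUME swings between the ears, never FREQUENCY, and the function name is my own shorthand:

```python
import math

def volume_pan_pseudo_8d(mono, lfo_hz, sample_rate):
    """The volume/gain-only trick: gains swing between the ears,
    but every sample keeps its original frequency content.
    This produces 'Pseudo-8D', not a Doppler-based effect."""
    left, right = [], []
    for i, x in enumerate(mono):
        t = i / sample_rate
        g = 0.5 * (1.0 + math.sin(2.0 * math.pi * lfo_hz * t))
        left.append(x * g)            # only VOLUME changes here...
        right.append(x * (1.0 - g))   # ...FREQUENCY is untouched
    return left, right
```

Because the two gains always sum to one, the combined signal is just the original mono source; nothing new is added, and whatever one ear gains in level, the other loses.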
A picture is worth a thousand words. Comparing what you see in Figure 4 with that of Figure 5, the differences are rather obvious. Do not be fooled: what comes out of Audacity is not 8D, because it is based on VOLUME / GAIN and not FREQUENCY. These freeware programs are great when dealing with stereo and mono (because the highest degree they export is ONLY stereo), but they cannot handle anything beyond that, and stretching their capabilities beyond their scope does not mean they can do more; it means you are growing apples and calling them tomatoes.
Now, please know this is not an attempt to discourage anyone from making "Homemade Moonshine Music"; I am just informing them that what they are creating is not 8D, because they are reworking VOLUME and GAIN rather than making the required alterations based on FREQUENCY. If you are truly interested in 8D sound creation, you will have to coin up and buy a commercial software package that supports signal analysis and creation. It will save you the heartache of spending all of your off-time manually adjusting the volume envelope to create something that is not even what it is supposed to be anyway. Personally, I prefer to use the plug-ins provided by Audition, because at the click of a button, and within one minute, the software calculates, adjusts, and creates the frequency envelope modification automatically, ready for editing, export, and posting.
12D Audio "The Big Sound"
12D audio is a bit of a mystery, because the literature that surrounds this sound is limited, leaving practically no clue as to what it is or how it is created. The only way to uncover what 12D is and how it is created is to work backwards and review its original pre-modification source file. Let me save you a ton of time on this matter. 12D is simply the modification of a 5.1 or 5.1E signal (all six audio channels independently) in a similar fashion to 8D's two-channel stereo source file, but taking into consideration a 360-degree movement among all of these channels. That means you are adjusting not only frequency, as in 8D, but also the gain levels across six independent channels to account for sound in a 360-degree range of movement. (See, Figure 6). That seems complicated, because it is complicated.
When listening to 12D, it is strikingly similar to its 8D counterpart, being best enjoyed on a pair of high-end headphones or a multi-speaker stereo system that supports 5.1 or 7.1 sound, though this is not 100% required for 12D. Furthermore, 12D sound is more dynamic than 8D, because it can be deployed under more creative circumstances, such as symphonic, trance, hardstyle, bass-dependent, vocal, or ambient types of music. And because the file is constructed from a 5.1 or 5.1E file (all possessing independent channels), and the source file's sixth channel (the low-end frequencies assigned to cover the 360-degree range) is an independent function of the original source file's waveform, the creator can alter the bass level as they deem fit, unencumbered and without any fear of encroaching on the high and mid-range frequencies that are a component of the other five channels. This signal independence is what allows 12D to sound so rich and full. You are not only reworking the track's independent channel frequencies but also their gain levels, unlike the frequency-specific attributes and limitations of an 8D modification. The result of a 12D conversion is a very dynamic and fuller-sounding track, without any distortion or frequency cancellation from the gain modifications. Personally, I am a massive fan of the 12D sound and have to say that its label as "The Big Sound" is well deserved and unmatched.
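As a toy model of how a sound might be swept around the five full-range channels in a 360-degree circle, consider the sketch below. The speaker azimuths and the cosine weighting are my own illustrative choices, not a published 12D specification:

```python
import math

# Illustrative speaker azimuths (degrees) for the five full-range
# channels of a 5.1 layout; the LFE channel is handled separately.
CHANNEL_AZIMUTHS = {"C": 0, "FL": -30, "FR": 30, "RL": -110, "RR": 110}

def rotating_gains(source_deg):
    """Cosine-weighted per-channel gains for a source at the given
    azimuth: a toy model of sweeping a sound 360 degrees around
    the five full-range channels."""
    gains = {}
    for name, az in CHANNEL_AZIMUTHS.items():
        diff = math.radians(source_deg - az)
        gains[name] = max(0.0, math.cos(diff))  # silent when facing away
    return gains
```

Stepping `source_deg` from 0 to 360 over the length of a passage hands the sound from speaker to speaker, which is the kind of per-channel gain movement described above.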
Figure 6: 12D Waveform 360 Degree Sound Coverage
Other than this explanation and the flashy picture within Figure 6, I will close out my 12D discussion and refrain from revealing the secrets of that sound's construction. I have provided you with a great deal of detail on how 12D is accomplished, and will leave you with one more hint or recommendation: this level of sound cannot be accomplished without Adobe Audition or Reason, an abundance of coffee, and patience as you go through the trial-and-error process. I leave the rest of this puzzle for you to figure out. Good luck!
3DI "Immersive Sound or Ambisonics"
3DI audio relies on the principle of Ambisonics. Ambisonics is a method for recording, mixing, and playing back three-dimensional, 360-degree audio. The basic approach of Ambisonics is to treat sound as a 360-degree spherical scene, with the sounds coming from different directions around a center point, much like 5.1, 5.1E, and 12D but adding in the third-dimensional experience, and with more independence, because the channels are not specifically assigned as in 5.1 or 7.1 sound. (See, Figure 8-10).
The theory behind 3DI, or Ambisonics, is a logical extension of Blumlein's work, which says that from two figure-of-eight capsules positioned perpendicular to each other, any other figure-of-eight response can be created. This then establishes an ordered sound: the more channels, the higher the order.
As an example, 1st-order Ambisonics can represent a sound field using four signals (collectively known as B-Format). The W signal is an omni-directional pressure signal that represents the zeroth-order component of the sound field, and X, Y, and Z are the outputs of figure-of-eight microphones used to record the particle velocity in each of the three dimensions.
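Those four B-Format signals can be written down directly. The sketch below uses the traditional first-order encoding equations, with the conventional 1/sqrt(2) weighting on W; only the function name is my own:

```python
import math

def encode_first_order(sample, azimuth_deg, elevation_deg):
    """First-order B-format encode of a mono sample arriving from
    (azimuth, elevation). W is the omni pressure component (with
    the traditional 1/sqrt(2) weighting); X, Y, Z are the
    figure-of-eight components along the three axes."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = sample / math.sqrt(2.0)
    x = sample * math.cos(az) * math.cos(el)   # front/back
    y = sample * math.sin(az) * math.cos(el)   # left/right
    z = sample * math.sin(el)                  # up/down
    return w, x, y, z
```

A source straight ahead (azimuth and elevation both zero) lands entirely in W and X, with nothing in Y or Z, which matches the intuition of the figure-of-eight pickup patterns.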
The basic principle of ambisonics is to describe the whole three-dimensional sound-field theoretically at a single point in space. All directional information at this point is captured by a suitable microphone.
The captured signal is decoded later into multichannel loudspeaker signals or a binaural stereo signal for headphones. The important thing to note is that there is no need to consider the actual details of the reproduction system during the original recording or synthesis. The two parts of the system, encoding and decoding, are separate. This gives you the ability to position a sound object all around the listener-behind, in front of, to the right or to the left of and even below or above. The power of this concept is that these loudspeaker-independent signals may be manipulated to drive a variety of loudspeaker arrangements. In sum, Ambisonics encodes direction as a property of the recorded sound.
The benefit of 3DI is that it vastly increases the sense of envelopment, involvement, and realism within the audio experience in a way that cannot be provided by Stereo, 5.1/5.1E, 8D, or 12D (as they lack the dimensional depth), increasing the listener's experience with independent sounds moving in the 360-degree spatial region that conform to natural human perception, hearing, and sound awareness. This is especially true for so-called "higher-order recordings", for two important reasons: (1) 3DI makes it possible to locate sounds much more precisely when listening; and (2) 3DI increases the sense of actually "being there", because the sound is modeled after human hearing, providing the listener with the real-time ability to determine where a voice may be coming from in relation to them, or how close a motorcycle may be based on its loudness and the sound's original direction relative to the listener's position. (See, Figure 8). This is 3DI.
Types of 3Di / Ambisonic Formats
There are two types of signals that should be mentioned as part of the Ambisonic or 3DI structuring, and they are the Ambisonic A and B Formats. So let's take a look at them closely to determine the differences.
A Format
This is a signal that is captured on a four-microphone system, creating an individual channel per microphone recording and resulting in a four-channel signal. (See, Figure 10). A-Format signals are not used in signal processing or transmission. They are only used for recording within the Ambisonics system.
B Format
While derived from A Format, B Format is not the same and should not be assumed to be. B-Format renders the raw output of the individual channels into a perfectly aligned set of signals, so the capsule spacing is compensated for and the sound is recorded as if from a virtual point in space, where the four channels represent the outputs of four virtual microphones. (See, Figure 7)
Figure 7: 3Di / Ambisonic Microphone Sound Immersion
By manipulating the relative levels of these four channels, the output of any combination of first-order microphones pointing in any direction can be created. You can choose the orientation and polar pattern of your microphone "after the fact". This format is the basis for post-production conversion and 3DI sound creation.
Figure 8: 2D Audio Compared to the 3DI Immersive Effect
Figure 9 provides you with a pictorial presentation of a 3DI sound's movement as it is being delivered, based on spatial placement. 3DI embodies all of the sound benefits and qualities of 12D but, as identified prior, adds the movement of more sounds throughout a 360-degree spherical region, without being limited to specific speaker assignments. When listening to a 3DI song or effect, you may hear an artist's voice, bass, an instrument, effects, or a specific part of a song in a specific place relative to your perceived position at that particular point in time, but then notice that component moving around you in a 360-degree spherical plane, all while being able to physically point to where you perceive the moving sound to have originated, its path of travel, and its distance from you.
Figure 9: 3D Immersive Audio Sound Movement Pictorial Simulation
The 360-degree effect of 3DI is attributed to its four-channel Ambisonic B-Format signal, which creates the spherical sound shape, allowing the independent movement of some or all sounds (as determined by the 3DI creator) in all three axial directions (X, Y, and Z) over time. This capability is something that cannot be simulated or resolved within Stereo, 5.1/5.1E, 8D, or 12D sounds, as they lack the ability to independently simulate the movement of particular audio components in a spherical plane and are channel-to-speaker dependent, whereas 3DI / Ambisonics is not.
Figure 10: 3D Immersive Audio Development (Demonstrated in Adobe Audition (Extracted from My Personal Testing File))
3DI is a remarkable sound, providing an immersive 3-dimensional sound experience through the application of reverberations and Ambisonics. As a creator I have developed several 3DI tracks, but in the drive for continual improvement, I have advanced my post-production editing to Dolby Atmos.
Dolby Atmos® adds the flexibility and power of dynamic audio objects into traditional channel-based workflows. These audio objects allow content creators to control discrete sound elements irrespective of specific playback speaker configurations, including overhead speakers. The foundation of Dolby Atmos is based on two types of audio objects: static and dynamic. Static objects are commonly referred to as "bed objects", as they are defined as non-moving objects that are mapped to specific speaker locations. Typically, this is a 7.1.2 or 7.1.4 layout and is referred to simply as "the bed", which corresponds to traditional non-Atmos configurations like 7.1 and 5.1. Dynamic objects are audio objects that can freely move around the entirety of the listening field. The bed is essential because it provides a format that is still applicable for audio that works best in a channel-based environment. Diffuse elements such as ambience, reverb, or anything else that doesn't need to dynamically move around the room would be panned to the bed. Additionally, content that was used in a dynamic object can be folded down into the bed when needed, based on prioritization and object management.
Dynamic objects are used for precise positioning and/or free movement of content within the listening space. One of the most immediate features of Dolby Atmos is the ability to represent sound on the Z axis, both above and below the listener. This doesn’t just allow for a height plane, it actually unlocks the ability for sound to be reproduced in its full three-dimensional glory. One can take some first steps and place sounds in zones along the listener’s perimeter, such as front, side, back, and above. This is appropriate for most content and can create extremely immersive and compelling experiences. The next steps would be to approach design from a layered approach and include concepts such as depth, frequency, proximity, perspective into creation of individual assets as well as the scene as a whole.
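A dynamic object is, at its simplest, audio plus a position that can change over time. The sketch below is a toy stand-in for that idea, with a normalized room coordinate system of my own choosing; it is not Dolby's actual ADM metadata format:

```python
from dataclasses import dataclass

@dataclass
class AudioObject:
    """Toy model of an Atmos-style dynamic object: a named sound
    with a position in a normalized listening space."""
    name: str
    x: float  # -1.0 (left)  .. 1.0 (right)
    y: float  # -1.0 (back)  .. 1.0 (front)
    z: float  #  0.0 (ear level) .. 1.0 (overhead)

def clamp_to_room(obj):
    """Keep an object's coordinates inside the listening field,
    including the height (Z) plane discussed above."""
    obj.x = max(-1.0, min(1.0, obj.x))
    obj.y = max(-1.0, min(1.0, obj.y))
    obj.z = max(0.0, min(1.0, obj.z))
    return obj
```

A renderer would then map each object's position onto whatever speakers are actually present, which is the point of the format: the creator positions objects, not channels.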
An example city environment can be used to demonstrate the use of these concepts when designing a sonic environment. To create the feel of being on the street level of a busy downtown city, the first thing to consider would be the spatial layers. Along the X and Y axes are sound sources that are on the ground level: vehicles in the street, pedestrians on the sidewalk, door sounds and music coming from shops, etc. One could bake all of these elements into a fixed-channel bed, or one could place these sources individually at their relative locations, either in the game engine or the audio engine, in order to build depth and perspective at the listener level. Taking the Z axis into account, in this scenario most of the sound coming from above the listener would be reflections and reverb of the street, bounced back from the sides of the buildings. One could add some point-source sounds of apartment life coming from some windows, pigeons cooing and flapping their wings, a distant jet passing overhead, etc. The reflections of the street sounds are diffuse and add "air" to what's happening on the horizontal plane, and the individual spot sounds add detail and variation, plus stratification when placing sources at near, middle, and far distances relative to the listener.
Let's not forget about the sonic frequency aspect of our scene. As a general rule of thumb, for vertical proximity, low-frequency content can be thought of and treated in the same manner as for horizontal proximity. That is to say, the closer a sound is to the listener, the more low frequencies you may hear, regardless of whether it's up and down, front and back, or side to side. Our ears use primarily mid and high frequency cues to perceive location, especially for the upper hemisphere of our surroundings, so bass frequencies don't contribute as much to spatial perception. What's more, not only are low bass frequencies more of a physical body sensation, most home theater setups rely on subwoofers to produce the bass signals, which will lower or "ground" the perception of a sound simply because that's where the low end of the audio is being reproduced. To take this back to our example scene, if we had a garbage truck driving past the listener position, one could use a high-pass filter to modulate the amount of audible low frequency content based on proximity, whether it was on the same street as the player or a bridge above.
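The proximity idea in that garbage-truck example can be sketched as a simple distance-to-cutoff mapping for the high-pass filter: the farther away the source, the higher the cutoff, and the thinner its low end. The ranges and defaults here are illustrative choices of mine, not a Dolby recommendation:

```python
def proximity_highpass_cutoff(distance_m, near_hz=20.0, far_hz=400.0,
                              max_dist_m=50.0):
    """Map source distance to a high-pass cutoff frequency:
    close sounds keep their low end (cutoff near 20 Hz), distant
    sounds are progressively thinned out (cutoff toward 400 Hz)."""
    frac = min(max(distance_m / max_dist_m, 0.0), 1.0)  # clamp to [0, 1]
    return near_hz + frac * (far_hz - near_hz)
```

Driving a filter's cutoff from this function each frame gives the truck its full rumble as it passes the listener and a thinner, more distant tone when it is up on the bridge.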
One of the benefits of Dolby Atmos from a mixing perspective is that with the addition of higher spatial resolution plus the height plane, an effectively larger sonic canvas is available. This additional space can be used for anything from exaggerated effects, to subtle spaciousness for mix clarity. While there are no hard and fast rules about how to use these new dimensions, one will quickly realize that the content itself will dictate how extreme or subdued one can be. A mind-bending, multi-perspective, abstract game might take full advantage of dynamic audio objects that are flying all over the place. A cinematic, first-person, period piece may benefit from merely a slight widening of the music and effects positioning for better clarity of the dialog.
Figure 11: The Dolby Atmos Experience
Unlike its predecessors, Dolby Atmos (for the most part) does not use channels. Instead, most sounds are broken out of their original waveform restrictions and treated as "objects", and rather than the sound being restricted to placement according to a waveform, the newly constructed objects can be freely assigned to a specific spatial location by the creator or engineer. This assignment ability of sound objects is what permits Atmos sound to surpass the quality level and restrictions of such predecessors as 5.1 or 7.1. (See, Figure 12). This ability to treat sound as an object, and not as a channel limited in its manipulation, provides greater acoustic flexibility and improves the sound experience by making it more realistic and greater in depth. Even on a low-end stereo or headphone system, Atmos punches through by delivering a quality of sound that towers over all other sound experiences.
Figure 12: Dolby Atmos Speaker Placement Example
From a post-production and sound engineering aspect, the creation of Atmos sound requires two main components: (1) the ability to reverse-channelize a song (that is, the ability to take an existing stereo track and break it down into independent objectional archetypes); and (2) the ability and tools to create and place the archetypes in an Atmos environment. The ONLY software that exists that supports the true creation of Atmos sound would be the AVID Dolby Production Suite or the official Dolby Media Producer Suite. I have tested both and found the Dolby Media Producer Suite to be the better of the two. Now, I personally do not mind sharing this with people, because I know the conventional and typical "music cutter" in IMVU is unwilling to simply fork out $11,000 for the DTS Media Suite, and to those readers, good luck trying to find a crack for that, because like Adobe's new software management tools, everything is now cloud-based and requires an iLOK authorization or cloud identifier. Fortunately, to resolve this issue on my end, I share a license with a friend of mine who works at Sony (one of the benefits of New York City: access to the multitude of people who work within the entertainment industry), so during the day he is working away, and at night or on the weekends, it is my turn to access the DTS Media Suite.
Hands down, Atmos sound is the best in the industry, and it is my pleasure to be the only creator within IMVU who possesses this skill and the tools to offer this high-end sound product. Below I have attached the technical ADM specifications for Dolby Atmos Sound Configuring within the Atmos Engine. Feel free to send me a message via the Contact page should you desire to discuss this matter in more depth.