A lot of recorded music is now largely artificial. "Instruments" may be virtual, and even real ones recorded with microphones may be close-miked, perhaps captured only in mono, then pan-potted into a stereo mix.
I don't know how surround sound mastering works, or how it is "normally" done. Some sound engineers use ambisonics or soundfield microphones, which I believe may have significant advantages, at least for production - and not only for music, but also for film. Other engineers may use completely different techniques, and mastering for binaural playback could be different again. Sometimes even professionals seem to get things wrong. In films, for example, if someone is centre screen, a stereo mix with equal L and R levels will sound different from a dedicated centre speaker, and sometimes (perhaps through lack of time or suitable equipment) the positioning of voices simply does not match the visual image. We normally tolerate this, but it isn't natural.
For stereo, most DAWs have a panning option for each track, though whether this applies to mono tracks, stereo tracks, or both, I just don't know. My suspicion (but I could be wrong) is that panning simply applies a relative volume adjustment between the left and right channels. However, this might not be optimal. For low frequencies it would probably be the best option, but for higher frequencies it might actually be better to combine the level change with phase shifts (or small interchannel delays). Even more sophisticated systems might also adjust the tonal characteristics of each recorded signal.
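To make the "relative volume adjustment" idea concrete, here is a minimal sketch of equal-power amplitude panning - one common way a pan pot could work. The function name and the mapping of the pan range are my own assumptions, not taken from any particular DAW:

```python
import math

def pan_constant_power(sample: float, pan: float) -> tuple[float, float]:
    """Equal-power pan: pan in [-1, 1], -1 = hard left, +1 = hard right.

    The cos/sin pair keeps L^2 + R^2 constant, so perceived loudness
    stays roughly level as a mono source is swept across the image.
    """
    angle = (pan + 1.0) * math.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    return sample * math.cos(angle), sample * math.sin(angle)

# At centre (pan = 0) both channels sit at about 0.707, i.e. -3 dB,
# rather than 0.5. Note that this is purely a level adjustment:
# no phase shift or tonal change is applied to either channel.
left, right = pan_constant_power(1.0, 0.0)
```

A frequency-dependent panner of the kind speculated about above would have to go further, e.g. delaying one channel slightly for high-frequency content.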
I just don't know what happens in typical DAWs - whether they "simply" apply an L-R relative volume adjustment, or something much more intricate. It is also quite possible that a DAW offers several different algorithms, so that the engineers using it can choose which panning law to apply for a given operation.
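The idea of selectable panning algorithms can be sketched as a choice of pan laws. The helper below is hypothetical (the labels "linear", "constant_power" and "compromise" are mine), but the three laws themselves - roughly -6 dB, -3 dB and -4.5 dB at the centre position - are the ones commonly described for mixing desks:

```python
import math

def pan_gains(pan: float, law: str = "constant_power") -> tuple[float, float]:
    """Return (left, right) gains for pan in [-1, 1] under a chosen pan law.

    linear          : straight crossfade, centre at 0.5   (about -6 dB)
    constant_power  : cos/sin law,        centre at 0.707 (about -3 dB)
    compromise      : geometric mean of the two, centre at about -4.5 dB
    """
    x = (pan + 1.0) / 2.0  # map [-1, 1] onto [0, 1]
    lin = (1.0 - x, x)
    cp = (math.cos(x * math.pi / 2.0), math.sin(x * math.pi / 2.0))
    if law == "linear":
        return lin
    if law == "constant_power":
        return cp
    if law == "compromise":
        return (math.sqrt(lin[0] * cp[0]), math.sqrt(lin[1] * cp[1]))
    raise ValueError(f"unknown pan law: {law}")
```

The practical difference only shows up when a panned mono source is summed back to mono, or swept across the image: the linear law sums flat but dips in perceived power at centre, while the constant-power law does the opposite.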
Things do get even more complicated if one takes into account ambience, and an artificial "virtual environment".