'24/192 Music Downloads ...and why they make no sense'

  • Bryn
    Banned
    • Mar 2007
    • 24688

    #16
    Originally posted by MrGongGong View Post
    Pauline Oliveros talks about being banned from a studio when she was a student because she was including ultrasonic sound in her pieces.
    Whilst they ARE (by definition!) out of range, given the nature of sound surely they will have effects on the sounds in the audible spectrum?
    Quite. I recall reading an extended article on the interaction of ultrasonic frequencies with the audio band, in an audio engineering magazine in a hospital waiting room a decade or so ago. I have tried to track it down since, without success. Its basic thrust was that the human ear/brain reception and interpretation system is by no means linear in operation, and it argued for sampling frequencies far above the Nyquist threshold so as not to introduce perceivable distortion through the loss of those ultrasonic/audible interactions.

    Comment

    • Frances_iom
      Full Member
      • Mar 2007
      • 2434

      #17
      Originally posted by Bryn View Post
      Quite. I recall reading an extended article on the interaction of ultrasonic frequencies with the audio band ...
      One of my first engineering projects, nearly 50 years ago for the BBC, was a frequency shifter for audio signals: in those pre-digital days the various telephone links could introduce shifts of a Hz or so, and we needed a simulator. It worked well, and I carefully demonstrated it on various types of music, then recorded the same on (I think) a small Studer. I had, however, forgotten the bias frequency used on magnetic tape, and that the filters on the input of such recorders were not perfect. My shifter used quadrature modulators/demodulators at ultrasonic frequencies, which escaped through the filters and produced obvious whistles on playback - I had only listened to the input signals and didn't think the high frequencies would escape into the recorder! One embarrassed young engineer later, I'd learnt my lesson.

      With modern digital recording, signals above half the sampling frequency are aliased back down into the audio passband - hence the probable dislike of introducing them.
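
      A minimal sketch of that alias folding, assuming an ideal sampler with no anti-alias filter (the sample rate and tone frequencies are illustrative numbers only, not details of the project above):

      ```python
      def alias_frequency(f, fs):
          """Apparent frequency of a tone at f Hz when sampled at fs Hz
          with no anti-alias filter: it folds about the nearest multiple of fs."""
          return abs(f - round(f / fs) * fs)

      fs = 48_000  # sample rate, Hz
      for f in (10_000, 30_000, 60_000, 100_000):  # input tones, Hz
          print(f"{f:>7} Hz in -> {alias_frequency(f, fs):>6.0f} Hz out")

      # 10 kHz passes unchanged; 30 kHz folds to 18 kHz; 60 kHz to 12 kHz;
      # and a 100 kHz bias-like tone lands at 4 kHz, squarely in the audible
      # band - the digital counterpart of the tape-bias whistles above.
      ```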

      Comment

      • Gordon
        Full Member
        • Nov 2010
        • 1425

        #18
        This might help give an idea of the human auditory system:

        and this from Bob Stuart is certainly worth a read, although a bit old now:

        Looking at all the processes involved in turning a sound wave in the air into vibrations in the fluid inside the cochlea, it's hard to believe that such a system could be very linear. I'd agree with Bryn that it is not likely to be! In any case we know from the Fletcher-Munson curves that its response to loudness is certainly not linear.
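
        To put a number on that loudness non-linearity, the standard A-weighting curve (IEC 61672), which roughly follows the inverse of the 40-phon equal-loudness contour, can be computed directly. A minimal sketch; the spot frequencies are just illustrative:

        ```python
        import math

        def a_weighting_db(f):
            """A-weighting gain in dB at frequency f (Hz), per the IEC 61672
            formula - roughly the inverse of the 40-phon equal-loudness contour."""
            f2 = f * f
            ra = (12194.0**2 * f2 * f2) / (
                (f2 + 20.6**2)
                * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
                * (f2 + 12194.0**2)
            )
            return 20.0 * math.log10(ra) + 2.00  # +2 dB normalises to 0 dB at 1 kHz

        for f in (100, 1000, 4000, 10000):
            print(f"{f:>6} Hz: {a_weighting_db(f):+6.1f} dB")
        # ~ -19 dB at 100 Hz, 0 dB at 1 kHz, +1 dB at 4 kHz, -2.5 dB at 10 kHz:
        # perceived loudness for a given sound pressure varies strongly with frequency.
        ```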

        If it is non-linear, in the usually accepted meaning of that term, then its behaviour when stimulated with sound will produce some "distortion" of that sound. Because we have grown up with it, we are unaware of this and accept any such non-linear effects as "normal". Thus the fact that our "hearing" runs out at around 20 kHz does not necessarily mean that the hearing mechanism is unresponsive, in some way, to the presence of higher frequencies.

        If the cochlea is a sort of spectrum analyser with remarkable resolution, is there any harmonic resonance going on? IOW, if certain cells on the cochlear membrane respond only to, say, 1 kHz, will they also respond to its harmonics at 2 kHz etc.? Nothing I have read makes clear whether this is so. If these resonators behave as any usual physical vibrating system does, then we would expect some resonance; if not, I'd say the process has strongly non-linear features. However, few humans seem to perceive anything beyond 20 kHz as a direct stimulus, i.e. "hear" it. I know I tried with colleagues in the past to see if we could detect the presence of 20 kHz plus [i.e. be aware of it, if not actually hear it] in a quiet lab, but we failed to do so reliably. The diagrams of the cochlear membrane suggest that hearing a given frequency is localised to a particular place on the membrane, and is not therefore built up from resonant effects.
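
        That place-to-frequency localisation is usually summarised by Greenwood's place-frequency function. A minimal sketch, using the commonly quoted human fit (A = 165.4, a = 2.1, k = 0.88, with x the fractional distance from apex to base) - parameters from the literature, not from anything above:

        ```python
        def greenwood_hz(x):
            """Greenwood place-frequency map for the human cochlea.
            x: fractional distance along the basilar membrane, apex (0.0) to base (1.0).
            Returns the characteristic frequency (Hz) at that place."""
            A, a, k = 165.4, 2.1, 0.88  # commonly quoted human parameters
            return A * (10 ** (a * x) - k)

        for x in (0.0, 0.25, 0.5, 0.75, 1.0):
            print(f"x = {x:.2f} -> {greenwood_hz(x):>8.0f} Hz")
        # Runs from ~20 Hz at the apex to ~20 kHz at the base: each place on
        # the membrane "owns" a narrow band of frequencies.
        ```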

        If Bryn is right and there is non-linearity at work, then we might expect that a 10 kHz pure tone added to another pure tone at, say, 22 kHz, the latter ordinarily being "inaudible", should produce intermodulation - as Frances discovered - caused by non-linearity in the ear. That IM would produce a spurious difference tone at 12 kHz [and incidentally a sum tone at 32 kHz, plus many higher-order products, e.g. 2F1-F2 at 2 kHz, 2F2-F1 at 34 kHz, etc.] which should be audible; the more non-linear the ear, the greater the amplitude of those IM products. We tried this in the lab too, with no such result. One is tempted to think either that the ear is not that non-linear, OR that it is a very complex organ which has managed to deal with its own "defects" by linearising itself. At very high sound pressure levels there is often a sense of distortion, but then the air itself becomes non-linear in that regime.
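
        A quick way to see where those products come from is to push the two tones through a toy memoryless non-linearity and inspect the spectrum. A sketch, with an arbitrary mild quadratic-plus-cubic non-linearity (the coefficients are invented for illustration and are not a model of the ear):

        ```python
        import numpy as np

        fs = 192_000                     # sample rate high enough to hold every product
        t = np.arange(fs) / fs           # one second of signal -> 1 Hz FFT resolution
        f1, f2 = 10_000, 22_000          # the two test tones (Hz)
        x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

        # Toy non-linearity: y = x + 0.1*x^2 + 0.05*x^3 (coefficients arbitrary).
        y = x + 0.1 * x**2 + 0.05 * x**3

        spectrum = np.abs(np.fft.rfft(y)) / len(y)
        freqs = np.fft.rfftfreq(len(y), 1 / fs)
        peaks = freqs[(spectrum > 1e-3) & (freqs > 0)]
        print(sorted(int(f) for f in peaks))
        # Besides 10 and 22 kHz, the squared term yields F2-F1 = 12 kHz and
        # F2+F1 = 32 kHz (plus 20 and 44 kHz harmonics); the cubed term adds
        # 2F1-F2 = 2 kHz, 2F2-F1 = 34 kHz, 30 kHz, 54 kHz and 66 kHz.
        ```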

        How the vibrations of the membrane connect to the nervous system seems a mystery, but it is an electro-mechanical one. The way nerve cells are caused to "fire" has a strong suggestion of "digital" about it. Is loudness a pulse-counting exercise?
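
        That pulse-counting intuition is roughly how rate coding is usually modelled. A toy leaky integrate-and-fire neuron, purely illustrative (every constant here is invented, not a physiological value), shows firing rate rising with stimulus level:

        ```python
        import numpy as np

        def firing_rate(amplitude, dt=1e-4, duration=1.0, tau=0.02, threshold=1.0):
            """Toy leaky integrate-and-fire neuron driven by a rectified 1 kHz tone.
            The membrane potential v leaks with time constant tau and "fires"
            (resets to zero) on crossing threshold. Returns spikes per second."""
            t = np.arange(0.0, duration, dt)
            drive = amplitude * np.maximum(np.sin(2 * np.pi * 1000 * t), 0.0)
            v, spikes = 0.0, 0
            for d in drive:
                v += dt * (-v / tau + d)  # leaky integration of the input
                if v >= threshold:
                    v = 0.0               # reset after a spike
                    spikes += 1
            return spikes / duration

        for amp in (100, 200, 400, 800):
            print(f"drive amplitude {amp:>3} -> {firing_rate(amp):>4.0f} spikes/s")
        # Output rises from 0 spikes/s (below threshold) to hundreds of
        # spikes/s: louder input -> more pulses per second.
        ```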

        Going back to the start of all this: we are really trying to discover why it is [or seems] that some people can "easily" detect "defective" processing of sound when others can't. I have a friend who is a wine buff and he can, it seems, detect grape varieties easily by taste, whereas I cannot except very occasionally. It is entirely possible that some folks have an auditory gift that has passed me by, but the clear evidence for that gift seems to vanish in the lab [see the Boston Audio Society experiments].

        In all my years in broadcasting, dealing with both sound and video, I have never met anyone with clearly demonstrable golden ears. My research group contributed to the MPEG audio standards, and especially AAC, in the late 90s, and some of that team were able to detect very small differences in AAC performance - but only after weeks and months of listening to the same test material. They knew what they were looking for and chose material that exposed it, tweaking the encoder parameters to deal with specific items of test material. They did most of their listening on headphones, to cut out extraneous distractions! Even I could hear what they could once it was pointed out, BUT when the same encoding settings were used on randomly selected material of different kinds, their success rate fell like a stone.
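
        For what it's worth, blind ABX testing (as used in the Boston Audio Society experiments mentioned above) quantifies exactly this: a listener who genuinely hears a difference should beat coin-flipping. A minimal sketch of the usual one-sided binomial significance sum - standard statistics, not anything from our MPEG work:

        ```python
        from math import comb

        def abx_p_value(correct, trials):
            """One-sided binomial p-value for an ABX test: the probability of
            getting at least `correct` answers right out of `trials` by pure
            guessing (chance of a correct guess per trial is 0.5)."""
            return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials

        print(abx_p_value(12, 16))  # 12/16 right: p ~ 0.038, modest evidence of a real difference
        print(abx_p_value(9, 16))   # 9/16 right:  p ~ 0.40, indistinguishable from guessing
        ```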

        Perhaps close familiarity with the material and the listening environment helps hone the senses, as might professional involvement in music performance, where there is a strong aural memory of the "real thing" [whatever that might be]. It has always seemed strange to me that world-class conductors and musicians in general do not get into a state about audio performance [don't bite the hand that feeds, perhaps]. If digital audio is so bad, why didn't the likes of Karajan refuse to have anything to do with it?
        Last edited by Gordon; 08-12-13, 16:22.

        Comment
