Vinyl to CD - again

  • Gordon
    Full Member
    • Nov 2010
    • 1425

    #46
    These pictures might help with interpolation:

    This first one shows how a simple interpolation fails abysmally: the red dotted lines show the correct values, but estimating each new sample as the average of the two adjacent samples [red full lines] is often well wide of the mark, as the very first attempt shows. The algorithm is too simple to see that the waveform does something far more complex than a simple average [the green dots on the black dotted lines] can follow.



    This second example shows what happens when the new samples [blue/green dots] are generated using a perfect interpolator but are not simply related and positioned in time relative to the old ones. We require that the new samples lie exactly on the original waveform, but because each new sample sits in a different position relative to the old ones, the coefficients have to change every time, and so each new sample of a set needs an individual calculation. A simple repeating algorithm does not work at all well. This is true regardless of whether the conversion is done in real time or not.



    This final illustration shows the general solution in the form of a digital filter producing a new [blue] sample value between the old [red] samples S3 and S4. It gives better results by using more reference samples. In the picture the green dots are the interpolation and the blue full line the ideal result - here they are shown to be close, as should be the case with a good conversion where the green dots lie exactly on the original waveform. The stranger the ratio between the sample rates, the more complex the filter, and ideally the coefficients have to change in a cycle lasting many sample periods [480 in the case of 48 kHz conversion to 44.1]. The filter shape is closely related to the impulse response of the channel itself.

    This is substantially the situation for 48 kHz conversion to 44.1. Provided the filter is linear it will not produce any unwanted results. However, in this case the filter also has to be aware of the possibility of aliasing when the new, lower sample rate tries to reconstruct high frequencies possibly present in the old 48 kHz samples [eg 23 kHz] that are above the Nyquist frequency of the new 44.1. If it took no notice of this alias potential it could render that 23 kHz as 21 kHz instead. No filter will be perfect, and the best ones will be complex and expensive to build. As Dave has said, a non-real-time conversion can afford this extra complexity because it can take its time, whereas a real-time one has to be fast.
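
    A small numerical sketch of the same point [my own illustration, not taken from the figures above]: a naive average of two neighbouring 48 kHz samples of a 15 kHz tone lands nowhere near the true mid-point value, while a windowed-sinc interpolator using more reference samples gets close. The tone frequency, tap count and window are arbitrary choices for the demonstration.

    import numpy as np

    fs = 48_000.0            # original sample rate
    f = 15_000.0             # test tone: well below Nyquist, but "fast" between samples
    n = np.arange(64)
    x = np.sin(2 * np.pi * f * n / fs)

    k = 32                   # estimate the value half-way between samples k and k+1
    true_value = np.sin(2 * np.pi * f * (k + 0.5) / fs)

    # 1) naive interpolation: average the two neighbours
    naive = 0.5 * (x[k] + x[k + 1])

    # 2) windowed-sinc interpolation using 16 surrounding reference samples
    j = np.arange(-7, 9)                         # neighbours k-7 .. k+8
    h = np.sinc(0.5 - j) * np.hamming(len(j))    # truncated ideal kernel at the half-sample point
    h /= h.sum()                                 # unity gain at DC
    sinc_est = np.dot(x[k + j], h)

    print(f"true {true_value:+.3f}  naive {naive:+.3f}  sinc {sinc_est:+.3f}")
    # The naive average falls well short of the true value; the windowed-sinc
    # estimate should land close to it.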

    Last edited by Gordon; 04-04-13, 15:29.

    Comment

    • umslopogaas
      Full Member
      • Nov 2010
      • 1977

      #47
      Phew! Fools walk in where angels fear to tread, this technical stuff is way over my head. But a thought occurred to me re. Gordon #33:

      "... - he's afraid they [his LPs] are wearing out from playing which is to some extent true."

      Agreed, but only to a very small extent. I'm 63 years old, I have thousands of LPs and I play them in strict sequence, A -Z by composer, most evenings when I'm not playing CDs. They are, and remain after my playing, in very good condition. Given that:

      1. Your friend is a vinyl fanatic
      2. He probably also owns thousands of LPs

      it is likely that he will not play any one disc more than a very few times each year for the rest of his life.

      The problem is therefore not that he will risk wearing out his discs so much as that his equipment may not be up to the job. There is plenty of high quality cartridge and vinyl equipment out there, which can be used to play vinyl, to all intents and purposes endlessly, with no significant damage to the discs. Rather than buy all this esoteric, incomprehensible and expensive electronic stuff, I suggest he takes his money to a good hi-fi dealer (I recommend Audio Destination in Tiverton, Devon), buys a top quality deck, arm and cartridge, and goes on playing the vinyl. While I wish him a long and happy life, if he does what I suggest he will wear out before the kit does.

      Comment

      • Dave2002
        Full Member
        • Dec 2010
        • 18078

        #48
        Originally posted by Gordon View Post
        As for the post processing software I can find very little so far [have not had that much time to do the research] that tells how the filtering to 44.1 is done. From 96 to 48 it is simple but from 48 to 44.1 it is not trivial because of the odd ratios. One of the advantages of oversampling is noise shaping because it spreads the quantising distortion over a wider bandwidth so I'd like to know how they would do noise shaping from 24 to 16 bits which is not disclosed either.
        I think if the 48 kHz sampled data is upsampled by a factor of 147, which takes the frequency up to 7.056 MHz, then that data can then be downsampled by a factor of 160. As noted, it may be important to filter out any frequencies above 22 kHz in the audio - which can be done even in the 7.056 MHz data. Upsampling from 44.1 kHz to 48 kHz is the reverse process, but there should be no need to perform the anti-aliasing step. I believe that good results would require an additional smoothing filter in the 7.056 MHz domain. There may be short cuts which reduce the processing, and make real time conversion more practicable.

        From 96 kHz to 44.1 kHz - upsample 147, downsample 320.
        From 192 kHz to 44.1 kHz - upsample 147, downsample 640. etc.
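
        As a rough sketch of the same idea [my example, not anything from the thread - I'm assuming scipy as the tool], a polyphase resampler does the "up by 147, filter, down by 160" job in one step without ever materialising the 7.056 MHz stream:

        import numpy as np
        from math import gcd
        from scipy.signal import resample_poly

        fs_in, fs_out = 48_000, 44_100
        g = gcd(fs_in, fs_out)                   # 300
        up, down = fs_out // g, fs_in // g       # 147 and 160

        t = np.arange(fs_in) / fs_in             # one second of a 1 kHz test tone
        x = np.sin(2 * np.pi * 1_000 * t)

        y = resample_poly(x, up, down)           # anti-alias filtering is built in
        print(up, down, len(x), len(y))          # 147 160 48000 44100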

        Comment

        • MrGongGong
          Full Member
          • Nov 2010
          • 18357

          #49
          Originally posted by umslopogaas View Post
          I have thousands of LPs and I play them in strict sequence, A -Z by composer, most evenings when I'm not playing CDs.
          Blimey
          poor Xenakis will hardly get a look in

          Comment

          • David-G
            Full Member
            • Mar 2012
            • 1216

            #50
            Originally posted by Dave2002 View Post
            I think if the 48 kHz sampled data is upsampled by a factor of 147, which takes the frequency up to 7.056 MHz, then that data can then be downsampled by a factor of 160. As noted, it may be important to filter out any frequencies above 22 kHz in the audio - which can be done even in the 7.056 MHz data. Upsampling from 44.1 kHz to 48 kHz is the reverse process, but there should be no need to perform the anti-aliasing step. I believe that good results would require an additional smoothing filter in the 7.056 MHz domain. There may be short cuts which reduce the processing, and make real time conversion more practicable.

            From 96 kHz to 44.1 kHz - upsample 147, downsample 320.
            From 192 kHz to 44.1 kHz - upsample 147, downsample 640. etc.
            I wonder if you could explain what exactly upsampling and downsampling mean? Is upsampling a sort of curve-fit, and downsampling a sort of interpolation? Does upsampling by a factor of 147 mean sampling at a rate 96x147 = 14112 kHz; this would be rather fast, wouldn't it? A detailed explanation would be greatly appreciated!

            Comment

            • Dave2002
              Full Member
              • Dec 2010
              • 18078

              #51
              Originally posted by David-G View Post
              I wonder if you could explain what exactly upsampling and downsampling mean? Is upsampling a sort of curve-fit, and downsampling a sort of interpolation? Does upsampling by a factor of 147 mean sampling at a rate 96x147 = 14112 kHz; this would be rather fast, wouldn't it? A detailed explanation would be greatly appreciated!
              Pretty much. There are perhaps different flavours of upsampling, but the simplest is just to replicate each sample identically.

              Suppose we have data 10916, 22837, 20916, ... in a data stream.
              If we upsample this by a factor of 11 we get

              10916,10916,10916,10916,10916,10916,10916,10916,10916,10916,10916,
              22837,22837,22837,22837,22837,22837,22837,22837,22837,22837,22837,
              20916,20916,20916,20916,20916,20916,20916,20916,20916,20916,20916, ...

              - 11 copies of each original sample -

              and if you are doing this in real time, then indeed the basic sample frequency is (here) increased by a factor of 11.

              Now, suppose you wanted to downsample by a factor of 12, you would simply count along in blocks of 12, rather than 11, and obviously the data would have to be clocked out at a different rate if done in real time. This is a bit crude, so it's likely that some form of interpolation filter would be applied to the upsampled data stream. This can be done in hardware by having the data clocked in to a buffer at one rate, then clocked out at the appropriate rate for downsampling. I think some people use the term upsampling to refer to the whole process, not just the sample replication, but I could be wrong. Similarly downsampling could refer to not only reducing the sample rate, but also applying anti-aliasing filters. Again, I could be wrong in the way the terminology is used most frequently.
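
              A deliberately crude sketch of exactly that scheme [my own illustration; a real converter would put an interpolation / anti-alias filter between the two steps]: replicate each sample 11 times, then keep every 12th value. A simple ramp is used instead of the audio values above so the effect is easy to see.

              import numpy as np

              x = np.arange(24)                 # stand-in data stream: 0, 1, 2, ...
              up, down = 11, 12

              upsampled = np.repeat(x, up)      # 11 identical copies of each sample
              downsampled = upsampled[::down]   # count along in blocks of 12

              print(len(x), len(upsampled), len(downsampled))   # 24 264 22
              print(downsampled)
              # 0..10 then 12..22: the rate has dropped by 11/12, but original
              # sample 11 has simply been skipped - hence the need for a proper
              # interpolation filter between the two steps.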

              Upsampling by factors of 2 is usually pretty benign. Downsampling by factors of 2 still requires anti-aliasing, though is otherwise easy.

              Gordon's example is I think slightly more complex, as he might also be considering jitter in the sampling, which changes things further. For regular sampling, it is possible to define filters in terms of the z-transform, in which case it is fairly easy to design rather accurate filters which will give characteristics in both the amplitude and phase responses which are very close to the ideal, using either FIR (finite impulse response) filters, or IIR (infinite impulse response) filters. The book by Hamming on filters is helpful - http://books.google.co.uk/books?id=G...ontcover&hl=en
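
              For what it's worth, here is a hedged sketch of the FIR route mentioned above - a linear-phase low-pass that could serve as the anti-alias filter before a drop to 44.1 kHz. The use of scipy's firwin, the 255-tap length and the 20 kHz cutoff are my assumptions for illustration, not anything specified in this thread or in Hamming's book.

              import numpy as np
              from scipy.signal import firwin, freqz

              fs = 48_000
              taps = firwin(255, cutoff=20_000, fs=fs)   # windowed-sinc FIR design

              w, h = freqz(taps, worN=2048, fs=fs)
              for f_probe in (1_000, 23_000):
                  gain_db = 20 * np.log10(abs(h[np.argmin(abs(w - f_probe))]))
                  print(f"gain at {f_probe/1000:4.0f} kHz: {gain_db:7.2f} dB")
              # ~0 dB at 1 kHz, heavily attenuated at 23 kHz; the symmetric taps
              # give exactly linear phase, so the passband waveform is preserved.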

              Both upsampling and downsampling can introduce small errors - artefacts - but downsampling can also introduce aliasing for any frequencies in the source data which are above the Nyquist frequency, as mentioned earlier. This can be avoided either by using an analogue filter on the audio before digitising, or by filtering the digitised samples - either at the original sampling frequency or at the upsampled frequency. There are likely to be advantages in doing the anti-aliasing filtering (and any other filtering) on the upsampled data, as the filter can move unwanted artefacts out of audible range.

              Some forms of upsampling change the data rate, but use a different modulation method, such as delta modulation, sigma delta etc. rather than PCM. That's another story, but DSD encoding does that kind of thing.
              Last edited by Dave2002; 05-04-13, 11:25.

              Comment

              • Gordon
                Full Member
                • Nov 2010
                • 1425

                #52
                Originally posted by umslopogaas View Post
                .....Agreed, but only to a very small extent. I'm 63 years old, I have thousands of LPs and I play them in strict sequence, A -Z by composer, most evenings when I'm not playing CDs. They are, and remain after my playing, in very good condition. Given that:

                1. Your friend is a vinyl fanatic
                2. He probably also owns thousands of LPs
                You are right of course and that is what he also thinks - now - but also there is the prospect of the iPOD or the like lurking in his mind and perhaps a music server to hold everything without the faff of getting the LPs out - but I'd have thought that that is part of the vinyl buzz??. He is the same age as you and has some v good equipment already. I think he's settled down a bit now after the paranoia and realised that the digital route has its complications. It's what you get if you read too much!!

                Comment

                • Dave2002
                  Full Member
                  • Dec 2010
                  • 18078

                  #53
                  Originally posted by Gordon View Post
                  You are right of course and that is what he also thinks - now - but also there is the prospect of the iPOD or the like lurking in his mind and perhaps a music server to hold everything without the faff of getting the LPs out - but I'd have thought that that is part of the vinyl buzz??. He is the same age as you and has some v good equipment already. I think he's settled down a bit now after the paranoia and realised that the digital route has its complications. It's what you get if you read too much!!
                  I fear I am slightly older, and probably about to retire rather soon. I may try digitising LPs to fill my time later this year, or I could even open up a business doing that for others, or at least a service for friends. There could be merits in doing this for some recordings, but otherwise the suggestion from umslopogaas to just play the LPs is not a bad one. One benefit of digitising is that it could at least preserve the sound even if the LPs deteriorate or are otherwise damaged. This would be worthwhile for rare or now-impossible-to-obtain recordings.

                  The convenience of being able to play from a server on demand is considerable, but the faff of digitising is probably going to be rather daunting. Even ripping CDs to hard drive, which can probably be done at 4-8 minutes per CD, gets to be too much of a bother, and I've reverted to playing the originals in many cases, and using Spotify or Napster for on-demand listening. I may eventually get most of my material digitised, but one has to consider how much time it is worth spending on these things. If money is no object (not normally the case!) then getting the best possible available digital or digitised version is probably worthwhile - e.g. Japanese CDs and SACDs and Blu-ray recordings, and master downloads. However, most of us can't afford that.

                  If each LP is only going to be played once or twice from now on ... then it may be too time-consuming to bother. However, if each time an LP is played it is also digitised as a by-product, the extra effort might not be so large. Those who have tried this, such as HS and Clive H, would be able to give better advice - and indeed have already done so.

                  Of course some people like challenges, and won't mind trying. To give a motoring analogy, it depends whether you want to go places, to drive, or spend your time tinkering with the engine.
                  Some people only want to listen to the music.

                  Comment

                  • Gordon
                    Full Member
                    • Nov 2010
                    • 1425

                    #54
                    Originally posted by David-G View Post
                    I wonder if you could explain what exactly upsampling and downsampling mean? Is upsampling a sort of curve-fit, and downsampling a sort of interpolation? Does upsampling by a factor of 147 mean sampling at a rate 96x147 = 14112 kHz; this would be rather fast, wouldn't it? A detailed explanation would be greatly appreciated!
                    We have been a bit lax in our use of terms - yes, a curve-fit idea is behind the process. Dave's explained well enough how this business works. The terms up-sampling and down-sampling, as well as interpolation, mean much the same thing - ie estimating information you don't have but which can be calculated from what you do have.

                    In the case of audio, Up Sampling generates more new audio samples from the waveform you already have and so, in a sense, improves the precision with which the audio is described BUT IT CANNOT MAKE INFORMATION YOU NEVER HAD! IOW up sampling to say 96 kHz from 44.1 does NOT create audio beyond the original input bandwidth. The benefit of Up Sampling comes in easing the processing of the audio and the design of analogue anti-alias filters, particularly at the DAC. However those filters are implemented, they have to be made good enough, and this can be done better in the digital domain.

                    Another term in use is Over Sampling, where you deliberately sample at a rate far higher than you really need so that, going the other way - ie from a 96 kHz original, where you may have captured audio components up to 48 kHz, down to a 44.1 final - you have more control over the processing. However, going this way you MUST lose anything in that original audio above 22 kHz. Vinylites think that their LPs carry content up there, which is one reason for the paranoia and the interest in capturing a dub at 96 kHz!! There are other problems in the analogue domain, such as cartridge loading, especially for moving magnet types, that will affect the sound more. Part of the complexity of down sampling filters is making sure that you lose the least information.

                    That complexity is worse when the ratio of the two sampling rates is strange. Sony's Dr Doi chose 44.1 kHz for CD for good reasons back in 1980 - bizarrely it's derived from the value of a colour TV frequency now totally obsolete - but now of course it's a bit of a curse. If he had followed the telecom world [and later AES] and broadcasters [eg 32 kHz for the NICAM system that fed BBC FM transmitters in the 70s] and had chosen a number that was a multiple of 8 kHz all would be a lot simpler.

                    In a 10 millisecond period there are exactly 480 samples at 48 kHz. In the same period at 44.1 there are only 441, so any rate change has to substitute one set for the other in a repeating pattern. Because 441 = 3 x 147 and 480 = 3 x 160, this resolves to swapping sets of 147 and 160 samples as Dave describes. One would not literally up-sample by one factor and down-sample by the other in two successive steps, which would need a lot of memory; it would all get done in the filter itself using less memory.
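
                    The arithmetic can be checked in a couple of lines [my sketch; the factors are just the two rates divided by their greatest common divisor]:

                    from math import gcd

                    f_old, f_new = 48_000, 44_100
                    g = gcd(f_old, f_new)                  # 300
                    print(f_new // g, f_old // g)          # 147 160 -> up by 147, down by 160
                    print(0.010 * f_old, 0.010 * f_new)    # 480.0 441.0 samples in 10 ms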

                    One could up sample from the 48 kHz set you have to start with to a rate of 147 x 48 kHz, which is also 160 x 44.1 kHz = 7.056 MHz. You have now massively increased the density of samples such that SOME OF THEM [only 147, see below] are actually exactly where they would have been - but not necessarily what value they would have been - if you had sampled at 44.1 in the first place. These are the ones you want, so you ignore the rest - but which are they?? And where did you get them from anyway? All you have is a set of samples that Mr Nyquist has promised will be sufficient; you don't have the original waveform to re-sample any more. To up sample you have to calculate them - how?

                    In a period of about 3.3 milliseconds [1/3 of 10] you've now made 147 x 160 = 23,520 samples which you may have to store pro tem. 160 of them are original samples set on the 48 kHz grid. If the rate ratio was simpler this large number would not be necessary. You've now to work out which 147 of them fall on the 44.1 grid to fit that 3.3 milliseconds. Once you know where they are [you should be able to work it out beforehand, once you know the ratios] a more efficient filtering process can be dedicated to doing the job on the fly by selecting the right ones as they come along. We have ignored any anti alias treatment here.
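
                    Just to make that bookkeeping concrete [my illustration of the description above]: on the notional 7.056 MHz grid, one repeating block holds 147 x 160 = 23,520 positions; the wanted 44.1 kHz samples sit at every 160th position and the original 48 kHz samples at every 147th.

                    block = 147 * 160                     # 23,520 positions per block

                    on_44k1 = [p for p in range(block) if p % 160 == 0]  # samples we want
                    on_48k  = [p for p in range(block) if p % 147 == 0]  # samples we started with

                    print(len(on_44k1), len(on_48k))      # 147 160
                    print(on_44k1[:5])                    # [0, 160, 320, 480, 640]
                    # Only position 0 lies on both grids, so 146 of the 147 wanted
                    # samples in each block have to be calculated by the filter.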

                    PS I suppose it should be obvious, but having a digital version of some audio in say 96/24 is no guarantee that it is somehow magically better than the same audio captured at 44.1/16. If that audio was already bandlimited and noisy to boot then either format will do, and the additional samples and bit depth carry little or no useful information. If the source is an analogue master tape, for example, what bandwidth is there available anyway?? No practical analogue tape machine I've ever met was able to capture full amplitude high frequencies; even at 30 ips there are limits, and many studios used 15 as the rule. Wide bandwidths are available only with low level audio.
                    Last edited by Gordon; 06-04-13, 08:21.

                    Comment

                    • Dave2002
                      Full Member
                      • Dec 2010
                      • 18078

                      #55
                      I found this article about "ripping" LPs - which might be of interest - http://www.computeraudiophile.com/co...ing-macintosh/

                      There are some sample audio files accessible - though you may have to look quite hard to find them, and compare them.

                      Comment

                      • Gordon
                        Full Member
                        • Nov 2010
                        • 1425

                        #56
                        Originally posted by Dave2002 View Post
                        I found this article about "ripping" LPs - which might be of interest - http://www.computeraudiophile.com/co...ing-macintosh/

                        There are some sample audio files accessible - though you may have to look quite hard to find them, and compare them.
                        Thanks Dave, I had not seen that in my searches. The process they describe is sensible and the hardware used is sometimes well described in terms of detailed specs [except the HiLo device, which is not]. This one is the best so far:



                        However, there are holes in this. The claimed jitter is 42 pSecs RMS for all clock rates in both ADC and DAC and for up to 40 kHz input [needing 96 kHz sampling or higher, but then they say they use 128 times over-sampling, which is confusing]. This is just about good enough for full 15/16 bit conversion accuracy at 48 kHz and audio of up to 24 kHz bandwidth, but not the 24 bits claimed.

                        To give full spec at 24 bits the jitter needs to be 256 times better [48 dB lower] OR the input audio level at high frequencies has to be 48 dB lower than FS. This spec relies on there being very low level high frequency content in the audio – but they then go on to advise doing the RIAA in software after capture – the unequalised audio off the disc is rich in HF, thus challenging the jitter spec, quite the opposite of what you should do if you want to achieve 16 bit or better precision!!! Their justification for this escapes me at present.

                        IF they band-limit the audio to a lot, lot less than half the sampling frequency [ie Nyquist] - ie filter the audio to 24 kHz but use 128 times 48 kHz over-sampling [6.144 MHz] - then this 42 psecs jitter will be just about tolerable for full spec 22/23 bits. They aren’t clear about what the 128 is a multiple of - eg when set to 192 kHz do they mean 128 times 192 kHz, and what then is the audio bandwidth, 96 kHz or something else? It matters - a lot, IF you aspire to audiophilia.
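
                        A rough cross-check of those numbers, using the standard rule of thumb for a full-scale sine at frequency f sampled with rms jitter tj - SNR(dB) ~ -20*log10(2*pi*f*tj) - and roughly 6 dB per bit. These are my figures, not the manufacturer's:

                        import math

                        def jitter_limited_bits(f_hz, tj_s):
                            # jitter-limited SNR for a full-scale sine, then bits at ~6.02 dB/bit
                            snr_db = -20 * math.log10(2 * math.pi * f_hz * tj_s)
                            return snr_db, snr_db / 6.02

                        for f in (1_000, 10_000, 24_000, 40_000):
                            snr, bits = jitter_limited_bits(f, 42e-12)
                            print(f"{f/1000:5.1f} kHz  {snr:6.1f} dB  ~{bits:4.1f} bits")
                        # 42 ps rms supports roughly 16-17 bits for a full-scale 24-40 kHz
                        # component; the full 24-bit claim would need jitter a couple of
                        # hundred times lower, or a correspondingly lower HF level.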

                        They describe the careful set up details for the analogue vinyl playback including the cartridge loading [which is the correct thing to do because it affects the analogue frequency response esp for moving magnets] BUT then after all this talk of “analogue precision” [including doing the RIAA in software – so now where is that vinyl spirit gone eh?!] they give the user a vast range of “EQ” which could undo all that effort at the click of a mouse!! Why bother?

                        Once in the digital domain all that is needed is enough arithmetic range to deal with computation results. IF you don’t get the audio fully and precisely captured in the ADC first you are wasting your time trying to get it right afterwards. So, as we have said before, because you have captured in nominal 96/24 doesn’t mean that your LSBs carry anything at all useful. Clock jitter at the ADC is like adding noise to the input – just like dither - and it needs to be minimised to assure that the required bit precision is achieved. They do say in the article [I’m still working through the details] that the software suggested offers a 48 bit precision for arithmetic [floating point?] – given all that EQ and the 40 dB range of the RIAA they’ll need it!!!

                        This thread has caused a lot of thought on my part and so it has been highly educational. Many thanks therefore to all who contributed directly and also indirectly by raising aspects that I had not thought about in depth before. One lesson – specs don’t tell the whole truth!!!

                        Comment

                        • Dave2002
                          Full Member
                          • Dec 2010
                          • 18078

                          #57
                          I actually only stumbled across that article as I did a search to see if it is possible/feasible to do RIAA equalisation in software. At first it didn't look feasible, then someone wrote that with a 16 bit ADC the dynamic range would be lost, then someone else suggested a 24 bit ADC but doubted that it could work well, and finally at least one person has done it. I think the jury's still out as to whether it works well though. I'm guessing that cheap software packages won't do it anyway, though in the fullness of time ...

                          Comment

                          • Gordon
                            Full Member
                            • Nov 2010
                            • 1425

                            #58
                            Just so!! the RIAA requires that very low frequencies are cut 20 dB down [100 times less] on ca 1kHz and high frequencies 20dB up on that making a range of 40dB. The reverse happens at the pre amp. This means that the LF reaching the playback RIAA is -40dB [10,000 times] smaller than the HF. So, for a "flat" initial audio signal [ie pre RIAA at the cutter, which in practice is actually unlikely] if the full amplitude range of an ADC is filled by the HF absolute max amplitude then the max LF is 10,000 times smaller!! 14 bits gives you about a 14,000:1 ratio between just codable [only 2 bits at LF] and maximum so that RIAA is only just about do-able and then with a perfectly implemented ADC, ie one that really does do 14 bits, and no headroom to deal with actual variable cutting levels on disc. Not much dynamic range then! HMM!! One is banking on the actual real world audio having not much HF and also really good level control at the ADC. OTBE 24 bits [if you can get them] give an extra 10 bits to cope with DR - approx 1,000:1 or 60dB, not much. All in all, best done in analogue?!?

                            BRAINSTORM!! somehow I got my sums wrong here - don't know how I made 40dB into 10,000, old age I suppose. This changes the results a bit!! Now we only have 7 bits difference between the top treble [>10 kHz] and the deep bass [20Hz] across the RIAA range. So out of say 16 [real] bits available we have 9 to spend on DR support giving 512 times or 54dB somewhat better than before!! With 24 bits the DR becomes 8 bits or 48dB better still. That's more like it - but I am still not convinced.
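
                            The corrected sums check out if you convert the RIAA span to bits at roughly 6 dB per bit [my arithmetic, just as a sanity check]:

                            riaa_span_db = 40.0
                            riaa_span_bits = riaa_span_db / 6.02      # ~6.6, call it 7 bits

                            for adc_bits in (16, 24):
                                left_bits = adc_bits - riaa_span_bits
                                print(f"{adc_bits}-bit ADC: ~{left_bits:.1f} bits"
                                      f" (~{left_bits * 6.02:.0f} dB) left for dynamic range")
                            # 16 bits -> about 9 bits / ~56 dB; 24 bits -> about 17 bits,
                            # ie roughly 48 dB more, matching the corrected figures above.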
                            Last edited by Gordon; 09-04-13, 21:19.

                            Comment

                            • Stunsworth
                              Full Member
                              • Nov 2010
                              • 1553

                              #59
                              Originally posted by Dave2002 View Post
                              I actually only stumbled across that article as I did a search to see if it is possible/feasible to do RIAA equalisation in software
                              It's possible...

                              Steve

                              Comment

                              • Dave2002
                                Full Member
                                • Dec 2010
                                • 18078

                                #60
                                Originally posted by Stunsworth View Post
                                Apparently so, yet the concerns about dynamic range would seem to be real, as rather fully described by Gordon. The proof of the pudding .... I will try listening to some of the examples later during the week.

                                Vinyl digitising could be even more complex if the changes in the curves for different scenarios are taken into consideration. There were, I believe, slightly different curves for 78s, and some early LPs have different RIAA curves. Also, was it not the case that the cutting engineers adjusted the curves a bit across the disc, so as to minimise end of side distortion? ***

                                Was the phase response well defined in the RIAA curves? Digital signal processing can be very accurate with respect both to phase and to amplitude, though a theoretical accuracy can be blown out of the water if the issues raised about dynamic range dominate, which seems likely.

                                It would seem best to do the RIAA in analogue, perhaps with a choice of different RIAA curves, and then do any minor tweaking afterwards if wanted; at least the dynamic range limitations would be much better controlled and there would be more room to play with in the digital domain.

                                *** Maybe it was only the levels which were changed, not the frequency response.
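
                                For completeness, here is a hedged sketch of what "RIAA in software" might look like in principle: the standard playback time constants [3180 us, 318 us, 75 us] mapped to a digital IIR filter with a bilinear transform. This is only my illustration of the idea - it is not the method used in the linked article, it does nothing about the dynamic-range concerns above, and a careful implementation would also pre-warp or otherwise fine-tune the high-frequency end.

                                import numpy as np
                                from scipy.signal import bilinear, freqz, lfilter

                                fs = 96_000                          # assumed capture rate
                                t1, t2, t3 = 3180e-6, 318e-6, 75e-6  # RIAA time constants

                                # analogue playback response: (1 + s*t2) / ((1 + s*t1)(1 + s*t3))
                                b_s = [t2, 1.0]
                                a_s = np.polymul([t1, 1.0], [t3, 1.0])
                                b_z, a_z = bilinear(b_s, a_s, fs)    # digital IIR coefficients

                                # normalise so 1 kHz sits at 0 dB, as is conventional
                                _, h1k = freqz(b_z, a_z, worN=[1000.0], fs=fs)
                                b_z = b_z / abs(h1k[0])

                                x = np.random.randn(fs)              # stand-in "flat" capture
                                y = lfilter(b_z, a_z, x)             # de-emphasised output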
                                Last edited by Dave2002; 09-04-13, 04:32.

                                Comment
