I'd be interested to know how time stretching/compressing of digital music samples is done, for the most part.
Small changes in pitch could be made by stretching or compressing a section of music. This would "distort" the sound insofar as the duration of each section would change - but for small pitch changes, most of us would not notice a difference of one or two seconds over a period of a minute. Even listeners with good absolute pitch might not notice small, consistent differences in section length affecting the perceived pitches.
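To put rough numbers on that intuition, pitch and playback speed are linked exponentially: a pitch shift of c cents corresponds to a speed ratio of 2^(c/1200). A minimal sketch (the function names here are just illustrative) relating pitch change to the resulting change in duration:

```python
import math

def speed_ratio_for_cents(cents):
    """Playback-speed ratio that raises pitch by `cents` (100 cents = 1 semitone)."""
    return 2.0 ** (cents / 1200.0)

def seconds_saved(duration_s, cents):
    """How much shorter a passage becomes when sped up by `cents` of pitch."""
    return duration_s - duration_s / speed_ratio_for_cents(cents)
```

By this arithmetic, a full semitone (100 cents) shortens a one-minute passage by about 3.4 seconds, while a "one or two seconds over a minute" change corresponds to roughly 30-60 cents - consistent with the idea that only quite attentive listeners would notice.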
A simple method would be to place the sampled digital data in a buffer, then clock it out using a timing signal at a raised or lowered clock frequency. With so many circuits now completely digital, with very accurate fixed clocks, I'm not sure this method would be used much today - but it would have the merit of being fast in operation.
The process could be simulated in software, which would be a lot slower, but may well be the way some editing systems do this.
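The software equivalent of clocking a buffer out at a different rate is resampling by a fixed ratio. A minimal sketch, using linear interpolation (real editors would use better interpolation filters, but the principle is the same) - note that pitch and duration change together here, exactly as with the variable-clock method:

```python
def resample(samples, ratio):
    """Resample `samples` by `ratio`; ratio > 1 plays faster (higher pitch,
    shorter duration), ratio < 1 plays slower (lower pitch, longer duration)."""
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)                      # whole-sample index
        frac = pos - i                    # fractional position between samples
        # Linear interpolation between neighbouring samples
        out.append(samples[i] * (1.0 - frac) + samples[i + 1] * frac)
        pos += ratio                      # advance through the buffer at the new "clock rate"
    return out
```

For example, a ratio of 2.0 halves the length (and would raise the pitch an octave), while 0.5 doubles it.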
These approaches would only work for small, fixed amounts of stretch or compression. If there were any variation in the degree of stretch or compression, it would probably be very audible, and unpleasant.
I'm fairly sure that audio software is available to do this sort of thing - but most of us don't have access to it. Things get more complex with video, where decisions may have to be made as to whether to stretch/compress the video data, the audio tracks, or both. Audio and video editing tools can probably do most of these things, though they may not always make the changes in the way the creator intends. One artist may want to show a video sequence gradually "slowing down" - stretching the video while the backing track carries on at a similar pitch to before - while another may want the audio to take a dive. It becomes technically more complicated when such operations are applied to both video and audio components at the same time, though I suspect some film makers do this "all the time".