Making virtual instruments - Kalimba - using Reaper

  • Dave2002
    Full Member
    • Dec 2010
    • 18034

    Making virtual instruments - Kalimba - using Reaper

    This video shows how to make a virtual instrument from a simple instrument - a kalimba (also known under many other names, such as mbira). Typically these seem to have about 17 notes. I wonder if I should buy one - they're not very expensive.





    https://youtu.be/B2iEhJcrHxI Making the VI

    The video explanation follows a few basic principles:

    1. Record the samples - here 5 at a time - and for some reason 2 microphones were used - so I think actually 10 samples - or were those used for stereo? Also record the "silence" to pick up noise.

    2. Remove the noise in the recorded tracks. Here the tool used was RX Elements. I suspect the noise reduction in Audacity would do almost as good a job - perhaps even as good or better.

    3. Import the tracks into Reaper, and then find the transient starts for each sample. This is a technique which can be used in other DAWs too - some of which also do this automatically.
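    A minimal sketch of what automatic transient detection might look like - this is just energy-based onset detection on a mono NumPy buffer, my own illustration, not Reaper's actual algorithm; the function name and thresholds are made up:

```python
import numpy as np

def find_transients(audio, sr, frame=512, threshold_db=-30.0):
    """Return sample indices where the short-term level first rises
    above threshold_db after a quiet stretch (hypothetical helper)."""
    n_frames = len(audio) // frame
    onsets = []
    quiet = True
    for i in range(n_frames):
        chunk = audio[i * frame:(i + 1) * frame]
        rms = np.sqrt(np.mean(chunk ** 2)) + 1e-12
        level_db = 20 * np.log10(rms)
        if quiet and level_db > threshold_db:
            onsets.append(i * frame)            # frame where the note starts
            quiet = False
        elif level_db < threshold_db - 10:      # hysteresis before re-arming
            quiet = True
    return onsets
```

    A real DAW does something considerably more sophisticated (spectral flux, look-ahead, etc.), but the principle - find where the level jumps out of the noise floor - is the same.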

    4. Then split the tracks at the transient starts, and back off each region slightly - as mentioned, every region should be backed off by exactly the same small amount - so that each separate region includes a very short period of time before the transient.
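    The splitting-with-pre-roll step could be sketched like this - a hypothetical helper of my own, assuming onsets are sample indices from a transient detector and the same pre-roll is applied to every region:

```python
def split_with_preroll(audio, onsets, sr, preroll_ms=5.0):
    """Split audio at transient starts, backing each region's start off
    by the same small pre-roll so the attack isn't clipped."""
    preroll = int(sr * preroll_ms / 1000)
    regions = []
    for i, onset in enumerate(onsets):
        start = max(0, onset - preroll)  # identical offset for every region
        end = onsets[i + 1] - preroll if i + 1 < len(onsets) else len(audio)
        regions.append(audio[start:end])
    return regions
```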

    5. The next phase is to import all the clips into a sampler - which I assume here is Kontakt. The tracks are all given systematic names, ready for auto mapping, which associates each clip with a specific note on the keyboard. For this particular instrument, the clips were only recorded at a single generic dynamic level, as a decision was made that the notes on the instrument don't change much at different dynamics - which is not the case for some other instruments.

    6. The clip names are all associated with pitches and given appropriate names, and then an automap function is used to map them.
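    The systematic naming that automapping relies on might look something like this - the "kalimba_C4.wav" scheme is my own invented example, not necessarily what Kontakt's automap expects:

```python
# Hypothetical naming scheme: "<instrument>_<note><octave>.wav"
NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def clip_name(midi_note, instrument="kalimba"):
    """Build a systematic file name an automap function could parse.
    Assumes MIDI 60 = C4 (the middle-C convention varies between samplers)."""
    octave = midi_note // 12 - 1
    name = NOTE_NAMES[midi_note % 12]
    return f"{instrument}_{name}{octave}.wav"
```

    The point is just that the pitch is recoverable from the file name, so the sampler can place each clip on the right key without manual dragging.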

    7. There were then some additional tweaks, such as refining the tuning for each sample, and also the volume levels.

    The overall process can be followed in other DAWs, and using other sampling synthesisers, such as Alchemy or EXS24 in Logic.

    More complex instruments may respond differently depending on dynamics or articulation. Those can be modelled by associating different clip samples with MIDI velocity, and by using the automation features in a DAW. That would then give a slightly (possibly significantly) different sound for each note, depending on how hard each key on the keyboard is struck.
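    The velocity-to-layer association is easy to sketch - this is my own illustration of the idea, where each dynamic layer covers a band of MIDI velocities (the bounds are arbitrary):

```python
def pick_layer(velocity, layers):
    """Map a MIDI velocity (0-127) to one of several dynamic layers.
    `layers` is an ordered list of (upper_velocity_bound, sample) pairs."""
    for bound, sample in layers:
        if velocity <= bound:
            return sample
    return layers[-1][1]
```

    So with layers like [(42, pp_clip), (84, mf_clip), (127, ff_clip)], a gentle key press plays the quiet recording and a hard strike plays the loud one.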

    Other refinements are to shape the decay at the end of each sample - which would be particularly appropriate for the kalimba instrument as it isn't an instrument which can produce a sustained sound. Instruments (e.g. violins) which can produce a sustained note, and also have other articulations - staccato, spiccato and plucked - are likely to require a more complex approach - either with recording different samples, or simulating the articulation effects by altering the attack and decay on each sample in the sampler.
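    Shaping the decay at the end of each sample is essentially applying a release envelope; a minimal sketch, assuming a mono NumPy clip and a simple linear fade (real samplers offer curved envelopes):

```python
import numpy as np

def shape_decay(clip, sr, release_ms=200.0):
    """Apply a linear fade-out over the last release_ms of the clip,
    so each region ends cleanly instead of being cut off abruptly."""
    n = min(len(clip), int(sr * release_ms / 1000))
    out = clip.astype(float).copy()
    out[-n:] *= np.linspace(1.0, 0.0, n)
    return out
```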

    Of course the VI doesn't have to be realistic. In the video initially there was an extended range on the first implementations of the VI, which went way outside the range of the kalimba. If the sounds are interesting enough, then why not keep them?
  • MrGongGong
    Full Member
    • Nov 2010
    • 18357

    #2
    I've not watched this
    BUT

    2 mics = Stereo (you have been spending too much time with Logic matey... NOT 2 panned MONO tracks FFS )

    Record it well and there is NO "noise" to get rid of
    but if you do then Audacity isn't as good as other options

    BUT

    If the sounds are interesting enough, then why not keep them?
    Seems to be the most important bit IMV

    Comment

    • Dave2002
      Full Member
      • Dec 2010
      • 18034

      #3
      Originally posted by MrGongGong View Post
      I've not watched this
      BUT

      2 mics = Stereo (you have been spending too much time with Logic matey... NOT 2 panned MONO tracks ... )
      OK - but whereas I've always thought stereo was/is a good idea, I really wonder if it makes a difference for this kind of work. If the guy in the video is trying to capture the sound of the instrument, why bother with picking up room sounds as well - particularly if he's going to filter most of it out anyway?

      I thought he might have been using the additional mic just to double the number of samples for the process - guess not. He really was trying to get stereo clips I suppose.

      He mentions "round robin", which I take to mean that since he's recorded 5 clips for each note, he expects them to be played in rotation during playback - maybe that's what Kontakt does. I did wonder whether it perhaps uses some peculiar merging algorithm on the samples instead. Perhaps most sampling synthesisers have similar replay strategies - I don't know.
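      If round robin really does just mean "play the takes in rotation", it could be as simple as this - my guess at the behaviour, not Kontakt's actual implementation:

```python
from itertools import cycle

class RoundRobinNote:
    """One note with several recorded takes, played in rotation, so
    repeated notes don't all sound identical (hypothetical sketch)."""
    def __init__(self, samples):
        self._rotation = cycle(samples)

    def trigger(self):
        return next(self._rotation)
```

      The idea is that repeatedly triggering the same key cycles through the takes, which avoids the mechanical "machine-gun" effect of hearing the exact same sample twice in a row.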

      Record it well and there is NO "noise" to get rid of
      but if you do then Audacity isn't as good as other options
      You may be right, though I've had acceptable results with Audacity before, including removing unwanted background noise from commercially released recordings - noise which stuck out like a sore thumb!

      BUT
      Seems to be the most important bit IMV
      But that wasn't the option chosen. Why one would want an accurate rendition of a kalimba I'm not sure - but then again, why not? One could just buy a real one and plug in the audio.

      I rather liked the extended low end of the sampled "instrument" - who cares whether that's not what the "real" instrument does?

      Comment
