Compatibility between audio parameters

I have a few questions regarding compatibility between various audio parameters:

  1. How can I determine the sampling rate, bit depth, and number of channels of an already-recorded audio sample?
  2. Suppose I play this recorded audio sample with my application, whose hardware parameters differ from those of the sample I received. What would be the impact on sound quality in this case?
  3. What would be the impact of using a different sampling rate, bit depth, or number of channels at the capture application and the playback application?

If possible, kindly provide a link related to my queries.

  • Harman,

    1.  If you are talking about a file containing multiple samples, then the answer is: either the file contains only raw sample data, in which case you have to learn the sample rate, bit depth, and number of channels via some other mechanism (e.g., file name or convention), or the file contains that metadata in addition to the audio samples (e.g., 'wav' files).

    2.  You get garbage if the number of bytes per sample differs or the number of channels differs.  If only the sample rate differs, you get sped-up or slowed-down (and correspondingly pitch-shifted) audio.

    3.  'Sample rate conversion' is typically used to convert between audio streams having different sample rates.  Bit-depth differences can usually be handled by reformatting (e.g., 24-bit -> 16-bit typically just drops the least-significant byte).  Changing the number of channels is 'upmix' or 'downmix'; the obvious approach is zero-content insertion for added channels and simply dropping deleted channels (though you could combine channels via addition, but that typically results in reduced headroom and/or saturation/overflow).
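    For point (1), when the file does carry its own metadata (a 'wav' file), the parameters can be read straight from the header. A minimal sketch using Python's standard-library `wave` module (the file name `tone.wav` and the helper `wav_params` are just illustrative):

    ```python
    import wave

    def wav_params(path):
        """Read sample rate, bit depth, and channel count from a WAV header."""
        with wave.open(path, "rb") as f:
            return {
                "sample_rate": f.getframerate(),
                "bit_depth": f.getsampwidth() * 8,  # bytes per sample -> bits
                "channels": f.getnchannels(),
            }

    # Write a tiny file first so the example is self-contained:
    # 16-bit mono at 8 kHz, 100 samples of silence.
    with wave.open("tone.wav", "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)       # 2 bytes = 16-bit
        f.setframerate(8000)
        f.writeframes(b"\x00\x00" * 100)

    print(wav_params("tone.wav"))
    ```

    For a raw (headerless) file, there is nothing to parse; you would have to know these three values out of band, exactly as described above.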
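    The sample-rate mismatch in point (2) is easy to quantify: without resampling, playback speed and pitch both scale by the ratio of the two rates. A small illustrative calculation (function name is mine):

    ```python
    def playback_speed_factor(recorded_rate_hz, playback_rate_hz):
        """Speed/pitch multiplier when samples recorded at one rate are
        played back at another rate with no sample rate conversion."""
        return playback_rate_hz / recorded_rate_hz

    # 44.1 kHz material sent to hardware running at 48 kHz
    # plays roughly 8.8% too fast (and sharp by the same ratio).
    factor = playback_speed_factor(44100, 48000)

    # A 60-second clip would finish early:
    shortened = 60 / factor   # = 60 * 44100 / 48000 seconds
    ```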
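    The conversions in point (3) can be sketched in a few lines; this is the simple truncation/averaging approach described above (no dithering for the bit-depth step, and the downmix halves each channel first to avoid overflow), with all function names my own:

    ```python
    def s24_to_s16(sample):
        """Reduce a signed 24-bit sample to 16 bits by dropping
        the least-significant byte (truncation, no dither)."""
        return sample >> 8

    def downmix_stereo_to_mono(left, right):
        """Combine two channels by averaging; averaging rather than
        adding keeps the result inside the sample range."""
        return [(l + r) // 2 for l, r in zip(left, right)]

    def upmix_mono_to_stereo(mono):
        """Duplicate the single channel into both outputs.
        (Zero-content insertion would instead leave one side silent.)"""
        return [(s, s) for s in mono]
    ```

    Sample rate conversion itself is deliberately omitted here; doing it properly requires interpolation plus low-pass filtering, which is why a dedicated resampler library is normally used.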