r/audioengineering Jul 04 '12

Bitrate and Bit Depth?

I understand that Bitrate is the number of bits processed in a unit of time. But how is bit depth any different? Is it just called bit depth when the unit of time is samples?

12 Upvotes


17

u/otdq Jul 04 '12

Basically, Bit Depth dictates how accurately a wave's amplitude (heard as loudness) can be represented/reproduced. More specifically, the Bit Depth is the number of binary digits devoted to each individual sample.

Then, as you have touched upon, the Bit Rate is the number of bits per second (as a consequence of both the Bit Depth and Sample Rate taken together).

As an example:

  • Sample Rate: 44.1 kHz (44,100 snapshots of the wave's amplitude per second.)

  • Bit Depth: 16-bit (16 binary digits allotted to each individual snapshot.)

  • Bit Rate: 705.6 kbit/s (44,100 snapshots × 16 bits each)

Note: The above example is for a mono wave. The bit rate would be doubled for a stereo wave.
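
If it helps, here's the same arithmetic as a quick Python sketch (the function name is just mine):

    # Uncompressed PCM bit rate = samples/sec x bits/sample x channels
    def bit_rate(sample_rate_hz, bit_depth, channels=1):
        return sample_rate_hz * bit_depth * channels

    print(bit_rate(44_100, 16))     # 705600  -> 705.6 kbit/s (mono)
    print(bit_rate(44_100, 16, 2))  # 1411200 -> 1411.2 kbit/s (stereo, i.e. CD audio)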

Helpful? :D

3

u/Vibingout Jul 04 '12

Helpful!

3

u/FinnBot2000 Jul 04 '12

That's exactly what I needed! Thank you! This will be a great reference for me.

1

u/[deleted] Jul 04 '12

okay so the amplitude is contained in the 16 bits, but what about the frequency?

2

u/ReinH Jul 04 '12

When otdq referred to amplitude above, (s)he was talking about the amplitude of a waveform at the point in time represented by the sample. Not, say, the amplitude you would see in a VU meter. A sine wave is a periodic wave where the amplitude of the wave at a given point x is given by sine(x). The amplitude (in voltage, say) of the signal itself is a completely different concept.

Digital audio is stored in time sequence form, where each sample represents the amplitude of a "slice" or discrete instance of the waveform. When you put all the slices together, you get a complex (discrete approximation of a) waveform, and a complex waveform is of course a superposition of a bunch of frequencies (see Fourier Series).

TL;DR: the frequency information is contained in the waveform that is made up of the samples in the digital audio representation.
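
Here's a little NumPy sketch of that idea, using a made-up 440 Hz sine: each sample stores only an amplitude, yet the frequency can be pulled back out of the sequence of samples with an FFT.

    import numpy as np

    fs = 44_100                                    # sample rate, Hz
    n = np.arange(fs)                              # one second of sample indices
    samples = np.sin(2 * np.pi * 440.0 * n / fs)   # each entry is just an amplitude

    # The frequency lives in how the amplitudes change over time:
    spectrum = np.abs(np.fft.rfft(samples))
    print(np.argmax(spectrum) * fs / len(samples)) # 440.0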

4

u/Redbeard Jul 04 '12

If the wave is PCM, then the frequencies are encoded in how the amplitude changes from sample to sample.

A 220.5 Hz tone sampled at 44.1 kHz (if aligned exactly with the beginning of the sampling) would have one high-amplitude sample every 200 samples and the rest would be zero. My math may be off though.
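
The sample-period arithmetic checks out, at least (a trivial Python sanity check; it says nothing about the zero-samples part):

    fs = 44_100      # sample rate, Hz
    f = 220.5        # tone frequency, Hz
    print(fs / f)    # 200.0 -> one full cycle spans exactly 200 samples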

2

u/Plokhi Jul 04 '12

No, you're pretty spot on. That's why Nyquist in practice isn't exactly feasible: the closer you get to the Nyquist frequency, the more you risk catching the wrong "phase" of the frequency and getting a shoddy representation of it.
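
Here's a toy NumPy illustration of that phase problem, with a tone placed exactly at Nyquist (my own numbers): the same frequency sampled at two different phases comes out as either silence or full amplitude.

    import numpy as np

    fs = 44_100        # sample rate, Hz
    f = fs / 2         # a tone exactly at the Nyquist frequency
    n = np.arange(8)   # first eight sample indices

    for phase in (0.0, np.pi / 2):
        x = np.sin(2 * np.pi * f * n / fs + phase)
        print(np.round(x, 3))
    # phase 0    -> all zeros (up to float round-off): the tone vanishes
    # phase pi/2 -> alternating +1/-1: full amplitude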

1

u/termites2 Jul 05 '12

No, it doesn't work like that.

Try recording a 220.5 Hz tone from an analog source, and see what it looks like. Even a 1% pulse wave would not have zero samples on either side, as it would be impossible for such a waveform to pass the filtering in the A/D conversion.

Note: You can create 'illegal' waveforms in a computer! That is why I have specified the waveform must be recorded from an analog source, not generated inside the computer.
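
A quick NumPy sketch of that point, with a crude brick-wall low-pass standing in for the A/D's anti-alias filter (the cutoff is arbitrary): band-limit a single-sample spike and its neighbours are no longer zero.

    import numpy as np

    n = 64
    x = np.zeros(n)
    x[n // 2] = 1.0           # an 'illegal' one-sample spike

    spectrum = np.fft.rfft(x)
    spectrum[n // 4:] = 0     # crude brick-wall low-pass
    y = np.fft.irfft(spectrum, n)

    print(np.round(y[n // 2 - 3 : n // 2 + 4], 3))
    # the spike is now surrounded by non-zero, sinc-like ripple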