Tom Ramcigam (magicmarmot) wrote,


The core premise of Information Theory is that everything can be broken
down into two things: signal and noise. The signal is what
you want to retain; the noise is everything else.
In a signal-processing environment this makes perfect sense, and it has a
lot of mathematical rigor that I really don't want to go into here (if
you're interested, look up "autocorrelation" and you'll see what I
mean).

Noise can be further broken down into two categories: correlated noise
(distortion) and uncorrelated noise (noise).

Uncorrelated noise is completely independent of the signal, and tends to
be random in nature, like white noise. Correlated noise is dependent on
the signal, and is usually introduced into the system by nonlinear
responses that change the shape of the signal (harmonic distortion) or
the symmetry of response (intermodulation distortion).

Uncorrelated noise is relatively easy to deal with by using the
information density of the bandwidth. For instance, if you repeat a signal
over and over and average it out over several iterations, the
uncorrelated noise has a tendency to cancel itself out. One common
technique that is used a lot in CD players and A/D converters is
oversampling.

A CD player plays samples at 44.1 kHz (44,100 samples per second). With
something like 8x oversampling, samples are actually taken at 8 x
44,100, or 352,800 samples per second, then a filter is used to average
those samples down to the "real" rate.
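To see why averaging beats down random noise, here's a minimal sketch (the numbers are hypothetical, not any actual converter's design): we measure a constant "signal" many times, each time corrupted by random noise, and average the measurements.

```python
import random

random.seed(42)

TRUE_VALUE = 5.0   # the "signal": a constant level we are trying to measure
NOISE_RANGE = 1.0  # uncorrelated noise, uniform in [-1, 1]

def noisy_sample():
    """One measurement: signal plus uncorrelated (random) noise."""
    return TRUE_VALUE + random.uniform(-NOISE_RANGE, NOISE_RANGE)

# Average many repeated measurements; the random noise tends to cancel
# because it is independent of the signal.
n_trials = 400
average = sum(noisy_sample() for _ in range(n_trials)) / n_trials

print(round(average, 3))  # close to 5.0; the error shrinks roughly as 1/sqrt(n)
```

A single measurement can be off by as much as 1.0; the average of 400 is typically off by only a few hundredths.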

Let me give an example using a set of numbers: a simple square pulse.

[0, 0, 0, 0, 0, 0, 0, 0, 100, 100, 100, 100, 100, 100, 100, 100, 0, 0, 0, 0, 0, 0, 0, 0]

If you plotted these numbers on a graph, it would look something like this:

        │        │
────────┘        └────────

Now we introduce uncorrelated noise with the same amplitude range
(0-100), which for this purpose will be a set of random numbers:

[98, 14, 77, 2, 3, 86, 40, 12, 17, 9, 55, 76, 8, 90, 28, 17, 6, 66, 12,
83, 65, 4, 33, 1]

Adding them together, we get

[98, 14, 77, 2, 3, 86, 40, 12, 117, 109, 155, 176, 108, 190, 128, 117,
6, 66, 12, 83, 65, 4, 33, 1]

(BTW, Excel is wonderful for analyzing this kind of thing.)

If you plot this out on a graph, it looks pretty ugly.

So now what we'll try is a 4x "sliding window" oversampling: start off
at the first position, take four consecutive numbers and average them;
slide to the next position, average those four numbers, and so on. (At
the end you run into a boundary problem; for now, just assume the values
past the end are 0.)

For instance:
98 + 14 + 77 + 2 = 191, /4 = 47.75
Slide one position
14 + 77 + 2 + 3 = 96, /4 = 24 (and so on).

The final array values are:

[47.75, 24, 42, 32.75, 35.25, 63.75, 69.5, 98.25, 139.25, 137, 157.25,
150.5, 135.75, 110.25, 79.25, 50.25, 41.75, 56.5, 41, 46.25, 25.75, 9.5,
8.5, 0.25]
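The sliding-window oversampling above can be sketched in a few lines of Python (with the end-of-array boundary padded with zeros, as assumed):

```python
def sliding_average(samples, window=4):
    """4x sliding-window oversampling: average each run of `window`
    consecutive samples, padding past the end of the array with zeros."""
    padded = samples + [0] * (window - 1)
    return [sum(padded[i:i + window]) / window for i in range(len(samples))]

# The signal-plus-noise values from the example above.
noisy = [98, 14, 77, 2, 3, 86, 40, 12, 117, 109, 155, 176, 108, 190, 128,
         117, 6, 66, 12, 83, 65, 4, 33, 1]

smoothed = sliding_average(noisy)
print(smoothed[:2])  # [47.75, 24.0] -- matches the worked example
```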

If you plot this out in a graph and compare it to the signal + noise
graph above, you can see how this oversampling has effectively
"smoothed out" the graph, making it easier to see the "hump" in the
middle. The higher the number of oversamples, the smoother the graph
becomes; high-end digital audio systems regularly use 256x oversampling.

Correlated noise, or distortion, is much harder to deal with. Because it
is related to the signal, it is almost impossible to remove on the
receiving end; doing so takes huge processing power and some knowledge
of the distortion that the system introduces. It can be done by a
process known as deconvolution, using the impulse response
of the system, assuming that the impulse response is static.
When it works, it's almost magical. It can be used in forensic labs to
clean up audio from badly recorded tapes, remove reverb, and improve
intelligibility, but it rarely sounds pretty.
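Here's a toy sketch of that deconvolution idea, assuming a known, static, made-up impulse response and no added noise (a real forensic system needs regularization so the division doesn't blow up the noise): the system smears the signal by convolution, and dividing in the frequency domain undoes the smear.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (fine for a toy-sized example)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT, returning real values."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def circular_convolve(x, h):
    """What the 'system' does to the signal: smear it with its impulse response."""
    n = len(x)
    return [sum(x[(t - m) % n] * h[m] for m in range(n)) for t in range(n)]

# A short signal and a hypothetical static impulse response.
signal = [0, 1, 0.5, 0, 0, 0, 0, 0]
impulse = [1, 0.6, 0.3, 0, 0, 0, 0, 0]

distorted = circular_convolve(signal, impulse)

# Deconvolution: divide by the known impulse response in the frequency domain.
X = [d / h for d, h in zip(dft(distorted), dft(impulse))]
recovered = idft(X)

print([round(v, 6) for v in recovered])  # matches the original signal
```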
The best way to deal with distortion is to remove it from the system by
good design practices, and this is where I have a sticking point with
many computer audio cards. Most of them list a spec called THD+N (total
harmonic distortion plus noise) with a really really small figure. The
problem is that harmonic distortion is really pretty easy to get rid of,
but it is not the major problem with soundcards. The big culprit is
intermodulation distortion.

Intermodulation distortion (IMD) is not something that is easily
understood, but the symptoms are easy to spot when you know what you're
listening for.

IMD is measured by sending two pure sine waves into a system, say at
1000 and 10,000 Hz. If there were no IMD, what would come out of the
system would be two sine waves at 1000 and 10,000 Hz. With IMD, however,
you get artifacts at the sum and difference frequencies and their
harmonics; for instance, there will be signals at 8000, 9000, 11,000,
12,000 Hz, and so on. The more of these "ghost" signals there are, the
higher the IMD and the worse the system sounds.
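That two-tone test can be simulated with a hypothetical, mildly nonlinear system (the squared term below is an assumption standing in for real hardware misbehavior, not any particular soundcard): feed in 1000 Hz and 10,000 Hz, then look for energy at the 9000 Hz difference and 11,000 Hz sum products.

```python
import cmath
import math

FS = 48_000              # sample rate (Hz)
N = 4_800                # 0.1 s: every frequency of interest fits an integer number of cycles
F1, F2 = 1_000, 10_000   # the two pure test tones (Hz)

def tone_pair(t):
    """The two-tone test signal."""
    return math.sin(2 * math.pi * F1 * t) + math.sin(2 * math.pi * F2 * t)

def nonlinear(x):
    """A mildly nonlinear 'system': the x**2 term creates sum/difference products."""
    return x + 0.1 * x * x

def magnitude_at(samples, freq):
    """Normalized single-bin DFT magnitude at `freq` Hz."""
    acc = sum(s * cmath.exp(-2j * cmath.pi * freq * n / FS)
              for n, s in enumerate(samples))
    return abs(acc) / len(samples)

output = [nonlinear(tone_pair(n / FS)) for n in range(N)]

diff_product = magnitude_at(output, F2 - F1)  # "ghost" tone at 9 kHz
sum_product = magnitude_at(output, F2 + F1)   # "ghost" tone at 11 kHz
print(round(diff_product, 3), round(sum_product, 3))
```

A perfectly linear system would show nothing at 9 or 11 kHz; here the squared term puts measurable energy at both.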
Soundblaster's high-end stuff has huge IMD problems, and even
Digidesign's really expensive cards have ugly problems. It shows up most
easily when you listen to a ride cymbal: on a good system, you can hear
the cymbal ring with clarity; on a system with bad IMD, it sounds a lot
like white noise.

But information comes in a different context as well, a philosophical
one. Consider that everything that you perceive through your senses can
be thought of as information being processed; that means that what you
perceive as "reality" is just information. Signal + noise. Your brain
does a whole lot of processing with perceptual filters to try and boost
the signal-to-noise ratio (S/N), but you also apply personal perceptual
filters to what you receive as "cooked" data.

How good are your filters?

