The Sound Science of Audio Codecs

by Andrew Prokop | Arrow Systems Integration

Source: No Jitter

I have never been happy with the answer "because." No matter what the subject or question, I am not satisfied until I am told the whys, wherefores, and possible exceptions. While I can't claim to fully understand every explanation I'm provided (I still don't completely fathom relativity), I want the opportunity to try. I won't know my limits until they've been stretched.

This year for the International Avaya Users Group's annual conference, Converge2015, one of the organizers asked me to speak about audio codecs. My first reaction was, "Is there anything I can say about codecs that hasn't already been said?" After all, G.711 has been around since 1972. How can anyone with a few years of communications under his or her belt not know about a codec that was invented before cell phones, the World Wide Web, and PCs?

After mulling it over for a few days, it suddenly hit me. Instead of simply running through the different codecs, I should explain why they exist in the first place. In other words, if G.711 has been around since 1972 and it has been doing a pretty good job all these years, why do we also have G.726, G.729, G.722, etc.?

This led me to the root question that all audio codecs share: What is sound, and how do we take the noise that comes out of our mouths and turn it into something that can be transported across an IP network?

Let's find out.
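As a taste of where that question leads, here is a minimal sketch of the idea behind G.711: sample the voice 8,000 times per second and squeeze each sample into 8 bits using mu-law companding, which gives quiet sounds more resolution than loud ones. This uses the continuous mu-law curve for illustration; a real G.711 encoder uses a piecewise-linear approximation of it, and the function names here are my own.

```python
import math

MU = 255  # mu-law constant used by G.711 in North America and Japan

def mulaw_compress(x: float) -> float:
    """Map a linear sample in [-1, 1] to a companded value in [-1, 1].

    Quiet sounds get proportionally more of the output range than loud
    ones, which is why 8 bits are enough for intelligible speech.
    """
    sign = -1.0 if x < 0 else 1.0
    return sign * math.log(1 + MU * abs(x)) / math.log(1 + MU)

def mulaw_to_byte(x: float) -> int:
    """Quantize the companded value to one of 256 levels (8 bits)."""
    y = mulaw_compress(x)
    return int(round((y + 1.0) / 2.0 * 255))

if __name__ == "__main__":
    # At 8,000 samples/second and 8 bits per sample, G.711 produces
    # 8000 * 8 = 64,000 bits/second -- the classic 64 kbps voice channel.
    for sample in (0.0, 0.01, 0.1, 1.0):
        print(sample, "->", mulaw_to_byte(sample))
```

Note how a sample at just 1% of full scale already lands well away from the midpoint of the 8-bit range: that logarithmic squeeze is the whole trick.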


The thoughts and opinions in these blogs belong to the individual blogger and do not necessarily represent the views or opinions of Arrow Systems Integration.