The code immediately below just sets up a few things that will be used for the demo, including generating the input signal that we will be working with. These lines do not directly relate to the filtering aspect of the demonstration.
from IPython.display import Audio
%run DemoSignals.ipynb
rcParams['figure.figsize'] = 10, 8
This demonstration presents a very contrived scenario that is unlikely to crop up in the real world. I would have demonstrated a more relevant application of filtering (selecting a radio station, for example), but I chose to work with audio signals instead for a few reasons:

- We can directly observe these signals using our sense of hearing
- Audio deals with real-valued samples
- Audio is already at baseband, so there is no need to worry about mixing down higher frequency signals
- This lets us concentrate on the FIR filtering instead of getting hung up on these other aspects
- (...and audio is of personal interest to me)
Despite the contrived nature of this demonstration, there are still analogies that can be drawn to more common filtering applications.
Given an audio signal containing three melodies of non-overlapping frequencies, use FIR filtering to isolate each of the melodies.
First, we will look at using the Window Design Method to generate the lowpass, bandpass, and highpass filters needed to isolate each of these signals. Then, we will take a quick look at using the Parks-McClellan method to filter the signals.
fs = 8000  # sample rate of the demo signal, in Hz
specgram(input_signal, Fs=fs)
Audio(input_signal, rate=fs)
First we'll take a look at creating a low pass filter to pass only the "channel 1" audio.
From the above list of knowns (and the spectrogram), we see that channel 1 contains frequencies up to around \(785\text{Hz}\). Channel 2 starts at around \(932\text{Hz}\). This leaves a very narrow transition band of \(932-785=147\text{Hz}\), or around \(0.018375f_s\). We will need a steep rolloff to accomplish this.
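For reference, here is that arithmetic in code (ch1_max and ch2_min are just placeholder names for the channel edge frequencies quoted above):

ch1_max = 785.0    # highest frequency in channel 1, Hz (from the list of knowns)
ch2_min = 932.0    # lowest frequency in channel 2, Hz
print(ch2_min - ch1_max)          # 147.0 Hz transition band
print((ch2_min - ch1_max) / fs)   # 0.018375, as a fraction of the sample rate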
The first step of the window design method is to specify the ideal "brick wall" response in the frequency domain. What size FFT should we start with?
Let's give this a try...
N = 128                        # 128-point design; bin spacing is fs/N = 62.5 Hz
response = zeros(N)
response[range(-13, 14)] = 1   # pass bins -13..13 (cutoff near 13 * 62.5 = 812.5 Hz); negative indices wrap, keeping the response symmetric about DC
stem(response);
Now, to get the filter coefficients, take the inverse FFT of the ideal filter response above:
coefficients = real(fft.ifft(response)) # Numerical error results in a tiny imaginary part while taking IFFT; discard it
stem(coefficients);
We're not quite there. We're interested in the coefficients centered around time \(t=0\). Since the IFFT considers the time domain periodic, we simply wrap the signal (see Figure 5-18 in the text).
coefficients = roll(coefficients, N//2 - 1)  # integer shift of N//2 - 1 = 63 samples centers the impulse response
stem(coefficients);
We also want to truncate the coefficients so that they're symmetric. Lop off the unmatched 128th coefficient:
coefficients = coefficients[:-1]
stem(coefficients);
We've got our filter coefficients. Now we just need to convolve them with the input signal to get our filtered output. From Equation 5-6 in the text:
\[ y(n)=\sum_{k=0}^{M-1} h(k)x(n-k) \]
Here's a straightforward implementation of the convolution operation (of course, we could just use numpy's convolve()
function, or do this in some other more "Pythonic" manner, but let's be explicit for now):
def my_convolution(x, h):
    """ Naive convolution implementation

    Args:
        x: the input signal
        h: the filter coefficients

    Returns:
        the convolution of x and h
    """
    M = len(h)
    y = []
    for n in arange(0, M + len(x) - 1):
        y.append(0)
        for k in arange(0, M):
            if 0 <= (n - k) < len(x):  # only accumulate terms where x(n-k) is defined
                y[n] += h[k] * x[n - k]
    return y
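As a quick sanity check against numpy's convolve() (mentioned above), the two should agree; the short test vector here is arbitrary:

print(allclose(my_convolution([1., 2., 3., 4.], [1., -1.]),
               convolve([1., 2., 3., 4.], [1., -1.])))  # expect True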
Cross our fingers and try out our filter:
output_signal = my_convolution(input_signal, coefficients)
specgram(output_signal, Fs=fs)
Audio(output_signal, rate=fs)
That filter performed just "okay". We can see from the spectrogram (and we can hear if we listen carefully) that the other channels are still apparent in the signal.
To understand why, let's take a look at the frequency response of the filter that we created. Remember, we can obtain the frequency response of an FIR filter simply by taking the FFT of the filter coefficients.
pad_to = 2048                               # zero-pad the FFT for a smoother frequency response plot
freq_axis = linspace(0, fs, pad_to, False)  # frequency of each FFT bin, in Hz
plot(freq_axis, 20 * log10(abs(fft.fft(coefficients, pad_to))));  # magnitude response in dB
The frequency response above shows us that although we achieved a quick rolloff, the side lobes in the stop band are still relatively high.
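As an aside, scipy.signal.freqz computes the same magnitude response without padding the FFT by hand; a minimal sketch, assuming scipy is available (this plots only up to the Nyquist frequency):

from scipy import signal
w, h = signal.freqz(coefficients, worN=1024)    # w is in radians/sample
plot(w / pi * (fs / 2.0), 20 * log10(abs(h)));  # convert the x axis to Hz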
Let's see what happens if we apply a windowing function to our filter. We'll use the Blackman window because of its good side lobe attenuation.
w = blackman(len(coefficients))           # Blackman window, same length as the filter
coefficients_windowed = w * coefficients  # taper the coefficients sample by sample
output_signal = my_convolution(input_signal, coefficients_windowed)
specgram(output_signal, Fs=fs)
Audio(output_signal, rate=fs)
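To see why the windowed filter does better, its frequency response can be overlaid on the unwindowed one (a sketch reusing pad_to and freq_axis from above); the Blackman-windowed response should show much deeper stop-band attenuation, at the cost of a somewhat wider transition band:

# Overlay the magnitude responses of the unwindowed and Blackman-windowed filters
plot(freq_axis, 20 * log10(abs(fft.fft(coefficients, pad_to))), label='rectangular')
plot(freq_axis, 20 * log10(abs(fft.fft(coefficients_windowed, pad_to))), label='Blackman window')
legend();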