1) Measuring Frequency
For lab this week we're going to be working on the pieces of our system that do the math to detect the Doppler-induced frequency shift in our reflected ultrasonic wave. The high-level block diagram below shows the pieces of our system that we've built so far and what we'll specifically be building in Lab 7.
From our work in Lab 4, we generate a 40 kHz square wave and amplify it with an operational amplifier circuit before transmitting it out via an ultrasonic transducer; this was our transmitter stage. The transmitted waves reflect off of objects, and a small fraction of that energy is recovered by the ultrasonic receiver we built up and tested in Lab 6, which recovers the signal and amplifies it back up so we can do some useful work and processing on it.
What we'll be doing in Lab 7 is the first part of this processing. In particular, we want to compare the original frequency we sent out to the frequency we're getting back, since the difference in those frequencies contains information about movement (thanks to the Doppler Effect). We can express this shift using the following equation, where \delta f is our frequency shift, f_0 is the transmitted frequency, v is the velocity of an object, and c is the speed of sound:

\delta f = \frac{2v}{c}f_0

(The factor of 2 comes from the round trip: the wave is Doppler-shifted once on its way out to the moving object and again when it reflects back toward us.)
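To get a feel for the size of this shift, here is a quick numeric sketch of the round-trip Doppler formula \delta f = 2vf_0/c. The velocity and speed-of-sound values below are illustrative assumptions, not lab specifications:

```python
# Rough magnitude of the Doppler shift, delta_f = 2*v*f0/c, for the
# reflection (round-trip) case; v and c here are assumed example values.
f0 = 40_000    # transmitted frequency, Hz
c = 343.0      # approximate speed of sound in air at room temperature, m/s
v = 1.0        # example object velocity toward the sensor, m/s

delta_f = 2 * v * f0 / c
print(round(delta_f, 1))  # about 233.2 Hz
```

So even a slow-moving object shifts the 40 kHz wave by a couple hundred hertz, which is a measurable amount.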
2) Frequency Difference
If we want to figure out the difference between two frequencies, how can we do it? Trig identities are the key. Let's say we have two signals. The first is v_i:

v_i(t) = \cos\left(2\pi f_0 t\right)
The second is v_r, which is shifted from the frequency of v_i by \delta f:

v_r(t) = \cos\left(2\pi\left(f_0 + \delta f\right)t\right)
If we multiply these two signals, we can use the following trig identity:

\cos\left(A\right)\cos\left(B\right) = \frac{1}{2}\left[\cos\left(A - B\right) + \cos\left(A + B\right)\right]
This indicates that the result of multiplying v_i(t) and v_r(t) comes out to be:

v_i(t)\,v_r(t) = \frac{1}{2}\left[\cos\left(-2\pi\,\delta f\,t\right) + \cos\left(2\pi\left(2f_0 + \delta f\right)t\right)\right]
which simplifies to (using the fact that \cos\left(-x\right)=\cos\left(x\right)):

v_i(t)\,v_r(t) = \frac{1}{2}\left[\cos\left(2\pi\,\delta f\,t\right) + \cos\left(2\pi\left(2f_0 + \delta f\right)t\right)\right]
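If you'd like a quick numerical sanity check of the product-to-sum identity, a few lines of NumPy confirm it on sampled waveforms (the 30 Hz and 5 Hz values here are arbitrary illustrative numbers):

```python
import numpy as np

# Verify cos(a)*cos(b) == 0.5*(cos(a-b) + cos(a+b)) pointwise on samples
f0, df = 30.0, 5.0
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
a = 2 * np.pi * f0 * t
b = 2 * np.pi * (f0 + df) * t

lhs = np.cos(a) * np.cos(b)
rhs = 0.5 * (np.cos(a - b) + np.cos(a + b))
print(np.allclose(lhs, rhs))  # True
```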
And this is really useful because it tells us that one of the resulting signals will have a frequency equal to the difference of our two input frequencies, and the other will have a frequency equal to their sum. One of those signals (the difference) has all the key information we want, namely \delta f.
In case you have any doubts that this trig identity actually works, below is an image of the result of multiplying two sinusoids together, with f_0 = 30 Hz and \delta f = 5 Hz, resulting in a second frequency of 35 Hz. (In lab we'll be working at a much higher frequency; the 30 Hz signal here is just an example.)
Pretty neat! You can see solid evidence of two distinct sinusoidal behaviors: one oscillating at approximately 65 Hz (the sum of 35 Hz and 30 Hz) and one at roughly 5 Hz (the difference between 35 Hz and 30 Hz).
We're still not out of the woods. We have two signals in our result but only want one. While we'll talk about this more in lab, it turns out that we'll be able to remove the high-frequency component (our sum) with some circuitry and end up only getting the difference signal out.
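As a preview of what that filtering accomplishes, here is a digital sketch: a moving-average filter acts as a crude stand-in for the analog low-pass circuitry, knocking out the 65 Hz sum term and leaving the 5 Hz difference term. The sample rate and window length are assumed values chosen so the averaging window spans exactly one period of 65 Hz:

```python
import numpy as np

fs = 6500                      # sample rate chosen so 65 Hz divides it evenly
t = np.arange(0, 1, 1 / fs)
product = np.cos(2*np.pi*30*t) * np.cos(2*np.pi*35*t)  # 5 Hz and 65 Hz terms

# Averaging over exactly one 65 Hz period (100 samples) nulls the sum term
# while leaving the slow 5 Hz difference term nearly untouched
window = np.ones(100) / 100
filtered = np.convolve(product, window, mode="same")

spectrum = np.abs(np.fft.rfft(filtered))
freqs = np.fft.rfftfreq(len(filtered), 1 / fs)
print(freqs[np.argmax(spectrum[1:]) + 1])  # dominant component: 5.0 Hz
```

In lab the same job will be done by an analog filter rather than software, but the idea is identical: average away the fast sum component and keep the slow difference component.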
We're almost done. We've been using multiplication in our discussion above, but how do we multiply two signals in circuits? We can multiply a signal by a constant (many of our previous op amp circuits), and we can do things like add signals and take their difference (summing amplifier and difference amplifier), but how do we multiply two signals? We haven't covered that yet.
Analog multipliers do exist; these are circuits that take in two analog signals and do exactly what we say: multiply them. For our system, however, we're going to take advantage of a few things to keep it simple:
- While we care a lot about the difference in frequency between our transmitted and received signals, since that's what contains information about the velocity of the moving object, we don't care about the shape of our signal. In fact, if you think about what our initial reference signal is in our circuit, it is not a sine wave but rather a 40 kHz square wave!
- Some traditional digital logic gates (as in AND gates, NOR gates, and a particular pair which we'll discuss below) can effectively perform the multiplication action on incoming signals. The caveat is that these signals need to be digital in nature.
To get started, since we don't care about our signal shape, let's convert our sine waves so that they take on only one of two values (one form of digital). We will discretize the signals in the following way:
- +1 if the value of the sine wave is above 0
- -1 if the value of the sine wave is below 0
When we do that to our two frequencies above and then multiply the result as usual (like we did before with our full analog sine waves), we get the following:
This looks really interesting. There are two signals above, one at 30 Hz (f_0) and one at 35 Hz (f_0+\delta f), just like before, and the bottom is the result of their multiplication. If we zoom in as shown below, you can see there's a digital signal at roughly twice the frequency of our two original signals (their sum, f_0 + f_0 + \delta f), albeit with a varying duty cycle. Looking at the image above, you can also see that there's a much slower pattern going on as well, at around 5 Hz, and that should make sense since that is the difference signal (our f_0 - (f_0+\delta f) = -\delta f).
So it seems that even with digital signals the aforementioned trig identity holds up! (We can thank Fourier for that: a square wave is just a sum of sinusoids, so the identity applies to each sinusoidal component.)
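A quick numerical sketch backs this up. Below, the two sinusoids (again at the illustrative 30 Hz and 35 Hz, not the lab's 40 kHz) are hard-limited to +1/-1, multiplied, and the low-frequency content of the product is checked: it lands at the 5 Hz difference frequency.

```python
import numpy as np

fs = 10_000
t = np.arange(0, 1, 1 / fs)

# Hard-limit each sinusoid: +1 when above zero, -1 otherwise
vi = np.where(np.sin(2*np.pi*30*t) > 0, 1, -1)
vr = np.where(np.sin(2*np.pi*35*t) > 0, 1, -1)
product = vi * vr                      # still only takes the values +1 and -1

spectrum = np.abs(np.fft.rfft(product))
freqs = np.fft.rfftfreq(len(product), 1 / fs)
low = (freqs > 0) & (freqs < 20)       # look below the sum-frequency content
print(freqs[low][np.argmax(spectrum[low])])  # 5.0
```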
Moving on, this digital multiplication that we have above looks like the following if we do a mapping of (inputs)\to output:
- (-1, -1) \to +1 (when both signals are -1, generate a +1)
- (-1, +1) \to -1 (when signals are -1 and +1, generate a -1)
- (+1, -1) \to -1 (when signals are +1 and -1, generate a -1)
- (+1, +1) \to +1 (when both signals are +1, generate a +1)
If we relabel our "-1" signal as "0" and "+1" as just "1", we could rewrite these four combinations as follows (all -1 become 0 and all +1 become 1, but we keep the input/output results above):
- (0, 0) \to 1
- (0, 1) \to 0
- (1, 0) \to 0
- (1, 1) \to 1
This relationship of 1's and 0's is exactly the same as a type of logic gate known as an Exclusive NOR (XNOR) gate, which is shown below (along with its truth table):
What this means is that an XNOR gate, which we can buy off the shelf for very little money (<$0.50, versus ~$10 for a true analog multiplier), can effectively multiply two binary signals for us in this representation.
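As a quick sanity check that the relabeled truth table really is multiplication in disguise, here's a tiny sketch that defines XNOR on bits and verifies it matches the +1/-1 multiplication table (via the relabeling s = 2b - 1):

```python
# XNOR on single bits: output is 1 exactly when the two inputs agree
def xnor(a, b):
    return 1 - (a ^ b)

print([xnor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [1, 0, 0, 1]

# Map bits 0/1 back to -1/+1 and confirm XNOR is multiplication in disguise
for a in (0, 1):
    for b in (0, 1):
        assert (2*a - 1) * (2*b - 1) == 2 * xnor(a, b) - 1
```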
So what we need to do to implement this (and what we'll be doing in lab) is the following:
- Take the amplified receiver signal and convert it to digital using a comparator. The reference signal we generate to go into the transmitter is already a square wave, so we don't need to put that signal through a comparator.
- Multiply the resultant signal with the original square wave signal using an XNOR gate to produce an output containing components at both the sum and (more importantly to us) the difference of the two input signals' frequencies!
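Putting those two steps together, here is an end-to-end sketch at scaled-down illustrative frequencies (a 30 Hz reference and a 35 Hz "received" signal standing in for the lab's 40 kHz signals): a zero-crossing test plays the role of the comparator, an XNOR mixes the two digital signals, and the low-frequency content of the output lands at the 5 Hz difference frequency:

```python
import numpy as np

fs = 10_000
t = np.arange(0, 1, 1 / fs)

# The reference is already a square wave; the received signal is small in
# amplitude and arbitrarily phase-shifted, as a real echo would be
reference = (np.sin(2*np.pi*30*t) > 0).astype(int)
received = 0.1 * np.sin(2*np.pi*35*t + 0.7)

comparator_out = (received > 0).astype(int)   # comparator against 0 V
mixed = 1 - (reference ^ comparator_out)      # XNOR gate output (0s and 1s)

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
low = (freqs > 0) & (freqs < 20)
print(freqs[low][np.argmax(spectrum[low])])   # 5.0, the difference frequency
```

In the actual lab the mixing happens in hardware, of course, and the remaining step of filtering out the sum component will be handled by circuitry as discussed above.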