This lecture is part of a course on probability, estimation theory, and random signals. Welcome back to the lecture slide set on power spectral densities. In the last videos, we have been looking at the power spectral density as a way of representing the statistics of a random process, but viewing it in the frequency domain rather than the time domain. So the power spectral density gives you the average power of a signal at particular frequencies. Effectively, the power spectral density is the statistics of the process with respect to itself. In this video, we're going to extend this definition to talk about the so-called cross spectral density function. So the cross power spectral density function of two jointly stationary processes, x and y, is defined as the discrete-time Fourier transform of the cross-correlation of those two processes. So if the cross-correlation between x and y is r_xy(l) = E[x(n) y*(n − l)], the cross power spectral density in the frequency domain is just that DTFT. Here again, we're effectively setting the sampling period equal to one, but if it's not equal to one, you would just include the period in the phase term. Now, as with the power spectral density function, the cross-correlation can be recovered from the cross power spectral density, the CPSD, simply by taking the inverse DTFT, just as it was before. So we're going to see in the next handout, when we study what happens as random signals go through linear systems, that the cross power spectral density is an incredibly powerful tool. For the moment it might seem very abstract, but we can still view the cross power spectral density as a measure of the contribution of a particular frequency to the cross-correlation sequence. Now, unlike the power spectral density, the cross spectrum, which is a shorthand name for the cross power spectral density, is in general a complex function of omega.
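As a numerical illustration of this definition (a minimal sketch of my own, not from the lecture; the helper names `cross_correlation` and `cpsd` are made up for this example), the CPSD can be estimated by first estimating r_xy(l) = E[x(n) y*(n − l)] from sample data and then taking its DTFT:

```python
import numpy as np

def cross_correlation(x, y, max_lag):
    """Biased estimate of r_xy(l) = E[x(n) y*(n - l)] for |l| <= max_lag."""
    N = len(x)
    lags = np.arange(-max_lag, max_lag + 1)
    r = np.array([np.dot(x[max(0, l):N + min(0, l)],
                         np.conj(y[max(0, -l):N - max(0, l)])) / N
                  for l in lags])
    return lags, r

def cpsd(r, lags, omegas):
    """DTFT of the cross-correlation: P_xy(e^{jw}) = sum_l r(l) e^{-jwl}."""
    return np.array([np.sum(r * np.exp(-1j * w * lags)) for w in omegas])

# Quick sanity check with x = y: the lag-0 value is the average power,
# and the resulting (auto) spectral density is real.
x = np.array([1.0, 2.0, 3.0, 4.0])
lags, r = cross_correlation(x, x, 3)
assert np.isclose(r[3], np.mean(x ** 2))   # r_xx(0) = mean power
```

With x = y this reduces to the ordinary PSD machinery from the previous videos, which is a useful consistency check.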
And that's because the cross-correlation function doesn't satisfy the same properties that the autocorrelation function does. For example, it's not a conjugate symmetric function. Similarly, the cross-correlation is not a positive definite function, so there is no corresponding positivity constraint on the cross spectrum. Nevertheless, the cross power spectral density does have a number of properties, some of which just follow from the DTFT. For example, the cross power spectral density is still periodic in frequency with a period of two pi over T. But of course we've set T equal to one, so the period is just two pi in this case. This is a property of the DTFT and, of course, of the fact that the signal is sampled in time. Crucially, that just means that you only need to consider plotting the cross power spectral density over one period, for example between minus pi over T and plus pi over T. Now, the next property relies on the fact that the cross-correlation between a signal x and a signal y is in fact related to the cross-correlation of the signal y and the signal x by this definition here. It's not quite conjugate symmetry, because the ordering of the signals in the calculation, since it's an expectation operator, is important. But nevertheless, what we can see here is that the cross power spectral density between the signal x and the signal y is just the conjugate of the cross power spectral density between y and x: we swap the signals around and we have to conjugate it. Now the third property is that if the processes x(n) and y(n) are both real, then the cross-correlation is real, and it turns out that the cross power spectral density is conjugate symmetric. This is relatively easy to prove simply by writing down the DTFT. We can then set, for example, omega hat equal to minus omega, so that e to the minus j omega hat l becomes e to the plus j omega l. But of course we know a couple of properties.
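To make the swap-and-conjugate property concrete, here is a small numerical check (my own illustration, not part of the handout; the signal lengths and lag range are arbitrary): estimate P_xy and P_yx from the same pair of signals and confirm one is the conjugate of the other.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256)
y = rng.standard_normal(256)
N, L = len(x), 32

def xcorr(u, v):
    """Biased estimate of r_uv(l) = E[u(n) v*(n - l)] for l = -L..L."""
    return np.array([np.dot(u[max(0, l):N + min(0, l)],
                            np.conj(v[max(0, -l):N - max(0, l)])) / N
                     for l in range(-L, L + 1)])

lags = np.arange(-L, L + 1)
omega = np.linspace(-np.pi, np.pi, 101)
E = np.exp(-1j * np.outer(omega, lags))   # DTFT kernel, one row per frequency
P_xy = E @ xcorr(x, y)
P_yx = E @ xcorr(y, x)

# The property: P_xy(e^{jw}) is the conjugate of P_yx(e^{jw})
assert np.allclose(P_xy, np.conj(P_yx))
```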
We know that the conjugate of a sum is the sum of the conjugates, and the conjugate of a product is the product of the conjugates. So that means that in this expression at the bottom, I can introduce a conjugate sign around the whole expression and change the sign of the exponent here, which is effectively conjugating it. And that's because the cross-correlation function was deemed to be real. So that's a fairly straightforward proof, which I'll leave you to verify. Now, back in the chapter on stochastic processes, we introduced the idea of the normalised cross-correlation function. And that was to remove the effect of the amplitudes of the signals, so that signals of high amplitude didn't give a high cross-correlation simply by virtue of having a strong amplitude. So we introduced the idea of a normalised cross-correlation function, which took the actual cross-correlation and simply divided by the standard deviations. For jointly stationary random processes, this expression simplifies: the cross-correlation becomes a function of the lag only, and since the variances are now constant, the denominator is constant. So what we can do is take the Fourier transform of both sides of this normalised cross-correlation function and come up with an equivalent expression. This leads to something called the coherence function. In the numerator of this coherence function, we simply have the DTFT of the cross-correlation function, and in the denominator we now have a couple of expressions which are effectively equivalent to normalising by the standard deviations that we had in the normalised cross-correlation function. But instead, we're actually going to divide by the power of signal x at a particular frequency. So this is effectively the variance at a particular frequency. And similarly for y as well.
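As a reminder of the normalised cross-correlation itself, here is a quick numerical sketch (my own example; the three-sample delay and the noise level are arbitrary assumptions) showing that normalising by the signal powers bounds the values by one, while still revealing the lag at which the signals align:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(1000)
y = np.roll(x, 3) + 0.1 * rng.standard_normal(1000)   # y is (roughly) x delayed by 3

# Normalised cross-correlation over lags -5..5, using a circular estimate of
# r_xy(l) = E[x(n) y(n - l)], normalised by the signal powers.
norm = np.sqrt(np.mean(x ** 2) * np.mean(y ** 2))
lags = np.arange(-5, 6)
rho = np.array([np.mean(x * np.roll(y, l)) for l in lags]) / norm

assert np.all(np.abs(rho) <= 1.0)          # bounded, by Cauchy-Schwarz
assert lags[np.argmax(np.abs(rho))] == -3  # peak at the aligning lag
```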
So this is very much equivalent to that normalised cross-correlation function, but it is being done in the frequency domain. Now this coherence function has a number of interesting properties. For example, it can be shown that the coherence function is bounded between 0 and 1, and the very final exercise associated with this topic asks you to prove that. The question does guide you through the proof, and it's a very interesting question to look at. The coherence function does arise a number of times, and so it's very important to be aware of. Now, so far our focus has been on looking at the spectral densities, which are the Fourier transforms of the correlation functions, either the autocorrelation or the cross-correlation. But as we know from our signals and systems theory, dealing with the discrete-time Fourier transform occasionally has problems, and the DTFT does not always exist. So there is an interesting question about what happens if we take the z-transform instead of the Fourier transform. So if we take the second moments, the autocorrelation functions and the cross-correlation functions, and instead take the z-transform, then we end up with what's called the complex spectral density, which is the z-transform of the autocorrelation sequence, and the complex cross spectral density, which is the z-transform of the cross-correlation sequence. Now, the reason that we're introducing the z-transform technique is because, as we'll see in the next chapter when we study signals going through linear systems, it can actually be much more convenient to work in the z-transform domain than in the Fourier transform domain. While slightly more abstract, these concepts of complex spectral densities are incredibly important. And the definitions are really quite straightforward. The complex spectral density, as I said, is the z-transform of the autocorrelation sequence, and the complex cross spectral density is the z-transform of the cross-correlation.
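The 0-to-1 bound can also be checked numerically. The sketch below (my own illustration; the segment-averaged estimator is a bare-bones stand-in for a proper Welch-style estimate, and the signals are arbitrary) estimates the magnitude-squared coherence and confirms it stays within [0, 1]:

```python
import numpy as np

def coherence(x, y, nseg=32):
    """Magnitude-squared coherence estimate via segment-averaged spectra:
    C_xy(w) = |S_xy|^2 / (S_xx S_yy)."""
    seg_len = len(x) // nseg
    X = np.fft.rfft(x[:nseg * seg_len].reshape(nseg, seg_len), axis=1)
    Y = np.fft.rfft(y[:nseg * seg_len].reshape(nseg, seg_len), axis=1)
    Sxy = np.mean(X * np.conj(Y), axis=0)
    Sxx = np.mean(np.abs(X) ** 2, axis=0)
    Syy = np.mean(np.abs(Y) ** 2, axis=0)
    return np.abs(Sxy) ** 2 / (Sxx * Syy)

rng = np.random.default_rng(1)
x = rng.standard_normal(4096)
y = x + 0.5 * rng.standard_normal(4096)   # y is x plus independent noise

C = coherence(x, y)
assert np.all(C >= 0.0) and np.all(C <= 1.0)   # bounded between 0 and 1
```

The upper bound here is exactly the Cauchy-Schwarz inequality applied to the segment-averaged spectra, which is essentially the structure of the proof the exercise asks for.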
And these are given by the definitions here. Now notice that these are what are called bilateral, or two-sided, z-transforms. If you're not comfortable with the z-transform theory, then I would recommend going back to the chapters on discrete-time linear systems, which give examples of how to take bilateral z-transforms and, for example, how to work out regions of convergence, or ROCs. However, I will give numerous examples of how to take these transforms through the examples that we're going to be doing in the next handouts. Now, just as with the rest of our signals and systems theory, we know that you can get between the z-transform and the Fourier transform by setting z equal to e to the j omega T. If we do that, and we know that the unit circle is within the region of convergence, then we can evaluate the z-transform at the point z equals e to the j omega. In this case, again, T is equal to one, but if you want to generalise this, you can set z equal to e to the j omega T. So if we do that, then we find that the power spectral density is the complex spectral density evaluated at z equals e to the j omega T, or just e to the j omega if T is one. And similarly, we get very similar expressions for the complex cross spectral density functions. So just to get you more comfortable with taking z-transforms, let's do a simple example. We're going to find the complex spectral density of this rather interesting sequence, called an interleaved example. So we're going to let a be a real constant such that the magnitude of a is less than one. If we were to plot the sequence r(l) against l, it is effectively a two-sided exponential decay, but this time it's only defined for even values, including zero. So if you set l equal to 0, you get a to the power zero, which is one. But if l is one, you get a value of 0, because the sequence is not defined for odd numbers.
If l is equal to two, which is the next even number, you've got a to the power one, which is just a; when l equals three it is 0 again; and when l is four, you've got a squared, and so the pattern continues. So what we're going to do is work through this and take the z-transform of the sequence. We're going to take the two-sided, or bilateral, z-transform, which means we have limits from minus infinity to infinity of the sequence times z to the minus l. Now, the easiest way of taking the z-transform is to split this summation into two parts, where we do the sum over the odd integers plus the sum over the even integers. The easiest way of representing an even number is to take any integer, for example let's call it l_e, and multiply it by two. That means that if l_e is, for example, 1, 2, 3, 4 and so on, then you're going to get the sequence 2, 4, 6, 8: the even sequence. Similarly, for the odd sequence, we can write an odd number as two times l_o plus one, where again l_o is any integer. If l_o runs through the sequence 1, 2, 3, 4 and so on, then 2 l_o + 1 gives the sequence 3, 5, 7, and of course if you set l_o equal to 0, then you get the number one. So that works out quite nicely. If we do that, then we can write the z-transform using these two summations here. You can see that the summations are over all integers, but the terms have been modified so that you're effectively only doing the odd integers in one sum and the even integers in the other. Now, we're told that r(l) is equal to 0 for any odd number, so we know that the odd term disappears to 0. For the even terms, we know that we can replace the sequence by a to the power modulus of two l_e over two, so the twos cancel, and that gives us the term on the right-hand side. Notice we've still got the modulus sign. So whilst we have a summation over all integers, the next step is to divide the summation that we have into two summations.
So we're going to divide the entire sum into the sum from minus infinity to 0 plus the sum from 0 to infinity. But note that we've counted zero twice: if we look at the term we're summing and set l_e equal to 0, we've just got a value of one, so effectively we've counted the number one twice, and we're just going to subtract one. Okay, this is very similar to examples we've done previously in these topics, but I would encourage you to go and work through this yourself just to verify that you're following it. So, as I've said, we're going to split this into two summations. That allows us to replace a to the modulus of l_e by a to the minus l_e if the integer is negative, whereas if it's positive, we can just drop the modulus sign. Now, the second term on the right-hand side is going to be easily dealt with using geometric progressions. But the term on the left-hand side is a summation over negative integers, so we actually do a little bit better if, for example, we redefine l hat as minus l_e. That's just to change the summation limits around to positive; it's simply easier to manipulate when we come to use our geometric progressions. I'm also going to rewrite these terms: for example, this term here is the same as a over z squared, all raised to the power l_e. Having done those manipulations, I end up with these expressions here. And at this point, I'm going to ask you to consider what you would do next. So please pause the video if you're not sure what to do next and have a think about it. Okay, I hope I've given you a bit of time to think about that. As with all of these z-transforms, effectively what we're going to do is use a geometric progression. We know that the geometric progression for an infinite summation, the sum of r to the l from l equal to 0 to infinity, is just one over one minus r, provided the magnitude of r is less than one. So what we're going to do is set r equal to a z squared in the first summation.
In the second summation, we're going to set r equal to a z to the minus two. So if we apply that, the first summation ends up being one over one minus a z squared, the second summation becomes one over one minus a z to the minus two, and then we subtract the one. Now, we could just leave our expression there, which is three terms, but it's not necessarily particularly elegant. So alternatively, we could combine, say, the second two terms, and if we do that, we get this rather elegant expression here. Or you might try and combine everything over a common denominator, and that's absolutely fine as well; if you were to do that, you would actually find that it comes out in a more elegant expression, shown here. That's really up to you. Now, one reason for leaving it in these two terms is because effectively we've already done a partial fraction decomposition. What we'll see in a moment is a discussion about how we take inverse complex spectral densities, and usually that does involve a partial fraction expansion of some kind. But do notice that if we leave it in this form here, which is in the box, you can actually end up with two different expressions: I got this expression by combining the last two terms, but of course I could have combined the minus one with the first term, in which case you would have got the alternative expression at the bottom. The reason for mentioning this is that usually, when you take inverse transforms, you're going to use tables of results, and depending on how the table of inverse transforms is presented, sometimes you need to manipulate the z-transform expression you've got to match what's in the table. Don't expect your z-transform to be in precisely the form in the table that you're using; you might have to do a little bit of extra work to match it up to the table that you've been given. Okay, but I hope you're comfortable with taking that bilateral z-transform.
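Since we now have a closed form, it's easy to sanity-check it numerically. This little sketch (my own check, with an arbitrary choice of a and of a point on the unit circle, which lies inside the ROC for |a| < 1) compares a truncated version of the bilateral sum against the three-term result:

```python
import numpy as np

a = 0.6                       # real constant with |a| < 1
z = np.exp(1j * 0.7)          # a point on the unit circle (inside the ROC)

# Direct bilateral z-transform: sum of r(l) z^{-l}, truncated at |l| = 200,
# with r(l) = a^{|l|/2} for even l and 0 for odd l.
lags = np.arange(-200, 201)
r = np.where(lags % 2 == 0, a ** (np.abs(lags) / 2), 0.0)
P_direct = np.sum(r * z ** (-lags.astype(float)))

# Closed form derived above: 1/(1 - a z^2) + 1/(1 - a z^-2) - 1
P_closed = 1 / (1 - a * z ** 2) + 1 / (1 - a * z ** -2) - 1

assert np.isclose(P_direct, P_closed)
```

The truncation error is of order a^100 here, far below floating-point tolerance, so the two values agree essentially exactly.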
As I said, the usual trick is to make sure you're aware of the geometric progression formulas, or other similar formulas that can be used. I've already hinted at how you take the inverse z-transform, and I mentioned the use of tables, but just before I get onto that, note that the formal definitions of the inverses of the complex spectral density and the cross spectral densities are given by contour integration. That is given by this interesting expression here. Notice the integral isn't over the traditional limits; it is to be taken counter-clockwise within the region of convergence. But very rarely would you do this contour integration, and therefore in practice these integrals are usually solved using pre-calculated results and tables. So what does a table of z-transforms look like? Well, I've provided one in the handout, and in just a moment I'm going to have a look at it. But before I do, I should note that there are a couple of properties of complex spectral densities that parallel some of the properties that we had for the cross power spectral density, and they are summarised here. The first properties are different forms of conjugate symmetry, but notice that the conjugation occurs both in terms of the z variable and in terms of the overall function. You might want to try and prove these results yourself, but I think what I'll do is cover these in more detail when we actually use them. Similarly, for the case when the random process is real, we have a very simple expression for the complex spectral density, and that is shown here. If you're wondering what the physical interpretation of that is, well, imagine what happens if you set z equal to e to the j omega. That gives you the power spectral density, and it just says that P_xx of e to the j omega is P_xx of e to the minus j omega. In other words, it's a symmetric function, which we already knew. So that's a nice definition that links in with a fact that we were already aware of.
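To see the table-lookup idea in action on one of the partial-fraction terms from our earlier example, here is a small numerical check (my own sketch; the choice a = 0.5 is arbitrary). For the causal term 1/(1 − a z^{−2}) with ROC |z| > sqrt(|a|), the inverse z-transform should be a^{l/2} for even l ≥ 0 and zero otherwise, which we can confirm by running the corresponding difference equation:

```python
import numpy as np

a = 0.5
N = 12

# Inverse z-transform of 1/(1 - a z^{-2}), ROC |z| > sqrt(|a|), obtained by
# running the difference equation h[n] = a h[n-2] + delta[n] (causal system).
h = np.zeros(N)
for n in range(N):
    h[n] = (1.0 if n == 0 else 0.0) + (a * h[n - 2] if n >= 2 else 0.0)

# Table-style result: h[n] = a^{n/2} for even n >= 0, zero otherwise
expected = np.where(np.arange(N) % 2 == 0, a ** (np.arange(N) / 2), 0.0)
assert np.allclose(h, expected)
```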
So be aware that a few of these properties will come up very often in the next handout, but I will use them at the time and remind you of them. Okay, I mentioned that I was going to highlight a table of bilateral z-transforms. The table is in the handout, effectively a data sheet that summarises the z-transforms. In the following table, we're going to assume that the magnitude of the constant a is less than or equal to one; I'll show you what that means in a moment. We're also going to use the step function in discrete time, which just indicates whether your function is defined for positive values of time or negative values of time. Now, a difference between the unilateral z-transform and the bilateral z-transform is that we do have to consider regions of convergence. Again, that is very important, but we'll deal with regions of convergence when necessary, as we go through some examples. Finally, just before we get onto the table, we should note that if the signal is 0 for negative time samples, then it's known as a causal signal, and if the signal is 0 for all positive time samples, it's known as an acausal signal. So with all of those various definitions in mind, this is a typical z-transform table for some very simple signals. At the top here we just have a step function, which is 0 for negative time samples and then one for positive time samples. The region of convergence here is outside the unit circle. Notice that this means its discrete-time Fourier transform doesn't exist, because the point z equals e to the j omega is not inside the ROC. Well, that actually makes sense, because we know that the DTFT does not technically exist for a step function, so don't be surprised by that result. Our second entry is a reflected step function for negative time values, and you can see that instead of having z to the minus one, we've got a z.
Where you have a z, that generally corresponds to a negative time sample; where you've got a z to the minus one, that's a positive time sample. The region of convergence is now inside the unit circle, but doesn't include it, so its DTFT does not exist either. In the third example, we've got an exponential decay, and notice, by the way, that this one is causal. That becomes a familiar expression. Now, this will decay away if the magnitude of a is less than one, and that means that if you drew the region of convergence, it is outside the circle of radius a. But because the modulus of a is less than one, the region of convergence actually includes the unit circle, and that means that the DTFT does exist. So you can set z equal to e to the j omega and therefore get the frequency response. Now, I will let you explore the other examples I've got here, which are just variations of exponential decays and inverted exponential decays, and also weighted exponential decays as you get towards the bottom. There are also more examples of further exponential decays in the table that is in the handout; I haven't managed to put them all on this slide, but you should go and study those. As I mentioned previously, a variety of equivalent expressions can result from some simple manipulations, and so other tables of z-transforms may appear to list different results, but they are actually equivalent if you manipulate them. I've already looked at one example on a previous slide, but here are a couple more: if we were to look at the two-sided exponential decay, then you can write it in a variety of different forms, as shown here. There's no need to go into this too much in this video, but be aware of it. And my final bullet point I've already made: it basically means that sometimes you have to do a little bit of work, and sometimes it can be difficult to find the exact transform relations in tables.
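As with the earlier worked example, any table entry can be sanity-checked numerically. This sketch (my own, with arbitrary choices of a and omega) verifies the causal exponential entry a^n u(n) ↔ 1/(1 − a z^{−1}) on the unit circle, which lies in the ROC |z| > |a| since |a| < 1:

```python
import numpy as np

a = 0.8                       # |a| < 1, so the ROC |z| > |a| includes the unit circle
omega = 1.3
z = np.exp(1j * omega)        # evaluate on the unit circle

# Table entry: a^n u(n)  <->  1 / (1 - a z^{-1}),  ROC |z| > |a|
n = np.arange(0, 400)         # causal: zero for n < 0, so sum from n = 0
direct = np.sum(a ** n * z ** (-n.astype(float)))
closed = 1 / (1 - a / z)

assert np.isclose(direct, closed)
```

Because the unit circle is in the ROC here, this evaluation is also the DTFT, i.e. the frequency response, exactly as discussed above.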
Okay, so this was the last topic in our handout on the frequency domain representation of random signals. We've extended the ideas from the previous topics on power spectral densities to taking the Fourier transform of cross-correlations, which introduced the idea of the cross power spectral density: effectively the second moment, but in the frequency domain. We've also taken our theory and, instead of working in the Fourier domain, worked in the z-domain, because that is a very powerful technique for analysing linear systems, especially when we come to look at the statistics of random signals at the output of a linear system. And that's what we're going to do in the next chapter. So until then, thank you very much.