ECE 280/Concept List/S24

Lecture 1 - 1/11 - Class introduction; basic signal classifications

  • Class logistics and various resources on Canvas
  • Definition of signals and systems from OW
  • Systems will often be represented with block diagrams. System operations for linear, time-invariant (more on that later) systems may be characterized in the frequency domain using transfer functions.
  • Signal classifications
    • Dimensionality ($$x(t)$$, $$g(x, y)$$, etc)
    • Continuous versus discrete
    • Analog versus digital and/or quantized
    • Periodic versus aperiodic
      • Generally $$x(t)=x(t+kT)$$ for all integers k (i.e. $$x(t)=x(t+kT), k\in \mathbb{Z}$$). The period $$T$$ (sometimes called the fundamental period $$T_0$$) is the smallest value for which this relation is true
      • A periodic signal can be defined as an infinite sum of shifted versions of one period of the signal: $$x(t)=\sum_{n=-\infty}^{\infty}g(t\pm nT)$$ where $$g(t)$$ is only possibly nonzero within one particular period of the signal and 0 outside of that period.
  • Energy, power, or neither
    • Energy signals have a finite amount of energy: $$E_{\infty}=\int_{-\infty}^{\infty}|x(\tau)|^2\,d\tau<\infty$$
      • Examples: Bounded finite duration signals; exponential decay
    • Power signals have an infinite amount of energy but a finite average power over all time: $$P_{\infty}=\lim_{T\rightarrow\infty}\frac{1}{T}\int_{-T/2}^{T/2}|x(\tau)|^2\,d\tau=\lim_{T\rightarrow\infty}\frac{1}{2T}\int_{-T}^{T}|x(\tau)|^2\,d\tau<\infty$$ and $$E_{\infty}=\infty$$
      • Examples: Bounded infinite duration signals, including periodic signals
      • For periodic signals, only need one period (that is, remove the limit and use whatever period definition you want): $$P_{\infty}=\frac{1}{T}\int_{T}|x(\tau)|^2\,d\tau$$
    • If both the energy and the overall average power are infinite, the signal is neither an energy signal nor a power signal.
      • Examples: Certain unbounded signals such as $$x(t)=e^t$$
  • Useful math shortcut (a numerical check of the pulse-energy formula appears after this lecture's list)
    • For a trapezoidal pulse
      $$x(t)=\begin{cases}mt+b, &0<t\leq\Delta t\\0,&\mathrm{otherwise}\end{cases}$$
      where
      $$x(0)=b=H_1,~x(\Delta t)=b+m\,\Delta t=H_2$$
      the energy is:
      $$E=\frac{(b+m\,\Delta t)^3-b^3}{3m}=\frac{H_1^2+H_1H_2+H_2^2}{3}\Delta t$$
    • For a rectangular pulse where $$H_1=H_2=A$$, this yields:
      $$E=A^2\,\Delta t$$
    • For a triangle pulse where $$H_1=0$$ and $$H_2=A$$, this yields:
      $$E=\frac{1}{3}A^2\,\Delta t$$
  • If a signal has components that have finite total energy and other components that have finite average power, the finite total energy parts do not contribute to the average power.
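
The trapezoidal-pulse energy shortcut above is easy to sanity-check numerically. This is just an illustrative sketch (the heights and width below are arbitrary assumptions, not lecture values) comparing the closed form against a brute-force Riemann sum:

    import numpy as np

    # Trapezoidal pulse: x(t) = m*t + b for 0 < t <= width, 0 elsewhere
    H1, H2, width = 1.0, 3.0, 2.0                  # assumed example heights and width
    b, m = H1, (H2 - H1) / width

    t = np.linspace(0, width, 100001)
    x = m * t + b

    E_numeric = np.sum(x**2) * (t[1] - t[0])            # brute-force energy integral
    E_shortcut = (H1**2 + H1*H2 + H2**2) / 3 * width    # closed-form shortcut

    print(E_numeric, E_shortcut)   # both come out near 26/3 = 8.667 for these values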


Lecture 2 - 1/16 - Periodicity, even and odd, basic transformations, steps and ramps

  • More on periodic signals
    • The sum or difference of two periodic signals will be periodic if their periods are commensurable (i.e. if their periods form a rational fraction) or if any aperiodic components are removed through addition or subtraction.
    • The period of a sum or difference of periodic signals, if periodic, will be at most the least common multiple of the component signal periods; the actual period could be less than this period depending on interference
    • The product of two signals with periodic components will have elements at frequencies equal to the sums and differences of the frequencies in the first signal and the second signal. If the periods represented by those components are commensurable, the signal will be periodic, and again the upper bound on the period will be the least common multiple of the component periods (the sums and differences, not the original).
    • Best bet is to combine the signals, determine the angular frequencies of each component, and determine if all pairs of frequencies are commensurable; if they are, find the largest number that can be multiplied by integers to get all the component frequencies - that number is the fundamental frequency $$\omega_0$$.
  • Even and Odd (a quick numerical check appears after this lecture's list)
    • Purely even signals: $$x(t)=x(-t)$$ (even-powered polynomials, cos, $$|t|$$)
    • Purely odd: $$x(t)=-x(-t)$$ (odd-powered polynomials, sin)
    • Even component: $$\mathcal{Ev}\{x(t)\}=x_e(t)=\frac{x(t)+x(-t)}{2}$$
    • Odd component: $$\mathcal{Od}\{x(t)\}=x_o(t)=\frac{x(t)-x(-t)}{2}$$
    • $$x_e(t)+x_o(t)=x(t)$$
    • The even and odd components of $$x(t)=e^{at}$$ end up being $$\cosh(at)$$ and $$\sinh(at)$$
    • The even and odd components of $$x(t)=e^{j\omega t}$$ end up being $$\cos(\omega t)$$ and $$j\,\sin(\omega t)$$
  • Singularity functions - see Singularity_Functions and specifically Singularity_Functions#Accumulated_Differences
    • Unit step: $$u(t)=\begin{cases}1, t>0\\0, t<0\end{cases}$$
    • Unit ramp: $$r(t)=\int_{-\infty}^{t}u(\tau)\,d\tau=\begin{cases}t, t>0\\0, t<0\end{cases}=t\,u(t)$$
    • Unit quadratic: $$q(t)=\frac{1}{2}t^2\,u(t)$$
    • Unit cubic: $$c(t)=\frac{1}{6}t^3\,u(t)$$
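
Returning to the even/odd decomposition above: a minimal numerical check (my own sketch, not a lecture example) builds $$x_e(t)$$ and $$x_o(t)$$ on a symmetric time grid and confirms that they add back to $$x(t)$$, and that for $$x(t)=e^{at}$$ they match cosh and sinh:

    import numpy as np

    a = 0.5
    t = np.linspace(-3, 3, 1201)       # symmetric grid, so x(-t) is just a flip
    x = np.exp(a * t)

    x_rev = x[::-1]                    # x(-t) sampled on the same grid
    x_e = (x + x_rev) / 2              # even component
    x_o = (x - x_rev) / 2              # odd component

    print(np.allclose(x_e + x_o, x))          # True: decomposition recovers x(t)
    print(np.allclose(x_e, np.cosh(a * t)))   # True: even part is cosh(at)
    print(np.allclose(x_o, np.sinh(a * t)))   # True: odd part is sinh(at)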

Lecture 3 - 1/18 - Signal Transformations, power and energy with transformations, impulse functions

  • Signal transformations
    • $$z(t)=K\,x(a(t-t_0))+C$$ with
    • $$K$$: vertical scaling factor
    • $$\pm a$$: time scaling (with reversal if negative); $$|a|>1$$ speeds things up / compresses the signal while $$|a|<1$$ slows things down / expands the signal
    • $$t_0$$: time shift
    • $$C$$: vertical shift
    • Get into the form above first; for example, rewrite $$3\,x\left(\frac{t}{2}+4\right)$$ as $$3\,x\left(\frac{1}{2}(t+8)\right)$$ first, find the "new origin" (the old point (0, 0) will be at the new point $$(t_0, C)$$), and then go from there; a numerical version of this example appears after this lecture's list
  • Power and energy and transformations
    • The power of $$A\,\cos(\omega t+\phi)$$ is $$\frac{A^2}{2}$$ - note that $$A$$ affects the power but neither $$\omega$$ nor $$\phi$$ does.
    • In previous lectures, we looked at power signals and their overall average power $$P_{\infty}$$ and energy signals and their total energy $$E_{\infty}$$
    • We've now looked at a transformation of variables $$z(t)=K\,x(\pm a(t-t_0))+C$$
    • If $$x(t)$$ is an energy signal with energy $$E_{\infty,x}$$ or a power signal with overall average power $$P_{\infty,x}$$, then the different transformations above have the following individual impacts:
      • A multiplicative factor of $$K$$ in the transformation means the energy or power of $$z(t)$$ will be the energy or power of $$x(t)$$ multiplied by $$K^2$$
      • A time scale of $$a$$ in the transformation means the energy of $$z(t)$$ will be the energy of $$x(t)$$ multiplied by $$\frac{1}{a}$$; time scaling has no influence on average power
      • Time reversal has no influence on energy or power
      • Time shifts have no influence on energy or power
      • Vertical shifts have a complicated influence on power
  • Definition of the impulse function: Area of 1 at time 0; 0 elsewhere
    • Sifting property - figure out when $$\delta$$ fires off, see if that argument happens or if there are restrictions based on integral limits
  • Integrals with unit steps - figure out when integrand might be non-zero and work from there
  • See Singularity_Functions and especially Singularity_Functions#General_Simplification_of_Integrals and Singularity_Functions#Convolution_Integral_Simplification_with_Step_Function_Product_as_Part_of_Integrand
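
A numerical version of the transformation example above (a sketch; the test signal is an assumed triangle pulse, not from lecture): apply $$z(t)=3\,x\left(\frac{1}{2}(t+8)\right)$$, i.e. $$K=3$$, $$a=\frac{1}{2}$$, $$t_0=-8$$, $$C=0$$, and confirm that the feature that was at the origin of $$x(t)$$ lands at $$t=-8$$ with three times the height:

    import numpy as np

    def x(t):
        # assumed test signal: unit-height triangle centered at t = 0, width 2
        return np.clip(1 - np.abs(t), 0, None)

    t = np.linspace(-20, 20, 4001)
    z = 3 * x(t / 2 + 4)        # same as 3 * x((1/2)*(t + 8)): K=3, a=1/2, t0=-8, C=0

    t_peak = t[np.argmax(z)]
    print(t_peak, z.max())      # peak is at t = -8 with height 3*x(0) = 3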

Lecture 4 - 1/23 - System Representations and Interconnections

  • Systems may be defined generally with an input and an output or specifically for linear and time invariant (LTI) systems with a transfer function
  • The frequency representation of the output of an LTI system is the frequency representation of the input of the system multiplied by the transfer function of the circuit.
  • The transfer function of a feedback system can be found by tracing the signals around the system
  • Elevators are arguably more complicated than toasters...

Lecture 5 - 1/25 - System Properties

  • System properties - see System_Properties for some ways to check some of the properties
    • Linearity (linear versus nonlinear)
      • Common nonlinearities include additive constants, non-unity powers of signals
    • Time invariance (time invariant versus time-varying)
      • Common time-varying elements include $$t$$ outside of arguments of signals, time reversals, or time scales other than 1
    • Stability (stable versus unstable)
      • Common instabilities involve inverses, integrals, some trig functions, and derivatives if you are including discontinuities
    • Memoryless (memoryless versus having memory)
      • Memoryless systems can *only* depend on "right now"; some debate about derivatives
    • Causality (causal versus non-causal)
      • Systems whose responses depend only on current and previous values of the independent variable are causal; if they depend at all on future values, they are non-causal.
      • Real systems with time $$t$$ as the independent variable are causal; systems with location as the independent variable may be non-causal
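
These properties can be screened numerically for simple discrete-time systems; the sketch below is a rough check, not a proof, and the two systems are assumed examples (a squarer, which is nonlinear but time invariant, and a time-varying gain, which is linear but not time invariant):

    import numpy as np

    rng = np.random.default_rng(0)
    N = 32
    n = np.arange(N)

    S1 = lambda x: x**2        # assumed example: nonlinear, time invariant
    S2 = lambda x: n * x       # assumed example: linear, time varying

    def looks_linear(S):
        x1, x2 = rng.standard_normal(N), rng.standard_normal(N)
        a, b = 2.0, -3.0
        return np.allclose(S(a*x1 + b*x2), a*S(x1) + b*S(x2))

    def looks_time_invariant(S, shift=5):
        x = rng.standard_normal(N)
        return np.allclose(S(np.roll(x, shift)), np.roll(S(x), shift))   # circular shift as a simple stand-in

    print(looks_linear(S1), looks_time_invariant(S1))   # False True
    print(looks_linear(S2), looks_time_invariant(S2))   # True False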

Lecture 6 - 1/30 - Discrete LTI Systems and Convolution

  • If a discrete system is LTI, $$y[n]=x[n]\ast h[n]=\sum_{k=-\infty}^{\infty}x[k]\,h[n-k]=\sum_{k=-\infty}^{\infty}x[n-k]\,h[k]$$; this is discrete convolution
  • The results of Problem 1.54 in OW will be INCREDIBLY USEFUL for these
  • $$u[n]\ast u[n]=(n+1)u[n]=r[n]$$ - note that this is 1 when $$n=0$$, unlike $$r(t)=0$$ when $$t=0$$
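
A quick check of the $$u[n]\ast u[n]$$ result above using np.convolve (the infinite sums are truncated, so only the leading samples are meaningful):

    import numpy as np

    N = 10
    u = np.ones(N)                     # u[n] for n = 0..N-1 (truncated)
    y = np.convolve(u, u)[:N]          # first N samples of u[n]*u[n]

    print(y)                                          # [ 1.  2.  3. ... 10.] = n + 1
    print(np.array_equal(y, np.arange(1, N + 1.0)))   # True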

Lecture 7 - 2/1 - More Discrete LTI Systems and Convolution

  • Graphical convolution - see link for demo app on Canvas

Lecture 8 - 2/5 - LTI Systems and Convolution

  • Introduction to LTI system analysis:
    • Define the step and impulse functions as given above
    • Define the impulse response $$h(t)$$ as the response to an impulse $$\delta(t)$$; that is, $$\delta(t)\,\longrightarrow\,h(t)$$
    • This will be mathematically very useful and physically impossible to measure, though we may be able to measure it approximately using a high-amplitude, short duration rectangular or other pulse with an area of 1.
    • Define the step response $$y_{\mbox{step}}(t)$$ as the response to a step $$u(t)$$; that is, $$u(t)\,\longrightarrow\,y_{\mbox{step}}(t)$$
    • This will be more likely to be physically obtainable but mathematically not quite as useful. Fortunately...
    • The step and impulse responses are related in the same ways as the step and impulse:
      $$\begin{align*} \delta(t)&=\frac{d}{dt}u(t) & u(t)&=\int_{-\infty}^t\delta(\tau)\,d\tau\\ h(t)&=\frac{d}{dt}y_{\mbox{step}}(t) & y_{\mbox{step}}(t)&=\int_{-\infty}^th(\tau)\,d\tau \end{align*}$$
    • Given those definitions, and assuming a linear-time invariant system:
      $$\begin{align*} \mbox{Definition}&~ & \delta(t)\,&\longrightarrow\,h(t)\\ \mbox{Time Invariance}&~ & \delta(t-\tau)\,&\longrightarrow\,h(t-\tau)\\ \mbox{Linearity (Homogeneity)}&~ & x(\tau)\,\delta(t-\tau)\,&\longrightarrow\,x(\tau)\,h(t-\tau)\\ \mbox{Linearity (Superposition)}&~ & \int_{-\infty}^{\infty}x(\tau)\,\delta(t-\tau)\,d\tau\,&\longrightarrow\,\int_{-\infty}^{\infty}x(\tau)\,h(t-\tau)\,d\tau\\ \mbox{Sifting}&~ & \int_{-\infty}^{\infty}x(\tau)\,\delta(t-\tau)\,d\tau=x(t)\,&\longrightarrow\,y(t)=\int_{-\infty}^{\infty}x(\tau)\,h(t-\tau)\,d\tau\\ \end{align*}$$
    • Punchline: For an LTI system with impulse response $$h(t)$$ and input signal $$x(t)$$ the output signal is given by the convolution integral:
      $$ \begin{align*} y(t)=x(t)*h(t)=\int_{-\infty}^{\infty}x(\tau)\,h(t-\tau)\,d\tau \end{align*}$$
and through a transformation of variables can also be given by:
$$ \begin{align*} y(t)=h(t)*x(t)=\int_{-\infty}^{\infty}x(t-\tau)\,h(\tau)\,d\tau \end{align*}$$
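
The convolution integral can be approximated numerically by discretizing both signals and scaling the discrete convolution by the time step. A sketch with an assumed first-order impulse response $$h(t)=e^{-t}u(t)$$ and a unit-step input, which should reproduce the step response $$1-e^{-t}$$:

    import numpy as np

    dt = 1e-3
    t = np.arange(0, 10, dt)

    h = np.exp(-t)            # assumed impulse response h(t) = e^{-t} u(t)
    x = np.ones_like(t)       # unit step input for t >= 0

    y = np.convolve(x, h)[:len(t)] * dt    # Riemann-sum approximation of x(t)*h(t)
    y_exact = 1 - np.exp(-t)               # known step response for this h(t)

    print(np.max(np.abs(y - y_exact)))     # small error, on the order of dt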

Lecture 9 - 2/8 - More Convolution; System Properties based on Impulse Response; LCCDDE

  • Step response and impulse response of an RC circuit
  • Using convolution to find response of an RC circuit to a signal
  • System properties:
    • LTI systems have impulse responses; if you are given an impulse response for a system, it is most likely an LTI system (or else the impulse response is not as useful...)
    • Memoryless: $$h(t)=K\delta(t)$$
    • Causal: $$h(t)=0$$ for all $$t<0$$
    • Stable: $$\int_{-\infty}^{\infty}|h(t)|\,dt<\infty$$
  • LCCDDE Solutions
    • Isolate all y terms on one side and x terms on the other
    • Shift index to get a leading $$a_0y[n]$$ on the left
    • If anything other than $$x[n]$$ is on the right, temporarily replace it with $$x[n]$$ to solve for $$h'[n]$$
    • Get characteristic polynomial to solve for $$\gamma$$; if there are repeated $$\gamma$$ values, you will have polynomial geometric series
    • Use $$n=0$$, $$n=1$$, etc to get coefficients; for an $$N$$-th order difference equation, use $$N$$ time values to get the coefficients
    • See more info on the LCCDDE helper PDF linked on Canvas.
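
A minimal LCCDDE sketch for an assumed first-order example, $$y[n]-\frac{1}{2}y[n-1]=x[n]$$: running the recursion with $$x[n]=\delta[n]$$ gives the impulse response, which matches the closed form $$h[n]=\left(\frac{1}{2}\right)^n u[n]$$ predicted by the characteristic root $$\gamma=\frac{1}{2}$$:

    import numpy as np

    N = 20
    x = np.zeros(N)
    x[0] = 1.0                          # discrete impulse delta[n]

    # assumed example difference equation: y[n] = 0.5*y[n-1] + x[n]
    y = np.zeros(N)
    for n in range(N):
        y[n] = (0.5 * y[n - 1] if n > 0 else 0.0) + x[n]

    h_closed = 0.5 ** np.arange(N)      # gamma^n from characteristic root gamma = 0.5

    print(np.allclose(y, h_closed))     # True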

Lecture 10 - 2/13 - 1st and 2nd Order Differential Equations

  • First-order systems - impulse response is a decaying exponential
  • Second-order systems - Characteristic polynomial for general second-order differential equation:
    • $$ \begin{align*} \frac{d^2y(t)}{dt^2}+2\zeta \omega_n\frac{dy(t)}{dt}+\omega_n^2y(t)&=f(t)\\ s^2+2\zeta\omega_n s+\omega_n^2&=0\\ s&=-\zeta\omega_n\pm\omega_n\sqrt{\zeta^2-1} \end{align*} $$
      where $$\zeta$$ is the damping ratio and $$\omega_n$$ is the natural frequency.
    • Depending on value of $$\zeta$$, system can be:
      • Undamped: $$\zeta=0$$, $$s$$=purely imaginary complex conjugates, homogeneous response is sinusoidal
      • Underdamped: $$0<\zeta<1$$, $$s$$=complex conjugates with nonzero real and imaginary parts, homogeneous response is exponential sinusoid
      • Critically damped: $$\zeta=1$$, $$s$$=repeated real roots, homogeneous response is polynomial exponential
      • Overdamped: $$\zeta>1$$, $$s$$=two different purely real roots, homogeneous response is exponential
      • If $$\zeta$$ and $$\omega_n$$ are both positive, real part is negative meaning exponential decay
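
The damping cases above can be read directly from the characteristic roots. A small sketch evaluating $$s=-\zeta\omega_n\pm\omega_n\sqrt{\zeta^2-1}$$ for representative $$\zeta$$ values (with an assumed $$\omega_n=2$$ rad/s):

    import numpy as np

    wn = 2.0                                   # assumed natural frequency (rad/s)
    for zeta in [0.0, 0.5, 1.0, 2.0]:          # undamped, under-, critically, overdamped
        s = -zeta * wn + wn * np.sqrt(complex(zeta**2 - 1)) * np.array([1, -1])
        print(zeta, s)
    # zeta = 0.0 -> +-2j              (purely imaginary: sinusoidal)
    # zeta = 0.5 -> -1 +- 1.732j      (complex conjugates: exponential sinusoid)
    # zeta = 1.0 -> -2, -2            (repeated real root: polynomial exponential)
    # zeta = 2.0 -> -0.54, -7.46      (two real roots: decaying exponentials)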

Lecture 11 - 2/15 - Correlation

Pre-script: in all of the equations below we are assuming real-valued signals; if the signals are complex, one of the terms in the integrand is generally taken as a complex conjugate.

  • Correlation Function: two different versions: $$\begin{align*} \phi_{xy}(t)&=\int_{-\infty}^{\infty}x(t+\tau)\,y(\tau)\,d\tau \\ r_{xy}(t)&=\int_{-\infty}^{\infty}x(\tau)\,y(t+\tau)\,d\tau\end{align*}$$
    • What kind of overlap do two signals have as you move one of the signals relative to the other?
  • Autocorrelation Function: two different versions, but same result regardless:$$\begin{align*} \phi_{xx}(t)&=\int_{-\infty}^{\infty}x(t+\tau)\,x(\tau)\,d\tau \\ r_{xx}(t)&=\int_{-\infty}^{\infty}x(\tau)\,x(t+\tau)\,d\tau\end{align*}$$
    • What kind of overlap does a signal have with itself as you move it relative to itself?
  • Autocorrelation: $$\phi_{xx}(0)$$ or $$r_{xx}(0)$$
    • What kind of overlap does a signal have with itself not accounting for any time shift?
    • For real-valued signals, note that this is the same as the energy of the signal!
  • In all cases, correlation can be written as convolution using
    $$\begin{align*} \phi_{xy}(t)&=x(t)*y(-t)\\ r_{xy}(t)&=x(-t)*y(t)\end{align*}$$
    but mathematically this leads to issues where an integrand may contain products of step functions facing the same way. One way to fix that is to find a way to either write $$y(-t)$$ as a function $$y_m(t)$$ that uses right-facing steps or write $$x(-t)$$ as a function $$x_m(t)$$ that uses right-facing steps then note that
    $$\begin{align*} \phi_{xy}(t)&=x(t)\ast y_m(t)\\ r_{xy}(t)&=x_m(t)\ast y(t)\end{align*}$$
  • None of the measures above give a great sense of how similar one signal is to another because they are all influenced by the scale of each signal. To get a dimensionless, normalized Measure of Correlation between two signals, you can calculate:
    $$\begin{align*} \mbox{MOC}_{xy}&=\frac{\left(\max\left(\phi_{xy}(t)\right)\right)^2}{\phi_{xx}(0)\,\phi_{yy}(0)}\\ \mbox{MOC}_{xy}&=\frac{\left(\max\left(r_{xy}(t)\right)\right)^2}{r_{xx}(0)\,r_{yy}(0)}\end{align*}$$
    which will be some value between 0 and 1. A 1 means that $$y(t)$$ is a shifted, scaled version of $$x(t)$$.
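
A numerical sketch of the measure of correlation (assumed signals; the correlation integrals are approximated by discrete sums): $$y(t)$$ is a shifted, scaled copy of $$x(t)$$, so its MOC with $$x(t)$$ should be 1, while an unrelated signal gives a value near 0:

    import numpy as np

    dt = 1e-3
    t = np.arange(0, 5, dt)

    x = np.where((t > 1) & (t < 2), 1.0, 0.0)            # assumed rectangular pulse
    y = 3 * np.where((t > 2.5) & (t < 3.5), 1.0, 0.0)    # shifted, scaled copy of x(t)
    z = np.cos(2 * np.pi * t)                            # an unrelated signal

    def moc(a, b):
        r_ab = np.correlate(a, b, mode="full") * dt      # r_ab over all shifts
        r_aa0 = np.sum(a * a) * dt                       # autocorrelation at zero shift
        r_bb0 = np.sum(b * b) * dt
        return r_ab.max()**2 / (r_aa0 * r_bb0)

    print(moc(x, y))   # ~1.0
    print(moc(x, z))   # close to 0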

Lecture 12 - 2/20 - Fourier Series Introduction

  • For an LTI system,
    \( \begin{align*} x(t) &\rightarrow y(t)\\ \delta(t)&\rightarrow h(t)\\ e^{j\omega t}&\rightarrow e^{j\omega t} \int_{-\infty}^{\infty}h(t)e^{-j \omega t}dt=e^{j\omega t} H(j\omega) \end{align*}\)
    where $$H(j\omega)$$ is the Fourier Transform of the impulse response, $$h(t)$$
  • Fourier Series representation can be used on signals that are periodic, bounded, have a finite number of local minima and maxima in a given period, and have a finite number of discontinuities in a given period.
  • Main formulas:
\( \begin{align*} x(t)&=\sum_{k=-\infty}^{\infty}\mathbb{X}[k]\,e^{jk\omega_0t} \end{align*} \)
  • For periodic signals consisting of pure sinusoids,
\( \begin{align*} x(t)&=A\,\cos(p\omega_0t)+B\,\sin(q\omega_0t) & \mathbb{X}[k]&=\begin{cases} k=p & \frac{A}{2}\\ k=q & \frac{B}{j2}\\ k=-q & -\frac{B}{j2}\\ k=-p & \frac{A}{2} \end{cases} \end{align*} \)
  • For signals with a finite number of non-zero Fourier Series coefficients, synthesis can be done by noting the real part translates to a cosine at twice that amplitude and the imaginary part translates to a sine at negative twice the amplitude:
\( \begin{align*} \mathbb{X}[k]&=\begin{cases} k=5 & -2 \\ k=4 & j3 \\ k=2 & 4-j5\\ k=-2 & 4+j5\\ k=-4 & -j3\\ k=-5 & -2 \end{cases} & x(t)&=-4\cos(5\omega_0t)-6\sin(4\omega_0t)+8\cos(2\omega_0t)+10\sin(2\omega_0t) \end{align*} \)
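
The synthesis example above can be checked numerically (a sketch; $$\omega_0$$ is an arbitrary assumed value): summing $$\mathbb{X}[k]e^{jk\omega_0t}$$ over the six nonzero coefficients should give a purely real signal that matches the stated combination of cosines and sines:

    import numpy as np

    w0 = 2 * np.pi                  # assumed fundamental frequency (T = 1)
    t = np.linspace(0, 1, 1001)

    X = {5: -2, 4: 3j, 2: 4 - 5j, -2: 4 + 5j, -4: -3j, -5: -2}    # given coefficients

    x_synth = sum(Xk * np.exp(1j * k * w0 * t) for k, Xk in X.items())
    x_formula = (-4*np.cos(5*w0*t) - 6*np.sin(4*w0*t)
                 + 8*np.cos(2*w0*t) + 10*np.sin(2*w0*t))

    print(np.allclose(x_synth.imag, 0, atol=1e-9))    # True: synthesis is purely real
    print(np.allclose(x_synth.real, x_formula))       # True: matches the closed form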


Lecture 13 - 2/22 - Test 1

Test

Lecture 14 - 2/27 - Fourier Series Analysis and Properties

  • Even signals have purely real Fourier Series coefficients; odd signals have purely imaginary Fourier Series coefficients
  • For real signals, $$\mathbb{X}[-k]=\mathbb{X}^*[k]$$
  • \( \begin{align*} x(t)&=\sum_{k=-\infty}^{\infty}\mathbb{X}[k]\,e^{jk\omega_0t} & \mathbb{X}[k]&=\frac{1}{T}\int_Tx(t)\,e^{-jk\omega_0t}\,dt \end{align*} \)
  • $$\mbox{sinc}(x)=\frac{\sin(\pi x)}{\pi x}$$ which means $$\mbox{sinc}(0)=1$$ and $$\mbox{sinc}(n)=0$$ for all integers $$n$$ other than 0.
  • The Fourier Series coefficients for a centered rectangular pulse with height $$A$$, width $$W$$, and period $$T$$ are given by $$\mathbb{X}[k]=A\frac{W}{T}\mbox{sinc}\left(k\frac{W}{T}\right)$$
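
The analysis integral above can be evaluated numerically for the centered rectangular pulse and compared against the $$A\frac{W}{T}\mbox{sinc}\left(k\frac{W}{T}\right)$$ formula; the values below ($$A=1$$, $$W=0.5$$, $$T=2$$) are assumptions for the sketch, and numpy's sinc already uses the normalized definition $$\sin(\pi x)/(\pi x)$$:

    import numpy as np

    A, W, T = 1.0, 0.5, 2.0            # assumed pulse height, width, and period
    w0 = 2 * np.pi / T
    dt = 1e-4
    t = np.arange(-T/2, T/2, dt)
    x = np.where(np.abs(t) < W/2, A, 0.0)                         # centered rectangular pulse

    for k in range(6):
        Xk_num = np.sum(x * np.exp(-1j * k * w0 * t)) * dt / T    # analysis integral
        Xk_formula = A * (W/T) * np.sinc(k * W / T)               # closed form
        print(k, round(Xk_num.real, 4), round(Xk_formula, 4))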

Lecture 15 - 2/29 - FS Properties

  • Time shift property: $$y(t)=x(t-t_0)\rightarrow\mathbb{Y}[k]=e^{-jk\omega_0t_0}\mathbb{X}[k]$$
  • Derivative property: $$y(t)=\frac{dx(t)}{dt}\rightarrow\mathbb{Y}[k]=jk\omega_0\mathbb{X}[k]$$
  • Integral property: $$y(t)=\int_{-\infty}^tx(\tau)\,d\tau\mbox{ with }\bar{x}(t)=0\rightarrow\mathbb{Y}[k]=\frac{1}{jk\omega_0}\mathbb{X}[k]$$
  • Centered Impulse train: $$x(t)=\sum_{k=-\infty}^{\infty}\delta(t-kT)\rightarrow\mathbb{X}[k]=\frac{1}{T}$$
  • Integrals of impulses: see Fourier_Series#Singularity_Function_Shortcuts
  • Transfer functions: Given a transfer function $$\mathbb{H}(j\omega)$$, $$\mathbb{Y}[k]=\mathbb{X}[k]\mathbb{H}(jk\omega_0)$$

Lecture 16 - 3/5 - Fourier Transform

  • General reference: Fourier Transforms
  • Synthesis and analysis:
    \( \begin{align*} x(t)&=\frac{1}{2\pi}\int_{-\infty}^{\infty}X(j\omega)e^{j\omega t}d\omega & X(j\omega)&=\int_{-\infty}^{\infty}x(t)e^{-j\omega t}dt\end{align*}\)
  • Impulse: $$x(t)=\delta(t)\rightarrow\mathbb{X}(j\omega)=1$$
  • Exponential decay: $$x(t)=e^{-at}u(t)\rightarrow\mathbb{X}(j\omega)=\frac{1}{j\omega+a}$$ if $$a>0$$
  • Time derivative property: $$y(t)=\frac{dx(t)}{dt}\rightarrow \mathbb{Y}(j\omega)=j\omega\mathbb{X}(j\omega)$$
  • Frequency derivative property: $$y(t)=t\,x(t)\rightarrow \mathbb{Y}(j\omega)=j\frac{d\mathbb{X}(j\omega)}{d\omega}$$
  • Polynomial exponential decay: $$x(t)=t\,e^{-at}u(t)\rightarrow\mathbb{X}(j\omega)=\frac{1}{(j\omega+a)^2}$$ if $$a>0$$
  • Partial fraction expansion and the cover-up method
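
A small partial-fraction / cover-up sketch for an assumed transform $$\mathbb{X}(j\omega)=\frac{1}{(j\omega+1)(j\omega+2)}$$: covering up each factor and evaluating the rest at its root gives the residues, and the resulting terms invert (via the exponential-decay pair above) to $$\left(e^{-t}-e^{-2t}\right)u(t)$$:

    import numpy as np

    # assumed example: X = 1/((s+1)(s+2)) = A/(s+1) + B/(s+2), with s = jw
    A = 1 / (-1 + 2)     # cover up (s+1), evaluate 1/(s+2) at s = -1
    B = 1 / (-2 + 1)     # cover up (s+2), evaluate 1/(s+1) at s = -2
    print(A, B)          # 1.0 -1.0

    # check: the expansion matches the original at a few frequencies
    s = 1j * np.array([0.0, 1.0, 5.0])
    print(np.allclose(1 / ((s + 1) * (s + 2)), A / (s + 1) + B / (s + 2)))   # True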

Lecture 17 - 3/7 - More Fourier Transforms

  • Integral property
  • Time shift property
  • Integrals of impulse functions
  • Convolution property
  • Multiplication property
  • Fourier Transforms for periodic functions
    • Fourier transforms for cos and sin
  • MOAT: for $$a>0$$,
    \(\begin{align*}e^{-at}\left(A\,\cos(\omega_xt)+B\,\sin(\omega_xt)\right)\,u(t)\leftrightarrow \frac{A(j\omega+a)+B(\omega_x)}{(j\omega+a)^2+\omega_x^2}\end{align*}\)

Lecture 18 - 3/19 - Sampling

  • Multiplication property
  • Impulse trains
  • Nyquist criterion (aliasing sketch after this list)
  • Reconstruction / zero-order hold
  • Duality
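
A small aliasing sketch tied to the Nyquist criterion above (all numbers are assumed): sampling a 7 Hz cosine at 10 Hz, which is below the required 14 Hz, produces exactly the same samples as a 3 Hz cosine:

    import numpy as np

    fs = 10.0                       # assumed sampling rate (Hz)
    t = np.arange(20) / fs

    x_7hz = np.cos(2*np.pi*7*t)     # 7 Hz > fs/2, so it aliases
    x_3hz = np.cos(2*np.pi*3*t)     # alias frequency |7 - fs| = 3 Hz

    print(np.allclose(x_7hz, x_3hz))   # True: indistinguishable once sampled at 10 Hz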

Lecture 19 - 3/21 - More FS

  • Parseval's Theorem
  • Time shifts
  • Solving differential equations and determining function type based on discriminant

Lecture 20 - 3/26 - Bode Plots

  • Decibels
  • Straight-line approximations
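
A sketch of the straight-line magnitude approximation for an assumed first-order low-pass $$H(j\omega)=\frac{1}{1+j\omega/\omega_c}$$: 0 dB below the corner frequency, then -20 dB/decade above it, compared with the exact magnitude:

    import numpy as np

    wc = 100.0                                # assumed corner frequency (rad/s)
    w = np.logspace(0, 4, 5)                  # 1, 10, 100, 1e3, 1e4 rad/s

    H = 1 / (1 + 1j * w / wc)
    exact_dB = 20 * np.log10(np.abs(H))
    approx_dB = np.where(w <= wc, 0.0, -20 * np.log10(w / wc))    # straight-line approximation

    for wi, e, a in zip(w, exact_dB, approx_dB):
        print(f"{wi:8.0f} rad/s   exact {e:7.2f} dB   approx {a:7.2f} dB")
    # the worst error is about 3 dB, right at the corner frequency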


Lecture 21 - 3/28 - AM Communication

  • Multiplication property
  • Synchronous demodulation
  • Fragility of synchronous demodulation
  • Asynchronous demodulation

Lecture 22 - 4/2 - Laplace Transform

  • Derivation from LTI properties
  • Synthesis and analysis equations
  • Region of Convergence
  • LT of impulse, step, exponential, complex exponential, sinusoid
  • Frequency shift property
  • LT of exponential sinusoid
  • Derivative, integral, frequency derivative properties

Lecture 23 - 4/4 - Test 2

  • Test 2

Lecture 24 - 4/9 - Laplace Transform Properties

  • Review
  • Time scale/reversal property
  • Use of ROC to determine ILT
  • LT of semi-periodic signals