ECE 280/Concept List/F23

Lecture 1 - 8/28 - Class introduction; basic signal classifications

  • Class logistics and various resources on Canvas
  • Definition of signals and systems from OW
  • Systems will often be represented with block diagrams. System operations for linear, time-invariant (more on that later) systems may be characterized in the frequency domain using transfer functions.
  • Signal classifications
    • Dimensionality ($$x(t)$$, $$g(x, y)$$, etc)
    • Continuous versus discrete
  • Analog versus digital and/or quantized
  • Periodic
    • Generally $$x(t)=x(t+kT)$$ for all integers k (i.e. $$x(t)=x(t+kT), k\in \mathbb{Z}$$). The period $$T$$ (sometimes called the fundamental period $$T_0$$) is the smallest value for which this relation is true
    • A periodic signal can be defined as an infinite sum of shifted versions of one period of the signal: $$x(t)=\sum_{n=-\infty}^{\infty}g(t\pm nT)$$ where $$g(t)$$ is only possibly nonzero within one particular period of the signal and 0 outside of that period.
  • Energy, power, or neither
    • Energy signals have a finite amount of energy: $$E_{\infty}=\int_{-\infty}^{\infty}|x(\tau)|^2\,d\tau<\infty$$
      • Examples: Bounded finite duration signals; exponential decay
    • Power signals have an infinite amount of energy but a finite average power over all time: $$P_{\infty}=\lim_{T\rightarrow\infty}\frac{1}{T}\int_{-T/2}^{T/2}|x(\tau)|^2\,d\tau=\lim_{T\rightarrow\infty}\frac{1}{2T}\int_{-T}^{T}|x(\tau)|^2\,d\tau<\infty$$ and $$E_{\infty}=\infty$$
      • Examples: Bounded infinite duration signals, including periodic signals
      • For periodic signals, only need one period (that is, remove the limit and use whatever period definition you want): $$P_{\infty}=\frac{1}{T}\int_{T}|x(\tau)|^2\,d\tau$$
    • If both the energy and the overall average power are infinite, the signal is neither an energy signal nor a power signal.
      • Examples: Certain unbounded signals such as $$x(t)=e^t$$
  • Useful math shortcut
    • For a trapezoidal pulse
      $$x(t)=\begin{cases}mt+b, &0\leq t\leq\Delta t\\0,&\mathrm{otherwise}\end{cases}$$
      where
      $$x(0)=b=H_1,~x(\Delta t)=b+m\,\Delta t=H_2$$
      the energy is:
      $$E=\frac{(b+m\,\Delta t)^3-b^3}{3m}=\frac{H_1^2+H_1H_2+H_2^2}{3}\Delta t$$
    • For a rectangular pulse where $$H_1=H_2=A$$, this yields:
      $$E=A^2\,\Delta t$$
    • For a triangle pulse where $$H_1=0$$ and $$H_2=A$$, this yields:
      $$E=\frac{1}{3}A^2\,\Delta t$$
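  • A minimal numerical check of the energy shortcut above (the pulse heights and duration in the sketch are arbitrary example values):

    import numpy as np

    # Arbitrary example pulse: start height, end height, duration
    H1, H2, dT = 1.0, 3.0, 2.0
    m, b = (H2 - H1) / dT, H1             # slope and intercept of the ramp segment

    # Numerically integrate |x(t)|^2 over the pulse
    t = np.linspace(0.0, dT, 100001)
    x = m * t + b
    E_numeric = np.sum(x**2) * (t[1] - t[0])

    # Closed-form shortcut from above
    E_formula = (H1**2 + H1*H2 + H2**2) / 3 * dT

    print(E_numeric, E_formula)           # both approximately 26/3 for these values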

Lecture 2 - 9/1 - Periodicity, even and odd, basic transformations, steps and ramps

  • Conclusion of "homework" from previous class: $$x(t)=1/\sqrt{t}$$ for $$t>1$$ has infinite total energy but 0 average power. This is related to Gabriel's horn, which has a finite volume but an infinite surface area.
  • More on periodic signals
    • The sum or difference of two periodic signals will be periodic if their periods are commensurable (i.e. if the ratio of their periods is a rational number) or if any aperiodic components are removed through addition or subtraction.
    • The period of a sum of periodic signals will be at most the least common multiple of the component signal periods; the actual period can be smaller than this if terms cancel when the signals are combined
    • The product of two signals with periodic components will have elements at frequencies equal to the sums and differences of the frequencies in the first signal and the second signal. If the periods represented by those components are commensurable, the signal will be periodic, and again the upper bound on the period will be the least common multiple of the component periods.
    • Best bet is to combine the signals, determine the angular frequencies of each component, and determine whether all pairs of frequencies are commensurable; if they are, find the largest number that can be multiplied by integers to get all the component frequencies - that number is the fundamental frequency $$\omega_0$$. A short sketch of this recipe appears after this list.
  • Even and Odd (a numerical check of the decomposition appears after this list)
    • Purely even signals: $$x(t)=x(-t)$$ (even powered polynomials, cos, $$|t|$$)
    • Purely odd: $$x(t)=-x(-t)$$ (odd-powered polynomials, sin)
    • Even component: $$\mathcal{Ev}\{x(t)\}=x_e(t)=\frac{x(t)+x(-t)}{2}$$
    • Odd component: $$\mathcal{Od}\{x(t)\}=x_o(t)=\frac{x(t)-x(-t)}{2}$$
    • $$x_e(t)+x_o(t)=x(t)$$
    • The even and odd components of $$x(t)=e^{at}$$ end up being $$\cosh(at)$$ and $$\sinh(at)$$
    • The even and odd components of $$x(t)=e^{j\omega t}$$ end up being $$\cos(\omega t)$$ and $$j\,\sin(\omega t)$$
  • Singularity functions - see Singularity_Functions and specifically Singularity_Functions#Accumulated_Differences
    • Unit step: $$u(t)=\begin{cases}1, & t>0\\0, & t<0\end{cases}$$
    • Unit ramp: $$r(t)=\int_{-\infty}^{t}u(\tau)\,d\tau=\begin{cases}t, & t>0\\0, & t<0\end{cases}$$
  • Signal transformations
    • $$z(t)=K\,x(\pm a(t-t_0))$$ with
    • $$K$$: vertical scaling factor
    • $$\pm a$$: time scaling (with reversal if negative); $$|a|>1$$ speeds things up / compresses the signal while $$|a|<1$$ slows things down / expands the signal
    • $$t_0$$: time shift
    • Get into the form above first; for example, rewrite $$3\,x\left(\frac{t}{2}+4\right)$$ as $$3\,x\left(\frac{1}{2}(t+8)\right)$$ first
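  • A minimal sketch of the frequency-comparison recipe above, assuming an example signal $$x(t)=\cos\left(\tfrac{3\pi t}{4}\right)+\sin\left(\tfrac{\pi t}{6}\right)$$ chosen purely for illustration:

    from fractions import Fraction
    from math import gcd, lcm

    # Each angular frequency is a rational multiple of pi, so the pair is
    # commensurable and the sum is periodic.
    w1 = Fraction(3, 4)          # omega_1 = (3/4)*pi rad/s
    w2 = Fraction(1, 6)          # omega_2 = (1/6)*pi rad/s

    # Fundamental frequency: the largest w0 such that w1 and w2 are both
    # integer multiples of w0.  For reduced fractions,
    # gcd(p1/q1, p2/q2) = gcd(p1, p2) / lcm(q1, q2).
    w0 = Fraction(gcd(w1.numerator, w2.numerator),
                  lcm(w1.denominator, w2.denominator))

    T0 = 2 / w0                  # T0 = 2*pi/omega_0; the factors of pi cancel
    print(f"omega_0 = ({w0})*pi rad/s, T0 = {float(T0)} s")   # pi/12 and 24 s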
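  • A quick numerical check of the even/odd decomposition above, using $$x(t)=e^{at}$$ with an arbitrary value of $$a$$ so the components can be compared against $$\cosh(at)$$ and $$\sinh(at)$$:

    import numpy as np

    a = 0.7                            # arbitrary example constant
    t = np.linspace(-2, 2, 9)          # symmetric grid, so x(-t) is x reversed
    x = np.exp(a * t)

    x_even = (x + x[::-1]) / 2         # x_e(t) = (x(t) + x(-t)) / 2
    x_odd  = (x - x[::-1]) / 2         # x_o(t) = (x(t) - x(-t)) / 2

    print(np.allclose(x_even + x_odd, x))        # True: components rebuild x
    print(np.allclose(x_even, np.cosh(a * t)))   # True: even part is cosh(at)
    print(np.allclose(x_odd,  np.sinh(a * t)))   # True: odd part is sinh(at)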

Lecture 3 - 9/4 - Power and energy with transformations, impulse functions

Lecture 4 - 9/8 - Impulse functions, integration with impulses and steps


Lecture 5 - 9/11 - System Interconnections and System Properties

  • Systems can be connected together in cascade, parallel, and feedback configurations
    • For LTI systems, systems in cascade will have their impulse responses convolved together and their transfer functions multiplied by each other
    • For LTI systems, systems in parallel will have their impulse responses added and their transfer functions added to each other
    • For LTI systems, systems with negative feedback are a little more complicated
  • System properties - see System_Properties for some ways to check some of the properties; a rough numerical probe of the first two is sketched after this list
    • Linearity (linear versus nonlinear)
      • Common nonlinearities include additive constants, non-unity powers of signals
    • Time invariance (time invariant versus time-varying)
      • Common time-varying elements include $$t$$ outside of arguments of signals, time reversals, or time scales other than 1
    • Stability (stable versus unstable)
      • Common instabilities involve inverses, integrals, some trig functions, and derivatives if you are including discontinuities
    • Memoryless (memoryless versus having memory)
      • The output of a memoryless system can *only* depend on the input "right now"; there is some debate about derivatives
    • Causality (causal versus non-causal)
      • Systems whose responses depend only on current and previous values of the independent variable are causal; if they depend at all on future values, they are non-causal.
      • Real systems with time $$t$$ as the independent variable are causal; systems with location as the independent variable may be non-causal
    • Invertibility (invertible versus non-invertible)
      • Invertible systems are systems where you can uniquely determine what the input was based on what the output is.
      • Determining invertibility is...complicated...
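  • A rough numerical probe of the first two properties for two example systems, $$y(t)=t\,x(t)$$ (linear but time-varying) and $$y(t)=x(t)+1$$ (time-invariant but nonlinear); a failed probe proves the property does not hold, while a passed probe is only suggestive, not a proof:

    import numpy as np

    t = np.linspace(-5, 5, 1001)          # time grid; dt = 0.01 s
    k = 100                               # a shift of 100 samples = 1 s

    def S1(x): return t * x               # example system: y(t) = t*x(t)
    def S2(x): return x + 1               # example system: y(t) = x(t) + 1

    x1 = np.exp(-t**2)                    # arbitrary test signals that decay
    x2 = np.cos(2 * t) * np.exp(-t**2)    # to ~0 before the edges of the grid

    for name, S in [("y=t*x(t)", S1), ("y=x(t)+1", S2)]:
        # Linearity probe: check superposition for one pair of weights
        a, b = 2.0, -3.0
        lin = np.allclose(S(a*x1 + b*x2), a*S(x1) + b*S(x2))
        # Time-invariance probe: feed in x1 delayed by 1 s and compare, away
        # from the zero-padded edge, with the original output delayed by 1 s
        x1_delayed = np.concatenate((np.zeros(k), x1[:-k]))
        ti = np.allclose(S(x1_delayed)[k:], S(x1)[:-k], atol=1e-6)
        print(f"{name}:  linearity probe passed? {lin}   TI probe passed? {ti}")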

Lecture 6 - 9/15 - LTI Systems and Convolution

  • Introduction to LTI system analysis:
    • Define the step and impulse functions as given above
    • Define the impulse response $$h(t)$$ as the response to an impulse $$\delta(t)$$; that is, $$\delta(t)\,\longrightarrow\,h(t)$$
    • This will be mathematically very useful but physically impossible to measure exactly, though we may be able to measure it approximately using a high-amplitude, short-duration rectangular or other pulse with an area of 1.
    • Define the step response $$y_{\mbox{step}}(t)$$ as the response to a unit step $$u(t)$$; that is, $$u(t)\,\longrightarrow\,y_{\mbox{step}}(t)$$
    • This will be more likely to be physically obtainable but mathematically not quite as useful. Fortunately...
    • The step and impulse responses are related in the same ways as the step and impulse:
      $$\begin{align*} \delta(t)&=\frac{d}{dt}u(t) & u(t)&=\int_{-\infty}^t\delta(\tau)\,d\tau\\ h(t)&=\frac{d}{dt}y_{\mbox{step}}(t) & y_{\mbox{step}}(t)&=\int_{-\infty}^th(\tau)\,d\tau \end{align*}$$
    • Given those definitions, and assuming a linear-time invariant system:
      $$\begin{align*} \mbox{Definition}&~ & \delta(t)\,&\longrightarrow\,h(t)\\ \mbox{Time Invariant}&~ & \delta(t-\tau)\,&\longrightarrow\,h(t-\tau)\\ \mbox{Linearity (Homogeneity)}&~ & x(\tau)\,\delta(t-\tau)\,&\longrightarrow\,x(\tau)\,h(t-\tau)\\ \mbox{Linearity (Superposition)}&~ & \int_{-\infty}^{\infty}x(\tau)\,\delta(t-\tau)\,d\tau\,&\longrightarrow\,\int_{-\infty}^{\infty}x(\tau)\,h(t-\tau)\,d\tau\\ \mbox{Sifting}&~ & \int_{-\infty}^{\infty}x(\tau)\,\delta(t-\tau)\,d\tau=x(t)\,&\longrightarrow\,y(t)=\int_{-\infty}^{\infty}x(\tau)\,h(t-\tau)\,d\tau\\ \end{align*}$$
    • Punchline: For an LTI system with impulse response $$h(t)$$ and input signal $$x(t)$$ the output signal is given by the convolution integral:
      $$ \begin{align*} y(t)=x(t)*h(t)=\int_{-\infty}^{\infty}x(\tau)\,h(t-\tau)\,d\tau \end{align*}$$
      and through a transformation of variables can also be given by:
      $$ \begin{align*} y(t)=h(t)*x(t)=\int_{-\infty}^{\infty}x(t-\tau)\,h(\tau)\,d\tau \end{align*}$$
  • Quick review of frequency analysis using impedance and division to get a transfer function
    • Reminder of translating between time and frequency domain with $$\frac{d}{dt}\leftrightarrows j\omega$$
    • Discussion about "illegal" circuit conditions (instant voltage change across capacitor or instant current change through inductor) and "weird" circuit conditions (voltage in parallel with an inductor or current source in series with a capacitor)
    • ECE 110 uses $$e^{j\omega t}$$ as the model signal for frequency analysis; we will eventually use $$e^{st}$$ where $$s=\sigma+j\omega$$
  • Step response of an RC circuit
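  • The punchline above can be checked numerically for the RC circuit: assuming the usual first-order low-pass impulse response $$h(t)=\frac{1}{RC}e^{-t/RC}u(t)$$ (stated here without derivation) and arbitrary example component values, convolving with a unit step should reproduce the step response $$\left(1-e^{-t/RC}\right)u(t)$$:

    import numpy as np

    R, C = 1e3, 1e-6                      # example values: 1 kOhm and 1 uF
    tau = R * C                           # time constant RC = 1 ms

    dt = 1e-5                             # step size of the discrete approximation
    t = np.arange(0.0, 10e-3, dt)         # 0 to 10 ms

    h = (1 / tau) * np.exp(-t / tau)      # assumed RC low-pass impulse response
    x = np.ones_like(t)                   # samples of the unit step for t >= 0

    # Riemann-sum approximation of y(t) = integral of x(tau) h(t - tau) dtau
    y = dt * np.convolve(x, h)[:len(t)]

    y_exact = 1 - np.exp(-t / tau)        # analytic step response for comparison
    print(np.max(np.abs(y - y_exact)))    # small; shrinks as dt is reduced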

Lecture 7 - 9/18 - More Convolution

Lecture 8 - 9/22 - System Properties based on Impulse Response

  • System properties:
    • LTI systems have impulse responses; if you are given an impulse response for a system, it is most likely an LTI system (or else the impulse response is not as useful...)
    • Memoryless: $$h(t)=K\delta(t)$$
    • Causal: $$h(t)=0$$ for all $$t<0$$
    • Stable: $$\int_{-\infty}^{\infty}|h(t)|\,dt<\infty$$
    • Invertible: If $$h^{inv}(t)$$ exists such that $$h(t)\ast h^{inv}(t)=\delta(t)$$
  • Second-order systems - Characteristic polynomial for general second-order differential equation:
    • $$ \begin{align*} \frac{d^2y(t)}{dt^2}+2\zeta \omega_n\frac{dy(t)}{dt}+\omega_n^2y(t)&=f(t)\\ s^2+2\zeta\omega_n s+\omega_n^2&=0\\ s&=-\zeta\omega_n\pm\omega_n\sqrt{\zeta^2-1} \end{align*} $$
      where $$\zeta$$ is the damping ratio and $$\omega_n$$ is the natural frequency.
    • Depending on value of $$\zeta$$, system can be:
      • Undamped: $$\zeta=0$$, $$s$$=purely imaginary complex conjugates, homogeneous response is sinusoidal
      • Underdamped: $$0<\zeta<1$$, $$s$$=complex conjugates with nonzero real parts, homogeneous response is exponential sinusoid
      • Critically damped: $$\zeta=1$$, $$s$$=repeated real roots, homogeneous response is polynomial exponential
      • Overdamped: $$\zeta>1$$, $$s$$=two different purely real roots, homogeneous response is exponential
      • If $$\zeta$$ and $$\omega_n$$ are both positive, real part is negative meaning exponential decay
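  • A short sketch that reproduces the cases above directly from the characteristic polynomial, with an arbitrary natural frequency:

    import numpy as np

    wn = 4.0                              # arbitrary natural frequency (rad/s)

    for zeta, label in [(0.0, "undamped"), (0.5, "underdamped"),
                        (1.0, "critically damped"), (2.0, "overdamped")]:
        # roots of s^2 + 2*zeta*wn*s + wn^2 = 0
        s = np.roots([1, 2 * zeta * wn, wn**2])
        print(f"zeta = {zeta}: {label:17s} s = {s}")

    # Expected pattern: purely imaginary conjugates, complex conjugates with
    # negative real parts, a repeated real root, then two distinct real roots.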

Lecture 9 - 9/25 - Discrete Convolution; Convolution Review

  • If a discrete system is LTI, $$y[n]=x[n]\ast h[n]=\sum_{k=-\infty}^{\infty}x[k]\,h[n-k]=\sum_{k=-\infty}^{\infty}x[n-k]\,h[k]$$; this is discrete convolution
  • The results of Problem 1.54 in OW will be INCREDIBLY USEFUL for these
  • $$u[n]\ast u[n]=(n+1)u[n]=r[n]$$ - note that this is 1 when $$n=0$$, unlike $$r(t)=0$$ when $$t=0$$
  • Graphical convolution redux - full example in resources section of Sakai
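  • A quick numerical look at $$u[n]\ast u[n]$$ using truncated unit steps; only the first $$N$$ output samples match the infinite-length result before the truncation matters:

    import numpy as np

    N = 8
    u = np.ones(N, dtype=int)             # u[n] for n = 0, 1, ..., N-1

    y = np.convolve(u, u)                 # discrete convolution u[n] * u[n]
    print(y[:N])                          # [1 2 3 4 5 6 7 8] = (n+1) for n = 0..N-1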