The chapter begins with the description of signals and the problem of signal interference.\\
Filters are used in various functional designs, therefore a short introduction to the concepts of Finite Impulse Response and Infinite Impulse Response filters is indispensable.\\
At this point an introduction into adaptive noise reduction follows, including a short overview of the most important steps in its history, the general concept of ANR, its design possibilities and its optimization possibilities with regard to error calculation.\\
With this knowledge covered, a realistic signal flow diagram of an implanted CI system with corresponding transfer functions is designed, which is essential for implementing ANR on a low-power digital signal processor.\\
At the end of chapter two, high-level Python simulations shall function as a practical demonstration of the recently presented theoretical background.\\ \\
Throughout this thesis, sampled signals are denoted in lowercase with square brackets (e.g. $x[n]$) to distinguish them from time-continuous signals (e.g. $x(t)$). Vectors are notated in lowercase bold font, whereas matrices are notated in uppercase bold font. Scalars are notated in normal lowercase font.\\

\subsection{Signals and signal interference}
A signal is a physical parameter (e.g. pressure, voltage) changing its value over time. Whereas in nature, a signal is always analog, meaning continuous in both time and amplitude, a digital signal is represented in a discrete form, being sampled at specific time intervals and quantized to finite amplitude levels.\\ \\
The term ``signal interference'' describes the overlapping of unwanted signals or noise with the desired signal, degrading the overall quality and intelligibility of the processed information. A simple example of signal interference is shown in Figure \ref{fig:fig_interference} - the noisy signal (top) consists of several signals of different frequencies, representing both the desired signal and unwanted noise. The cleaned signal (bottom) shows the desired signal after the unwanted frequencies have been cut off by a filter.\\ \\
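The interference scenario described above can be reproduced in a short NumPy sketch. The concrete frequencies (a 440 Hz tone as the desired signal, a 3 kHz tone as the unwanted noise) are illustrative choices, not values taken from the figure.

```python
import numpy as np

fs = 16000                                   # assumed sampling rate in Hz
t = np.arange(fs) / fs                       # one second of samples
desired = np.sin(2 * np.pi * 440 * t)        # desired signal component
noise = 0.5 * np.sin(2 * np.pi * 3000 * t)   # unwanted interfering component
noisy = desired + noise

# The magnitude spectrum of the corrupted signal shows both components;
# with one second of data, FFT bin k corresponds to k Hz.
spectrum = np.abs(np.fft.rfft(noisy))
peaks = sorted(int(k) for k in np.argsort(spectrum)[-2:])
print(peaks)   # -> [440, 3000]
```

A filter that suppresses the 3 kHz bin while passing 440 Hz would recover the desired signal, which is exactly the situation sketched in the figure.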
Before digital signal processing can be applied to an analog signal like human voice, the signal has to be converted into its digital representation by sampling it at an adequate sampling frequency.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{Bilder/fig_nyquist.jpg}
\caption{Adequate (top) and inadequate (bottom) sampling frequency of a signal. \cite{source_dsp_ch1}}
\label{fig:fig_nyquist}
\end{figure}
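A minimal sketch of an inadequate sampling frequency: a 5 kHz tone sampled at only 8 kHz (below the required 10 kHz) folds back and appears as a 3 kHz alias. The frequencies are chosen for illustration only.

```python
import numpy as np

fs = 8000                            # sampling rate, below the required 2 * 5000 Hz
t = np.arange(fs) / fs               # one second of samples
x = np.sin(2 * np.pi * 5000 * t)     # 5 kHz tone, above fs/2 = 4 kHz

# The spectrum of the sampled signal peaks at the alias frequency fs - 5000 Hz.
spectrum = np.abs(np.fft.rfft(x))
alias = int(np.argmax(spectrum))
print(alias)   # -> 3000
```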
\noindent The discrete digital signal can be viewed as a sequence of finite samples with its amplitude being a discrete value, like a 16- or 32-bit integer. A signal vector of length $N$, containing $N$ samples, is therefore notated as
\begin{equation}
\label{equation1}
\textbf{x} = [x[n-N+1],x[n-N+2],...,x[n-1],x[n]]
\end{equation}
where $x[n]$ is the current sample and $x[n-1]$ is the preceding sample.
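As a small illustration of Equation \ref{equation1}, the following snippet extracts the signal vector of the $N$ most recent samples from a hypothetical, made-up sample stream:

```python
import numpy as np

# Hypothetical stream of already-received samples, e.g. 16-bit integers.
stream = np.array([3, 1, 4, 1, 5, 9, 2, 6], dtype=np.int16)

N = 4            # length of the signal vector
n = 7            # index of the current sample x[n]

# Signal vector as in the equation above: the N most recent samples, oldest first.
x_vec = stream[n - N + 1 : n + 1]
print(x_vec.tolist())   # -> [5, 9, 2, 6]
```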
\subsubsection{Time domain vs. frequency domain}
During the description of transfer functions, the term ``filter'' was used but not explained in further detail.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{Bilder/fig_lowpass.jpg}
\caption{Behavior of a second-order Butterworth low-pass filter. At the highlighted frequency $f_c$ of 3400 Hz, the amplitude of the incoming signal is reduced to 70\%. \cite{source_dsp_ch2}}
\label{fig:fig_lowpass}
\end{figure}
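The 70\% value at the cutoff frequency can be checked numerically. The sketch below evaluates the textbook magnitude formula of an analog second-order Butterworth low-pass; it assumes this standard formula rather than the exact filter used for the figure.

```python
import numpy as np

fc = 3400.0          # cutoff frequency in Hz, as in the figure
order = 2            # second-order Butterworth

def butter_gain(f):
    """Magnitude response |H(f)| of an analog Butterworth low-pass."""
    return 1.0 / np.sqrt(1.0 + (f / fc) ** (2 * order))

print(round(butter_gain(fc), 3))       # -> 0.707: amplitude reduced to about 70 %
print(round(butter_gain(10 * fc), 2))  # -> 0.01: far above f_c the signal is suppressed
```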
\subsection{Filter designs}
Equation \ref{equation_iir} specifies the input-output relationship of an IIR filter:
\begin{equation}
\label{equation_iir}
y[n] = \sum_{k=0}^{M} b_kx[n-k] - \sum_{k=1}^{N} a_ky[n-k] = b_0x[n] + ... + b_Mx[n-M] - a_1y[n-1] - ... - a_Ny[n-N]
\end{equation}
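The difference equation can be implemented directly in a few lines. The sketch below applies a one-pole low-pass with made-up coefficients ($b_0 = 0.5$, $a_1 = -0.5$) to a step input; the output settles towards the input level, as expected of a low-pass.

```python
import numpy as np

def iir_filter(x, b, a):
    """Direct evaluation of y[n] = sum_k b_k x[n-k] - sum_{k>=1} a_k y[n-k]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = sum(bk * x[n - k] for k, bk in enumerate(b) if n - k >= 0)
        y[n] -= sum(ak * y[n - k] for k, ak in enumerate(a, start=1) if n - k >= 0)
    return y

# One-pole low-pass y[n] = 0.5 x[n] + 0.5 y[n-1], i.e. b = [0.5], a_1 = -0.5.
x = np.ones(20)                          # step input
y = iir_filter(x, b=[0.5], a=[-0.5])
print(round(y[0], 3), round(y[-1], 3))   # -> 0.5 1.0
```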
Figure \ref{fig:fig_iir} visualizes a simple IIR filter with two feedforward coefficients and two feedback coefficients. The first sample passes through the adder after it was multiplied with $b_0$; the output is then fed back after being multiplied with $a_1$. The second sample is processed the same way - this time multiplied with $b_1$ and, on the feedback path, $a_2$. After two samples, the response of this exemplary IIR filter is complete.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{Bilder/fig_iir.jpg}
\caption{Signal flow diagram of a simple IIR filter with two feedforward and two feedback coefficients.}
\label{fig:fig_iir}
\end{figure}
\subsubsection{FIR- vs. IIR-filters}
Due to the fact that there is no feedback, a FIR filter offers unconditional stability, meaning that the filter response always converges, no matter how the coefficients are set. The disadvantages of the FIR design are the relatively flat frequency response and the higher number of coefficients needed to achieve a sharp frequency response compared to its Infinite Impulse Response counterpart.\\ \\
The recursive nature of an IIR filter, in contrast, allows achieving a sharp frequency response with significantly fewer coefficients than an equivalent FIR filter, but it also opens up the possibility that the filter response diverges, depending on the set coefficients.\\ \\
A higher number of needed coefficients implies that the filter needs more time to complete its signal response, as the group delay is increased.
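The distinction between the two designs can be made tangible numerically: the impulse response of a FIR filter ends after its last tap, whereas the feedback of an IIR filter keeps the response alive indefinitely. The coefficients below are illustrative.

```python
import numpy as np

def impulse_response(b, a, length=30):
    """Impulse response of y[n] = sum_k b_k x[n-k] - sum_{k>=1} a_k y[n-k]."""
    x = np.zeros(length)
    x[0] = 1.0
    y = np.zeros(length)
    for n in range(length):
        y[n] = sum(bk * x[n - k] for k, bk in enumerate(b) if n - k >= 0)
        y[n] -= sum(ak * y[n - k] for k, ak in enumerate(a, start=1) if n - k >= 0)
    return y

h_fir = impulse_response(b=[0.25, 0.5, 0.25], a=[])   # 3-tap FIR smoother
h_iir = impulse_response(b=[0.5], a=[-0.5])           # one-pole IIR low-pass

print(np.count_nonzero(h_fir))   # -> 3: finite, ends after the last tap
print(np.count_nonzero(h_iir))   # -> 30: decays but never reaches exactly zero
```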
\subsection{Introduction to Adaptive Noise Reduction}
\subsubsection{History}
As we will see in the following chapters, a real world application of an adaptive noise reduction system has to deal with several complications:
\begin{itemize}
\item The reference noise signal $x[n]$ fed into the adaptive filter could also be contaminated with parts of the target signal. If this circumstance is not handled properly, it could lead to the undesired removal of parts of the target signal from the output signal $\hat{s}[n]$.
\end{itemize}
The goal of the adaptive filter is therefore to minimize this error signal over time, thereby improving the quality of the output signal by reducing its noise component.\\
The minimization of the error signal $e[n]$ can be achieved by applying different error metrics and algorithms used to evaluate the performance of an adaptive filter, including:
\begin{itemize}
\item Mean Squared Error (MSE): This metric calculates the averaged square of the error between the expected value and the observed value over a predefined period. It is sensitive to large errors and is commonly used in adaptive filtering applications.
\item Least Mean Squares (LMS): The LMS is an algorithm that minimizes the mean squared error by iteratively adjusting the filter coefficients based on the error signal, applying the gradient descent method. It is computationally efficient and widely used in real-time applications.
\item Normalized Least Mean Squares (NLMS): An extension of the LMS algorithm that normalizes the step size based on the input signal, improving convergence speed.
\item Recursive Least Squares (RLS): This algorithm aims to minimize the weighted sum of squared errors, providing faster convergence than the LMS algorithm but at the cost of higher computational effort.
\end{itemize}
As computational efficiency is a key requirement for the implementation of real-time ANR on a low-power DSP, the Least Mean Squares algorithm is chosen for the minimization of the error signal and therefore will be further explained in the following subchapter.
If we square the error signal and calculate the expected value, we receive the MSE as a function of the filter coefficients (Equation \ref{equation_j}).
The terms contained in Equation \ref{equation_j} can be further defined as:
\begin{itemize}
\item $\sigma^2 = E(d^2[n])$: The expected value of the squared corrupted target signal - a constant term independent of the filter coefficients $w$.
\item \textbf{P} = $E(d[n]x[n])$: The cross-correlation between the corrupted target signal and the reference noise signal - a measure of how similar these two signals are.
\item \textbf{R} = $E(x^2[n])$: The auto-correlation (or serial-correlation) of the reference noise signal - a measure of the similarity of a signal with its delayed copy and therefore of the signal's spectral power.
\end{itemize}
Equation {\ref{equation_j}} can therefore be further simplified and written as:
\begin{equation}
\label{equation_j_simple}
J = \sigma^2 - 2w\textbf{P} + w^2\textbf{R}
\end{equation}
As $\sigma^2$, \textbf{P} and \textbf{R} in Equation \ref{equation_j_simple} are constants, $J$ is a quadratic function of the filter coefficient $w$, offering a calculable minimum. To find this minimum, the derivative of $J$ with respect to $w$ can be calculated and set to zero:
\begin{equation}
\label{equation_j_gradient}
\frac{dJ}{dw} = -2\textbf{P} + 2w\textbf{R} = 0
\end{equation}
Solving Equation \ref{equation_j_gradient} for $w$ delivers the equation to calculate the optimal coefficients for the Wiener filter:
\begin{equation}
\label{equation_w_optimal}
w_{opt} = \textbf{P}\textbf{R}^{-1}
\end{equation}
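The optimality of $w_{opt} = \textbf{P}\textbf{R}^{-1}$ can be verified numerically for the single-coefficient case. In the following sketch the reference noise reaches the corrupted signal through a made-up gain of 0.8; estimating $\textbf{P}$ and $\textbf{R}$ from the data and evaluating $J$ over a grid of coefficients recovers this gain as the minimizer.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000

x = rng.normal(size=n_samples)     # reference noise signal x[n]
s = rng.normal(size=n_samples)     # target signal, uncorrelated with x[n]
d = s + 0.8 * x                    # corrupted target signal with noise gain 0.8

P = np.mean(d * x)                 # estimate of E(d[n] x[n])
R = np.mean(x * x)                 # estimate of E(x^2[n])
w_opt = P / R                      # w_opt = P * R^-1
print(round(w_opt, 1))             # -> 0.8: the noise path gain is recovered

# Cross-check: J(w) = sigma^2 - 2 w P + w^2 R is minimal at w_opt.
ws = np.linspace(0.0, 2.0, 2001)
J = np.mean(d**2) - 2 * ws * P + ws**2 * R
print(round(float(ws[np.argmin(J)]), 1))   # -> 0.8
```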
\noindent If the Wiener filter consists not of one coefficient, but of several coefficients, Equation \ref{equation_wien} can be written in matrix form as
\begin{equation}
\label{equation_wien_matrix}
y[n] = \sum_{k=0}^{M} w_kx[n-k] = \textbf{w}^T\textbf{x}
\end{equation}
where \textbf{x} is the input signal vector and \textbf{w} the filter coefficient vector.
\begin{gather}
\label{equation_input_vector}
\textbf{x} = [x[n],x[n-1],...,x[n-M]]^T \\
\label{equation_coefficient_vector}
\textbf{w} = [w_0,w_1,...,w_M]^T
\end{gather}
Equation \ref{equation_j} can therefore also be rewritten in matrix form to:
\begin{equation}
\label{equation_j_matrix}
J = \sigma^2 - 2\textbf{w}^T\textbf{P} + \textbf{w}^T\textbf{R}\textbf{w}
\end{equation}
After setting the derivative of Equation \ref{equation_j_matrix} to zero and solving for $\textbf{w}$, we receive the optimal filter coefficient vector:
\begin{equation}
\label{equation_w_optimal_matrix}
\textbf{w}_{opt} = \textbf{P}\textbf{R}^{-1}
\end{equation}
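The matrix form can be tested with a small system identification experiment: a hypothetical three-tap noise path is recovered from data by estimating $\textbf{R}$ and $\textbf{P}$ and solving the linear system (thereby avoiding the explicit matrix inversion). All coefficients are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 3                                   # filter length
w_true = np.array([0.6, -0.3, 0.1])     # unknown noise path to be identified

v = rng.normal(size=50_000)             # white reference noise
d = np.convolve(v, w_true)[: len(v)] + 0.01 * rng.normal(size=50_000)

# Estimate R = E(x x^T) and P = E(d x) from the data.
X = np.stack([np.roll(v, k) for k in range(M)])   # row k holds x[n-k]
R = (X @ X.T) / len(v)
P = (X @ d) / len(v)

w_opt = np.linalg.solve(R, P)           # solves R w = P, i.e. w = R^{-1} P
print(np.round(w_opt, 1))               # -> [ 0.6 -0.3  0.1]
```

Solving the linear system instead of inverting $\textbf{R}$ is the usual numerically preferable route, though for large $M$ even this becomes costly - which motivates the gradient descent approach below.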
\noindent For a large filter, the numerical solution of Equation \ref{equation_w_optimal_matrix} can be computationally expensive, as it involves the inversion of a potentially large matrix. Therefore, to find the optimal set of coefficients $\textbf{w}$, the concept of gradient descent, introduced by Widrow \& Stearns in 1985, can be applied. The gradient descent algorithm aims to minimize the MSE iteratively, sample by sample, by adjusting the filter coefficients in small steps towards the direction of the steepest descent. The update rule for the coefficients using gradient descent can be expressed as
\begin{equation}
\label{equation_gradient}
w(n+1) = w(n) - \mu \frac{dJ}{dw}
\end{equation}
where $\mu$ is the constant step size determining the rate of convergence. Figure \ref{fig:fig_w_opt} visualizes the concept of stepwise minimization of the MSE using gradient descent. After the derivative of $J$ with respect to $\textbf{w}$ reaches zero, the optimal coefficients $\textbf{w}_{opt}$ are found and the coefficients are no longer updated.
\begin{figure}[H]
\centering
\includegraphics[width=0.9\linewidth]{Bilder/fig_gradient.jpg}
\caption{Stepwise minimization of the MSE using gradient descent.}
\label{fig:fig_w_opt}
\end{figure}
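The update rule can be tried out on a scalar example with made-up constants $P = 0.8$ and $R = 1$; repeated application drives $w$ to the optimum $P R^{-1} = 0.8$.

```python
# Gradient descent on the quadratic cost J(w) = sigma^2 - 2 w P + w^2 R.
P, R = 0.8, 1.0        # illustrative correlation values
mu = 0.1               # constant step size
w = 0.0                # initial coefficient

for _ in range(200):
    grad = -2 * P + 2 * w * R   # dJ/dw
    w = w - mu * grad           # update rule w(n+1) = w(n) - mu * dJ/dw
print(round(w, 6))              # -> 0.8, the optimum P / R
```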
The LMS algorithm therefore updates the filter coefficients $w[n]$ after every sample.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{Bilder/fig_anr_implant.jpg}
\caption{Signal flow diagram of an implanted CI system with two signal sensors and an adaptive noise reduction circuit.}
\label{fig:fig_anr_implant}
\end{figure}
\noindent Figure \ref{fig:fig_anr_hybrid} showed us the basic concept of an ANR implementation, without a detailed description of how the corrupted target signal $d[n]$ and the reference noise signal $x[n]$ are formed. Figure \ref{fig:fig_anr_implant} now shows a more complete and realistic signal flow diagram of an implanted cochlear implant system, with two signal sensors followed by an adaptive noise reduction circuit. The target signal sensor receives the target and noise signals over their respective transfer functions and outputs the corrupted target signal $d[n]$, which consists of the recorded target signal $s[n]$ and the recorded corruption noise signal $n[n]$. The noise signal sensor aims to receive (ideally) only the noise signal $v[n]$ over its transfer function and outputs the reference noise signal $x[n]$, which then feeds the adaptive filter.\\ \\
Additionally, the relevant transfer functions of the overall system are now illustrated in Figure \ref{fig:fig_anr_implant}. The transfer functions $D_n$, $F_n$, $C_n$, and $A_n$ describe the paths from the signal sources to their respective sensors inside the cochlear implant system. As the sources and the relative location of the user to the sources can vary, these transfer functions are time-variant and unknown. In the case of the noise signal, we establish the possibility that the noise signal reaches the noise signal sensor not only through the air, represented through the transfer functions $C_n$ and $A_n$, but also through mechanical vibrations, represented through the transfer functions $C_n$ and $B$. This circumstance, together with the fact that the mechanical properties of the CI system are fixed and can therefore be seen as time-invariant and known, allows us to apply a hybrid static/adaptive filter design for the ANR implementation, as described in chapter 2.5.2.\\ \\
The corrupted target signal $d[n]$ can therefore be mathematically described as:
\begin{equation}
\label{equation_dn}
d[n] = s[n] + n[n] = t[n] * D_n + v[n] * F_n
\end{equation}
where $t[n]$ and $v[n]$ are the target and noise signals at their respective sources, $s[n]$ is the recorded target signal and $n[n]$ is the recorded corruption noise after passing the transfer functions.\\ \\
The noise reference signal $x[n]$ can be mathematically described as:
\begin{equation}
\label{equation_xn}
x[n] = v[n] * (C_n * (A_n + B))
\end{equation}
where $v[n]$ is the noise signal at its source and $x[n]$ is the recorded reference noise signal after passing the transfer functions.\\ \\
Another possible signal interaction could be the leakage of the target signal into the noise signal sensor, leading to the partial removal of the target signal from the output signal. This case is not illustrated in Figure \ref{fig:fig_anr_implant} as it won't be further evaluated in this thesis, but shall be mentioned for the sake of completeness.\\ \\
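To close the loop, the complete chain - noise source, transfer functions, two sensors and an LMS adaptive filter - can be simulated end to end. The short FIR kernels standing in for the transfer functions, as well as the filter length and step size, are illustrative assumptions, not measured properties of a CI system.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000
t_sig = np.sin(2 * np.pi * 440 * np.arange(n) / 16_000)   # target signal t[n]
v = rng.normal(size=n)                                    # noise source v[n]

# Short, made-up FIR kernels standing in for the transfer functions.
F = np.array([0.5, 0.2])          # noise path to the target signal sensor
C = np.array([1.0, 0.4, 0.1])     # noise path to the noise signal sensor

d = t_sig + np.convolve(v, F)[:n]   # corrupted target signal d[n]
x = np.convolve(v, C)[:n]           # reference noise signal x[n]

# LMS: update the filter coefficients after every sample.
M, mu = 8, 0.005
w = np.zeros(M)
e = np.zeros(n)
for i in range(M, n):
    x_vec = x[i - M + 1 : i + 1][::-1]   # [x[i], x[i-1], ..., x[i-M+1]]
    y = w @ x_vec                        # noise estimate produced by the filter
    e[i] = d[i] - y                      # error signal = cleaned output
    w += 2 * mu * e[i] * x_vec           # LMS coefficient update

# Residual noise power relative to the clean target, before and after ANR.
noise_before = np.mean((d[-5000:] - t_sig[-5000:]) ** 2)
noise_after = np.mean((e[-5000:] - t_sig[-5000:]) ** 2)
print(noise_after < 0.2 * noise_before)   # -> True
```

After convergence the error signal carries mostly the target tone; the remaining residual stems from the gradient noise of the constant step size, which is the trade-off against tracking speed discussed above.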