diff --git a/chapter_02.tex b/chapter_02.tex
index 15e86d5..9e3e640 100644
--- a/chapter_02.tex
+++ b/chapter_02.tex
@@ -122,9 +122,9 @@ Although active noise cancellation and adaptive noise reduction share obvious si
 \caption{The basic idea of an adaptive filter design for noise reduction.}
 \label{fig:fig_anr}
 \end{figure}
-\noindent Figure \ref{fig:fig_anr} shows the basic concept of an adaptive filter design, represented through a feedback filter application. The target signal sensor (top) aims to recieve the target signal and outputs the corrupted target signal $d[n]$, which consists out of the recorded target signal $s[n]$ and the corruption noise signal $n[n]$, whereas the noise signal sensor aims to recieve (ideally) only the noise signal and outputs the recorded reference noise signal $x[n]$, which then feeds the adaptive filter. We assume at this point, that the corruption-noise signal is uncorellated to the speech signal, and therefore seperable from it. In addition, we asume, that the corruption-noise signal is correlated to the noise signal, as it originitaes from the same source, but takes a different signal path. \\ \\ The adaptive filter removes a certain, noise-related, frequency part of the input signal and re-evaluates the output through its feedback design. The filter parameters are then adjusted and applied to the next sample to minimize the observed error $e[n]$, which also represents the aproximated speech signal $š[n]$. In reality, a signal contamination of the two sensors has to be expected, which will be illustrated in a more realistic signal flow diagram of an implanted CI system.
+\noindent Figure \ref{fig:fig_anr} shows the basic concept of an adaptive filter design, represented as a feedback filter structure. The target signal sensor (top) aims to receive the target signal and outputs the corrupted target signal $d[n]$, which consists of the recorded target signal $s[n]$ and the corruption noise signal $n[n]$, whereas the noise signal sensor aims to receive (ideally) only the noise signal and outputs the recorded reference noise signal $x[n]$, which then feeds the adaptive filter. We assume at this point that the corruption noise signal is uncorrelated with the recorded target signal and therefore separable from it. In addition, we assume that the corruption noise signal is correlated with the reference noise signal, as it originates from the same source but takes a different signal path. \\ \\ The adaptive filter removes a certain noise-related frequency part of the input signal and re-evaluates the output through its feedback design. The filter parameters are then adjusted and applied to the next sample to minimize the observed error $e[n]$, which also represents the approximated target signal $š[n]$. In reality, a signal contamination of the two sensors has to be expected, which will be illustrated in a more realistic signal flow diagram of an implanted CI system.
 \subsubsection{Fully adaptive vs. hybrid filter design}
-The basic ANR concept illustrated in Figure \ref{fig:fig_anr} can be understood as a fully adaptive variant. A fully adaptive filter design works with a fixed number of coefficients of which everyone is updated after every sample processing. Even if this approach features the best performance in noise reduction, it also requires a relatively high amount of computing power, as every coefficient has to be re-calculated after every sample.\\ \\
+The basic ANR concept illustrated in Figure \ref{fig:fig_anr} can be understood as a fully adaptive variant. A fully adaptive filter design works with a fixed number of coefficients, each of which is updated after every processed sample. Although this approach offers the best noise reduction performance, it also requires a relatively high amount of computing power, as every coefficient has to be re-calculated after every evaluation step.\\ \\
 To reduce the required computing power, a hybrid static/adaptive filter design can be taken into consideration instead. In this approach, the initial fully adaptive filter is split into a fixed and an adaptive part: the static filter removes a certain known or estimated frequency portion of the noise signal, whereas the adaptive part only has to adapt to the remaining, unpredictable noise parts. This approach reduces the number of coefficients that have to be adapted and therefore lowers the required computing power (a brief illustrative sketch of this split follows the list below).
 \begin{figure}[H]
 \centering
@@ -146,7 +146,7 @@ As we will see in the following chapters, a real world application of an adaptiv
 \begin{itemize}
 \item The error signal $e[n]$ is not a perfect representation of the recorded target signal $s[n]$ present in the corrupted target signal $d[n]$, as the adaptive filter can only approximate the noise signal based on its current coefficients, which in general do not represent the optimal solution at that given time.
 \item Although the corruption noise signal $n[n]$ and the reference noise signal $x[n]$ are correlated, they are not identical, as they take different signal paths from the noise source to their respective sensors. This discrepancy can lead to imperfect noise reduction, as the adaptive filter has to estimate the relationship between these two signals.
-\item The recorded target signal $s[n]$ is not directly available, as it is only available contaminated with the corruption noise signal $n[n]$ in the form of $d[n]$ and there is no reference available. Therefore, the error signal $e[n]$, respectively $š[n]$, of the adaptive filter serves as an approximation of the clean speech signal and is used as an indirect measure of the filter's performance, guiding the adaptation process by its own stepwise minimization.
+\item The recorded target signal $s[n]$ is not directly available, as it only exists combined with the corruption noise signal $n[n]$ in the form of $d[n]$, and no clean reference is available. Therefore, the error signal $e[n]$, respectively $š[n]$, of the adaptive filter serves as an approximation of the clean target signal and is used as an indirect measure of the filter's performance, guiding the adaptation process through its own stepwise minimization.
 \item The reference noise signal $x[n]$ fed into the adaptive filter could also be contaminated with parts of the target signal. If this occurs, it can lead to undesired effects if not handled properly.
 \end{itemize}
 The goal of the adaptive filter is therefore to minimize this error signal over time, thereby improving the quality of the output signal by reducing its noise component.\\
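To make the hybrid static/adaptive split described above more concrete, the following minimal Python sketch cascades a fixed FIR stage, which removes an assumed, known portion of the noise, with a short adaptive FIR stage that only has to cover the remainder. All signals, path models, filter lengths and step sizes are illustrative assumptions rather than parameters of the actual implant system, and the coefficient update already uses the sample-wise LMS rule that is derived later in this chapter.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Illustrative signals (assumptions, not measured data from this thesis)
N = 5000
s = 0.5 * np.sin(2 * np.pi * 0.01 * np.arange(N))              # recorded target signal s[n]
v = rng.standard_normal(N)                                      # noise at its source
x = np.convolve(v, [1.0, 0.4], mode="full")[:N]                 # reference noise signal x[n]
n_corr = np.convolve(v, [0.6, 0.5, 0.2, 0.1], mode="full")[:N]  # corruption noise signal n[n]
d = s + n_corr                                                  # corrupted target signal d[n]

# Static part: fixed FIR modelling the known/estimated portion of the path
# from the reference noise to the corruption noise (coefficients assumed).
h_static = np.array([0.6, 0.3])
y_static = np.convolve(x, h_static, mode="full")[:N]

# Adaptive part: only a few remaining coefficients, updated sample by sample.
M = 4
w = np.zeros(M)
mu = 0.01
e = np.zeros(N)                                # e[n], the approximated target signal

for i in range(M, N):
    x_vec = x[i - M + 1:i + 1][::-1]           # latest reference samples, newest first
    y_adapt = w @ x_vec                        # adaptive correction of the noise estimate
    e[i] = d[i] - y_static[i] - y_adapt        # error after static + adaptive stage
    w += 2 * mu * e[i] * x_vec                 # LMS update of the small residual filter

print("noise power before:", round(float(np.mean(n_corr**2)), 3),
      "| residual after hybrid ANR:", round(float(np.mean((e[M:] - s[M:])**2)), 3))
\end{verbatim}
Because the static stage already explains most of the noise path, only a small number of coefficients has to be adapted at run time, which is exactly the motivation for the hybrid design.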
@@ -160,7 +160,7 @@ The minimization of the error signal $e[n]$ can by achieved by applying differen
 As computational efficiency is a key requirement for the implementation of real-time ANR on a low-power digital signal processor, the Least Mean Squares algorithm is chosen for the minimization of the error signal and therefore will be further explained in the following subchapter.
 \subsubsection{The Wiener filter and Gradient Descent}
-Before the Least Mean Squares algorithm can be explained in detail, the Wiener filter and the concept of gradient descent have to be introduced. \\ \\
+Before the Least Mean Squares algorithm can be explained in detail, the Wiener filter and the concept of Gradient Descent have to be introduced. \\ \\
 \begin{figure}[H]
 \centering
 \includegraphics[width=0.7\linewidth]{Bilder/fig_wien.jpg}
@@ -215,12 +215,12 @@ Solving Equation \ref{equation_j_gradient} for $w$ delivers the equation to calc
 y[n] = \sum_{k=0}^{M} w_kx[n-k] = \textbf{W}^T\textbf{X}[n]
 \end{equation}
 where \textbf{X} is the input signal vector and \textbf{W} the filter coefficient vector.
-\begin{align}
+\begin{gather}
 \label{equation_input_vector}
 \textbf{X}[n] = [x[n],x[n-1],...,x[n-M]]^T \\
 \label{equation_coefficient_vector}
 \textbf{W}[n] = [w_0,w_1,...,w_M]^T
-\end{align}
+\end{gather}
 Equation \ref{equation_j} can therefore also be rewritten in matrix form as:
 \begin{equation}
 \label{equation_j_matrix}
@@ -231,12 +231,12 @@ After settings the derivative of Equation \ref{equation_j_matrix} to zero and so
 \begin{equation}
 \label{equation_w_optimal_matrix}
 \textbf{W}_{opt} = PR^{-1}
 \end{equation}
-\noindent For a large filter, the numerical solution of Equation \ref{equation_w_optimal_matrix} can be computational expensive, as it involves the inversion of potential large matrix. Therefore, to find the optimal set of coefficients $w$, the concept of gradient descent, introduced by Widrow\&Stearns in 1985, can be applied. The gradient decent algortihm aims to to minimize the MSE $J$ iteratively sample by sample by adjusting the filter coefficients $w$ in small steps towards the direction of the steepest descent to find the optimal coefficients. The update rule for the coefficients using gradient descent can be expressed as
+\noindent For a large filter, the numerical solution of Equation \ref{equation_w_optimal_matrix} can be computationally expensive, as it involves the inversion of a potentially large matrix. Therefore, to find the optimal set of coefficients $w$, the concept of gradient descent, introduced by Widrow \& Stearns in 1985, can be applied. The gradient descent algorithm aims to minimize the MSE iteratively, sample by sample, by adjusting the filter coefficients $w$ in small steps in the direction of the steepest descent. The update rule for the coefficients using gradient descent can be expressed as
 \begin{equation}
 \label{equation_gradient}
 w(n+1) = w(n) - \mu \frac{dJ}{dw}
 \end{equation}
-where $\mu$ is the constant step size determining the rate of convergence. Figure \ref{fig:fig_w_opt} visualizes the concept of stepwise minimization of the MSE $J$ using gradient descent. After the derivative of $J$ with respect to $w$ r4aches zero, the optimal coefficients $w_{opt}$ are found and the coefficients are no longer updated.
+where $\mu$ is the constant step size determining the rate of convergence. Figure \ref{fig:fig_w_opt} visualizes the concept of stepwise minimization of the MSE using gradient descent. Once the derivative of $J$ with respect to $w$ reaches zero, the optimal coefficients $w_{opt}$ are found and the coefficients are no longer updated.
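To illustrate the relationship between the closed-form Wiener solution and its iterative approximation, the following Python sketch estimates the auto-correlation matrix $R$ and the cross-correlation vector $P$ from synthetic signals and then determines the filter coefficients once by solving the Wiener equation directly and once by iterating the gradient descent rule of Equation \ref{equation_gradient}. The signal model, the filter length and the step size are assumptions made purely for this illustration.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Synthetic setup (assumed): the noise part of d[n] is the reference x[n]
# passed through a short unknown FIR path, plus an uncorrelated component.
N, M = 20000, 4
x = rng.standard_normal(N)
h_true = np.array([0.7, 0.4, 0.2, 0.1])          # unknown path, used only to create data
d = np.convolve(x, h_true, mode="full")[:N] + 0.3 * rng.standard_normal(N)

# Estimate the auto-correlation matrix R of the reference and the
# cross-correlation vector P between d[n] and the reference.
X = np.column_stack([np.roll(x, k) for k in range(M)])   # delayed copies of x[n]
X[:M, :] = 0.0                                           # drop wrap-around samples
R = X.T @ X / N
P = X.T @ d / N

# Closed-form Wiener solution: solve R w = P, i.e. the matrix inversion
# that becomes expensive for long filters.
w_wiener = np.linalg.solve(R, P)

# Iterative gradient descent: w(n+1) = w(n) - mu * dJ/dw with dJ/dw = 2(R w - P).
w = np.zeros(M)
mu = 0.05
for _ in range(2000):
    w -= mu * 2.0 * (R @ w - P)

print("Wiener solution :", np.round(w_wiener, 3))
print("Gradient descent:", np.round(w, 3))   # converges to (almost) the same coefficients
\end{verbatim}
Both approaches arrive at essentially the same coefficients; the practical difference is that the iterative variant avoids the explicit matrix inversion, which is the property the LMS algorithm exploits in the next subchapter.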
 \begin{figure}[H]
 \centering
 \includegraphics[width=0.9\linewidth]{Bilder/fig_gradient.jpg}
@@ -245,12 +245,12 @@ where $\mu$ is the constant step size determining the rate of convergence. Figur
 \end{figure}
 \subsubsection{The Least Mean Squares algorithm}
 The given approach of the steepest descent algorithm in the subchapter above still involves the calculation of the derivative of the MSE $\frac{dJ}{dw}$, which is also a computationally expensive operation to calculate, as it requires knowledge of the statistical properties of the input signals (cross-correlation P and auto-correlation R). Therefore, in energy-critical real-time applications, like the implementation of ANR on a low-power DSP, a sample-based approximation in the form of the Least Mean Squares (LMS) algorithm is used instead. The LMS algorithm approximates the gradient of the MSE by using the instantaneous estimates of the cross-correlation and auto-correlation. To achieve this, we remove the statistical expectation from the MSE $J$ and take the derivative to obtain a sample-wise approximation of $\frac{dJ}{dw[n]}$.
-\begin{align}
+\begin{gather}
 \label{equation_j_lms}
 J = e[n]^2 = (d[n]-w[n]x[n])^2 \\
 \label{equation_j_lms_final}
 \frac{dJ}{dw[n]} = 2(d[n]-w[n]x[n])\frac{d(d[n]-w[n]x[n])}{dw[n]} = -2e[n]x[n]
-\end{align}
+\end{gather}
 The result of Equation \ref{equation_j_lms_final} can now be inserted into Equation \ref{equation_gradient} to obtain the LMS update rule for the filter coefficients:
 \begin{equation}
 \label{equation_lms}
@@ -258,15 +258,15 @@ The result of Equation \ref{equation_j_lms_final} can now be inserted into Equat
 \end{equation}
 The LMS algorithm therefore updates the filter coefficients $w[n]$ after every sample by adding a correction term, which is calculated from the error signal $e[n]$ and the reference noise signal $x[n]$, scaled by the constant step size $\mu$. By iteratively applying the LMS algorithm, the filter coefficients converge towards the optimal values that minimize the mean squared error between the target signal and the filter output. When a predefined acceptable error level is reached, the adaptation process can be stopped to save computing power.\\ \\
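To bridge the gap between the derived update rule and the implementation chapters, the following Python sketch implements a complete sample-by-sample LMS noise canceller, including the optional freezing of the adaptation once a predefined error level has been reached. The filter length, step size, stopping threshold and synthetic test signals are assumptions chosen for illustration only and do not anticipate the parameters of the final DSP implementation.
\begin{verbatim}
import numpy as np

def lms_anr(d, x, M=8, mu=0.01, stop_threshold=None, window=200):
    """Sample-by-sample LMS noise canceller.

    d: corrupted target signal d[n]; x: reference noise signal x[n].
    Returns the error signal e[n] (the approximated target signal) and the
    final coefficients. If stop_threshold is given, adaptation is frozen as
    soon as the short-term mean squared error falls below it.
    """
    w = np.zeros(M)
    e = np.zeros(len(d))
    adapting = True
    for i in range(M, len(d)):
        x_vec = x[i - M + 1:i + 1][::-1]     # X[n] = [x[n], x[n-1], ..., x[n-M+1]]
        e[i] = d[i] - w @ x_vec              # e[n] = d[n] - y[n]
        if adapting:
            w += 2 * mu * e[i] * x_vec       # w[n+1] = w[n] + 2*mu*e[n]*X[n] (LMS rule)
            if stop_threshold is not None and i > M + window:
                if np.mean(e[i - window:i] ** 2) < stop_threshold:
                    adapting = False         # freeze coefficients to save computing power
    return e, w

# Small self-test on assumed synthetic signals (not data from this thesis).
rng = np.random.default_rng(2)
N = 10000
s = 0.5 * np.sin(2 * np.pi * 0.005 * np.arange(N))          # target signal
x = rng.standard_normal(N)                                   # reference noise signal
n_corr = np.convolve(x, [0.6, 0.3, 0.15], mode="full")[:N]   # corruption noise signal
e, w = lms_anr(s + n_corr, x, M=8, mu=0.01, stop_threshold=0.15)
print("MSE vs. clean target:", round(float(np.mean((e[8:] - s[8:]) ** 2)), 4))
\end{verbatim}
The freeze logic corresponds to the power-saving consideration mentioned above: once the short-term error power is acceptable, further coefficient updates can be skipped while the filter output is still being computed.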
 \subsection{Signal flow diagram of an implanted cochlear implant system}
- Now equipped with the necessary theoretical background about signal processing, adaptive noise reduction and the LMS algorithm, a realistic signal flow diagram wwith the relevant transfer functions of an implanted cochlear implant system can be designed, which will serve as the basis for the implementation of ANR on a low-power digitial signal processor.
+ Now equipped with the necessary theoretical background about signal processing, adaptive noise reduction, and the LMS algorithm, a realistic signal flow diagram with the relevant transfer functions of an implanted cochlear implant system can be designed, which will serve as the basis for the implementation of ANR on a low-power digital signal processor.
 \begin{figure}[H]
 \centering
 \includegraphics[width=1.1\linewidth]{Bilder/fig_anr_implant.jpg}
 \caption{Realistic signal flow diagram of the implanted cochlear implant system.}
 \label{fig:fig_anr_implant}
 \end{figure}
-\noindent Figure \ref{fig:fig_anr_hybrid} showed us the basic concept of an ANR implementation, without a detailed description how the corrupted targed signal $d[n]$ and the reference noise signal $x[n]$ is formed. Figure \ref{fig:fig_anr_implant} now shows a more complete and realistic signal flow diagram of an implanted cochlear implant system, with two signal sensors and an adaptive noise reduction circuit afterwards. The target signal sensor recieves the target signal and the noise signal over their respective transfer functions and outputs the corrupted target signal $d[n]$, which consists out of the recorded target signal $s[n]$ and the recorded corruption noise signal $n[n]$, whereas the noise signal sensor aims to receive (ideally) only the noise signal $v[n]$ over its transfer function and outputs the reference noise signal $x[n]$, which then feeds the adaptive filter.\\ \\
-AAdittionaly, now the relevant transfer functions of the overall system are illustrated in Figure \ref{fig:fig_anr_implant}. The transfer functions $D_n$, $F_n$, and $C_n$ describe the path from the signal sources to the chasis of the cochlear implant, where the sensors are located. As the sources and the relative location of the user to the sources can vary, these transfer functions are time-variant and unknown. From the chasis, there are two options for continuing the signal path - either directly to the microphone membranes of the respective sensors, represented through the transfer function $G$, or through mechanical vibrations of the implant´s chasis, represented through the transfer functions $A$ and $B$. As the mechanical properties of the implanted cochlear systems are fixed, these transfer functions do not change over time, so they can be seen as time-invariant and known.\\ \\
+\noindent Figure \ref{fig:fig_anr_hybrid} showed the basic concept of an ANR implementation, without a detailed description of how the corrupted target signal $d[n]$ and the reference noise signal $x[n]$ are formed. Figure \ref{fig:fig_anr_implant} now shows a more complete and realistic signal flow diagram of an implanted cochlear implant system, with two signal sensors followed by an adaptive noise reduction circuit. The target signal sensor receives the target signal and the noise signal via their respective transfer functions and outputs the corrupted target signal $d[n]$, which consists of the recorded target signal $s[n]$ and the recorded corruption noise signal $n[n]$, whereas the noise signal sensor aims to receive (ideally) only the noise signal $v[n]$ via its transfer function and outputs the reference noise signal $x[n]$, which then feeds the adaptive filter.\\ \\
+Additionally, the relevant transfer functions of the overall system are now illustrated in Figure \ref{fig:fig_anr_implant}. The transfer functions $D_n$, $F_n$, and $C_n$ describe the path from the signal sources to the chassis of the cochlear implant, where the sensors are located. As the sources and the user's location relative to the sources can vary, these transfer functions are time-variant and unknown. From the chassis, there are two options for continuing the signal path: either directly to the microphone membranes of the respective sensors, represented by the transfer function $G$, or through mechanical vibrations of the implant's chassis, represented by the transfer functions $A$ and $B$. As the mechanical properties of the implanted cochlear implant system are fixed, these transfer functions do not change over time, so they can be seen as time-invariant and known.\\ \\
 The corrupted target signal $d[n]$ can therefore be mathematically described as:
 \begin{equation}
 \label{equation_dn}
@@ -280,7 +280,8 @@ The noise reference signal $x[n]$ can be mathematically described as:
 \end{equation}
 where $v[n]$ is the noise signal at its source and $x[n]$ is the recorded reference noise signal after passing the transfer functions.\\ \\
 Another possible signal interaction could be the leakage of the target signal into the noise signal sensor, leading to undesired effects. This case is not illustrated in Figure \ref{fig:fig_anr_implant} as it will not be further evaluated in this thesis, but shall be mentioned for the sake of completeness at this point.\\ \\
+At this point, the theoretical background and the fundamentals of adaptive noise reduction have been introduced to the extent necessary for the understanding of the following chapters of this thesis. The next chapter will focus on practical high-level simulations of different filter concepts and LMS algorithm variations to evaluate their performance with regard to noise reduction quality before the actual implementation on a low-power digital signal processor is conducted.
+
+
-We assume at this point, that the corruption-noise signal is uncorellated to the speech signal, and therefore seperable from it. In addition, we asume, that the corruption-noise signal is correlated to the noise signal, as it originitaes from the same source, but takes a different signal path. \\ \\ The adaptive filter removes a certain, noise-related, frequency part of the input signal and re-evaluates the output through its feedback design. The filter parameters are then adjusted and applied to the next sample to minimize the observed error $e[n]$, which also represents the aproximated speech signal $š[n]$. In reality, a signal contamination of the two sensors has to be expected, which is represented through the transfer functions $H_{sd}$ and $H_{sx}$ in Figure \ref{fig:fig_anr_implant}. These transfer functions describe how much of the speech signal leaks into the target signal sensor and into the noise signal sensor respectively. This contamination can lead to undesired effects like signal distortion if not handled properly. Therefore, these transfer functions have to be taken into consideration when deriving the overall system´s transfer function.
-\subsection{Derivation of the system’s transfer function based on the problem setup}
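As a closing illustration of why the time-variant source paths are the critical part of the problem, the following Python sketch lets the unknown path from the noise source into the target signal sensor change abruptly halfway through the simulation, so that a fully adaptive LMS canceller has to re-converge. All path models are simple FIR stand-ins chosen for illustration only; they do not reproduce the transfer functions $D_n$, $F_n$, $C_n$, $G$, $A$, and $B$ of the implant model or the equations for $d[n]$ and $x[n]$ given above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

def fir(sig, h):
    """Apply a short FIR path model to a signal (stand-in for a transfer function)."""
    return np.convolve(sig, h, mode="full")[:len(sig)]

# Assumed signals and paths, for illustration only.
N = 20000
s = 0.5 * np.sin(2 * np.pi * 0.004 * np.arange(N))     # target signal at the sensor
v = rng.standard_normal(N)                              # noise signal at its source

# Time-variant, unknown path from the noise source into the target sensor:
# it changes abruptly after half of the samples.
h_early, h_late = [0.7, 0.3, 0.1], [0.2, 0.5, 0.3]
n_corr = np.concatenate([fir(v, h_early)[:N // 2], fir(v, h_late)[N // 2:]])
x = fir(v, [1.0, 0.2])                                  # reference noise path (fixed, assumed)
d = s + n_corr                                          # corrupted target signal

# A fully adaptive LMS canceller has to track the change of the unknown path.
M, mu = 8, 0.01
w = np.zeros(M)
e = np.zeros(N)
for i in range(M, N):
    x_vec = x[i - M + 1:i + 1][::-1]
    e[i] = d[i] - w @ x_vec
    w += 2 * mu * e[i] * x_vec

def win_mse(i0, i1):
    return round(float(np.mean((e[i0:i1] - s[i0:i1]) ** 2)), 3)

print("residual noise power | before change:", win_mse(N // 2 - 200, N // 2),
      "| right after change:", win_mse(N // 2, N // 2 + 200),
      "| at the end:", win_mse(N - 200, N))
# Typical outcome: the residual is small once converged, rises right after the
# path change, and shrinks again while the path stays constant.
\end{verbatim}
This behaviour is the reason why the unknown, time-variant portions of the signal paths have to be covered by the adaptive part of the filter, whereas the known, time-invariant mechanical paths are candidates for a static filter stage.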