Spelling correction
.vscode/ltex.dictionary.en-US.txt
@@ -4,3 +4,7 @@ antiphase
Lueg
IIR
Zobel
Widrow&Stearns
IIR-filters
unforecastable
self-reliantely
.vscode/ltex.disabledRules.en-US.txt (new file)
@@ -0,0 +1 @@
AFTERWARDS_US
.vscode/ltex.hiddenFalsePositives.en-US.txt
@@ -1 +1,3 @@
{"rule":"ABOUT_ITS_NN","sentence":"^\\QIn contrary to it's counterpart, it also uses past output samples in addition to current and past input samples - therefore the response of an IIR-filter theoretically continues indefinitely.\\E$"}
{"rule":"THE_SUPERLATIVE","sentence":"^\\QNormalized Least Mean Squares (NLMS): An extension of the LMS algorithm that normalizes the step size based on the input signal, improving convergence speed.\\E$"}
{"rule":"MORFOLOGIK_RULE_EN_US","sentence":"^\\QThe necessity for the use of electric filters arose the first time in the beginnings of the 20th century with the development of the quite young fields of tele- and radio-communication.\\E$"}
@@ -1,6 +1,6 @@
\section{Theoretical Background}
The following subchapters shall supply the reader with the theoretical foundation of digital signal processing needed to understand the subsequent implementation of ANR on a low-power signal processor.\\ \\
The chapter begins with the description of signals, the problem of signal interference, and the basics of digital signal processing in general, covering fundamental topics such as signal representation, transfer functions, and filters.\\
Filters are used in various functional designs; therefore, a short explanation of the concepts of Finite Impulse Response and Infinite Impulse Response filters is indispensable.\\
At this point an introduction to adaptive noise reduction follows, including a short overview of the most important milestones in its history, the general concept of ANR, its design possibilities, and its optimization possibilities with regard to error calculation.\\
With this knowledge covered, a realistic signal flow diagram of an implanted CI system with the corresponding transfer functions is designed, which is essential for implementing ANR on a low-power digital signal processor.\\
@@ -11,7 +11,7 @@ The term "signal interference" describes the overlapping of unwanted signals or
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{Bilder/fig_interference.jpg}
\caption{Noisy signal containing different frequencies and the cleaned signal. \cite{source_dsp_ch1}}
\label{fig:fig_interference}
\end{figure}
\noindent In cochlear implant systems, speech signals must be reconstructed with high spectral precision to ensure intelligibility for the user. As signal interference can considerably degrade the quality of the final audio signal, the objective of this thesis is the improvement of implant technology with regard to adaptive noise reduction.
@@ -24,7 +24,7 @@ Digital signal processing describes the manipulation of digital signals on a dig
\caption{Block diagram of processing an analog input signal to an analog output signal with digital signal processing in between. \cite{source_dsp_ch1}}
\label{fig:fig_dsp}
\end{figure}
Before digital signal processing can be applied to an analog signal such as the human voice, several preparation steps are required. The analog signal, continuous in both time and amplitude, is passed through an initial filter, which limits the frequency bandwidth. An analog-digital converter then samples and quantizes the signal into a digital form, now discrete in time and amplitude. This digital signal can now be processed, before (possibly) being converted back to an analog signal (refer to Figure \ref{fig:fig_dsp}). The sampling rate defines how many samples per second are taken from the analog signal - a higher sampling rate delivers a more accurate digital representation of the signal but also uses more resources. According to the Nyquist–Shannon sampling theorem, the sampling rate must be at least twice the highest frequency component present in the signal to avoid aliasing (refer to Figure \ref{fig:fig_nyquist}). Aliasing describes the phenomenon that high-frequency parts of a signal are misinterpreted if the sampling rate of the analog signal is too low. The digitized signal then contains low frequencies that do not occur in the original signal.
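To make the sampling-theorem requirement concrete, the following minimal Python sketch - an illustration added here, with made-up frequencies, not taken from the cited sources - samples a 5 kHz tone at only 8 kHz and shows that the resulting samples coincide (up to a sign) with those of a 3 kHz tone, i.e., the high frequency aliases to a low one:
\begin{verbatim}
import numpy as np

fs = 8000                    # sampling rate in Hz (Nyquist limit: 4000 Hz)
n = np.arange(16)            # sample indices

x_high = np.sin(2 * np.pi * 5000 * n / fs)   # 5 kHz tone, undersampled
x_low  = np.sin(2 * np.pi * 3000 * n / fs)   # 3 kHz alias (8000 - 5000)

# The sampled 5 kHz tone is indistinguishable from a (sign-flipped)
# 3 kHz tone - exactly the aliasing effect described above.
print(np.allclose(x_high, -x_low))           # True
\end{verbatim}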
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{Bilder/fig_nyquist.jpg}
@@ -60,19 +60,19 @@ During the description of transfer functions, the term ``filter'' was used but n
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{Bilder/fig_lowpass.jpg}
\caption{Behavior of a low-pass filter. At the highlighted frequency $f_c$ of 3400 Hz, the amplitude of the incoming signal is reduced to 70\%. \cite{source_dsp_ch2}}
\label{fig:fig_lowpass}
\end{figure}
\subsection{Filter designs}
Before we continue with the introduction to the actual topic of this thesis, adaptive noise reduction, two essential filter designs need further explanation - the Finite Impulse Response and the Infinite Impulse Response filter.
\subsubsection{Finite Impulse Response filters}
A Finite Impulse Response (FIR) filter, commonly referred to as a ``Feedforward Filter'', is defined by the property that it uses only input values and no feedback from output samples to determine its filtering behavior - therefore, if the input signal is reduced to zero, the response of a FIR filter reaches zero after a finite number of samples.\\ \\
Equation \ref{equation_fir} specifies the input-output relationship of a FIR filter - $x[n]$ is the input sample, $y[n]$ is the output sample, $b_0$ to $b_M$ are the filter coefficients, and $M$ is the length of the filter
\begin{equation}
\label{equation_fir}
y[n] = \sum_{k=0}^{M} b_kx[n-k] = b_0x[n] + b_1x[n-1] + ... + b_Mx[n-M]
\end{equation}
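As an illustration of Equation \ref{equation_fir}, a minimal Python sketch of the direct FIR sum follows; the three moving-average coefficients are an arbitrary example, not taken from the thesis:
\begin{verbatim}
import numpy as np

def fir_filter(x, b):
    # Direct form of the FIR equation: y[n] = sum_k b[k] * x[n-k]
    y = np.zeros(len(x))
    for n in range(len(x)):
        for k in range(len(b)):
            if n - k >= 0:
                y[n] += b[k] * x[n - k]
    return y

x = np.array([1.0, 2.0, 4.0, 0.0, 0.0, 0.0])   # input falls to zero
print(fir_filter(x, [1/3, 1/3, 1/3]))
# Output: [0.33 1. 2.33 2. 1.33 0.] - the response reaches zero
# a finite number of samples after the input does.
\end{verbatim}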
Figure \ref{fig:fig_fir} visualizes a simple FIR filter with three coefficients - the first sample is multiplied with the operator $b_0$, whereas the following samples are multiplied with the operators $b_1$ and $b_2$ before being added back together. The operator $Z^{-1}$ represents a delay of one sample.
As there are three operators present in the filter, three samples are needed before the filter response is complete.
\begin{figure}[H]
\centering
@@ -81,7 +81,7 @@ As there are three operators present in the filter, three samples are needed bef
\label{fig:fig_fir}
\end{figure}
\subsubsection{Infinite Impulse Response filters}
An Infinite Impulse Response (IIR) filter, commonly referred to as a ``Feedback Filter'', can be seen as an extension of the FIR filter. In contrast to its counterpart, it also uses past output samples in addition to current input samples to adapt its filtering behavior - therefore the response of an IIR filter theoretically continues indefinitely, even if the input signal is reduced to zero.\\ \\
Equation \ref{equation_iir} specifies the input-output relationship of an IIR filter. In addition to Equation \ref{equation_fir}, there is now a second term included, where $a_0$ to $a_N$ are the feedback coefficients with their own filter length $N$.
\begin{equation}
\label{equation_iir}
@@ -95,9 +95,9 @@ Figure \ref{fig:fig_iir} visualizes a simple IIR filter with two feedforward coe
\label{fig:fig_iir}
\end{figure}
\subsubsection{FIR vs. IIR filters}
Because there is no feedback, a FIR filter offers unconditional stability, meaning that the filter response always converges, no matter how the coefficients are set. The disadvantages of the FIR design are the relatively flat frequency response and the higher number of coefficients needed to achieve a certain frequency response compared to its Infinite Impulse Response counterpart.\\ \\
The recursive nature of an IIR filter, in contrast, allows a sharp frequency response to be achieved with significantly fewer coefficients than an equivalent FIR filter, but it also opens up the possibility that the filter response diverges, depending on the chosen coefficients.\\ \\
A higher number of coefficients implies that the filter needs more time to complete its signal response, as more samples have to pass through the filter.
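The stability difference can be demonstrated with a deliberately simple one-pole IIR filter in Python (an illustrative sketch, not a filter used in this thesis): with a feedback coefficient below one the impulse response decays, above one it diverges.
\begin{verbatim}
def one_pole_iir(x, a):
    # y[n] = x[n] + a * y[n-1]: a single feedback coefficient
    y, prev = [], 0.0
    for sample in x:
        prev = sample + a * prev
        y.append(prev)
    return y

impulse = [1.0] + [0.0] * 9
print(one_pole_iir(impulse, 0.5))  # decays towards zero: stable
print(one_pole_iir(impulse, 1.5))  # grows without bound: unstable
\end{verbatim}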
\subsection{Introduction to Adaptive Noise Reduction}
\subsubsection{History}
@@ -111,7 +111,7 @@ In the 1930s, the first real concept of active noise cancellation was proposed b
\label{fig:fig_patent}
\end{figure}
\noindent In contrast to the static filters from the beginning of the century, the active noise cancellation of Lueg and Widrow went far beyond reducing a signal by a specific frequency portion, yet this technique still has its limitations, as it is designed to work only within a certain environment.\\ \\
With the rapid advancement of digital signal processing technologies, noise cancellation techniques evolved from static, hardware-based filters and physical soundwave cancellation towards more sophisticated approaches. In the 1970s, the concept of digital adaptive filtering arose, allowing digital filters to adjust their parameters in real-time based on the characteristics of the incoming signal and noise. This marked a significant leap forward, as it enabled systems to deal with dynamic and unpredictable noise environments - the concept of adaptive noise reduction was born.
\subsubsection{The concept of adaptive filtering}
Adaptive noise reduction describes an advanced filtering method based on an error metric and represents a significant advancement over these earlier methods, as it allows the filter parameters to continuously adapt to the changing acoustic environment in real-time. This adaptability makes ANR particularly suitable for hearing devices, where environmental noise characteristics vary constantly.\\ \\
Static filters such as the low- and high-pass filters described in the previous chapter feature coefficients that remain constant over time. They are designed for known, predictable noise conditions (e.g., removing a steady 50 Hz hum). While these filters are efficient and easy to implement, they fail when noise characteristics change dynamically.\\ \\
@@ -122,10 +122,10 @@ Although active noise cancellation and adaptive noise reduction share obvious si
\caption{The basic idea of an adaptive filter design for noise reduction.}
\label{fig:fig_anr}
\end{figure}
\noindent Figure \ref{fig:fig_anr} shows the basic concept of an adaptive filter design, represented through a feedback filter application. The target signal sensor (top) aims to receive the target signal and outputs the corrupted target signal $d[n]$, which consists of the recorded target signal $s[n]$ and the corruption noise signal $n[n]$, whereas the noise signal sensor aims to receive (ideally) only the noise signal and outputs the recorded reference noise signal $x[n]$, which then feeds the adaptive filter. We assume at this point that the corruption noise signal is uncorrelated with the recorded target signal and therefore separable from it. In addition, we assume that the corruption noise signal is correlated with the reference noise signal, as it originates from the same source but takes a different signal path. \\ \\ The adaptive filter removes a certain, noise-related frequency part of the input signal and re-evaluates the output through its feedback design. The filter parameters are then adjusted and applied to the next sample to minimize the observed error $e[n]$, which also represents the approximated target signal $š[n]$. In reality, a signal contamination of the two sensors has to be expected, which will be illustrated in a more realistic signal flow diagram of an implanted CI system.
\subsubsection{Fully adaptive vs. hybrid filter design}
The basic ANR concept illustrated in Figure \ref{fig:fig_anr} can be understood as a fully adaptive variant. A fully adaptive filter design works with a fixed number of coefficients, each of which is updated after every processed sample. While this approach features the best noise reduction performance, it also requires a relatively high amount of computing power, as every coefficient has to be re-calculated after every evaluation step.\\ \\
To reduce the required computing power, a hybrid static/adaptive filter design can be considered instead. In this approach, the initial fully adaptive filter is split into a fixed and an adaptive part - the static filter removes a certain known or estimated frequency portion of the noise signal, whereas the adaptive part only has to adapt to the remaining, unforecastable noise parts. This approach reduces the number of coefficients that have to be adapted, therefore lowering the required computing power.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{Bilder/fig_anr_hybrid.jpg}
@@ -134,9 +134,9 @@ To reduce the required computing power, a hybrid static/adaptive filter design c
\end{figure}
\noindent Different approaches of the hybrid static/adaptive filter design will be evaluated and compared with regard to their required computing power in a later chapter of this thesis.
\subsection{Adaptive optimization strategies}
In the description of the concept of adaptive filtering above, the adaptation of the filter coefficients based on an error metric was mentioned but not further explained. The following subchapters shall cover the most important aspects of filter optimization with regard to adaptive noise reduction.
\subsubsection{Filter optimization and error metrics}
Adaptive filters rely on an error metric to self-reliantly evaluate their performance in real-time, constantly adjusting their coefficients to minimize the received error signal $e[n]$, which is defined as:
\begin{equation}
\label{equation_error}
e[n] = d[n] - y[n] = š[n]
@@ -144,27 +144,27 @@ Adaptive filters rely on an error metric to self-reliantely evaluate their perfo
The error signal $e[n]$, already illustrated in Figures \ref{fig:fig_anr} and \ref{fig:fig_anr_hybrid}, is calculated as the difference between the corrupted target signal $d[n]$ and the output signal of the filter $y[n]$.
As we will see in the following chapters, a real-world application of an adaptive filter system poses several challenges, which have to be taken into consideration when designing the filter. These challenges include:
\begin{itemize}
\item The error signal $e[n]$ is not a perfect representation of the recorded target signal $s[n]$ present in the corrupted target signal $d[n]$, as the adaptive filter can only approximate the noise signal based on its current coefficients, which in general do not represent the optimal solution at that given time.
\item Although the corruption noise signal $n[n]$ and the reference noise signal $x[n]$ are correlated, they are not identical, as they take different signal paths from the noise source to their respective sensors. This discrepancy can lead to imperfect noise reduction, as the adaptive filter has to estimate the relationship between these two signals.
\item The recorded target signal $s[n]$ is not directly available, as it only appears in combination with the corruption noise signal $n[n]$ in the form of $d[n]$, while no reference is available. Therefore, the error signal $e[n]$, respectively $š[n]$, of the adaptive filter serves as an approximation of the clean target signal and is used as an indirect measure of the filter's performance, guiding the adaptation process by its own stepwise minimization.
\item The reference noise signal $x[n]$ fed into the adaptive filter could also be contaminated with parts of the target signal. If this circumstance is not handled properly, it could lead to the undesired removal of parts of the target signal from the output signal $š[n]$.
\end{itemize}
The goal of the adaptive filter is therefore to minimize this error signal over time, thereby improving the quality of the output signal by removing its noise component.\\
The minimization of the error signal $e[n]$ can be achieved by applying different error metrics used to evaluate the performance of an adaptive filter, including:
\begin{itemize}
\item Mean Squared Error (MSE): This metric calculates the averaged square of the error between the expected value and the observed value over a predefined period. It is sensitive to large errors and is commonly used in adaptive filtering applications.
\item Least Mean Squares (LMS): This metric focuses on minimizing the mean squared error by adjusting the filter coefficients iteratively based on the error signal by applying the gradient descent method. It is computationally efficient and widely used in real-time applications.
\item Normalized Least Mean Squares (NLMS): An extension of the LMS algorithm that normalizes the step size based on the input signal, improving convergence speed - a short sketch of both update steps follows after this list.
\item Recursive Least Squares (RLS): This metric aims to minimize the weighted sum of squared errors, providing faster convergence than the LMS metric but at the cost of higher computational effort.
\end{itemize}
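To illustrate the difference between the LMS and NLMS updates mentioned above, here is a minimal Python sketch of the two coefficient-update steps in their common textbook form; the function names and the small regularization constant are our own, and the LMS rule itself is derived in the following subchapters:
\begin{verbatim}
import numpy as np

def lms_step(w, x_vec, e, mu):
    # Fixed step size: convergence speed depends on the input power
    return w + 2 * mu * e * x_vec

def nlms_step(w, x_vec, e, mu, eps=1e-8):
    # Step size normalized by the instantaneous input power, which
    # improves convergence for inputs with varying signal level
    return w + (mu / (eps + x_vec @ x_vec)) * e * x_vec
\end{verbatim}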
As computational efficiency is a key requirement for the implementation of real-time ANR on a low-power DSP, the Least Mean Squares algorithm is chosen for the minimization of the error signal and will therefore be further explained in the following subchapter.
\subsubsection{The Wiener filter and the concept of Gradient Descent}
Before the Least Mean Squares algorithm can be explained in detail, the Wiener filter and the concept of Gradient Descent have to be introduced. \\ \\
\begin{figure}[H]
\centering
\includegraphics[width=0.7\linewidth]{Bilder/fig_wien.jpg}
\caption{Simple implementation of a Wiener filter.}
\label{fig:fig_wien}
\end{figure}
\noindent The Wiener filter, the basis of many adaptive filter designs, is a statistical filter used to minimize the Mean Squared Error between a target signal and the output of a linear filter. The output $y[n]$ of the Wiener filter is the sum of the weighted input samples, where the weights are represented by the filter coefficients.
@@ -172,12 +172,12 @@ Before the Least Mean Squares algorithm can be explained in detail, the Wiener f
\label{equation_wien}
y[n] = w_0x[n] + w_1x[n-1] + ... + w_Mx[n-M] = \sum_{k=0}^{M} w_kx[n-k]
\end{equation}
The Wiener filter aims to adjust its coefficients to generate a filter output that resembles the corruption noise signal $n[n]$ contained in the corrupted target signal $d[n]$ as closely as possible. After the filter output is subtracted from the corrupted target signal, we receive the error signal $e[n]$, which represents the cleaned signal $š[n]$ after the corruption noise component has been removed. For better understanding, a simple Wiener filter with one coefficient shall be illustrated in the following mathematical approach, before the generalization to an n-dimensional filter is made.
\begin{equation}
\label{equation_wien_error}
e[n] = d[n] - y[n] = d[n] - wx[n]
\end{equation}
If we square the error signal and calculate the expected value, we receive the Mean Squared Error $J$, mentioned in the previous chapter, which is the metric the Wiener filter aims to minimize by adjusting its coefficients $w$.
\begin{equation}
\label{equation_j}
J = E(e[n]^2) = E(d^2[n])-2wE(d[n]x[n])+w^2E(x^2[n]) = MSE
@@ -186,14 +186,14 @@ The terms contained in Equation \ref{equation_j} can be further be defined as:
\begin{itemize}
\item $\sigma^2$ = $E(d^2[n])$: The expected value of the squared corrupted target signal - a constant term independent of the filter coefficients $w$.
\item P = $E(d[n]x[n])$: The cross-correlation between the corrupted target signal and the reference noise signal - a measure of how similar these two signals are.
\item R = $E(x^2[n])$: The auto-correlation (or serial-correlation) of the reference noise signal - a measure of the similarity of a signal with its delayed copy and therefore of the signal's spectral power.
\end{itemize}
Equation {\ref{equation_j}} can therefore be further simplified and written as:
\begin{equation}
\label{equation_j_simple}
J = \sigma^2 - 2wP + w^2R
\end{equation}
As $\sigma^2$, $P$, and $R$ in Equation \ref{equation_j_simple} are constants, $J$ is a quadratic function of the filter coefficient $w$, offering a calculable minimum. To find this minimum, the derivative of $J$ with respect to $w$ can be calculated and set to zero:
\begin{equation}
\label{equation_j_gradient}
\frac{dJ}{dw} = -2P + 2wR = 0
@@ -206,7 +206,7 @@ Solving Equation \ref{equation_j_gradient} for $w$ delivers the equation to calc
\begin{figure}[H]
\centering
\includegraphics[width=0.7\linewidth]{Bilder/fig_w_opt.jpg}
\caption{Minimum of the Mean Squared Error $J$ located at the optimal coefficient $w^*$. \cite{source_dsp_ch9}}
\label{fig:fig_mse}
\end{figure}
\noindent If the Wiener filter now consists not of one coefficient but of several coefficients, Equation \ref{equation_wien} can be written in matrix form as
@@ -231,20 +231,20 @@ After settings the derivative of Equation \ref{equation_j_matrix} to zero and so
\label{equation_w_optimal_matrix}
\textbf{W}_{opt} = PR^{-1}
\end{equation}
\noindent For a large filter, the numerical solution of Equation \ref{equation_w_optimal_matrix} can be computationally expensive, as it involves the inversion of a potentially large matrix. Therefore, to find the optimal set of coefficients $w$, the concept of gradient descent, introduced by Widrow \& Stearns in 1985, can be applied. The gradient descent algorithm aims to minimize the MSE iteratively, sample by sample, by adjusting the filter coefficients $w$ in small steps towards the direction of the steepest descent. The update rule for the coefficients using gradient descent can be expressed as
\begin{equation}
\label{equation_gradient}
w(n+1) = w(n) - \mu \frac{dJ}{dw}
\end{equation}
where $\mu$ is the constant step size determining the rate of convergence. Figure \ref{fig:fig_w_opt} visualizes the concept of stepwise minimization of the MSE using gradient descent. After the derivative of $J$ with respect to $w$ reaches zero, the optimal coefficients $w_{opt}$ are found and the coefficients are no longer updated.
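A minimal numerical sketch of this update rule in Python, applied to the quadratic cost of Equation \ref{equation_j_simple} with made-up values for $P$ and $R$, shows the convergence towards $w_{opt} = P/R$:
\begin{verbatim}
P, R = 2.0, 4.0      # made-up cross- and auto-correlation values
mu = 0.05            # constant step size
w = 0.0              # initial coefficient

for _ in range(100):
    dJ_dw = -2 * P + 2 * w * R   # derivative, cf. equation_j_gradient
    w -= mu * dJ_dw              # update rule, cf. equation_gradient

print(w, P / R)      # both print 0.5: w has converged to w_opt = P/R
\end{verbatim}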
\begin{figure}[H]
\centering
\includegraphics[width=0.9\linewidth]{Bilder/fig_gradient.jpg}
\caption{Visualization of the steepest descent algorithm applied to the Mean Squared Error. \cite{source_dsp_ch9}}
\label{fig:fig_w_opt}
\end{figure}
\subsubsection{The Least Mean Squares algorithm}
The approach of the steepest descent algorithm given in the subchapter above still involves the calculation of the derivative of the MSE, $\frac{dJ}{dw}$, which is also computationally expensive, as it requires knowledge of the statistical properties of the input signals (cross-correlation $P$ and auto-correlation $R$). Therefore, in energy-critical real-time applications, like the implementation of ANR on a low-power DSP, a sample-based approximation in the form of the Least Mean Squares (LMS) algorithm is used instead. The LMS algorithm approximates the gradient of the MSE by using instantaneous estimates of the cross-correlation and auto-correlation. To achieve this, we remove the statistical expectation from the MSE $J$ and take the derivative to obtain a samplewise approximation of $\frac{dJ}{dw[n]}$.
\begin{gather}
\label{equation_j_lms}
J = e[n]^2 = (d[n]-wx[n])^2 \\
@@ -256,18 +256,18 @@ The result of Equation \ref{equation_j_lms_final} can now be inserted into Equat
\label{equation_lms}
w[n+1] = w[n] - 2\mu e[n]x[n]
\end{equation}
The LMS algorithm therefore updates the filter coefficients $w[n]$ after every sample by adding a correction term, which is calculated from the error signal $e[n]$ and the reference noise signal $x[n]$, scaled by the constant step size $\mu$. By iteratively applying the LMS algorithm, the filter coefficients converge towards the optimal values that minimize the mean squared error between the target signal and the filter output. When a predefined acceptable error level is reached, the adaptation process can be stopped to save computing power.\\ \\
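As a compact illustration of the complete adaptation loop from Figure \ref{fig:fig_anr}, the following Python sketch cancels a synthetic noise signal with the LMS update; all signals, the filter length, and the step size are made-up example values, and the update is written in the common textbook form $w + 2\mu e[n]x[n]$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, M, mu = 5000, 4, 0.01

s = np.sin(2 * np.pi * 0.01 * np.arange(N))   # recorded target s[n]
v = rng.normal(size=N)                        # noise at its source v[n]
x = v                                         # reference noise x[n]
n_c = np.convolve(v, [0.6, -0.3])[:N]         # corruption noise n[n]
d = s + n_c                                   # corrupted target d[n]

w = np.zeros(M)                               # adaptive coefficients
e = np.zeros(N)                               # error = estimate of s[n]

for i in range(M, N):
    x_vec = x[i - M + 1:i + 1][::-1]          # current + past x samples
    y = w @ x_vec                             # filter output y[n]
    e[i] = d[i] - y                           # e[n] = d[n] - y[n]
    w += 2 * mu * e[i] * x_vec                # LMS coefficient update

print(np.mean((e[-1000:] - s[-1000:]) ** 2))  # small residual error
\end{verbatim}
In the hybrid design of Figure \ref{fig:fig_anr_hybrid}, the same loop would run only on the smaller adaptive part, after a static filter has already removed the known noise portion.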
\subsection{Signal flow diagram of an implanted cochlear implant system}
Now equipped with the necessary theoretical background about signal processing, adaptive noise reduction and the LMS algorithm, a realistic signal flow diagram with the relevant transfer functions of an implanted cochlear implant system can be designed, which will serve as the basis for the implementation of ANR on a low-power digital signal processor.
\begin{figure}[H]
\centering
\includegraphics[width=1.1\linewidth]{Bilder/fig_anr_implant.jpg}
\caption{Realistic implant design.}
\label{fig:fig_anr_implant}
\end{figure}
\noindent Figure \ref{fig:fig_anr_hybrid} showed us the basic concept of an ANR implementation, without a detailed description of how the corrupted target signal $d[n]$ and the reference noise signal $x[n]$ are formed. Figure \ref{fig:fig_anr_implant} now shows a more complete and realistic signal flow diagram of an implanted cochlear implant system, with two signal sensors followed by an adaptive noise reduction circuit. The target signal sensor receives the target signal and the noise signal over their respective transfer functions and outputs the corrupted target signal $d[n]$, which consists of the recorded target signal $s[n]$ and the recorded corruption noise signal $n[n]$, whereas the noise signal sensor aims to receive (ideally) only the noise signal $v[n]$ over its transfer function and outputs the reference noise signal $x[n]$, which then feeds the adaptive filter.\\ \\
Additionally, the relevant transfer functions of the overall system are now illustrated in Figure \ref{fig:fig_anr_implant}. The transfer functions $D_n$, $F_n$, and $C_n$ describe the path from the signal sources to the chassis of the cochlear implant, where the sensors are located. As the sources and the relative location of the user to the sources can vary, these transfer functions are time-variant and unknown. From the chassis, there are two options for continuing the signal path - either directly to the microphone membranes of the respective sensors, represented through the transfer function $G$, or through mechanical vibrations of the implant's chassis, represented through the transfer functions $A$ and $B$. As the mechanical properties of the implanted cochlear system are fixed, these transfer functions do not change over time, so they can be seen as time-invariant and known.\\ \\
The corrupted target signal $d[n]$ can therefore be mathematically described as:
\begin{equation}
\label{equation_dn}
d[n] = s[n] + n[n] = t[n] * (D_nG) + v[n] * ((F_nG) + (C_nA))
@@ -279,8 +279,8 @@ The noise reference signal $x[n]$ can be mathematically described as:
x[n] = v[n] * (C_nB)
\end{equation}
where $v[n]$ is the noise signal at its source and $x[n]$ is the recorded reference noise signal after passing the transfer functions.\\ \\
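For later simulations, the two equations can be translated directly into code; the following Python sketch uses convolution for the $*$ operator, with hypothetical impulse responses standing in for the (unknown or measured) combined transfer functions:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N = 1000

h_DnG = np.array([1.0, 0.2])           # stands in for D_n * G
h_FnG_CnA = np.array([0.5, 0.3, 0.1])  # stands in for F_n*G + C_n*A
h_CnB = np.array([0.8, 0.4])           # stands in for C_n * B

t = rng.normal(size=N)                 # target signal at its source t[n]
v = rng.normal(size=N)                 # noise signal at its source v[n]

# d[n] = t[n]*(D_nG) + v[n]*((F_nG) + (C_nA)), cf. Equation (equation_dn)
d = np.convolve(t, h_DnG)[:N] + np.convolve(v, h_FnG_CnA)[:N]
x = np.convolve(v, h_CnB)[:N]          # x[n] = v[n]*(C_nB)
\end{verbatim}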
Another possible signal interaction could be the leakage of the target signal into the noise signal sensor, leading to the partial removal of the target signal from the output signal. This case is not illustrated in Figure \ref{fig:fig_anr_implant}, as it won't be further evaluated in this thesis, but it shall be mentioned for the sake of completeness.\\ \\
At this point, the theoretical background and the fundamentals of adaptive noise reduction have been introduced and explained as necessary for the understanding of the following chapters of this thesis. The next chapter will focus on practical high-level simulations of different filter concepts and LMS algorithm variations to evaluate their performance with regard to noise reduction quality, before the actual implementation on a low-power digital signal processor is conducted.