Updated abbreviations in body text

This commit is contained in:
Patrick Hangl
2025-12-10 15:39:49 +01:00
parent e121a3c923
commit 3f0628095e
7 changed files with 111 additions and 79 deletions

\section{Theoretical Background}
The following subchapters shall supply the reader with the theoretical foundation of digital signal processing to better understand the following implementation of \ac{ANR} on a low-power signal processor.\\ \\
The chapter begins with the description of signals, the problem of signal interference and the basics of digital signal processing in general, covering fundamental topics like signal representation, transfer functions and filters.\\
Filters are used in various functional designs; therefore, a short explanation of the concepts of Finite Impulse Response and Infinite Impulse Response filters is indispensable.\\
At this point an introduction into adaptive noise reduction follows, including a short overview of the most important steps in its history, the general concept of \ac{ANR}, its design possibilities and its optimization possibilities with regard to error calculation.\\
With this knowledge covered, a realistic signal flow diagram of an implanted \ac{CI} system with corresponding transfer functions is designed, which is essential to implement \ac{ANR} on a low-power digital signal processor.\\
At the end of chapter two, high-level Python simulations shall serve as a practical demonstration of the previously presented theoretical background.\\ \\
Throughout this thesis, sampled signals are denoted in lowercase with square brackets (e.g. $x[n]$) to distinguish them from time-continuous signals (e.g. $x(t)$). Vectors are notated in lowercase bold font, whereas matrices are notated in uppercase bold font. Scalars are notated in normal lowercase font.\\
\subsection{Signals and signal interference}
\end{figure}
\noindent In cochlear implant systems, speech signals must be reconstructed with high spectral precision to ensure intelligibility for the user. As signal interference can cause considerable degradation of the quality of the final audio signal, the objective of this thesis shall be the improvement of implant technology with regard to adaptive noise reduction.
\subsection{Fundamentals of digital signal processing}
Digital signal processing describes the manipulation of digital signals on a \ac{DSP} through mathematical approaches. Analog signals have to be digitized before they can be processed by a \ac{DSP}.
\subsubsection{Signal conversion and representation}
\begin{figure}[H]
\centering
\subsection{Filter designs}
Before we continue with the introduction to the actual topic of this thesis, adaptive noise reduction, two essential filter designs need further explanation: the Finite Impulse Response and the Infinite Impulse Response filter.
\subsubsection{Finite Impulse Response filters}
A \ac{FIR} filter, commonly referred to as a ``Feedforward Filter'', is defined by the property that it uses only input values and no feedback from output samples to determine its filtering behavior. Therefore, if the input signal is reduced to zero, the response of a \ac{FIR} filter reaches zero after a finite number of samples.\\ \\
Equation \ref{equation_fir} specifies the input-output relationship of a \ac{FIR} filter, where $x[n]$ is the input sample, $y[n]$ the output sample, $b_0$ to $b_M$ the filter coefficients and $M$ the filter order:
\begin{equation}
\label{equation_fir}
y[n] = \sum_{k=0}^{M} b_kx[n-k] = b_0x[n] + b_1x[n-1] + \ldots + b_Mx[n-M]
\end{equation}
Figure \ref{fig:fig_fir} visualizes a simple \ac{FIR} filter with three coefficients: the first sample is multiplied with the operator $b_0$, whereas the following samples are multiplied with the operators $b_1$ and $b_2$ before being added together. The operator $z^{-1}$ represents a delay of one sample.
As three operators are present in the filter, three samples are needed before the filter response is complete.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{Bilder/fig_fir.jpg}
\caption{\ac{FIR} filter example with three feedforward operators.}
\label{fig:fig_fir}
\end{figure}
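Equation \ref{equation_fir} can be sketched directly in Python, the language used for the simulations in the next chapter. The three-coefficient moving average below is an illustrative choice, not a design from this thesis:

```python
def fir_filter(b, x):
    """Direct-form FIR filter: y[n] = sum_k b[k] * x[n-k], with x[m] = 0 for m < 0."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, bk in enumerate(b):
            if n - k >= 0:
                acc += bk * x[n - k]
        y.append(acc)
    return y

# Three coefficients, as in the figure: after the impulse has passed all
# three taps, the response is exactly zero -- the "finite" in FIR.
b = [1 / 3, 1 / 3, 1 / 3]          # simple moving average
x = [3.0, 0.0, 0.0, 0.0]           # impulse input
print(fir_filter(b, x))            # nonzero for M+1 samples, then zero
```

As expected, the impulse response lasts exactly $M+1$ samples before the output settles at zero.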
\subsubsection{Infinite Impulse Response filters}
An \ac{IIR} filter, commonly referred to as a ``Feedback Filter'', can be seen as an extension of the \ac{FIR} filter. In contrast to its counterpart, it also uses past output samples in addition to current input samples to determine its filtering behavior. Therefore, the response of an \ac{IIR} filter theoretically continues indefinitely, even if the input signal is reduced to zero.\\ \\
Equation \ref{equation_iir} specifies the input-output relationship of an \ac{IIR} filter. In addition to Equation \ref{equation_fir}, a second term is now included, where $a_0$ to $a_N$ are the feedback coefficients with their own filter length $N$.
\begin{equation}
\label{equation_iir}
y[n] = \sum_{k=0}^{M} b_kx[n-k] - \sum_{k=0}^{N} a_ky[n-k-1] = b_0x[n] + \ldots + b_Mx[n-M] - a_0y[n-1] - \ldots - a_Ny[n-N-1]
\end{equation}
Figure \ref{fig:fig_iir} visualizes a simple \ac{IIR} filter with two feedforward coefficients and two feedback coefficients. The first sample passes through the adder after being multiplied with $b_0$. After that, it is fed back after being multiplied with $a_0$. The second sample is then processed the same way, this time multiplied with $b_1$ and $a_1$. After two samples, the response of this exemplary \ac{IIR} filter is complete.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{Bilder/fig_iir.jpg}
\caption{\ac{IIR} filter example with two feedforward operators and two feedback operators.}
\label{fig:fig_iir}
\end{figure}
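The feedback structure can likewise be sketched in Python. The sketch assumes the convention that each feedback tap $a_j$ weights a past output $y[n-1-j]$; the single-tap coefficient below is illustrative:

```python
def iir_filter(b, a, x):
    """Direct-form IIR filter: feedforward taps b on x[n-k],
    feedback taps a on past outputs y[n-1-j]; zero initial conditions."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, bk in enumerate(b):
            if n - k >= 0:
                acc += bk * x[n - k]
        for j, aj in enumerate(a):
            if n - 1 - j >= 0:
                acc -= aj * y[n - 1 - j]
        y.append(acc)
    return y

# One feedback tap: the impulse response decays geometrically but never
# reaches zero exactly -- the "infinite" in IIR.
print(iir_filter([1.0], [-0.5], [1.0, 0.0, 0.0, 0.0]))  # -> [1.0, 0.5, 0.25, 0.125]
```

With a single feedback tap the output halves every sample but remains nonzero, illustrating why stability depends on the chosen coefficients.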
\subsubsection{\ac{FIR}- vs. \ac{IIR}-filters}
Because there is no feedback, a \ac{FIR} filter offers unconditional stability, meaning that the filter response always converges, no matter how the coefficients are set. The disadvantages of the \ac{FIR} design are the relatively flat frequency response and the higher number of coefficients needed to achieve a sharp frequency response compared to its Infinite Impulse Response counterpart.\\ \\
The recursive nature of an \ac{IIR} filter, in contrast, allows achieving a sharp frequency response with significantly fewer coefficients than an equivalent \ac{FIR} filter, but it also opens up the possibility that the filter response diverges, depending on the set coefficients.\\ \\
A higher number of coefficients implies that the filter needs more time to complete its response, as the group delay is increased.
\subsection{Introduction to Adaptive Noise Reduction}
\noindent In contrast to the static filters from the beginning of the century, the active noise cancellation of Lueg and Widrow was far more advanced than merely attenuating a specific frequency portion of a signal with static filters, yet this technique still has its limitations, as it is designed to work only within a certain environment.\\ \\
With the rapid advancement of digital signal processing technologies, noise cancellation techniques evolved from static, hardware-based filters and physical soundwave cancellation towards more sophisticated approaches. In the 1970s, the concept of digital adaptive filtering arose, allowing digital filters to adjust their parameters in real time based on the characteristics of the incoming signal and noise. This marked a significant leap forward, as it enabled systems to deal with dynamic and unpredictable noise environments: the concept of adaptive noise reduction was born.
\subsubsection{The concept of adaptive filtering}
Adaptive noise reduction describes an advanced filtering method based on an error metric and represents a significant advancement over the earlier methods, as it allows the filter parameters to continuously adapt to the changing acoustic environment in real time. This adaptability makes \ac{ANR} particularly suitable for hearing devices, where environmental noise characteristics vary constantly.\\ \\
Static filters, like the low- and high-pass filters described in the previous chapter, feature coefficients that remain constant over time. They are designed for known, predictable noise conditions (e.g., removing a steady 50 Hz hum). While these filters are efficient and easy to implement, they fail to function when noise characteristics change dynamically.\\ \\
Although active noise cancellation and adaptive noise reduction share obvious similarities, they differ fundamentally in their application and signal structure. While active noise cancellation aims to physically cancel noise in the acoustic domain — typically before, or at the time, the signal reaches the ear — \ac{ANR} operates within the signal processing chain, attempting to extract the noisy component from the digital signal. In cochlear implant systems, the latter is more practical because the acoustic waveform is converted into electrical stimulation signals; thus, signal-domain filtering is the only feasible approach.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{Bilder/fig_anr.jpg}
\caption{The basic idea of an adaptive filter design for noise reduction.}
\label{fig:fig_anr}
\end{figure}
\noindent Figure \ref{fig:fig_anr} shows the basic concept of an adaptive filter design, represented through a feedback filter application. The primary sensor (top) aims to receive the desired signal and outputs the corrupted signal $d[n]$, which consists of the recorded desired signal $s[n]$ and the recorded corruption noise signal $n[n]$, whereas the secondary signal sensor aims to receive (ideally) only the noise signal and outputs the recorded reference noise signal $x[n]$, which then feeds the adaptive filter. We assume at this point that the corruption noise signal is uncorrelated to the desired signal, and therefore separable from it. In addition, we assume that the corruption noise signal is correlated to the reference noise signal, as it originates from the same source but takes a different signal path. \\ \\ The adaptive filter removes a certain noise-related frequency part of the input signal and re-evaluates the output through its feedback design. The filter parameters are then adjusted and applied to the next sample to minimize the observed error $e[n]$, which also represents the approximated desired signal $\hat{s}[n]$. In reality, a signal contamination of the two sensors has to be expected, which will be illustrated in a more realistic signal flow diagram of an implanted \ac{CI} system in chapter 2.6.
\subsubsection{Fully adaptive vs. hybrid filter design}
The basic \ac{ANR} concept illustrated in Figure \ref{fig:fig_anr} can be understood as a fully adaptive variant. A fully adaptive filter design works with a fixed number of coefficients, each of which is updated after every processed sample. Although this approach features the best performance in noise reduction, it also requires a relatively high amount of computing power, as every coefficient has to be re-calculated after every evaluation step.\\ \\
To reduce the required computing power, a hybrid static/adaptive filter design can be taken into consideration instead (refer to Figure \ref{fig:fig_anr_hybrid}). In this approach, the initial fully adaptive filter is split into a static and an adaptive part: the static filter removes a known or estimated frequency portion of the noise signal, whereas the adaptive part only has to adapt to the remaining, unpredictable noise parts. This approach reduces the number of coefficients that have to be adapted, therefore lowering the required computing power.
\begin{figure}[H]
\centering
The goal of the adaptive filter is therefore to minimize this error signal over time, thereby improving the quality of the output signal by removing its noise component.\\
The minimization of the error signal $e[n]$ can be achieved by applying different error metrics and algorithms to evaluate the performance of an adaptive filter, including:
\begin{itemize}
\item \ac{MSE}: This metric calculates the averaged square of the error between the expected value and the observed value over a predefined period. It is sensitive to large errors and is commonly used in adaptive filtering applications.
\item \ac{LMS}: An algorithm focused on minimizing the mean squared error by iteratively adjusting the filter coefficients based on the error signal, applying the gradient descent method. It is computationally efficient and widely used in real-time applications.
\item \ac{NLMS}: An extension of the \ac{LMS} algorithm that normalizes the step size based on the input signal, improving convergence speed.
\item \ac{RLS}: This algorithm aims to minimize the weighted sum of squared errors, providing faster convergence than the \ac{LMS} algorithm but at the cost of higher computational effort.
\end{itemize}
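The first of these metrics can be written directly in Python; the example signals below are arbitrary placeholders chosen only to illustrate the sensitivity to single large errors:

```python
def mse(desired, observed):
    """Mean Squared Error averaged over a window of samples."""
    assert len(desired) == len(observed)
    return sum((d - o) ** 2 for d, o in zip(desired, observed)) / len(desired)

# A single large deviation dominates the metric -- MSE is sensitive to outliers.
print(mse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))   # -> 0.0
print(mse([1.0, 2.0, 3.0], [1.0, 2.0, 6.0]))   # -> 3.0
```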
As computational efficiency is a key requirement for the implementation of real-time \ac{ANR} on a low-power \ac{DSP}, the \ac{LMS} algorithm is chosen for the minimization of the error signal and will therefore be further explained in the following subchapter.
\subsubsection{The Wiener filter and the concept of gradient descent}
Before the Least Mean Squares algorithm can be explained in detail, the Wiener filter and the concept of gradient descent have to be introduced. \\ \\
\label{fig:fig_w_opt}
\end{figure}
\subsubsection{The Least Mean Squares algorithm}
The approach of the steepest descent algorithm given in the subchapter above still involves the calculation of the derivative of the \ac{MSE}, $\frac{dJ}{dw}$, which is a computationally expensive operation, as it requires knowledge of the statistical properties of the input signals (cross-correlation $P$ and auto-correlation $R$). Therefore, in energy-critical real-time applications, like the implementation of \ac{ANR} on a low-power \ac{DSP}, a sample-based approximation in form of the \ac{LMS} algorithm is used instead. The \ac{LMS} algorithm approximates the gradient of the \ac{MSE} by using instantaneous estimates of the cross-correlation and auto-correlation. To achieve this, we remove the statistical expectation from the \ac{MSE} $J$ and take the derivative to obtain a sample-wise approximation of $\frac{dJ}{dw[n]}$.
\begin{gather}
\label{equation_j_lms}
J = e[n]^2 = (d[n]-w[n]x[n])^2 \\
\label{equation_j_lms_final}
\frac{dJ}{dw[n]} = 2(d[n]-w[n]x[n])\frac{d(d[n]-w[n]x[n])}{dw[n]} = -2e[n]x[n]
\end{gather}
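The sample-wise gradient in Equation \ref{equation_j_lms_final} can be checked numerically against a central finite difference of the instantaneous cost $J = (d[n] - w\,x[n])^2$; the sample values below are arbitrary placeholders:

```python
# Numerical check of dJ/dw = -2 e x for the instantaneous cost J = (d - w*x)^2.
d, x, w = 0.7, 1.3, 0.2            # arbitrary sample values d[n], x[n], w[n]

def J(w_):
    """Instantaneous squared error with the expectation removed."""
    return (d - w_ * x) ** 2

e = d - w * x
analytic = -2.0 * e * x                      # gradient from the derivation
h = 1e-6
numeric = (J(w + h) - J(w - h)) / (2 * h)    # central finite difference

print(analytic, numeric)                     # the two agree closely
```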
The result of Equation \ref{equation_j_lms_final} can now be inserted into Equation \ref{equation_gradient} to receive the \ac{LMS} update rule for the filter coefficients:
\begin{equation}
\label{equation_lms}
w[n+1] = w[n] + 2\mu e[n]x[n]
\end{equation}
The \ac{LMS} algorithm therefore updates the filter coefficients $w[n]$ after every sample by adding a correction term, calculated from the error signal $e[n]$ and the reference noise signal $x[n]$ and scaled by the constant step size $\mu$. By iteratively applying the \ac{LMS} algorithm, the filter coefficients converge towards the optimal values that minimize the mean squared error between the desired signal and the filter output. When a predefined acceptable error level is reached, the adaptation process can be stopped to save computing power.\\ \\
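The complete update loop can be demonstrated in a short NumPy simulation of the two-sensor scenario from Figure \ref{fig:fig_anr}. The noise path, filter length and step size below are illustrative choices, not values from the later hardware implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, mu = 5000, 4, 0.01                        # samples, filter taps, step size

s = np.sin(2 * np.pi * 0.05 * np.arange(N))     # desired signal s[n]
v = rng.standard_normal(N)                      # reference noise x[n]
h = np.array([0.5, -0.3])                       # illustrative (unknown) noise path
noise = np.convolve(v, h)[:N]                   # corruption noise n[n]
d = s + noise                                   # corrupted signal d[n]

w = np.zeros(L)                                 # adaptive filter coefficients
e = np.zeros(N)
for n in range(L, N):
    x_vec = v[n - L + 1:n + 1][::-1]            # most recent L reference samples
    y = w @ x_vec                               # filter output: estimate of n[n]
    e[n] = d[n] - y                             # error = cleaned signal estimate
    w = w + 2 * mu * e[n] * x_vec               # LMS coefficient update

residual = np.mean((e[-1000:] - s[-1000:]) ** 2)   # remaining noise power
before = np.mean((d[-1000:] - s[-1000:]) ** 2)     # noise power without ANR
print(before, residual)    # residual noise power is far below the input noise power
```

After convergence, the error signal $e[n]$ closely tracks the desired signal, since the filter coefficients have approached the unknown noise-path response.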
\subsection{Signal flow diagram of an implanted cochlear implant system}
Now equipped with the necessary theoretical background about signal processing, adaptive noise reduction and the \ac{LMS} algorithm, a realistic signal flow diagram with the relevant transfer functions of an implanted cochlear implant system can be designed, which will serve as the basis for the implementation of \ac{ANR} on a low-power digital signal processor.
\begin{figure}[H]
\centering
\includegraphics[width=1.1\linewidth]{Bilder/fig_anr_implant.jpg}
\caption{Realistic implant design.}
\label{fig:fig_anr_implant}
\end{figure}
\noindent Figure \ref{fig:fig_anr_hybrid} showed the basic concept of an \ac{ANR} implementation, without a detailed description of how the corrupted signal $d[n]$ and the reference noise signal $x[n]$ are formed. Figure \ref{fig:fig_anr_implant} now shows a more complete and realistic signal flow diagram of an implanted cochlear implant system, with two signal sensors followed by an adaptive noise reduction circuit. The primary sensor receives the desired and noise signals over their respective transfer functions and outputs the corrupted signal $d[n]$, which consists of the recorded desired signal $s[n]$ and the recorded corruption noise signal $n[n]$, whereas the noise signal sensor aims to receive (ideally) only the noise signal $v[n]$ over its transfer function and outputs the reference noise signal $x[n]$, which then feeds the adaptive filter.\\ \\
Additionally, the relevant transfer functions of the overall system are now illustrated in Figure \ref{fig:fig_anr_implant}. The transfer functions $C_n$, $D_n$, and $E_n$ describe the path from the signal sources to the cochlear implant system. As the sources, the relative location of the user to the sources and the medium between them can vary, these transfer functions are time-variant and unknown. After the signals have reached the implant system, we assume that the remaining signal path depends mainly on the sensitivity curve of the respective sensor and can therefore be seen as time-invariant and known. These known transfer functions, titled $A$ and $B$, allow us to apply a hybrid static/adaptive filter design for the \ac{ANR} implementation, as described in chapter 2.5.2.\\ \\
\begin{equation}
\label{equation_dn}
d[n] = s[n] + n[n] = t[n] * (C_nA) + v[n] * (D_nA)
\end{equation}
where $v[n]$ is the noise signal at its source.\\ \\
Another possible signal interaction could be the leakage of the desired signal into the secondary sensor, leading to a partial removal of the desired signal from the output signal. This case is not illustrated in Figure \ref{fig:fig_anr_implant}, as it will not be further evaluated in this thesis, but shall be mentioned for the sake of completeness.\\ \\
At this point, the theoretical background and the fundamentals of adaptive noise reduction have been introduced and explained as necessary for the understanding of the following chapters of this thesis. The next chapter will focus on practical high-level simulations of different filter concepts and \ac{LMS} algorithm variations to evaluate their performance with regard to noise reduction quality before the actual implementation on a low-power digital signal processor is conducted.