This commit is contained in:
Patrick Hangl
2025-10-28 16:47:14 +01:00
parent af2a1b3c5e
commit 4c3f5d7129


@@ -133,13 +133,37 @@ To reduce the required computing power, a hybrid static/adaptive filter design c
\label{fig:fig_anr_hybrid}
\end{figure}
\noindent Different approaches to the hybrid static/adaptive filter design will be evaluated and compared with regard to their required computing power in a later chapter of this thesis.
\subsection{Adaptive optimization strategies}
In the description of the concept of adaptive filtering above, the adaptation of filter coefficients based on an error metric was mentioned but not further explained. The following subchapters shall cover the most important aspects of filter optimization with regard to adaptive noise reduction.
\subsubsection{Filter optimization and error metrics}
Adaptive filters rely on an error metric to autonomously evaluate their performance in real time and continuously adjust their coefficients to minimize the received error signal $e[n]$, which is defined as:
\begin{equation}
\label{equation_error}
e[n] = d[n] - y[n]
\end{equation}
The error signal $e[n]$, already illustrated in Figures \ref{fig:fig_anr} and \ref{fig:fig_anr_hybrid}, is calculated as the difference between the target signal $d[n]$ and the output signal of the filter $y[n]$.
As we will see in the following chapters, a real-world application of an adaptive filter system poses several challenges, which have to be taken into consideration when designing the filter. These challenges include:
\begin{itemize}
\item The output signal $e[n]$ is not a perfect representation of the clean speech signal $s[n]$ present in the target signal $d[n]$, as the adaptive filter can only approximate the noise signal based on its current coefficients, which in general do not represent the optimal solution at that given time (see the signal model sketched after this list).
\item The clean speech signal $s[n]$ is not directly available, as it is contaminated with noise and no clean reference exists. Therefore, the error signal $e[n]$ of the adaptive filter serves as an approximation of the clean speech signal and is used as an indirect measure of the filter's performance, guiding the adaptation process through its own stepwise minimization.
\item The noise signal $x[n]$ fed into the adaptive filter can also be contaminated with parts of the target signal. If this occurs, it can lead to undesired effects such as signal distortion if not handled properly.
\end{itemize}
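The relation between these signals can be summarized in a minimal signal model, sketched here under the assumption that the target signal consists of the clean speech signal plus an additive noise component $v[n]$ (the symbol $v[n]$ is introduced here for illustration only):
\begin{equation}
\label{equation_signal_model}
d[n] = s[n] + v[n], \qquad e[n] = d[n] - y[n] = s[n] + \bigl(v[n] - y[n]\bigr)
\end{equation}
If the filter output $y[n]$ closely approximates the noise component $v[n]$, the error signal $e[n]$ approaches the clean speech signal $s[n]$, which is exactly the behaviour described in the points above.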
The goal of the adaptive filter is therefore to minimize this error signal over time, thereby improving the quality of the output signal by reducing its noise component.\\
The minimization of the error signal $e[n]$ can be achieved with different cost functions and adaptation algorithms used to evaluate and improve the performance of an adaptive filter, including:
\begin{itemize}
\item Mean Squared Error (MSE): This cost function is the average of the squared errors over a specified period. It penalizes large errors strongly and is commonly used in adaptive filtering applications (see the sketch after this list).
\item Least Mean Squares (LMS): This algorithm minimizes an instantaneous estimate of the mean squared error by adjusting the filter coefficients iteratively based on the error signal. It is computationally efficient and widely used in real-time applications.
\item Normalized Least Mean Squares (NLMS): An extension of the LMS algorithm that normalizes the step size by the power of the input signal, improving convergence speed and stability.
\item Recursive Least Squares (RLS): This algorithm minimizes a weighted sum of squared errors, providing faster convergence than LMS at the cost of higher computational complexity.
\end{itemize}
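To make the first of these criteria concrete, a minimal sketch of the MSE cost function is given below; the expectation operator $E\{\cdot\}$ and the coefficient vector $\mathbf{w}$ are used here for illustration and may be denoted differently in the formal derivation:
\begin{equation}
\label{equation_mse}
J(\mathbf{w}) = E\{e^{2}[n]\} = E\{(d[n] - y[n])^{2}\}
\end{equation}
The LMS, NLMS and RLS algorithms differ mainly in how they estimate and minimize this or a closely related cost function at each time step.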
As computational efficiency is a key requirement for the implementation of real-time ANR on a low-power digital signal processor, the Least Mean Squares algorithm is chosen for the minimization of the error signal and will therefore be explained further in the following subchapter.
\subsubsection{The Least Mean Squares algorithm}
Before the Least Mean Squares algorithm can be explained in detail, the concept of gradient descent has to be introduced.
Gradient descent is an optimization algorithm that minimizes a function by iteratively stepping in the direction of steepest descent, which is given by the negative of the gradient of that function.
In the context of adaptive filtering, gradient descent is used to adjust the filter coefficients in a way that minimizes the error signal $e[n]$.\\ \\
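As a minimal sketch of this idea, assuming a coefficient vector $\mathbf{w}[n]$, an input vector $\mathbf{x}[n]$ and a step size $\mu$ (this notation is used here for illustration and may be defined differently in the following derivation), the gradient descent update of the filter coefficients reads:
\begin{equation}
\label{equation_gradient_descent}
\mathbf{w}[n+1] = \mathbf{w}[n] - \mu \, \nabla_{\mathbf{w}} J(\mathbf{w}[n])
\end{equation}
Replacing the true gradient of the MSE cost function by an instantaneous estimate based on the current error sample leads to an update of the form $\mathbf{w}[n+1] = \mathbf{w}[n] + \mu \, e[n] \, \mathbf{x}[n]$ (up to a constant factor absorbed into $\mu$), which is the core of the Least Mean Squares algorithm addressed in this subchapter.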
\subsection{Signal flow diagram of an implanted cochlear implant system}
\subsection{Derivation of the system's transfer function based on the problem setup}