\caption{The basic idea of an adaptive filter design for noise reduction.}
\label{fig:fig_anr}
\end{figure}
\noindent Figure \ref{fig:fig_anr} shows the basic concept of an adaptive filter design, represented through a feedback filter application. The target signal sensor (top) aims to receive the target signal $d[n]$, which consists of the speech signal $s[n]$ and the corruption-noise signal $n[n]$, whereas the noise signal sensor aims to receive (ideally) only the noise signal $x[n]$, which then feeds the adaptive filter. We assume at this point that the corruption-noise signal is uncorrelated with the speech signal and therefore separable from it. In addition, we assume that the corruption-noise signal is correlated with the noise signal, as it originates from the same source but takes a different signal path. \\ \\ The adaptive filter removes a certain noise-related frequency portion of the input signal and re-evaluates the output through its feedback design. The filter parameters are then adjusted and applied to the next sample to minimize the observed error $e[n]$, which also represents the approximated speech signal $\hat{s}[n]$. In reality, a signal contamination of the two sensors has to be expected, which will be illustrated in a more realistic signal flow diagram of an implanted CI system.
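\\ \\ These correlation assumptions can be made concrete in a few lines of Python, anticipating the high-level simulations of a later subsection. The following sketch is illustrative only: the sampling rate, the test signals, and the path impulse response \texttt{h\_path} are assumptions, not measured values.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
fs = 16000                              # assumed sampling rate in Hz
t = np.arange(fs) / fs

s = np.sin(2 * np.pi * 440 * t)         # stand-in for the speech signal s[n]
x = rng.standard_normal(fs)             # noise reference x[n] at the noise sensor
h_path = np.array([0.6, 0.25, 0.1])     # assumed path from noise source to target sensor
n = np.convolve(x, h_path)[:fs]         # corruption noise n[n]: same source as x[n],
                                        # but shaped by a different signal path
d = s + n                               # corrupted target signal d[n] = s[n] + n[n]

# n[n] is correlated with x[n], but (by construction) uncorrelated with s[n]:
print(np.corrcoef(n, x)[0, 1])          # clearly nonzero
print(np.corrcoef(n, s)[0, 1])          # close to zero
\end{verbatim}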
\subsubsection{Fully adaptive vs. hybrid filter design}
The basic ANR concept illustrated in Figure \ref{fig:fig_anr} can be understood as a fully adaptive variant. A fully adaptive filter design works with a fixed number of coefficients, each of which is updated after every processed sample. Although this approach offers the best noise-reduction performance, it also requires a relatively large amount of computing power, as every coefficient has to be recalculated after every sample.\\ \\
To reduce the required computing power, a hybrid static/adaptive filter design can be taken into consideration instead. In this approach, the initial fully adaptive filter is split into a static and an adaptive part: the static filter removes a certain known, or estimated, frequency portion of the noise signal, whereas the adaptive part only has to adapt to the remaining, unpredictable noise components. This approach reduces the number of coefficients that have to be adapted, therefore lowering the required computing power. One possible structure is sketched below.
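As a rough illustration of the hybrid idea (a sketch assuming a cascaded structure; the concrete split used in this work may differ), the reference signal can first pass a fixed FIR stage covering the known noise portion, so that only a short adaptive stage remains to be updated at run time:
\begin{verbatim}
import numpy as np

def hybrid_noise_estimate(x, w_static, w_adapt):
    """Estimate the corruption noise from the reference x[n].

    w_static: fixed coefficients for the known/estimated noise portion,
              never updated at run time.
    w_adapt:  short adaptive stage; only these len(w_adapt) coefficients
              have to be recalculated after every sample.
    """
    pre = np.convolve(x, w_static)[:len(x)]    # static stage
    return np.convolve(pre, w_adapt)[:len(x)]  # adaptive stage

# Fully adaptive design: e.g. 64 adapted coefficients per sample.
# Hybrid design:         e.g. 56 fixed + 8 adapted coefficients,
#                        i.e. roughly one eighth of the update cost.
\end{verbatim}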
As we will see in the following chapters, a real-world application of an adaptive filter system poses several challenges, which have to be taken into consideration when designing the filter. These challenges include:
\begin{itemize}
\item The error signal $e[n]$ is not a perfect representation of the clean speech signal $s[n]$ present in the target signal $d[n]$, as the adaptive filter can only approximate the noise signal based on its current coefficients, which in general do not represent the optimal solution at that given time.
\item Although the corruption-noise signal $n[n]$ and the noise signal $x[n]$ are correlated, they are not identical, as they take different signal paths from the noise source to their respective sensors. This discrepancy can lead to imperfect noise reduction, as the adaptive filter has to estimate the relationship between these two signals.
\item The clean speech signal $s[n]$ is not directly available, as it is contaminated with the corruption-noise signal and there is no clean reference available. Therefore, the error signal $e[n]$, respectively $\hat{s}[n]$, of the adaptive filter serves as an approximation of the clean speech signal and is used as an indirect measure of the filter's performance, guiding the adaptation process by its own stepwise minimization.
\item The noise signal $x[n]$ fed into the adaptive filter could also be contaminated with parts of the target signal. If this occurs, it can lead to undesired effects like signal distortion if not handled properly.
\end{itemize}
The goal of the adaptive filter is therefore to minimize this error signal over time, thereby improving the quality of the output signal by removing its noise component.\\
The minimization of the error signal $e[n]$ can be achieved by applying different error metrics and adaptation algorithms used to evaluate and improve the performance of an adaptive filter, including:
\begin{itemize}
\item Mean Squared Error (MSE): This metric calculates the averaged square of the error between the expected value and the observed value over a predefined period. It is sensitive to large errors and is commonly used in adaptive filtering applications.
\item Least Mean Squares (LMS): This algorithm minimizes the mean squared error by iteratively adjusting the filter coefficients based on the error signal, applying the gradient descent method. It is computationally efficient and widely used in real-time applications.
\item Normalized Least Mean Squares (NLMS): An extension of the LMS algorithm that normalizes the step size based on the power of the input signal, improving convergence speed and stability.
\item Recursive Least Squares (RLS): This algorithm aims to minimize the weighted sum of squared errors, providing faster convergence than LMS but at the cost of higher computational complexity.
\end{itemize}
As computational efficiency is a key requirement for the implementation of real-time ANR on a low-power digital signal processor, the Least Mean Squares algorithm is chosen for the minimization of the error signal and will be further explained in the following subchapter.
\subsubsection{Use of the Least Mean Squares algorithm in adaptive filtering}
Before the Least Mean Squares algorithm can be explained in detail, the Wiener filter and the concept of gradient descent have to be introduced.
\begin{figure}[H]
\centering
\includegraphics[width=0.7\linewidth]{Bilder/fig_wien.jpg}
\caption{Simple implementation of a Wiener filter.}
\label{fig:fig_wien}
\end{figure}
\noindent The Wiener filter, the basis of many adaptive filter designs, is a statistical filter used to minimize the mean squared error between a desired signal and the output of a linear filter. The output $y[n]$ of the Wiener filter is the sum of the weighted input samples, where the weights are represented by the filter coefficients:
\begin{equation}
\label{equation_wien}
y[n] = w_0 x[n] + w_1 x[n-1] + \dots + w_M x[n-M] = \sum_{k=0}^{M} w_k x[n-k]
\end{equation}
The Wiener filter aims to adjust its coefficients to generate a filter output that resembles the corruption-noise signal $n[n]$ contained in the target signal $d[n]$ as closely as possible. After the filter output is subtracted from the target signal, we receive the error signal $e[n]$, which represents the cleaned signal $\hat{s}[n]$ after the noise component has been removed:
\begin{equation}
\label{equation_wien_error}
e[n] = d[n] - y[n] = d[n] - w x[n]
\end{equation}
If we square the error signal and calculate the expected value, we obtain the Mean Squared Error $J$, mentioned in the previous chapter, which is the metric the Wiener filter aims to minimize by adjusting its coefficients $w$:
\begin{equation}
\label{equation_wien_mse}
J = E(e^2[n]) = E(d^2[n]) - 2wE(d[n]x[n]) + w^2E(x^2[n])
\end{equation}
The terms contained in Equation \ref{equation_wien_mse} can be further defined as:
\begin{itemize}
\item $\sigma^2 = E(d^2[n])$: The expected value of the squared corrupted target signal, a constant term independent of the filter coefficients $w$.
\item $P = E(d[n]x[n])$: The cross-correlation between the corrupted target signal and the noise reference signal, a measure of how similar these two signals are.
\item $R = E(x^2[n])$: The auto-correlation of the noise reference signal at lag zero, a measure of the signal's average power.
\end{itemize}
For a large number of samples, Equation \ref{equation_wien_mse} can therefore be simplified and written as:
\begin{equation}
\label{equation_wien_error_final}
J = \sigma^2 - 2wP + w^2R
\end{equation}
As $\sigma^2$, $P$, and $R$ in Equation \ref{equation_wien_error_final} are constants, the MSE $J$ is a quadratic function of the filter coefficients $w$ and therefore possesses a computable minimum. To find this minimum, we can calculate the derivative of $J$ with respect to $w$ and set it to zero:
\begin{equation}
\label{equation_gradient_j}
\frac{dJ}{dw} = -2P + 2wR = 0
\end{equation}
Solving Equation \ref{equation_gradient_j} for $w$ yields the equation to calculate the optimal coefficients for the Wiener filter:
\begin{equation}
\label{equation_wien_optimal}
w_{opt} = \frac{P}{R}
\end{equation}
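As a quick numerical plausibility check of Equation \ref{equation_wien_optimal} (a sketch with purely illustrative signals; the single-tap path $n[n] = 0.8\,x[n]$ is an assumption), $P$ and $R$ can be estimated by sample averages:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N = 100_000
x = rng.standard_normal(N)                     # noise reference x[n]
s = np.sin(2 * np.pi * 0.01 * np.arange(N))    # stand-in for the speech s[n]
w_true = 0.8                                   # assumed single-tap noise path
d = s + w_true * x                             # corrupted target d[n]

P = np.mean(d * x)                             # sample estimate of E(d[n]x[n])
R = np.mean(x * x)                             # sample estimate of E(x^2[n])
print(P / R)                                   # w_opt, approaches 0.8 for large N
\end{verbatim}
Since $s[n]$ and $x[n]$ are uncorrelated, the cross-correlation $P$ reduces to $w_{true}R$, and $P/R$ recovers the path weight.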
In practice, the statistics $P$ and $R$ are often unknown or time-varying, so the optimal coefficients cannot simply be computed once in advance. To find the set of coefficients $w$ minimizing the Mean Squared Error $J$ iteratively, we can instead apply the concept of gradient descent. Gradient descent is an iterative optimization algorithm used to find the minimum of a function by moving in the direction of the steepest descent, which is determined by the negative gradient of the function. In our case, we want to minimize the MSE $J$ by adjusting the filter coefficients $w$. The update rule for the coefficients using gradient descent can be expressed as:
\begin{equation}
\label{equation_gradient}
w(n+1) = w(n) - \mu \nabla J(w(n))
\end{equation}
where $\mu$ is the step size, controlling the rate of convergence. Since the true gradient $\nabla J$ depends on the generally unknown statistics $P$ and $R$, the LMS algorithm replaces it with the instantaneous estimate $\nabla J \approx -2e[n]x[n]$, which leads (absorbing the factor 2 into $\mu$) to the well-known coefficient update rule:
\begin{equation}
\label{equation_lms}
w_k[n+1] = w_k[n] + \mu e[n] x[n-k]
\end{equation}
where $e[n]$ is the error signal and $x[n-k]$ are the input samples.\\ \\
The step size $\mu$ is a crucial parameter of the LMS algorithm, as it influences both the convergence speed and the stability of the filter adaptation. A larger step size can lead to faster convergence but may also cause instability, while a smaller step size ensures stability but results in slower convergence. Therefore, selecting an appropriate value for $\mu$ is essential for the effective performance of the adaptive filter.\\ \\
The LMS algorithm is widely used in adaptive filtering applications due to its simplicity and computational efficiency. It provides a practical approach to real-time noise reduction by continuously updating the filter coefficients based on the observed error signal, allowing the filter to adapt to changing noise conditions effectively. A minimal simulation is sketched below.
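To close this subchapter, the following Python sketch implements the LMS update of Equation \ref{equation_lms} for a multi-tap filter and applies it to the two-sensor setup of Figure \ref{fig:fig_anr}. All signals and parameters are illustrative assumptions, not the final CI processing chain:
\begin{verbatim}
import numpy as np

def lms_anr(d, x, num_taps=8, mu=0.01):
    """Adaptive noise reduction with the LMS algorithm.

    d: corrupted target signal d[n] = s[n] + n[n]
    x: noise reference x[n], correlated with n[n]
    Returns the error signal e[n], i.e. the speech estimate.
    """
    w = np.zeros(num_taps)                       # filter coefficients w_k
    e = np.zeros(len(d))                         # error signal / speech estimate
    for n in range(num_taps, len(d)):
        x_vec = x[n - num_taps + 1:n + 1][::-1]  # x[n-k] for k = 0..num_taps-1
        y = w @ x_vec                            # filter output: noise estimate
        e[n] = d[n] - y                          # e[n] = d[n] - y[n]
        w += mu * e[n] * x_vec                   # w_k[n+1] = w_k[n] + mu e[n] x[n-k]
    return e

rng = np.random.default_rng(2)
N = 20_000
s = np.sin(2 * np.pi * 0.02 * np.arange(N))      # tonal stand-in for speech
x = rng.standard_normal(N)                       # noise reference
n_sig = np.convolve(x, [0.6, 0.25, 0.1])[:N]     # noise after an assumed 3-tap path
d = s + n_sig

e = lms_anr(d, x)
half = N // 2                                    # skip the convergence phase
print(np.mean((d[half:] - s[half:]) ** 2))       # residual noise power before ANR
print(np.mean((e[half:] - s[half:]) ** 2))       # residual noise power after ANR
\end{verbatim}
The choice of $\mu = 0.01$ in the sketch follows the stability considerations above: for a white reference signal of unit power and $M$ filter taps, a step size well below $2/M$ keeps the adaptation stable.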
\subsection{Signal flow diagram of an implanted cochlear implant system}
\subsection{Derivation of the system's transfer function based on the problem setup}
\subsection{Example applications and high-level simulations using Python}