diff --git a/acronyms.aux b/acronyms.aux index 8f7d295..237a34c 100644 --- a/acronyms.aux +++ b/acronyms.aux @@ -6,6 +6,12 @@ \newacro{WHO}[\AC@hyperlink{WHO}{WHO}]{World Health Organization} \newacro{FIR}[\AC@hyperlink{FIR}{FIR}]{Finite Impulse Response} \newacro{IIR}[\AC@hyperlink{IIR}{IIR}]{Infinite Impulse Response} +\newacro{LMS}[\AC@hyperlink{LMS}{LMS}]{Least Mean Squares} +\newacro{MSE}[\AC@hyperlink{MSE}{MSE}]{Mean Square Error} +\newacro{ALU}[\AC@hyperlink{ALU}{ALU}]{Arithmetic Logic Unit} +\newacro{NLMS}[\AC@hyperlink{NLMS}{NLMS}]{Normalized Least Mean Squares} +\newacro{RLS}[\AC@hyperlink{RLS}{RLS}]{Recursive Least Squares} +\newacro{MAC}[\AC@hyperlink{MAC}{MAC}]{multiply-accumulate} \@setckpt{acronyms}{ \setcounter{page}{5} \setcounter{equation}{0} diff --git a/acronyms.tex b/acronyms.tex index 947d8a5..ecc236b 100644 --- a/acronyms.tex +++ b/acronyms.tex @@ -7,4 +7,10 @@ \acro{WHO}{World Health Organization} \acro{FIR}{Finite Impulse Response} \acro{IIR}{Infinite Impulse Response} + \acro{LMS}{Least Mean Squares} + \acro{MSE}{Mean Square Error} + \acro{ALU}{Arithmetic Logic Unit} + \acro{NLMS}{Normalized Least Mean Squares} + \acro{RLS}{Recursive Least Squares} + \acro{MAC}{multiply-accumulate} \end{acronym} \ No newline at end of file diff --git a/chapter_01.tex b/chapter_01.tex index 7702c9b..10d7d9d 100644 --- a/chapter_01.tex +++ b/chapter_01.tex @@ -6,7 +6,7 @@ Therefore, the improvement of implant performance in regard to the suppression o By addressing these challenges, this work aims to contribute to the next generation of cochlear implant technology, ultimately enhancing the auditory experience and quality of life for people with severe hearing impairments. \subsection{Introduction to cochlear implant systems} A \ac{CI} system is a specialized form of hearing aid used to treat partial or complete deafness.
In contrast to standard hearing aids, \acp{CI} do not just amplify the audio signal received by the ear, but stimulate the auditory nerve directly through electric pulses.\\ \\ -Usually, a CI system consists out of an external processor with a microphone (``audio processor'') receiving the ambient audio signal, processing it, and then transmitting it inductively via a transmission coil through the skin to the cochlear implant itself, implanted on the patient's skull (see Figure \ref{fig:fig_synchrony}). The CI stimulates the auditory nerves inside the cochlear through charge pulses, thus enabling the patient to hear the received audio signal as sound.\\ +Usually, a \ac{CI} system consists of an external processor with a microphone (``audio processor'') receiving the ambient audio signal, processing it, and then transmitting it inductively via a transmission coil through the skin to the cochlear implant itself, implanted on the patient's skull (see Figure \ref{fig:fig_synchrony}). The \ac{CI} stimulates the auditory nerves inside the cochlea through charge pulses, thus enabling the patient to hear the received audio signal as sound.\\ \begin{figure}[H] \centering \includegraphics[width=0.6\linewidth]{Bilder/fig_synchrony.png} @@ -20,10 +20,10 @@ Usually, a CI system consists out of an external processor with a microphone (`` \caption{Visualization of a MED-EL electrode inserted into a human cochlea. \cite{source_electrode}} \label{fig:fig_electrode} \end{figure} -\noindent As for any head worn hearing aid, the audio processor of a CI system does not only pick up the desired ambient audio signal, but also any sort of interference noises from different sources. This circumstance leads to a decrease in the quality of the final audio signal for the user.
Reducing this interference noise through adaptive noise reduction, implemented on a low-power digital signal processor, which can be powered within the electrical limitations of a CI system, is the topic of this master's thesis. +\noindent As for any head-worn hearing aid, the audio processor of a \ac{CI} system picks up not only the desired ambient audio signal, but also interference noise from various sources. This circumstance leads to a decrease in the quality of the final audio signal for the user. Reducing this interference noise through adaptive noise reduction, implemented on a low-power digital signal processor that can be powered within the electrical limitations of a \ac{CI} system, is the topic of this master's thesis. \subsection{Implementation of Adaptive Noise Reduction in Cochlear Implant Systems} -The above problem description of noise interference shows the need of further improvement of CI systems in this regard. For persons with a healthy hearing sense, the addition of noise to an observed signal may just mean a decrease in hearing comfort, whereas for aurally impaired people it can make the difference in the basic understanding of information. As everyday environments present fluctuating background noise - from static crowd chatter to sudden sounds of different characteristics — that can severely degrade speech perception, the ability to suppress noise is a crucial benefit for users of cochlear implant systems. \\ \\ -Adaptive noise reduction (ANR) (also commonly referred as adaptive noise cancellation (ANC)), is an advanced signal processing technique that adjusts the parameters of a digital filter to suppress unwanted noise from a signal while preserving the desired target signal.
In contrary to static filters (like a high- or low-pass filter), ANR uses real-time feedback to adjust said digital filter to adapt to the current circumstances.\\ \\ -The challenge in the implementation of ANR in CI systems lies in the limited capacities. As the CI system is powered by a small battery located in the audio processor, energy efficiency is crucial for a possible solution of the described problem of noise interference. Any approach to a reduction of interference noise must be highly optimized with regard to computing power and implemented on dedicated low-power hardware, being able to be powered within the limitations of a CI system.\\ \\ -The main solution concept of this thesis is the optimization of the adaptive filter of the ANR algorithm in combination with the used low-power hardware. Its goal is, to deliver the best possible result in interference noise reduction while still being able to be powered by the limited resources of a CI system. Different variants, like the fully adaptive filter, the hybrid static/adaptive filter and different optimization approaches of the latter one are low-level simulated on the dedicated digital signal processor. Especially, the different optimization strategies of the hybrid static/adaptive filter algorithm shall be evaluated and compared in regard of their required computing power, and therefore, their required power consumption. Depending on the kind of interference noise, the frequency and the intensity, a promising optimization approach is the reduction of adaptation steps per sample while still maintaining an adequate quality of the filtered audio signal.\\ \\ -Due to the fact, that the CI system is powered by a battery with a relatively small capacity, the firmware is required to work with the least power possible. Therefore, optimization in regard to a minimization of needed processor clocks is aimed for. 
\ No newline at end of file +The above problem description of noise interference shows the need for further improvement of \ac{CI} systems in this regard. For persons with a healthy hearing sense, the addition of noise to an observed signal may merely mean a decrease in hearing comfort, whereas for aurally impaired people it can determine whether information is understood at all. As everyday environments present fluctuating background noise - from static crowd chatter to sudden sounds of different characteristics - that can severely degrade speech perception, the ability to suppress noise is a crucial benefit for users of cochlear implant systems. \\ \\ +Adaptive Noise Reduction (\ac{ANR}), also commonly referred to as \ac{ANC}, is an advanced signal processing technique that adjusts the parameters of a digital filter to suppress unwanted noise from a signal while preserving the desired target signal. In contrast to static filters (like a high- or low-pass filter), \ac{ANR} uses real-time feedback to adjust said digital filter to the current circumstances.\\ \\ +The challenge in the implementation of \ac{ANR} in \ac{CI} systems lies in the limited resources. As the \ac{CI} system is powered by a small battery located in the audio processor, energy efficiency is crucial for any solution to the described problem of noise interference. Any approach to reducing interference noise must be highly optimized with regard to computing power and implemented on dedicated low-power hardware that can be powered within the limitations of a \ac{CI} system.\\ \\ +The main solution concept of this thesis is the optimization of the adaptive filter of the \ac{ANR} algorithm in combination with the used low-power hardware. Its goal is to deliver the best possible result in interference noise reduction while still being able to be powered by the limited resources of a \ac{CI} system.
Different variants - the fully adaptive filter, the hybrid static/adaptive filter, and different optimization approaches of the latter - are simulated at a low level on the dedicated digital signal processor. In particular, the different optimization strategies of the hybrid static/adaptive filter algorithm shall be evaluated and compared with regard to their required computing power and, therefore, their power consumption. Depending on the kind of interference noise, its frequency and its intensity, a promising optimization approach is the reduction of adaptation steps per sample while still maintaining an adequate quality of the filtered audio signal.\\ \\ +Because the \ac{CI} system is powered by a battery with a relatively small capacity, the firmware is required to work with the least power possible. Therefore, the implementation aims to minimize the number of required processor clock cycles. \ No newline at end of file diff --git a/chapter_02.tex b/chapter_02.tex index 16c244c..1a0df04 100644 --- a/chapter_02.tex +++ b/chapter_02.tex @@ -1,9 +1,9 @@ \section{Theoretical Background} -The following subchapters shall supply the reader with the theoretical foundation of digital signal processing to better understand the following implementation of ANR on a low-power signal processor.\\ \\ +The following subchapters shall supply the reader with the theoretical foundation of digital signal processing needed to understand the subsequent implementation of \ac{ANR} on a low-power signal processor.\\ \\ The chapter begins with the description of signals, the problem of them interfering and the basics of digital signal processing in general, covering fundamental topics like signal representation, transfer functions and filters.\\ Filters are used in various functional designs, therefore a short explanation of the concepts of Finite Impulse Response- and Infinite Impulse Response filters is indispensable.\\ -At this point an introduction into adaptive
noise reduction follows, including a short overview of the most important steps in history, the general concept of ANR, its design possibilities and its optimization possibilities in regard of error calculation.\\ -With this knowledge covered, a realistic signal flow diagram of an implanted CI system with corresponding transfer functions is designed, essential to implement ANR on a low-power digital signal processor.\\ +At this point an introduction to adaptive noise reduction follows, including a short overview of the most important steps in its history, the general concept of \ac{ANR}, its design possibilities and its optimization possibilities with regard to error calculation.\\ +With this knowledge covered, a realistic signal flow diagram of an implanted \ac{CI} system with corresponding transfer functions is designed, which is essential to implement \ac{ANR} on a low-power digital signal processor.\\ At the end of chapter two, high-level Python simulations shall function as a practical demonstration of the recently presented theoretical background.\\ \\ Throughout this thesis, sampled signals are denoted in lowercase with square brackets (e.g. $x[n]$) to distinguish them from time-continuous signals (e.g. $x(t)$). Vectors are notated in lowercase bold font, whereas matrices are notated in uppercase bold font. Scalars are notated in normal lowercase font.\\ \subsection{Signals and signal interference} @@ -17,7 +17,7 @@ The term "signal interference" describes the overlapping of unwanted signals or \end{figure} \noindent In cochlear implant systems, speech signals must be reconstructed with high spectral precision to ensure intelligibility for the user. As signal interference can cause considerable degradation to the quality of said final audio signal, the objective of this thesis shall be the improvement of implant technology with regard to adaptive noise reduction.
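The interference problem described above can be made concrete with a minimal high-level Python sketch, in the spirit of the Python simulations announced for the end of this chapter. The sample rate, tone frequencies and amplitudes below are purely illustrative assumptions, not values taken from the thesis:

```python
import math

# Illustrative parameters (assumptions, not taken from the thesis):
FS = 16000        # sampling rate in Hz
N = 1024          # number of samples

# Desired signal s[n]: a 440 Hz tone standing in for a speech component.
s = [math.sin(2 * math.pi * 440 * n / FS) for n in range(N)]

# Interference n[n]: a 50 Hz hum, the classic static-noise example.
noise = [0.5 * math.sin(2 * math.pi * 50 * n / FS) for n in range(N)]

# The corrupted signal d[n] = s[n] + n[n] picked up by the microphone.
d = [s[n] + noise[n] for n in range(N)]

def power(x):
    """Mean power of a sampled signal."""
    return sum(v * v for v in x) / len(x)

# Signal-to-noise ratio of the corrupted signal in dB.
snr_db = 10 * math.log10(power(s) / power(noise))
print(f"SNR of d[n]: {snr_db:.1f} dB")
```

With these illustrative amplitudes the mixture sits at roughly 6 dB SNR, which is the kind of corrupted signal the later filtering chapters set out to clean up.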
\subsection{Fundamentals of digital signal processing} -Digital signal processing describes the manipulation of digital signals on a digital signal processor (DSP) trough mathematical approaches. Analog signals have to be digitalized before being able to be handled by a DSP. +Digital signal processing describes the manipulation of digital signals on a \ac{DSP} through mathematical approaches. Analog signals have to be digitized before they can be handled by a \ac{DSP}. \subsubsection{Signal conversion and representation} \begin{figure}[H] \centering @@ -66,37 +66,37 @@ During the description of transfer functions, the term ``filter'' was used but n \subsection{Filter designs} Before we continue with the introduction to the actual topic of this thesis, adaptive noise reduction, two essential filter designs need further explanation - the Finite Impulse Response- and the Infinite Impulse Response filter. \subsubsection{Finite Impulse Response filters} -A Finite Impulse Response (FIR) filter, commonly referred to as a ``Feedforward Filter'' is defined through the property, that it uses only input values and not feedback from output samples to determine its filtering behavior - therefore, if the input signal is reduced to zero, the response of a FIR filter reaches zero after a finite number of samples.\\ \\ -Equation \ref{equation_fir} specifies the input-output relationship of a FIR filter - $x[n]$ is the input sample, $y[n]$ is output sample, and $b_0$ to $b_M$ the filter coefficients and M the length of the filter +A \ac{FIR} filter, commonly referred to as a ``feedforward filter'', is defined by the property that it uses only input samples and no feedback from output samples to determine its filtering behavior - therefore, if the input signal is reduced to zero, the response of a \ac{FIR} filter reaches zero after a finite number of samples.\\ \\ +Equation \ref{equation_fir} specifies the input-output relationship of a \ac{FIR} filter - $x[n]$ is the 
input sample, $y[n]$ the output sample, $b_0$ to $b_M$ the filter coefficients, and $M$ the length of the filter: \begin{equation} \label{equation_fir} y[n] = \sum_{k=0}^{M} b_kx[n-k] = b_0x[n] + b_1x[n-1] + ... + b_Mx[n-M] \end{equation} -Figure \ref{fig:fig_fir} visualizes a simple FIR filter with three coefficients - the first sample is multiplied with the operator $b_0$ whereas the following samples are multiplied with the operators $b_1$ and $b_2$ before added back together. The Operator $Z^{-1}$ represents a delay operator of one sample. +Figure \ref{fig:fig_fir} visualizes a simple \ac{FIR} filter with three coefficients - the first sample is multiplied by the operator $b_0$, whereas the following samples are multiplied by the operators $b_1$ and $b_2$ before being added back together. The operator $z^{-1}$ represents a delay of one sample. As there are three operators present in the filter, three samples are needed before the filter response is complete. \begin{figure}[H] \centering \includegraphics[width=0.8\linewidth]{Bilder/fig_fir.jpg} - \caption{FIR filter example with three feedforward operators.} + \caption{\ac{FIR} filter example with three feedforward operators.} \label{fig:fig_fir} \end{figure} \subsubsection{Infinite Impulse Response filters} -An Infinite Impulse Response (IIR) filter, commonly referred to as a ``Feedback Filter'' can be seen as an extension of the FIR filter. In contrary to its counterpart, it also uses past output samples in addition to current input samples to adapt its filtering behavior - therefore the response of an IIR filter theoretically continues indefinitely, even if the input signal is reduced to zero.\\ \\ -Equation \ref{equation_iir} specifies the input-output relationship of a IIR filter. In addition to Equation \ref{equation_fir} there is now a second term included, where $a_0$ to $a_N$ are the feedback coefficients with their own filter length N.
+An \ac{IIR} filter, commonly referred to as a ``feedback filter'', can be seen as an extension of the \ac{FIR} filter. In contrast to its counterpart, it also uses past output samples in addition to current input samples to determine its filtering behavior - therefore the response of an \ac{IIR} filter theoretically continues indefinitely, even if the input signal is reduced to zero.\\ \\ +Equation \ref{equation_iir} specifies the input-output relationship of an \ac{IIR} filter. In addition to Equation \ref{equation_fir} there is now a second term included, where $a_0$ to $a_N$ are the feedback coefficients with their own filter length $N$. \begin{equation} \label{equation_iir} y[n] = \sum_{k=0}^{M} b_kx[n-k] - \sum_{k=0}^{N} a_ky[n-k] = b_0x[n] + ... + b_Mx[n-M] - a_0y[n] - ... - a_Ny[n-N] \end{equation} -Figure \ref{fig:fig_iir} visualizes a simple IIR filter with two feedforward coefficients and two feedback coefficients. The first sample passes through the adder after it was multiplied with $b_0$. After that, it is passed back after being multiplied with $a_0$. The second sample is then processed the same way - this time multiplied with $b_1$ and $b_1$. After two samples, the response of this exemplary IIR filter is complete. +Figure \ref{fig:fig_iir} visualizes a simple \ac{IIR} filter with two feedforward coefficients and two feedback coefficients. The first sample passes through the adder after being multiplied by $b_0$. After that, it is fed back after being multiplied by $a_0$. The second sample is then processed the same way - this time multiplied by $b_1$ and $a_1$. After two samples, the response of this exemplary \ac{IIR} filter is complete.
\begin{figure}[H] \centering \includegraphics[width=0.8\linewidth]{Bilder/fig_iir.jpg} - \caption{IIR filter example with two feedforward operators and two feedback operators.} + \caption{\ac{IIR} filter example with two feedforward operators and two feedback operators.} \label{fig:fig_iir} \end{figure} -\subsubsection{FIR- vs. IIR-filters} -Due to the fact, that there is no feedback, a FIR filter offers unconditional stability, meaning that the filter response always converges, no matter how the coefficients are set. The disadvantages of the FIR design is the relatively flat frequency response and the higher number of needed coefficients needed to achieve a sharp frequency response compared to its Infinite Impulse Response counterpart.\\ \\ -The recursive nature of an IIR filter, in contrary, allows achieving a sharp frequency response with significantly fewer coefficients than an equivalent FIR filter, but it also opens up the possibility, that the filter response diverges, depending on the set coefficients.\\ \\ +\subsubsection{\ac{FIR}- vs. \ac{IIR}-filters} +Because there is no feedback, a \ac{FIR} filter offers unconditional stability, meaning that the filter response always converges, no matter how the coefficients are set. The disadvantages of the \ac{FIR} design are the relatively flat frequency response and the higher number of coefficients needed to achieve a sharp frequency response compared to its Infinite Impulse Response counterpart.\\ \\ +The recursive nature of an \ac{IIR} filter, in contrast, allows achieving a sharp frequency response with significantly fewer coefficients than an equivalent \ac{FIR} filter, but it also opens up the possibility that the filter response diverges, depending on the set coefficients.\\ \\ A higher number of coefficients implies that the filter needs more time to complete its signal response, as the group delay is increased.
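The two difference equations above translate directly into code. The following Python sketch implements both filter types sample by sample; the coefficient values are arbitrary illustrative choices, and the feedback taps are applied to past output samples, as in a direct-form feedback structure:

```python
def fir_filter(x, b):
    """FIR (feedforward) filter: y[n] = sum_k b[k] * x[n-k]."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, bk in enumerate(b):
            if n - k >= 0:
                acc += bk * x[n - k]
        y.append(acc)
    return y

def iir_filter(x, b, a):
    """IIR (feedback) filter: feedforward taps b plus feedback taps a.
    Here a[k] weights the past output y[n-1-k], matching the structure in
    which the fed-back sample is weighted and subtracted from the adder."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, bk in enumerate(b):
            if n - k >= 0:
                acc += bk * x[n - k]
        for k, ak in enumerate(a):
            if n - 1 - k >= 0:
                acc -= ak * y[n - 1 - k]
        y.append(acc)
    return y

# Impulse responses make the finite-vs-infinite distinction tangible:
impulse = [1.0] + [0.0] * 7
print(fir_filter(impulse, b=[0.5, 0.3, 0.2]))   # exactly zero after 3 samples
print(iir_filter(impulse, b=[0.5], a=[-0.5]))   # decays geometrically, never exactly zero
```

Feeding a unit impulse into both filters shows the naming directly: the FIR response dies out after as many samples as there are taps, while the IIR response (here $y[n] = 0.5x[n] + 0.5y[n-1]$) keeps halving but never reaches exactly zero.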
\subsection{Introduction to Adaptive Noise Reduction} @@ -113,18 +113,18 @@ In the 1930s, the first real concept of active noise cancellation was proposed b \noindent In contrast to the static filters of the beginning of the century, the active noise cancellation of Lueg and Widrow was far more advanced than just reducing a signal by a specific frequency portion as with static filters, yet this technique still has its limitations, as it is designed to work only within a certain environment.\\ \\ With the rapid advancement of digital signal processing technologies, noise cancellation techniques evolved from static, hardware-based filters and physical soundwave cancellation towards more sophisticated approaches. In the 1970s, the concept of digital adaptive filtering arose, allowing digital filters to adjust their parameters in real-time based on the characteristics of the incoming signal and noise. This marked a significant leap forward, as it enabled systems to deal with dynamic and unpredictable noise environments - the concept of adaptive noise reduction was born. \subsubsection{The concept of adaptive filtering} -Adaptive noise reduction describes an advanced filtering method based on an error-metric and represents a significant advancement over these earlier methods by allowing the filter parameters to continuously adapt to the changing acoustic environment in real-time. +Adaptive noise reduction describes an advanced filtering method based on an error metric and represents a significant advancement over these earlier methods by allowing the filter parameters to continuously adapt to the changing acoustic environment in real-time.
This adaptability makes \ac{ANR} particularly suitable for hearing devices, where environmental noise characteristics vary constantly.\\ \\ Static filters, like low- and high-pass filters, as described in the previous chapter, feature coefficients that remain constant over time. They are designed for known, predictable noise conditions (e.g., removing a steady 50 Hz hum). While these filters are efficient and easy to implement, they fail to function when noise characteristics change dynamically.\\ \\ -Although active noise cancellation and adaptive noise reduction share obvious similarities, they differ fundamentally in their application and signal structure. While active noise cancellation aims to physically cancel noise in the acoustic domain — typically before, or at the time, the signal reaches the ear — ANR operates within the signal processing chain, attempting to extract the noisy component from the digital signal. In cochlear implant systems, the latter is more practical because the acoustic waveform is converted into electrical stimulation signals; thus, signal-domain filtering is the only feasible approach. +Although active noise cancellation and adaptive noise reduction share obvious similarities, they differ fundamentally in their application and signal structure. While active noise cancellation aims to physically cancel noise in the acoustic domain — typically before, or at the time, the signal reaches the ear — \ac{ANR} operates within the signal processing chain, attempting to remove the noise component from the digital signal. In cochlear implant systems, the latter is more practical because the acoustic waveform is converted into electrical stimulation signals; thus, signal-domain filtering is the only feasible approach.
\begin{figure}[H] \centering \includegraphics[width=0.8\linewidth]{Bilder/fig_anr.jpg} \caption{The basic idea of an adaptive filter design for noise reduction.} \label{fig:fig_anr} \end{figure} -\noindent Figure \ref{fig:fig_anr} shows the basic concept of an adaptive filter design, represented through a feedback filter application. The primary sensor (top) aims to receive the desired signal and outputs the corrupted signal $d[n]$, which consists out of the recorded desired signal $s[n]$ and the recorded corruption noise signal $n[n]$, whereas the secondary signal sensor aims to receive (ideally) only the noise signal and outputs the recorded reference noise signal $x[n]$, which then feeds the adaptive filter. We assume at this point, that the corruption noise signal is uncorrelated to the desired signal, and therefore separable from it. In addition, we assume, that the corruption noise signal is correlated to the reference noise signal, as it originates from the same source, but takes a different signal path. \\ \\ The adaptive filter removes a certain, noise-related, frequency part of the input signal and re-evaluates the output through its feedback design. The filter parameters are then adjusted and applied to the next sample to minimize the observed error $e[n]$, which also represents the approximated desired signal $š[n]$. In reality, a signal contamination of the two sensors has to be expected, which will be illustrated in a more realistic signal flow diagram of an implanted CI system in chapter 2.6. +\noindent Figure \ref{fig:fig_anr} shows the basic concept of an adaptive filter design, represented through a feedback filter application. 
The primary sensor (top) aims to receive the desired signal and outputs the corrupted signal $d[n]$, which consists of the recorded desired signal $s[n]$ and the recorded corruption noise signal $n[n]$, whereas the secondary signal sensor aims to receive (ideally) only the noise signal and outputs the recorded reference noise signal $x[n]$, which then feeds the adaptive filter. We assume at this point that the corruption noise signal is uncorrelated with the desired signal, and therefore separable from it. In addition, we assume that the corruption noise signal is correlated with the reference noise signal, as it originates from the same source but takes a different signal path. \\ \\ The adaptive filter removes a certain, noise-related, frequency part of the input signal and re-evaluates the output through its feedback design. The filter parameters are then adjusted and applied to the next sample to minimize the observed error $e[n]$, which also represents the approximated desired signal $\hat{s}[n]$. In reality, a signal contamination of the two sensors has to be expected, which will be illustrated in a more realistic signal flow diagram of an implanted \ac{CI} system in chapter 2.6. \subsubsection{Fully adaptive vs. hybrid filter design} -The basic ANR concept illustrated in Figure \ref{fig:fig_anr} can be understood as a fully adaptive variant. A fully adaptive filter design works with a fixed number of coefficients of which everyone is updated after every sample processing. Even if this approach features the best performance in noise reduction, it also requires a relatively high amount of computing power, as every coefficient has to be re-calculated after every evaluation step.\\ \\ +The basic \ac{ANR} concept illustrated in Figure \ref{fig:fig_anr} can be understood as a fully adaptive variant. A fully adaptive filter design works with a fixed number of coefficients, each of which is updated after every processed sample.
Although this approach features the best performance in noise reduction, it also requires a relatively high amount of computing power, as every coefficient has to be re-calculated after every evaluation step.\\ \\ To reduce the required computing power, a hybrid static/adaptive filter design can be taken into consideration instead (refer to Figure \ref{fig:fig_anr_hybrid}). In this approach, the initial fully adaptive filter is split into a fixed and an adaptive part - the static filter removes a certain, known or estimated, frequency portion of the noise signal, whereas the adaptive part only has to adapt to the remaining, unpredictable, noise parts. This approach reduces the number of coefficients required to be adapted, therefore lowering the required computing power. \begin{figure}[H] \centering @@ -152,12 +152,12 @@ As we will see in the following chapters, a real world application of an adaptiv The goal of the adaptive filter is therefore to minimize this error signal over time, thereby improving the quality of the output signal by removing its noise component.\\ The minimization of the error signal $e[n]$ can be achieved by applying different error metrics and algorithms used to evaluate the performance of an adaptive filter, including: \begin{itemize} -\item Mean Squared Error (MSE): This metric calculates the averaged square of the error between the expected value and the observed value over a predefined period. It is sensitive to large errors and is commonly used in adaptive filtering applications. -\item Least Mean Squares (LMS): The LMS is an algorithm, focused on minimizing the mean squared error by adjusting the filter coefficients iteratively based on the error signal by applying the gradient descent method. It is computationally efficient and widely used in real-time applications. -\item Normalized Least Mean Squares (NLMS): An extension of the LMS algorithm that normalizes the step size based on the input signal, improving convergence speed.
-\item Recursive Least Squares (RLS): This algorithm aims to minimize the weighted sum of squared errors, providing faster convergence than the LMS algorithm but at the cost of higher computational effort. +\item \ac{MSE}: This metric calculates the averaged square of the error between the expected value and the observed value over a predefined period. It is sensitive to large errors and is commonly used in adaptive filtering applications. +\item \ac{LMS}: An algorithm that iteratively minimizes the mean squared error by adjusting the filter coefficients based on the error signal using the gradient descent method. It is computationally efficient and widely used in real-time applications. +\item \ac{NLMS}: An extension of the \ac{LMS} algorithm that normalizes the step size based on the input signal, improving convergence speed. +\item \ac{RLS}: This algorithm aims to minimize the weighted sum of squared errors, providing faster convergence than the \ac{LMS} algorithm but at the cost of higher computational effort. \end{itemize} -As computational efficiency is a key requirement for the implementation of real-time ANR on a low-power DSP, the Least Mean Squares algorithm is chosen for the minimization of the error signal and therefore will be further explained in the following subchapter. +As computational efficiency is a key requirement for the implementation of real-time \ac{ANR} on a low-power \ac{DSP}, the Least Mean Squares algorithm is chosen for the minimization of the error signal and will therefore be further explained in the following subchapter. \subsubsection{The Wiener filter and the concept of gradient descent} Before the Least Mean Squares algorithm can be explained in detail, the Wiener filter and the concept of gradient descent have to be introduced. \\ \\ @@ -244,29 +244,29 @@ where $\mu$ is the constant step size determining the rate of convergence.
Figur \label{fig:fig_w_opt} \end{figure} \subsubsection{The Least Mean Squares algorithm} -The given approach of the steepest decent algorithm in the subchapter above still involves the calculation of the derivative of the MSE $\frac{dJ}{dw}$, which is also a computational expensive operation to calculate, as it requires knowledge of the statistical properties of the input signals (cross-correlation P and auto-correlation R). Therefore, in energy critical real-time applications, like the implementation of ANR on a low-power DSP, a sample-based approximation in form of the Least Mean Squares (LMS) algorithm is used instead. The LMS algorithm approximates the gradient of the MSE by using the instantaneous estimates of the cross-correlation and auto-correlation. To achieve this, we remove the statistical expectation out of the MSE $J$ and take the derivative to obtain a samplewise approximate of $\frac{dJ}{dw[n]}$. +The given approach of the steepest descent algorithm in the subchapter above still involves the calculation of the derivative of the \ac{MSE}, $\frac{dJ}{dw}$, which is a computationally expensive operation, as it requires knowledge of the statistical properties of the input signals (cross-correlation $P$ and auto-correlation $R$). Therefore, in energy-critical real-time applications, like the implementation of \ac{ANR} on a low-power \ac{DSP}, a sample-based approximation in the form of the \ac{LMS} algorithm is used instead. The \ac{LMS} algorithm approximates the gradient of the \ac{MSE} by using instantaneous estimates of the cross-correlation and auto-correlation. To achieve this, we remove the statistical expectation from the \ac{MSE} $J$ and take the derivative to obtain a samplewise approximation of $\frac{dJ}{dw[n]}$.
\begin{gather}
\label{equation_j_lms}
J = e[n]^2 = (d[n]-w[n]x[n])^2 \\
\label{equation_j_lms_final}
\frac{dJ}{dw[n]} = 2(d[n]-w[n]x[n])\frac{d(d[n]-w[n]x[n])}{dw[n]} = -2e[n]x[n]
\end{gather}
-The result of Equation \ref{equation_j_lms_final} can now be inserted into Equation \ref{equation_gradient} to receive the LMS update rule for the filter coefficients:
+The result of Equation \ref{equation_j_lms_final} can now be inserted into Equation \ref{equation_gradient} to obtain the \ac{LMS} update rule for the filter coefficients:
\begin{equation}
\label{equation_lms}
w[n+1] = w[n] + 2\mu e[n]x[n]
\end{equation}
-The LMS algorithm therefore updates the filter coefficients $w[n]$ after every sample by adding a correction term, which is calculated by the error signal $e[n]$ and the reference noise signal $x[n]$, scaled by the constant step size $\mu$. By iteratively applying the LMS algorithm, the filter coefficients converge towards the optimal values that minimize the mean squared error between the desired signal and the filter output. When a predefined acceptable error level is reached, the adaptation process can be stopped to save computing power.\\ \\
+The \ac{LMS} algorithm therefore updates the filter coefficients $w[n]$ after every sample by adding a correction term, which is calculated from the error signal $e[n]$ and the reference noise signal $x[n]$, scaled by the constant step size $\mu$. By iteratively applying the \ac{LMS} algorithm, the filter coefficients converge towards the optimal values that minimize the mean squared error between the desired signal and the filter output.
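To make the sample-wise update concrete, a minimal Python sketch of one \ac{LMS} iteration follows (illustrative only, not the thesis implementation; the 2-tap "unknown system" $h$ below is a hypothetical example, and the positive sign of the correction term results from inserting the negative instantaneous gradient into the gradient-descent step):

```python
import numpy as np

def lms_step(w, x_buf, d, mu):
    """One sample-wise LMS iteration: filter, error, coefficient update."""
    y = np.dot(w, x_buf)            # adaptive FIR output (noise estimate)
    e = d - y                       # error signal e[n] = d[n] - y[n]
    w = w + 2.0 * mu * e * x_buf    # correction term 2*mu*e[n]*x[n]
    return w, e

# Demonstration: the coefficients converge towards an unknown 2-tap system h.
rng = np.random.default_rng(0)
h = np.array([0.5, -0.3])                   # hypothetical optimal coefficients
x = rng.standard_normal(5000)               # reference noise signal
w = np.zeros(2)
for n in range(1, len(x)):
    x_buf = np.array([x[n], x[n - 1]])      # newest sample first
    d = np.dot(h, x_buf)                    # "corrupted" signal (noise only here)
    w, e = lms_step(w, x_buf, d, mu=0.01)
print(w)  # converges towards [0.5, -0.3]
```

With no measurement noise the error decays towards zero and the coefficients settle on the optimal values, mirroring the convergence behaviour described above.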
When a predefined acceptable error level is reached, the adaptation process can be stopped to save computing power.\\ \\ \subsection{Signal flow diagram of an implanted cochlear implant system} - Now equipped with the necessary theoretical background about signal processing, adaptive noise reduction and the LMS algorithm, a realistic signal flow diagram with the relevant transfer functions of an implanted cochlear implant system can be designed, which will serve as the basis for the implementation of ANR on a low-power digital signal processor. + Now equipped with the necessary theoretical background about signal processing, adaptive noise reduction and the \ac{LMS} algorithm, a realistic signal flow diagram with the relevant transfer functions of an implanted cochlear implant system can be designed, which will serve as the basis for the implementation of \ac{ANR} on a low-power digital signal processor. \begin{figure}[H] \centering \includegraphics[width=1.1\linewidth]{Bilder/fig_anr_implant.jpg} \caption{Realistic implant design.} \label{fig:fig_anr_implant} \end{figure} -\noindent Figure \ref{fig:fig_anr_hybrid} showed us the basic concept of an ANR implementation, without a detailed description how the corrupted signal $d[n]$ and the reference noise signal $x[n]$ are formed. Figure \ref{fig:fig_anr_implant} now shows a more complete and realistic signal flow diagram of an implanted cochlear implant system, with two signal sensors and an adaptive noise reduction circuit afterwards. 
The primary sensor receives the desired- and noise signal over their respective transfer functions and outputs the corrupted signal $d[n]$, which consists out of the recorded desired signal $s[n]$ and the recorded corruption noise signal $n[n]$, whereas the noise signal sensor aims to receive (ideally) only the noise signal $v[n]$ over its transfer function and outputs the reference noise signal $x[n]$, which then feeds the adaptive filter.\\ \\
-Additionally, now the relevant transfer functions of the overall system are illustrated in Figure \ref{fig:fig_anr_implant}. The transfer functions $C_n$, $D_n$, and $E_n$ describe the path from the signal sources to the cochlear implant system. As the sources, the relative location of the user to the sources and the medium bewteen them can vary, these transfer functions are time-variant and unknown. After the signals reached the implant systems, we establish the possibility, that the remaining path of the signals is mainly depented on the sensitivity curve of the respective sensors and therefore can be seen as time-invariant and known. This known transfer functions, which are titled $A$ and $B$, allow us to apply an hybrid static/adaptive filter design for the ANR implementation, as described in chapter 2.5.2.\\ \\
+\noindent Figure \ref{fig:fig_anr_hybrid} showed the basic concept of an \ac{ANR} implementation, without a detailed description of how the corrupted signal $d[n]$ and the reference noise signal $x[n]$ are formed. Figure \ref{fig:fig_anr_implant} now shows a more complete and realistic signal flow diagram of an implanted cochlear implant system, with two signal sensors and an adaptive noise reduction circuit afterwards.
The primary sensor receives the desired and noise signals over their respective transfer functions and outputs the corrupted signal $d[n]$, which consists of the recorded desired signal $s[n]$ and the recorded corruption noise signal $n[n]$, whereas the noise signal sensor aims to receive (ideally) only the noise signal $v[n]$ over its transfer function and outputs the reference noise signal $x[n]$, which then feeds the adaptive filter.\\ \\
+Additionally, the relevant transfer functions of the overall system are now illustrated in Figure \ref{fig:fig_anr_implant}. The transfer functions $C_n$, $D_n$, and $E_n$ describe the path from the signal sources to the cochlear implant system. As the sources, the relative location of the user to the sources, and the medium between them can vary, these transfer functions are time-variant and unknown. Once the signals have reached the implant system, we assume that the remaining path of the signals depends mainly on the sensitivity curve of the respective sensors and can therefore be seen as time-invariant and known. These known transfer functions, titled $A$ and $B$, allow us to apply a hybrid static/adaptive filter design for the \ac{ANR} implementation, as described in chapter 2.5.2.\\ \\
\begin{equation}
\label{equation_dn}
d[n] = s[n] + n[n] = t[n] * (C_nA) + v[n] * (D_nA)
@@ -279,7 +279,7 @@ x[n] = v[n] * (E_nB)
\end{equation}
where $v[n]$ is the noise signal at its source.\\ \\
Another possible signal interaction could be the leakage of the desired signal into the secondary sensor, leading to the partial removal of the desired signal from the output signal.
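The formation of $d[n]$ and $x[n]$ from the source signals can be illustrated with a short Python sketch; the impulse responses below are hypothetical stand-ins for the combined, generally unknown paths $C_nA$, $D_nA$ and $E_nB$:

```python
import numpy as np

rng = np.random.default_rng(1)
t_src = rng.standard_normal(1000)   # desired signal t[n] at its source
v_src = rng.standard_normal(1000)   # noise signal v[n] at its source

# Hypothetical short impulse responses standing in for the combined paths
# C_n*A, D_n*A and E_n*B (the real paths are unknown and time-variant).
h_CA = np.array([1.0, 0.2])
h_DA = np.array([0.7, -0.1])
h_EB = np.array([0.9, 0.3])

s = np.convolve(t_src, h_CA)[:1000]   # recorded desired signal s[n]
n = np.convolve(v_src, h_DA)[:1000]   # recorded corruption noise n[n]
d = s + n                             # primary sensor output d[n]
x = np.convolve(v_src, h_EB)[:1000]   # secondary sensor output x[n]
```

Because the noise passes different paths to the two sensors, $x[n]$ is not identical to $n[n]$ and cannot simply be subtracted from $d[n]$; the adaptive filter has to learn the mapping between the two noise paths.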
This case is not illustrated in Figure \ref{fig:fig_anr_implant} as it won't be further evaluated in this thesis, but shall be mentioned for the sake of completeness.\\ \\
-At this point, the theoretical background and the fundamentals of adaptive noise reduction have been adequately introduced and explained as necessary for the understanding of the following chapters of this thesis. The next chapter will now focus on practical high level simulations of different filter concepts and LMS algorithm variations to evaluate their performance in regard of noise reduction quality before the actual implementation on a low-power digital signal processor is conducted.
+At this point, the theoretical background and the fundamentals of adaptive noise reduction have been introduced and explained as necessary for the understanding of the following chapters of this thesis. The next chapter will now focus on practical high-level simulations of different filter concepts and \ac{LMS} algorithm variations to evaluate their performance with regard to noise reduction quality before the actual implementation on a low-power digital signal processor is conducted.
diff --git a/chapter_03.tex b/chapter_03.tex
index 9d40bd1..36646da 100644
--- a/chapter_03.tex
+++ b/chapter_03.tex
@@ -1,8 +1,8 @@
\section{High level simulations}
-The main purpose of the high-level simulations is to verify and demonstrate the theoretical approach of the previous chapters and to evaluate the performance of the proposed algorithms under various conditions. The following simulations include different scenarios such as, different types of noise signals and different cosniderations of transfer functions.
The goal is to verify different approaches before taking the step to the implementation of said algorithms on the low-power DSP.\\ \\
+The main purpose of the high-level simulations is to verify and demonstrate the theoretical approach of the previous chapters and to evaluate the performance of the proposed algorithms under various conditions. The following simulations include different scenarios, such as different types of noise signals and different considerations of transfer functions. The goal is to verify different approaches before taking the step to the implementation of said algorithms on the low-power \ac{DSP}.\\ \\
The implementation is conducted in Python, which provides a flexible environment for numerical computations and data visualization. The simulation is graphically represented using the Python library Matplotlib, allowing for clear visualization of the results.
-\subsection{ANR algorithm implementation}
-The high-level implementation of the ANR algorithm follows the theoretical framework outlined in Subchapter 2.5, specificially Equation \ref{equation_lms}. The algorithm is designed to adaptively filter out noise from a desired signal using a reference noise input. The implementation of the ANR function includes the following key steps:
+\subsection{\ac{ANR} algorithm implementation}
+The high-level implementation of the \ac{ANR} algorithm follows the theoretical framework outlined in Subchapter 2.5, specifically Equation \ref{equation_lms}. The algorithm is designed to adaptively filter out noise from a desired signal using a reference noise input. The implementation of the \ac{ANR} function includes the following key steps:
\begin{itemize}
\item Initialization: Define vectors to store the filter coefficients, the output samples, and the updated filter coefficients over time.
\item Filtering Process: Once enough input samples (equal to the number of filter coefficients) have passed through the filter, the filter coefficients are, for each input sample, multiplied with the corresponding reference noise samples and summed in an accumulator.
@@ -10,11 +10,11 @@ The high-level implementation of the ANR algorithm follows the theoretical frame
\item Coefficient Update: The filter coefficients are updated by the corrector, which consists of the error signal scaled by the step size. The adaptation step parameter controls how often the coefficients are updated.
\item Iteration: Repeat the process for all samples in the input signal.
\end{itemize}
-The flow diagram in Figure \ref{fig:fig_anr_logic} illustrates the logical flow of the ANR algorithm, while the code snippet in Figure \ref{fig:fig_anr_code} provides the concrete code implementation of the ANR-function.
+The flow diagram in Figure \ref{fig:fig_anr_logic} illustrates the logical flow of the \ac{ANR} algorithm, while the code snippet in Figure \ref{fig:fig_anr_code} provides the concrete code implementation of the \ac{ANR} function.
\begin{figure}[H]
\centering
\includegraphics[width=0.9\linewidth]{Bilder/fig_anr_logic.jpg}
- \caption{Flow diagram of the code implementation of the ANR algrotihm.}
+ \caption{Flow diagram of the code implementation of the \ac{ANR} algorithm.}
\label{fig:fig_anr_logic}
\end{figure}
\begin{figure}[H]
@@ -42,14 +42,14 @@ The flow diagram in Figure \ref{fig:fig_anr_logic} illustrates the logical flow
return output, coefficient_matrix
\end{lstlisting}
\label{fig:fig_anr_code}
- \caption{High-level implementation of the ANR algorithm in Python}
+ \caption{High-level implementation of the \ac{ANR} algorithm in Python}
\end{figure}
-\subsection{Simple ANR usecases}
-To evaltuate the general functionality and performance of the ANR algorithm from Figure \ref{fig:fig_anr_code} a set of three simple, artificial scenarios are introduced.
These examples shall serve as a showcase to demonstrate the general functionality, the possibilities and the limitations of the ANR algorithm. In contrary to a more complex and realistic setup, which will be reviewed afterwards, the clean signals are available, which is in a realistic application not the case.\\ \\
+\subsection{Simple \ac{ANR} usecases}
+To evaluate the general functionality and performance of the \ac{ANR} algorithm from Figure \ref{fig:fig_anr_code}, a set of three simple, artificial scenarios is introduced. These examples shall serve as a showcase to demonstrate the general functionality, the possibilities and the limitations of the \ac{ANR} algorithm. In contrast to the more complex and realistic setup reviewed afterwards, the clean signals are available, which is not the case in a realistic application.\\ \\
In all three scenarios, a chirp signal with a frequency range from 100-1000 Hz is used as the desired signal, which is then corrupted with a sine wave (Usecase 1 and 2) or Gaussian white noise (Usecase 3) as the noise signal. In this simple setup, the corruption noise signal is also available as the reference noise signal. Every approach is conducted with 16 filter coefficients and a step size of 0.01. The four graphs in the respective first plot show the desired signal, the corrupted signal, the reference noise signal and the filter output. The two graphs in the respective second plot show the performance of the filter in the form of the resulting error signal and the evolution of three filter coefficients over time.\\ \\
-\noindent This artificial setup could be solved analitically, as the signals do not pass seperate, different transfer functions. This means, that the reference noise signal is the same as the corruption noise signal. This simple setup would not require an adaptive filter approach, but it nevertheless allows to clearly evaluate the performance of the ANR algorithm in different scenarios.
Also, due to the fact that the desired signal is known, it is possible to graphically evaluate the performance of the algorithm in a simple way.
+\noindent This artificial setup could be solved analytically, as the signals do not pass separate transfer functions. This means that the reference noise signal is the same as the corruption noise signal. This simple setup would not require an adaptive filter approach, but it nevertheless allows a clear evaluation of the performance of the \ac{ANR} algorithm in different scenarios. Also, because the desired signal is known, it is possible to graphically evaluate the performance of the algorithm in a simple way.
\subsubsection{Simple usecase 1: Sine noise at 2000 Hz}
-In the first usecase, a sine wave with a frequency of 2000 Hz, which lies outside the frequency spectrum of the chirp, is used as noise signal to corrupt the desired signal. The shape of the initial desired signal is still clearly recognizeable, even if its shape is affected in the higher frequency area. The filter output in Figure \ref{fig:fig_plot_1_sine_1.png} shows a statisfying performance of the ANR algorithm, as the noise is almost completely removed from the corrupted signal after the filter coefficients have adapted.
+In the first usecase, a sine wave with a frequency of 2000 Hz, which lies outside the frequency spectrum of the chirp, is used as the noise signal to corrupt the desired signal. The shape of the initial desired signal is still clearly recognizable, even if it is affected in the higher frequency area. The filter output in Figure \ref{fig:fig_plot_1_sine_1.png} shows a satisfying performance of the \ac{ANR} algorithm, as the noise is almost completely removed from the corrupted signal after the filter coefficients have adapted.
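The test signals of this first usecase can be reproduced with a few lines of Python (the sampling rate below is an assumption, as the text does not state one):

```python
import numpy as np

fs = 8000                                    # assumed sampling rate (not stated in the text)
t = np.arange(0, 1.0, 1 / fs)
f0, f1 = 100.0, 1000.0                       # chirp sweep range from the text
phase = 2 * np.pi * (f0 * t + (f1 - f0) * t**2 / 2)  # linear 1 s chirp phase
desired = np.sin(phase)                      # desired signal (100-1000 Hz chirp)
noise = 0.5 * np.sin(2 * np.pi * 2000 * t)   # 2000 Hz sine noise of usecase 1
corrupted = desired + noise                  # corrupted signal d[n]
reference = noise.copy()                     # reference noise x[n], identical to the corruption here
```

These arrays, together with 16 coefficients and a step size of 0.01, would then be fed to the \ac{ANR} function.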
\begin{figure}[H]
\centering
\includegraphics[width=1.0\linewidth]{Bilder/fig_plot_1_sine_1.png}
@@ -64,14 +64,14 @@ In the first usecase, a sine wave with a frequency of 2000 Hz, which lies outsid
\label{fig:fig_plot_2_sine_1.png}
\end{figure}
\subsubsection{Simple usecase 2: Sine noise at 500 Hz}
-The second usecase resembles the first one, but instead of a 2000 Hz sine wave, a sine wave with a frequency of 500 Hz is used as noise signal. This means, that the noise signal now overlaps with the frequency spectrum of the chirp signal, making the noise cancellation task more challenging, as an osciillation beacon in the area of 500 Hz appears. Also, in contrary to usecase 1, the shape of the initial chirp is now far less recognizebale. The filter output in Figure \ref{fig:fig_plot_1_sine_2.png} indicates that the ANR algorithm is still able to significantly reduce the noise from the corrputed signal,
+The second usecase resembles the first one, but instead of a 2000 Hz sine wave, a sine wave with a frequency of 500 Hz is used as the noise signal. This means that the noise signal now overlaps with the frequency spectrum of the chirp signal, making the noise cancellation task more challenging, as an oscillation around 500 Hz appears. Also, in contrast to usecase 1, the shape of the initial chirp is now far less recognizable. The filter output in Figure \ref{fig:fig_plot_1_sine_2.png} indicates that the \ac{ANR} algorithm is still able to significantly reduce the noise from the corrupted signal.
\begin{figure}[H]
\centering
\includegraphics[width=1.0\linewidth]{Bilder/fig_plot_1_sine_2.png}
\caption{Desired signal, corrupted signal, reference noise signal and filter output of simple usecase 2}
\label{fig:fig_plot_1_sine_2.png}
\end{figure}
-\noindent Figure \ref{fig:fig_plot_2_sine_2.png} shows a significant increase of the amplitude of the error signal compared to Usecase 1, especially around the 500 Hz frequency of the noise signal.
Also the adaption of the coefficients shows far more variance compared to Usecase 1, with a complete rearrangement in the area of 500 Hz. This indicates that the ANR algorithm is struggling to adapt effectively in a scenario, where the noise signal overlaps with the desired signal.
+\noindent Figure \ref{fig:fig_plot_2_sine_2.png} shows a significant increase of the amplitude of the error signal compared to Usecase 1, especially around the 500 Hz frequency of the noise signal. The adaptation of the coefficients also shows far more variance compared to Usecase 1, with a complete rearrangement in the area of 500 Hz. This indicates that the \ac{ANR} algorithm struggles to adapt effectively in a scenario where the noise signal overlaps with the desired signal.
\begin{figure}[H]
\centering
\includegraphics[width=1.0\linewidth]{Bilder/fig_plot_2_sine_2.png}
@@ -79,7 +79,7 @@ The second usecase resembles the first one, but instead of a 2000 Hz sine wave,
\label{fig:fig_plot_2_sine_2.png}
\end{figure}
\subsubsection{Simple usecase 3: Gaussian white noise}
-The last on of our three simplified usecases involves the use of a gaussian white noise signal as the noise signal to corrupt the desired signal. This scenario represents a more complex situation, as white noise contains a broad spectrum of frequencies and is not deterministic, making it more challenging for the ANR algorithm to effectively generate a clean output. Nevertheless, the filter output in Figure \ref{fig:fig_plot_1_noise.png} demonstrates that the ANR algorithm is capable of significantly reducing the noise from the desired signal, although the amplitude of the filter output varies, indicating difficulties adapting due to the broad frequency spectrum of the noise.
+The last one of our three simplified usecases involves the use of a Gaussian white noise signal as the noise signal to corrupt the desired signal.
This scenario represents a more complex situation, as white noise contains a broad spectrum of frequencies and is not deterministic, making it more challenging for the \ac{ANR} algorithm to effectively generate a clean output. Nevertheless, the filter output in Figure \ref{fig:fig_plot_1_noise.png} demonstrates that the \ac{ANR} algorithm is capable of significantly reducing the noise in the corrupted signal, although the amplitude of the filter output varies, indicating difficulties adapting due to the broad frequency spectrum of the noise.
\begin{figure}[H]
\centering
\includegraphics[width=1.0\linewidth]{Bilder/fig_plot_1_noise.png}
@@ -93,25 +93,25 @@ The last on of our three simplified usecas
\caption{Error signal and filter coefficient evolution of simple usecase 3}
\label{fig:fig_plot_2_noise.png}
\end{figure}
-\subsection{Intermediate ANR usecase}
-After the general functionality of the ANR algorithm has been verified with the above simple and artificial usecases, a more complex and intermediate scenario is now introduced. In this usecase, a real-world audio track of a person speaking on TV (see top graph in Figure \ref{fig:fig_plot_1_wav.png}) is used as the desired signal, which is then corrupted with a dominant breathing noise as the noise signal. This scenario represents a more realistic application of the ANR algorithm, as it involves complex audio signals with varying frequency components and relatively high dynamics, but still keeps the advantage of having the clean signal available for performance evaluation. Also, again, the same noise which corrputs the desired signal is used as the reference noise signal, as no transfer functionsare applied on the signals.
+\subsection{Intermediate \ac{ANR} usecase}
+After the general functionality of the \ac{ANR} algorithm has been verified with the above simple and artificial usecases, a more complex and intermediate scenario is now introduced.
In this usecase, a real-world audio track of a person speaking on TV (see top graph in Figure \ref{fig:fig_plot_1_wav.png}) is used as the desired signal, which is then corrupted with a dominant breathing noise as the noise signal. This scenario represents a more realistic application of the \ac{ANR} algorithm, as it involves complex audio signals with varying frequency components and relatively high dynamics, but still keeps the advantage of having the clean signal available for performance evaluation. Also, again, the same noise which corrupts the desired signal is used as the reference noise signal, as no transfer functions are applied to the signals.
\begin{figure}[H]
\centering
\includegraphics[width=1.0\linewidth]{Bilder/fig_plot_1_wav.png}
- \caption{Desired signal, corrputed signal, reference noise signal and filter output of the intermediate ANR usecase}
+ \caption{Desired signal, corrupted signal, reference noise signal and filter output of the intermediate \ac{ANR} usecase}
\label{fig:fig_plot_1_wav.png}
\end{figure}
-\noindent The filter output in Figure \ref{fig:fig_plot_1_wav.png} indicates already graphically, that the audio track of the person speaking is significantly more intelligible after the application of the ANR algorithm - the prominent breathing noise is clearly reduced in the filter output compared to the corrupted signal.
+\noindent The filter output in Figure \ref{fig:fig_plot_1_wav.png} already indicates graphically that the audio track of the person speaking is significantly more intelligible after the application of the \ac{ANR} algorithm: the prominent breathing noise is clearly reduced in the filter output compared to the corrupted signal.
\begin{figure}[H] \centering \includegraphics[width=1.0\linewidth]{Bilder/fig_plot_2_wav.png} - \caption{Error signal and filter coefficient evolution of the intermediate ANR usecase} + \caption{Error signal and filter coefficient evolution of the intermediate \ac{ANR} usecase} \label{fig:fig_plot_2_wav.png} \end{figure} -\noindent The error signal in Figure \ref{fig:fig_plot_2_wav.png} confirms the function of the algorithm and shows peaks corresponding to the spikes in the breathing noise, indicating the the moments, when the ANR algorithm is setting its coeffcients again to adapt to the changing noise characteristics. It makes sense, that the adaption of the filter coefficients causes repeating spikes in the error signal, as the noise signal now is not static or periodic, but rather dynamic and changing it frequenc and amplitude over time. -\subsection{Complex ANR usecase} -To close the topic of high-level simulations of the ANR algorithm, a more complex and realistic usecase is finally introduced. In this scenario, the same two audio tracks of the previous usecase are used - but now they pass different transfer functions. Now, an analitical solution is not possible anymore, as the transfer functions affect the signals in different ways, making it impossible to simply subtract the noise signal from the corrupted signal. 
This scenario represents a more realistic application of the ANR algorithm, as it involves complex audio signals with varying frequency components and dynamics, as well as different transfer functions affecting the signals.\\ \\
-Different transfer functions represent the reality of different sensors recording the corrupted signal and the reference noise signal with a specific frequency response characteristic - this circumstance is especially important, as later a fixed set of filter coefficients shall take care of the predictable part of the signal to reduce the computing power of the DSP.\\
+\noindent The error signal in Figure \ref{fig:fig_plot_2_wav.png} confirms the function of the algorithm and shows peaks corresponding to the spikes in the breathing noise, indicating the moments when the \ac{ANR} algorithm readjusts its coefficients to adapt to the changing noise characteristics. It is plausible that the adaptation of the filter coefficients causes repeating spikes in the error signal, as the noise signal is now not static or periodic, but dynamic, changing its frequency and amplitude over time.
+\subsection{Complex \ac{ANR} usecase}
+To close the topic of high-level simulations of the \ac{ANR} algorithm, a more complex and realistic usecase is finally introduced. In this scenario, the same two audio tracks of the previous usecase are used, but now they pass different transfer functions. An analytical solution is no longer possible, as the transfer functions affect the signals in different ways, making it impossible to simply subtract the noise signal from the corrupted signal.
This scenario represents a more realistic application of the \ac{ANR} algorithm, as it involves complex audio signals with varying frequency components and dynamics, as well as different transfer functions affecting the signals.\\ \\
+Different transfer functions represent the reality of different sensors recording the corrupted signal and the reference noise signal, each with a specific frequency response characteristic: this circumstance is especially important, as later a fixed set of filter coefficients shall take care of the predictable part of the signal to reduce the computing power of the \ac{DSP}.\\
Therefore, the audio tracks from the previous example are now convolved with different transfer functions, which mimic the case that the sensor recording the corrupted signal has a different frequency response characteristic than the one recording the reference noise signal. This means that the reference noise signal now differs from the noise signal corrupting the desired signal, making adaptive noise reduction the only feasible approach to reduce the noise in the corrupted signal.
\begin{figure}[H]
\centering
@@ -126,18 +126,18 @@ Therefore, the audio tracks from the previous example are now convolved with dif
\caption{The raw noise signal recorded with two different sensors, showing the effect of different transfer functions on the signal}
\label{fig:fig_plot_4_wav_complex.png}
\end{figure}
-\noindent To evaluate the performance of the ANR algorithm in this complex scenario, the corrupted signal is recorded with the primary sensor while the reference noise signal is recorded with secondary sensor. The filter output in Figure \ref{fig:fig_plot_1_wav_complex.png} indicates, that the ANR algorithm is still capable of significantly reducing the noise from the corrupted signal, even with only a different reference noise signal available to adapt the filter coefficients.
+\noindent To evaluate the performance of the \ac{ANR} algorithm in this complex scenario, the corrupted signal is recorded with the primary sensor while the reference noise signal is recorded with the secondary sensor. The filter output in Figure \ref{fig:fig_plot_1_wav_complex.png} indicates that the \ac{ANR} algorithm is still capable of significantly reducing the noise from the corrupted signal, even with only a different reference noise signal available to adapt the filter coefficients.
\begin{figure}[H]
\centering
\includegraphics[width=1.0\linewidth]{Bilder/fig_plot_1_wav_complex.png}
- \caption{Desired signal, corrputed signal, reference noise signal and filter output of the complex ANR usecase}
+ \caption{Desired signal, corrupted signal, reference noise signal and filter output of the complex \ac{ANR} usecase}
\label{fig:fig_plot_1_wav_complex.png}
\end{figure}
-\noindent The error signal in Figure \ref{fig:fig_plot_2_wav_complex.png} shows only a minor increase in amplitude compared to the previous intermediate usecase, indicating that the ANR algorithm is effectively adapting its filter coefficients.
+\noindent The error signal in Figure \ref{fig:fig_plot_2_wav_complex.png} shows only a minor increase in amplitude compared to the previous intermediate usecase, indicating that the \ac{ANR} algorithm is effectively adapting its filter coefficients.
\begin{figure}[H]
\centering
\includegraphics[width=1.0\linewidth]{Bilder/fig_plot_2_wav_complex.png}
- \caption{Error signal and filter coefficient evolution of the complex ANR usecase}
+ \caption{Error signal and filter coefficient evolution of the complex \ac{ANR} usecase}
\label{fig:fig_plot_2_wav_complex.png}
\end{figure}
-\noindent As now the functionality of the ANR algorithm has been verified in different scenarios, varying from simple to complex, the next chapter of this thesis focuses on the implementation of the algorithm on the low-power DSP.
\ No newline at end of file +\noindent As now the functionality of the \ac{ANR} algorithm has been verified in different scenarios, varying from simple to complex, the next chapter of this thesis focuses on the implementation of the algorithm on the low-power \ac{DSP}. \ No newline at end of file diff --git a/chapter_04.aux b/chapter_04.aux index d21c8f6..2c05d2b 100644 --- a/chapter_04.aux +++ b/chapter_04.aux @@ -1,9 +1,29 @@ \relax \@writefile{toc}{\contentsline {section}{\numberline {4}Hardware setup and low level simulations}{40}{}\protected@file@percent } -\@writefile{toc}{\contentsline {subsection}{\numberline {4.1}Description of the low-power DSP}{40}{}\protected@file@percent } -\@writefile{toc}{\contentsline {subsection}{\numberline {4.2}Implementation of the ANR algorithm on the DSP}{41}{}\protected@file@percent } +\acronymused{ANR} +\acronymused{ANR} +\acronymused{DSP} +\acronymused{ANR} +\acronymused{ANR} +\acronymused{DSP} +\@writefile{toc}{\contentsline {subsection}{\numberline {4.1}Description of the low-power \ac {DSP}}{40}{}\protected@file@percent } +\acronymused{DSP} +\AC@undonewlabel{acro:ALU} +\newlabel{acro:ALU}{{4.1}{40}{}{}{}} +\acronymused{ALU} +\acronymused{DSP} +\acronymused{ALU} +\AC@undonewlabel{acro:MAC} +\newlabel{acro:MAC}{{4.1}{40}{}{}{}} +\acronymused{MAC} +\acronymused{DSP} +\acronymused{DSP} +\acronymused{ANR} +\acronymused{DSP} +\@writefile{toc}{\contentsline {subsection}{\numberline {4.2}Implementation of the \ac {ANR} algorithm on the \ac {DSP}}{41}{}\protected@file@percent } \@writefile{toc}{\contentsline {subsection}{\numberline {4.3}First optimization approach: algorithm implementation}{41}{}\protected@file@percent } -\@writefile{toc}{\contentsline {subsection}{\numberline {4.4}Second optimization approach: hybrid ANR algorithm}{41}{}\protected@file@percent } +\acronymused{ANR} +\@writefile{toc}{\contentsline {subsection}{\numberline {4.4}Second optimization approach: hybrid \ac {ANR} algorithm}{41}{}\protected@file@percent 
} \@setckpt{chapter_04}{ \setcounter{page}{42} \setcounter{equation}{21} diff --git a/chapter_04.tex b/chapter_04.tex index 8f1a212..bea3498 100644 --- a/chapter_04.tex +++ b/chapter_04.tex @@ -1,16 +1,16 @@ \section{Hardware setup and low level simulations} -This section aims to be the main part of this thesis. The first subchapters describes the hardware, on which the ANR algorithm is implemented. The following subchapter describes the basic implementation of the ANR algorithm on the hardware itself and shall provide the reader with a basic understanding of its efficiency, which shall serve as a baseline for the following optimiziations.\\ -During the third chapter, this initial implementation is further optimized in order to achieve an improved real-time performance on the DSP. The last subchapter picks the final optimizations of the ANR algorithm itself as a central theme, especially with respect to the capabilites of a hybrid ANR approach. -\subsection{Description of the low-power DSP} -The DSP used for the implementation is a 32-bit fixed-point processor primarily designed for audio signal-processing applications in low-power embedded systems. It is developed using a retargetable processor design methodology and is typically programmed in C. Its highly efficient C compiler produces optimized assembly code that is comparable in performance and quality to hand-written assembly.\\ \\ -The processor is equipped with load/store architecture, meaning that, initially all operands need to be moved from the memory to the registers, before any operation can be performed. After this task is performed, the execution units (Arithmetic Logic Units (ALUs) and multiplier) can perform their oeprations on the data and write back the results into the registers. 
Finally, the results need to be explicitly moved back to the memory.\\ \\ -The DSP includes a three stage pipeline consisting of fetch, decode, and execute stages, aloowing for overlapping instruction execution and improved throughput. -The architecture is optimized for high cycle efficiency when executing computationally intensive signal-processing workloads. It features a dual Harvard load store architecture and two seperate ALUs, which enables the execution of two multiply-accumulate (MAC) operations, two memory operations (load/store) and two pointer updates in a single prcoessor cycle.\\ \\ -The DSP includes a set of registers, including +This section forms the main part of this thesis. The first subchapter describes the hardware on which the \ac{ANR} algorithm is implemented. The following subchapter describes the basic implementation of the \ac{ANR} algorithm on the hardware itself and provides the reader with a basic understanding of its efficiency, which serves as a baseline for the subsequent optimizations.\\ +In the third subchapter, this initial implementation is further optimized in order to achieve improved real-time performance on the \ac{DSP}. The last subchapter focuses on the final optimizations of the \ac{ANR} algorithm itself, especially with respect to the capabilities of a hybrid \ac{ANR} approach. +\subsection{Description of the low-power \ac{DSP}} +The \ac{DSP} used for the implementation is a 32-bit fixed-point processor primarily designed for audio signal-processing applications in low-power embedded systems. It is developed using a retargetable processor design methodology and is typically programmed in C.
Its highly efficient C compiler produces optimized assembly code that is comparable in performance and quality to hand-written assembly.\\ \\ +The processor is equipped with a load/store architecture, meaning that all operands must first be moved from memory to registers before any operation can be performed. The execution units (\ac{ALU} and multiplier) then perform their operations on the data and write the results back into the registers. Finally, the results need to be explicitly moved back to memory.\\ \\ +The \ac{DSP} includes a three-stage pipeline consisting of fetch, decode, and execute stages, allowing for overlapping instruction execution and improved throughput. +The architecture is optimized for high cycle efficiency when executing computationally intensive signal-processing workloads. It features a dual Harvard load/store architecture and two separate \ac{ALU}s, which enables the execution of two \ac{MAC} operations, two memory operations (load/store) and two pointer updates in a single processor cycle.\\ \\ +The \ac{DSP} includes a set of registers, including -Advanced addressing modes — such as cyclic and bit-reversed addressing — facilitate efficient implementation of common DSP algorithms.
Additional architectural features include hardware-supported zero-overhead looping, nested loop structures, interrupt handling, power-management mechanisms, and on-chip debugging capabilities such as JTAG, breakpoints, and watchpoints. Overall, the architecture is designed to support both control-flow operations and high-throughput signal-processing tasks within low-power embedded environments. +\subsection{Implementation of the \ac{ANR} algorithm on the \ac{DSP}} \subsection{First optimization approach: algorithm implementation} -\subsection{Second optimization approach: hybrid ANR algorithm} +\subsection{Second optimization approach: hybrid \ac{ANR} algorithm}
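The dual-Harvard, dual-\ac{MAC} datapath described in subsection 4.1 is easiest to picture with a plain-C inner loop: each iteration is one multiply-accumulate, and a dual-\ac{MAC} processor can retire two taps per cycle when the two memory loads and pointer updates issue in parallel. The sketch below is illustrative only; the filter length, Q15 data format, and function name are assumptions, not taken from the thesis, and any processor-specific intrinsics or pragmas are omitted.

```c
#include <stdint.h>

/* Illustrative Q15 fixed-point FIR inner loop (NUM_TAPS and the
 * Q-format are assumed values, not the thesis' actual parameters). */
#define NUM_TAPS 32

int16_t fir_sample(const int16_t coeff[NUM_TAPS],
                   const int16_t state[NUM_TAPS])
{
    int32_t acc = 0;                     /* 32-bit accumulator */
    for (int i = 0; i < NUM_TAPS; i++) {
        /* One MAC per tap: a dual-MAC DSP executes two of these
         * per cycle, together with the two operand loads and the
         * two pointer updates. */
        acc += (int32_t)coeff[i] * (int32_t)state[i];
    }
    return (int16_t)(acc >> 15);         /* scale back to Q15 */
}
```

With cyclic addressing, the `state` array would normally be a circular delay line updated in place, avoiding the per-sample memory shuffling a plain C ring buffer would need.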
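Since the \ac{ANR} algorithm revolves around adaptive filtering (the acronym list introduces \ac{LMS}, \ac{NLMS}, and \ac{RLS}), a minimal fixed-point \ac{LMS} update conveys the per-sample workload the \ac{DSP} must sustain: one \ac{MAC} loop for the filter output plus one loop for the weight update. This is a generic textbook sketch under assumed parameters (filter length, Q15 step size), not the thesis' implementation; saturation and rounding are omitted for brevity.

```c
#include <stdint.h>

/* Generic Q15 LMS update sketch: y = w^T x, e = d - y,
 * w[i] += mu * e * x[i]. N_TAPS and MU_Q15 are assumed values. */
#define N_TAPS 16
#define MU_Q15 328                       /* ~0.01 step size in Q15 */

void lms_update(int16_t w[N_TAPS], const int16_t x[N_TAPS], int16_t d)
{
    /* filter output y = w^T x (first MAC loop) */
    int32_t acc = 0;
    for (int i = 0; i < N_TAPS; i++)
        acc += (int32_t)w[i] * x[i];
    int16_t y = (int16_t)(acc >> 15);

    /* error and weight update (second loop); mu*e is loop-invariant */
    int16_t e = (int16_t)(d - y);
    int32_t step = ((int32_t)MU_Q15 * e) >> 15;   /* mu*e in Q15 */
    for (int i = 0; i < N_TAPS; i++)
        w[i] += (int16_t)((step * x[i]) >> 15);
}
```

Both loops are exactly the MAC-plus-load/store pattern the dual-\ac{MAC} architecture is built for, which is why the cycle budget of the \ac{ANR} implementation is dominated by the filter length.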