\section{Theoretical Background}
The following subchapters shall equip the reader with the theoretical foundations of digital signal processing needed to better understand the subsequent implementation of ANR on a low-power signal processor.\\
\\
We will begin with the fundamentals of digital signal processing in general, covering topics like signals, transfer functions and filters.\\
To fully understand ANR, a short deep dive into the concepts of Finite Impulse Response and Infinite Impulse Response filters is indispensable.\\
From this point we will continue with the history and the mathematical concepts of ANR, its real-time feedback possibilities and its use of the Least Mean Square (LMS) algorithm.\\
With this knowledge covered, we will design a realistic signal flow diagram and the corresponding transfer functions of an implanted CI system, which are essential for implementing a functioning ANR on a low-power DSP.\\
At the end of chapter two, high-level Python simulations shall serve as a practical demonstration of the presented theoretical background.\\
\\
Chapter 2 relies on the textbook ``Digital Signal Processing: Fundamentals and Applications'', 2nd edition, by Tan and Jiang \cite{source_dsp1}.

\subsection{Fundamentals of Digital Signal Processing}
Digital Signal Processing (DSP) describes the manipulation of an analog signal through mathematical operations after it has been recorded and converted into a digital form. Nearly every part of modern daily life, be it communication via cellphones, X-ray imaging or picture editing, is affected by DSP.

\subsubsection{Signals}
\begin{figure}[H]
    \centering
    \includegraphics[width=0.8\linewidth]{Bilder/fig_dsp.jpg}
    \caption{Block diagram of processing an analog input signal to an analog output signal with digital signal processing in between \cite{source_dsp_ch1}}
    \label{fig:fig_dsp}
\end{figure}
Before digital signal processing can be applied to an analog signal like voice, several preparatory steps are required. An analog signal, continuous in both time and amplitude, is passed through an initial filter, which limits the frequency bandwidth. An analog-to-digital converter then samples and quantizes the signal into a digital form, now discrete in time and amplitude. This digital signal can then be processed, before (possibly) being converted back to an analog signal (refer to Figure \ref{fig:fig_dsp}). The sampling rate defines how many samples per second are taken from the analog signal - a higher sampling rate delivers a more accurate digital representation of the signal but also uses more resources. According to the Nyquist–Shannon sampling theorem, the sampling rate must be more than twice the highest frequency component present in the signal, otherwise aliasing distorts the digital representation.\\
\\
Throughout this thesis, sampled signals are denoted in lowercase with square brackets (e.g. $x[n]$) to distinguish them from continuous-time signals (e.g. $x(t)$).\\
The discrete digital signal can be viewed as a finite sequence of samples, with each amplitude being a discrete value, like a 16- or 32-bit integer. A signal vector of length $N$, containing the $N$ most recent samples, is therefore notated as
\begin{equation}
\label{equation1}
x[n] = [x[n-N+1],x[n-N+2],...,x[n-1],x[n]]
\end{equation}
where $x[n]$ is the current sample and $x[n-1]$ is the preceding sample.
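
To make the sampling step and the vector notation of Equation \ref{equation1} more concrete, the following minimal Python sketch samples a continuous sine tone into a discrete sequence. The tone frequency, sampling rate and duration are purely illustrative values (not parameters of an actual CI system), chosen such that the Nyquist–Shannon criterion is satisfied.
\begin{verbatim}
import numpy as np

# Illustrative values: a 1 kHz sine tone sampled at 16 kHz for 10 ms
f_signal = 1000.0   # signal frequency in Hz
fs = 16000.0        # sampling rate in Hz (> 2 * f_signal, Nyquist satisfied)
duration = 0.01     # observation time in seconds

# Sampling instants t = n / fs for n = 0, 1, ..., N-1
n = np.arange(int(duration * fs))
t = n / fs

# Discrete-time signal x[n], obtained by sampling the continuous sine x(t)
x = np.sin(2 * np.pi * f_signal * t)

print("number of samples N:", len(x))
print("current sample  x[n]  :", x[-1])
print("previous sample x[n-1]:", x[-2])
\end{verbatim}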
\subsubsection{Time domain vs. frequency domain}
A signal (either analog or digital) can be displayed and analyzed in two ways: in the time domain and in the frequency domain. The time domain shows the amplitude of the signal over time - like the sine waves in Figure \ref{fig:fig_interference}. If a fast Fourier transform (FFT) is applied to the signal in the time domain, we obtain the same signal in the frequency domain, now showing which frequencies are present in the signal (refer to Figure \ref{fig:fig_fft}).\\
\\
\begin{figure}[H]
    \centering
    \includegraphics[width=0.8\linewidth]{Bilder/fig_fft.jpg}
    \caption{Sampled digital signal in the time domain and in the frequency domain \cite{source_dsp_ch1}}
    \label{fig:fig_fft}
\end{figure}

\subsubsection{Transfer Functions and filters}
When we discuss signals in a mathematical way, we need to introduce the term ``transfer function''. A transfer function is a mathematical representation of an abstract system that describes how an input signal is transformed into an output signal. This could mean a simple amplification or a phase shift applied to the input signal.
\begin{figure}[H]
    \centering
    \includegraphics[width=0.8\linewidth]{Bilder/fig_transfer.jpg}
    \caption{Simple representation of a transfer function taking a noisy input signal and delivering a clean output signal \cite{source_dsp_ch1}}
    \label{fig:fig_transfer}
\end{figure}
In digital signal processing, especially in the design of a noise reduction algorithm, transfer functions are essential for modeling and analyzing filters, amplifiers, and the signal path itself. By understanding a system's transfer function, one can predict how sound signals are altered and therefore how filter parameters can be adapted to deliver the desired output signal.\\
\\
In the description of transfer functions, the term ``filter'' was used but not yet defined. A filter can be understood as a component in signal processing designed to modify or extract specific parts of a signal by selectively allowing certain frequency ranges to pass while attenuating others. Filters can be static, meaning they always extract the same portion of a signal, or adaptive, meaning they change their filtering behavior over time according to their environment. Examples of static filters include low-pass, high-pass, band-pass and band-stop filters, each tailored to isolate or remove particular frequency content (refer to Figure \ref{fig:fig_lowpass}).
\begin{figure}[H]
    \centering
    \includegraphics[width=0.8\linewidth]{Bilder/fig_lowpass.jpg}
    \caption{Behavior of a low-pass filter \cite{source_dsp_ch2}}
    \label{fig:fig_lowpass}
\end{figure}
An example of an adaptive filter is the LMS-based filter used for adaptive noise reduction, which will be introduced in the following sections.
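
To illustrate the frequency-domain view of Figure \ref{fig:fig_fft} together with the effect of a static low-pass filter (cf. Figure \ref{fig:fig_lowpass}), the following short Python sketch filters a synthetic two-tone signal and compares the magnitude spectra before and after filtering. The Butterworth design and all numerical values are illustrative choices made here and are not part of the later implementation.
\begin{verbatim}
import numpy as np
from scipy import signal

# Illustrative test signal: a 300 Hz tone (standing in for speech) plus a
# 4 kHz disturbance, sampled at 16 kHz
fs = 16000
t = np.arange(0, 0.1, 1 / fs)
x = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 4000 * t)

# Static low-pass filter (4th-order Butterworth, 1 kHz cut-off):
# it passes the 300 Hz component and attenuates the 4 kHz component
b, a = signal.butter(4, 1000, btype="low", fs=fs)
y = signal.lfilter(b, a, x)

# Frequency-domain view via the FFT: magnitude spectra before/after filtering
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
X = np.abs(np.fft.rfft(x))
Y = np.abs(np.fft.rfft(y))

for f_check in (300, 4000):
    k = np.argmin(np.abs(freqs - f_check))
    print(f"{f_check} Hz: |X| = {X[k]:.1f} -> |Y| = {Y[k]:.1f}")
\end{verbatim}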
\subsection{Filter designs}
Before we continue with the introduction to the actual topic of this thesis, ANR, two essential filter designs need further explanation - the Finite Impulse Response filter and the Infinite Impulse Response filter.

\subsubsection{Finite Impulse Response filters}
A Finite Impulse Response (FIR) filter, commonly referred to as a ``feedforward filter'', is defined by the property that it uses only present and past input samples and no feedback from output samples - therefore the impulse response of a FIR filter reaches zero after a finite number of samples. Because there is no feedback, a FIR filter offers unconditional stability, meaning that the filter response remains bounded no matter how the coefficients are set. A disadvantage of the FIR design is that it requires a considerably higher filter order than its Infinite Impulse Response counterpart to achieve a comparably sharp frequency response.\\
\\
Equation \ref{equation_fir} specifies the input-output relationship of a FIR filter, where $x[n]$ is the input sample, $y[n]$ the output sample, $b_0$ to $b_M$ are the filter coefficients and $M$ is the filter order:
\begin{equation}
\label{equation_fir}
y[n] = \sum_{k=0}^{M} b_k x[n-k] = b_0x[n] + b_1x[n-1] + ... + b_Mx[n-M]
\end{equation}
Figure \ref{fig:fig_fir} visualizes a simple FIR filter with two coefficients: the current sample is multiplied with the coefficient $b_0$, while the delayed sample is multiplied with the coefficient $b_1$, before both are added together. The operator $Z^{-1}$ represents a delay by one sample.
\begin{figure}[H]
    \centering
    \includegraphics[width=0.8\linewidth]{Bilder/fig_fir.jpg}
    \caption{FIR filter example with two feedforward operators}
    \label{fig:fig_fir}
\end{figure}

\subsubsection{Infinite Impulse Response filters}
An Infinite Impulse Response (IIR) filter, commonly referred to as a ``feedback filter'', can be seen as an extension of the FIR filter. In contrast to its counterpart, it also uses past output samples in addition to current and past input samples - therefore the impulse response of an IIR filter theoretically continues indefinitely. This recursive nature allows an IIR filter to achieve a sharp frequency response with significantly fewer coefficients than an equivalent FIR filter, but it also opens up the possibility that the filter response diverges, depending on the chosen coefficients.\\
\\
Equation \ref{equation_iir} specifies the input-output relationship of an IIR filter. In addition to Equation \ref{equation_fir} there is now a second term included, where $a_1$ to $a_N$ are the feedback coefficients with their own order $N$; the feedback sum starts at $k=1$, since the current output $y[n]$ cannot depend on itself.
\begin{equation}
\label{equation_iir}
y[n] = \sum_{k=0}^{M} b_k x[n-k] - \sum_{k=1}^{N} a_k y[n-k] = b_0x[n] + ... + b_Mx[n-M] - a_1y[n-1] - ... - a_Ny[n-N]
\end{equation}
Figure \ref{fig:fig_iir} visualizes a simple IIR filter with one feedforward coefficient and one feedback coefficient. The current sample is multiplied with $b_0$ and passes through the adder; the adder output is then delayed, multiplied with the feedback coefficient, and added to the next input sample, which is again weighted with $b_0$.
\begin{figure}[H]
    \centering
    \includegraphics[width=0.8\linewidth]{Bilder/fig_iir.jpg}
    \caption{IIR filter example with one feedforward operator and one feedback operator}
    \label{fig:fig_iir}
\end{figure}
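
The difference equations \ref{equation_fir} and \ref{equation_iir} can be translated almost literally into code. The following minimal Python sketch implements both, with purely illustrative coefficient values; it is intended as a sanity check of the notation, not as an efficient implementation.
\begin{verbatim}
import numpy as np

def fir_filter(x, b):
    """FIR difference equation: y[n] = sum_{k=0}^{M} b_k * x[n-k]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        for k, bk in enumerate(b):
            if n - k >= 0:
                y[n] += bk * x[n - k]
    return y

def iir_filter(x, b, a):
    """IIR difference equation:
    y[n] = sum_{k=0}^{M} b_k * x[n-k] - sum_{k=1}^{N} a_k * y[n-k],
    where a = [a_1, ..., a_N] contains only the feedback coefficients."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        for k, bk in enumerate(b):
            if n - k >= 0:
                y[n] += bk * x[n - k]
        for k, ak in enumerate(a, start=1):
            if n - k >= 0:
                y[n] -= ak * y[n - k]
    return y

# Illustrative test: a 2-tap averaging FIR filter and a first-order IIR low-pass
x = np.random.default_rng(0).standard_normal(8)
print(fir_filter(x, b=[0.5, 0.5]))
print(iir_filter(x, b=[0.1], a=[-0.9]))
\end{verbatim}
For verification, the same outputs can be reproduced with \texttt{scipy.signal.lfilter([0.5, 0.5], [1.0], x)} and \texttt{scipy.signal.lfilter([0.1], [1.0, -0.9], x)}, which implement the identical difference equations.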
\subsection{Introduction to Adaptive Noise Reduction}
\subsubsection{History}
At the beginning of the 20th century, filter techniques were limited to the use of static filters like low- or high-pass filters. These fundamental techniques only allow limiting the frequency spectrum by cutting out certain frequencies, such as high-pitched noise. In the 1930s, the first real concept of active noise cancellation was proposed by the German physician Paul Lueg. Lueg patented the idea of two speakers emitting antiphase signals which cancel each other out. Though his patent was granted in 1936, at the time there was no technical possibility to detect and process audio signals in a way that would make his noise cancellation work in practice.\\
\\
Twenty years after Lueg's patent, Lawrence Fogel patented a practical concept of noise cancellation intended for noise suppression in aviation - this time, the technical circumstances of the 1950s enabled the development of an aviation headset that lowers the overall noise experienced by pilots in the cockpit of a helicopter or an airplane by emitting the phase-shifted signal of the recorded cockpit background noise into the pilots' headset (see Figure \ref{fig:fig_patent}).
\begin{figure}[H]
    \centering
    \includegraphics[width=0.8\linewidth]{Bilder/fig_patent.jpg}
    \caption{Patent of a device for lowering ambient noise to improve intelligence by Lawrence Fogel in 1960 \cite{source_patent}}
    \label{fig:fig_patent}
\end{figure}
The final step towards real adaptive noise cancellation was made with the introduction of the fundamental Least Mean Square (LMS) algorithm by Widrow and Hoff in 1960, which will be discussed in detail in a later section.

\subsubsection{The concept of adaptive filtering}
As already mentioned in the introduction, environmental noise severely degrades cochlear implant users' speech understanding and listening comfort. Traditional concepts of static noise reduction, such as fixed filters, are not a feasible solution under dynamic acoustic conditions, where the type, intensity, and spectral composition of noise can change rapidly. Adaptive Noise Reduction addresses this problem by using adaptive filters that automatically adjust their parameters in real time, continuously optimizing the system's response to changing environments.\\
\\
The practical concepts from the previous sections were based on analog noise suppression, where a microphone measures the noise and a fixed circuit generates the antiphase signal - this means the system only works in a specific environment with time-invariant disturbing noise, and there is no real adaptivity to it. The concept of adaptive filtering, on the other hand, is based on the idea that a digital filter learns in real time, through a feedback system, which frequency components to filter and which not to filter.\\
\\
Figure XXX shows the basic concept of an adaptive filter design, represented through a combination of a feedforward and a feedback filter structure.

\subsubsection{Introduction to the Least Mean Square algorithm}
The LMS algorithm allows an automatic adaptation of the filter coefficients to the acoustic surroundings by stepwise minimization of the squared error signal; a minimal Python sketch of this adaptation loop is given at the end of this chapter.\\
\\
\subsection{Signal flow diagram showing the origin of the useful signal, noise signal, and their coupling}
\subsection{Derivation of the system’s transfer function based on the problem setup}
\subsection{Example applications and high-level simulations using Python}
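
As a first, minimal illustration of the adaptive filtering concept and the LMS coefficient update introduced above, the following high-level Python sketch implements an adaptive noise canceller with a primary signal (useful signal plus coupled noise) and a reference signal (noise only). All signals, the assumed noise coupling path, the adaptive filter length and the step size are illustrative choices and do not correspond to a real CI system.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
fs = 16000
t = np.arange(0, 1.0, 1 / fs)

# Useful signal (a clean tone standing in for speech) and a broadband noise source
s = np.sin(2 * np.pi * 440 * t)
noise = rng.standard_normal(len(t))

# Primary signal: speech plus noise filtered by an assumed acoustic coupling path;
# reference signal: (ideally) the noise alone
noise_path = [0.6, -0.3, 0.1]
d = s + np.convolve(noise, noise_path, mode="full")[: len(t)]
x = noise

# LMS adaptive noise canceller: the FIR coefficients w adapt so that the filter
# output approximates the coupled noise, and e = d - y approximates the clean signal
L = 8          # adaptive filter length (illustrative)
mu = 0.01      # step size, must be small enough for convergence
w = np.zeros(L)
e = np.zeros(len(t))
for n in range(L, len(t)):
    x_vec = x[n - L + 1 : n + 1][::-1]   # [x[n], x[n-1], ..., x[n-L+1]]
    y = np.dot(w, x_vec)                 # filter output (noise estimate)
    e[n] = d[n] - y                      # error = cleaned signal estimate
    w += mu * e[n] * x_vec               # LMS coefficient update

print("noise power before ANR:", np.mean((d - s) ** 2))
print("noise power after  ANR:", np.mean((e[len(t)//2:] - s[len(t)//2:]) ** 2))
\end{verbatim}
In this sketch the error signal serves as the cleaned output: the adaptive FIR filter converges towards the assumed noise coupling path, so the noise component in the primary signal is largely removed, while the useful signal, being uncorrelated with the reference noise, is preserved.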