\section{Conclusion and outlook}
The focus of this thesis was to investigate the possibilities for an efficient implementation of a real-time capable \ac{ANR} algorithm in \ac{CI} systems. The initial high-level implementation in Python proved the general feasibility of the proposed method, and the \ac{SNR}-Gain was introduced as a metric for the quality of the \ac{ANR} algorithm. This metric was used to evaluate the performance of the algorithm in various settings and noise conditions. First, a synthetic signal and noise signal were used to verify the general functionality of the algorithm. Then the step to a real recorded signal was made. The final and most complex combination, which subsequently served as a benchmark for the remaining implementations, was the use of real-world signals passed through different transfer functions and delays. In every case, the algorithm achieved a significant improvement in the \ac{SNR} of the processed signals. \\ \\
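The \ac{SNR}-Gain metric described above can be sketched as the difference between the output and input \ac{SNR} in dB. The following is a minimal illustration, assuming the clean signal and the noise components before and after processing are available separately (function and parameter names are illustrative, not the thesis code):

```python
import numpy as np

def snr_db(signal, noise):
    # Power-ratio SNR in dB; signal and noise are given as separate arrays
    return 10.0 * np.log10(np.sum(signal ** 2) / np.sum(noise ** 2))

def snr_gain_db(clean, noise_before, noise_after):
    # SNR-Gain: improvement of the output SNR over the input SNR
    return snr_db(clean, noise_after) - snr_db(clean, noise_before)
```

Halving the residual noise amplitude, for example, yields a gain of about 6 dB.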
\noindent The next challenge was to implement the algorithm efficiently in the C programming language in order to achieve real-time capability. This was accomplished through the use of \ac{DSP} compiler intrinsic functions, which allow logic operations to be performed with a minimum number of instructions. After the C implementation was functional, its performance on the benchmark track was compared to the initial Python implementation. A histogram of the differences between the two outputs shows only minor deviations, which can be attributed to the fixed-point arithmetic of the \ac{DSP} compiler.\\ \\
\noindent With the working C implementation in place, a closer look was taken at the performance, in particular the number of cycles needed to compute one sample; the resulting formula is a function of the filter length and the update rate. With this information in mind, several noise sources were put under test to evaluate the optimal filter length, which is a trade-off between the performance improvement and the computational cost; the result was 45 coefficients.\\ \\
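The structure of such a cycle-count formula can be sketched as follows. This is only an illustrative cost model; the constants are placeholders and do not reproduce the actual values derived in the thesis:

```python
def cycles_per_sample(n_taps, update_rate, c_fixed=10, c_filter=2, c_update=5):
    # Illustrative cost model (all constants are hypothetical placeholders):
    # the filtering term scales with the filter length every sample, while
    # the coefficient-update term is additionally scaled by the update rate
    # (1.0 = full update on every sample, 0.0 = no updates).
    return c_fixed + c_filter * n_taps + update_rate * c_update * n_taps
```

The model makes the trade-off explicit: lowering the update rate removes only the update term, while the per-sample filtering cost remains.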
\noindent With a filter length of 45 coefficients fixed, the final optimization of the algorithm regarding performance and computational cost could be evaluated. The baseline was the computationally most costly full-update implementation, which needs 357 cycles to process one sample.\\ \\
\noindent The first approach was a rather simple reduction of the update rate, evaluated for the benchmark case and different signal/noise combinations. The result was a significant reduction in the needed cycles, but also a quite significant drop in the \ac{SNR}-Gain. Additionally, implementing such a universal reduction required computationally expensive processor operations, further degrading the cost-benefit ratio.\\ \\
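The uniform update-rate reduction can be sketched as an LMS-type adaptive filter that skips the coefficient update on all but every $k$-th sample. This is a minimal sketch assuming an LMS-style update rule; the function and parameter names are illustrative and not taken from the thesis code:

```python
import numpy as np

def lms_decimated(x, d, n_taps=45, mu=0.05, update_every=4):
    # LMS adaptive filter with a uniformly reduced update rate:
    # the output and error are computed every sample, but the
    # coefficient update only runs on every `update_every`-th sample.
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        u = x[n - n_taps + 1 : n + 1][::-1]  # most recent sample first
        y[n] = w @ u                          # filter output
        e[n] = d[n] - y[n]                    # error signal
        if n % update_every == 0:             # decimated update
            w += mu * e[n] * u
    return y, e, w
```

Skipping updates lowers the per-sample cost, but the filter adapts more slowly, which mirrors the observed drop in \ac{SNR}-Gain.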
\noindent The second approach was the proposed method of an error-driven optimization, utilizing the idea of a fixed threshold for the error signal. Again evaluated for the benchmark case and different signal/noise combinations, this approach can be considered a success, as it achieved a significant reduction in the needed cycles while only reducing the \ac{SNR}-Gain by a small amount. The implementation of this method is also computationally efficient, as it only requires a simple comparison to decide whether an update is necessary.\\ \\
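The error-driven variant can be sketched in the same LMS-style framework: the coefficient update runs only when the magnitude of the error signal exceeds a fixed threshold. Again, this is an illustrative sketch with placeholder names and values, not the thesis implementation:

```python
import numpy as np

def lms_error_gated(x, d, n_taps=45, mu=0.05, threshold=0.05):
    # Error-driven update: a single cheap comparison per sample decides
    # whether the (comparatively expensive) coefficient update is run.
    w = np.zeros(n_taps)
    e = np.zeros(len(x))
    updates = 0
    for n in range(n_taps, len(x)):
        u = x[n - n_taps + 1 : n + 1][::-1]
        e[n] = d[n] - w @ u
        if abs(e[n]) > threshold:  # fixed error threshold gates the update
            w += mu * e[n] * u
            updates += 1
    return e, w, updates
```

Once the filter has converged and the error stays below the threshold, updates are skipped entirely, which is where the cycle savings come from.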
\noindent The final result of this thesis shows that the second method, an error-driven optimization based on a fixed threshold for the error signal, is a viable way to achieve a significant performance improvement, reducing the computational load of the \ac{DSP} core by over 62\% while only reducing the \ac{SNR}-Gain by roughly 12\%.\\ \\
\noindent For future work, a proposed method to further optimize the system would be the use of a dynamic threshold, which could be adapted to the current noise conditions. The background for this proposal is the fact that, besides the error signal, the noise signal itself also influences the size of the filter-coefficient update. In the current implementation, the threshold depends only on the error signal: if a situation arises where the noise signal is very small but the error/output signal is high due to an input signal, an update of the filter coefficients would be triggered even though it is not necessary. A dynamic threshold, which also takes the noise signal into account, could further reduce the number of updates, albeit at a potentially higher computational effort.
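One possible realization of this outlook idea can be sketched as follows. Since an LMS-style coefficient update has magnitude proportional to the error times the reference-signal energy, the gating condition could use that product instead of the error alone; everything here (names, threshold value) is a hypothetical illustration, not an implemented or evaluated design:

```python
import numpy as np

def update_needed(e_n, u, threshold=0.05):
    # Dynamic gating sketch: the LMS update size scales with
    # |e[n]| * ||u||, so gating on this product suppresses updates
    # when the noise reference `u` is quiet, even if the error is
    # large due to the input signal alone.
    return abs(e_n) * np.linalg.norm(u) > threshold
```

Compared with the fixed error threshold, this adds a norm computation per decision, reflecting the potentially higher computational effort noted above.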