\section{Conclusion and outlook}
The focus of this thesis was to investigate the possibilities for an efficient implementation of a real-time capable \ac{ANR} algorithm in \ac{CI} systems.\\ \\ The initial high-level implementation in Python proved the general feasibility of the proposed \ac{LMS} method, and the \ac{SNR}-Gain was introduced as a metric for the quality of the \ac{ANR} algorithm. This metric was used to evaluate the performance of the algorithm in various settings and noise conditions. First, a fictional desired signal (sine wave) and noise signal (sine wave or white noise) were used to check the algorithm for its general functionality. Then the step to real, recorded signals was made. The final and most complex combination (which then served as a benchmark for the remaining implementations) used the same real-world signals, but now with different transfer functions and delays introduced to mimic a complex, practical situation. In every case, the algorithm achieved a significant improvement in the \ac{SNR} of the processed signals. \\ \\
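The structure of such an \ac{LMS}-based \ac{ANR} stage, together with the \ac{SNR}-Gain metric, can be sketched in a few lines of Python. This is a minimal illustration, not the thesis implementation; filter length, step size, and the synthetic test signals below are assumptions.

```python
import numpy as np

def lms_anr(reference, primary, n_taps=8, mu=0.01):
    """Adaptive noise reduction with a basic LMS filter.

    reference: noise reference picked up by a second input
    primary:   desired signal plus correlated noise
    Returns the error signal e, which approximates the desired signal.
    """
    w = np.zeros(n_taps)            # adaptive filter coefficients
    buf = np.zeros(n_taps)          # delay line for the reference input
    e = np.zeros(len(primary))
    for n in range(len(primary)):
        buf = np.roll(buf, 1)
        buf[0] = reference[n]
        y = w @ buf                 # noise estimate
        e[n] = primary[n] - y       # cleaned output = error signal
        w += 2 * mu * e[n] * buf    # full coefficient update every sample
    return e

def snr_db(signal, noise):
    """SNR in dB; the SNR-Gain is the output SNR minus the input SNR."""
    return 10 * np.log10(np.sum(signal**2) / np.sum(noise**2))
```

After convergence, the residual noise in the error signal is much smaller than in the primary input, which is exactly what the \ac{SNR}-Gain measures.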
\noindent The next challenge was to implement the algorithm efficiently in the C programming language to achieve real-time capability. This was accomplished through the use of \ac{DSP} compiler intrinsic functions, which allow logic operations to be performed with a minimum number of instructions. After the C implementation was functional, its performance on the benchmark track was compared to the initial Python implementation. A histogram of the differences between the two outputs showed only minor deviations, which can be attributed to the fixed-point calculations of the \ac{DSP} compiler.\\ \\
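The size of these fixed-point deviations can be illustrated with a short sketch. The Q15 format below is an assumption for illustration; the actual \ac{DSP} number format may differ, but the mechanism, a bounded quantization error per sample, is the same.

```python
import numpy as np

def to_q15(x):
    """Quantize floats in [-1, 1) to Q15 fixed point and back to float."""
    q = np.clip(np.round(x * 32768), -32768, 32767)
    return q / 32768.0

# A floating-point reference signal and its fixed-point counterpart:
x = 0.9 * np.sin(2 * np.pi * np.arange(1000) / 100)
err = to_q15(x) - x
# The per-sample error stays below one quantization step (2**-15),
# which is why the difference histogram shows only minor deviations.
```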
\noindent With the working C implementation in place, a closer look was taken at the performance, especially the number of cycles needed to compute one sample; the result was a formula that expresses the required cycles as a function of the filter length and the update rate. With this information in mind, several noise sources were put under test to evaluate the optimal filter length, which is a trade-off between performance improvement and computational cost; the result was 45 coefficients.\\ \\
With a set filter length of 45 coefficients, the final improvement of the algorithm regarding performance and computational cost could be evaluated. The baseline was the computationally most costly full-update implementation, needing 357 cycles to process one sample, which corresponds to a \ac{DSP} load of about 45\%.\\ \\
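The relation between cycles per sample and \ac{DSP} load is a simple back-of-the-envelope calculation. The sample rate and core clock below are assumptions chosen only so that 357 cycles land near 45\% load; the actual thesis hardware may use different values.

```python
cycles_per_sample = 357       # full-update cost from the thesis
f_s = 16_000                  # assumed audio sample rate in Hz
f_clk = 12_700_000            # assumed DSP core clock in Hz (hypothetical)

# Fraction of the core's cycle budget spent on the ANR algorithm:
load = cycles_per_sample * f_s / f_clk
```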
\noindent The first approach was a rather simple reduction of the update rate, evaluated for the benchmark case and different signal/noise combinations. The result was a significant reduction in the required cycles, but also a quite significant drop in the \ac{SNR}-Gain. Additionally, implementing such a universal reduction required computationally expensive processor operations, further worsening the cost-benefit ratio.\\ \\
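This first approach amounts to a small variation of the LMS loop in which the coefficient update runs only every $r$-th sample. The sketch below is illustrative Python, not the fixed-point C code, and the update ratio is an assumed example value.

```python
import numpy as np

def lms_periodic_update(reference, primary, n_taps=8, mu=0.01, r=4):
    """LMS where the coefficients are updated only every r-th sample."""
    w = np.zeros(n_taps)
    buf = np.zeros(n_taps)
    e = np.zeros(len(primary))
    updates = 0
    for n in range(len(primary)):
        buf = np.roll(buf, 1)
        buf[0] = reference[n]
        e[n] = primary[n] - w @ buf
        if n % r == 0:                 # skip r-1 of every r updates
            w += 2 * mu * e[n] * buf
            updates += 1
    return e, updates
```

Convergence becomes slower and the steady-state result degrades, mirroring the drop in \ac{SNR}-Gain observed in the thesis.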
\noindent The second approach was the proposed method of an error-driven optimization, utilizing the idea of a fixed threshold for the error signal. Again evaluated for the benchmark case and different signal/noise combinations, this approach can be considered a success, as it achieved a significant reduction in the required cycles while only slightly reducing the \ac{SNR}-Gain. The implementation of this method is also computationally efficient, as it only requires a simple comparison to check whether an update is necessary.\\ \\
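The core of the error-driven method reduces, in the inner loop, to a single magnitude comparison. The sketch below illustrates the idea in Python; the threshold value is an assumption for demonstration, not the value tuned in the thesis.

```python
import numpy as np

def lms_error_driven(reference, primary, n_taps=8, mu=0.01, threshold=0.1):
    """LMS that updates the coefficients only if |e| exceeds a fixed threshold."""
    w = np.zeros(n_taps)
    buf = np.zeros(n_taps)
    e = np.zeros(len(primary))
    updates = 0
    for n in range(len(primary)):
        buf = np.roll(buf, 1)
        buf[0] = reference[n]
        e[n] = primary[n] - w @ buf
        if abs(e[n]) > threshold:      # the only extra cost: one comparison
            w += 2 * mu * e[n] * buf
            updates += 1
    return e, updates
```

When the filter is well adapted and the error is small, updates are simply skipped, which is where the cycle savings come from.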
\noindent The error-driven optimization approach can therefore be seen as the clear winner, as it further improved an already real-time capable \ac{ANR} algorithm by significantly reducing the computational load of the \ac{DSP} core while only slightly reducing the performance improvement in terms of \ac{SNR}-Gain.\\ \\
\noindent For future work, a more advanced method to further optimize the system could be the use of a dynamic threshold adapted to the current noise conditions. The background for this proposal is the fact that, besides the error signal, the noise signal itself also influences the size of the filter-coefficient update. In the current implementation, the threshold depends only on the error signal: if a situation arises where the noise signal is very small but the error/output signal is high due to a high input signal, an update of the filter coefficients would be triggered even though it is not necessary. A dynamic threshold that also takes the noise signal into account could further reduce the number of updates, but at a potentially higher computational effort.
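One purely hypothetical shape of such a criterion: since the coefficient update is proportional to both the error and the reference (noise) sample, the gate can act on their product, so a large output with a near-silent noise reference no longer triggers updates. All parameters below are assumptions for illustration.

```python
import numpy as np

def lms_noise_aware(reference, primary, n_taps=8, mu=0.01, threshold=0.2):
    """LMS that skips updates whose magnitude would be negligible.

    The update term is proportional to e[n] * reference[n], so gating on
    |e[n] * reference[n]| accounts for the noise level as well as the error.
    """
    w = np.zeros(n_taps)
    buf = np.zeros(n_taps)
    e = np.zeros(len(primary))
    updates = 0
    for n in range(len(primary)):
        buf = np.roll(buf, 1)
        buf[0] = reference[n]
        e[n] = primary[n] - w @ buf
        if abs(e[n] * reference[n]) > threshold:  # noise-aware gate
            w += 2 * mu * e[n] * buf
            updates += 1
    return e, updates
```

The extra cost over the fixed threshold is one multiplication per sample, which matches the "potentially higher computational effort" noted above.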
\noindent Also, the hybrid filter approach already mentioned in Chapter 2, which splits the filter into a static and an adaptive part, could be further investigated. The idea behind this approach is that the static part of the filter covers certain signal paths that are expected to be time-invariant, while the adaptive part only needs to cover the changing signals.
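A minimal sketch of such a split, assuming the static coefficients are identified beforehand and only a short adaptive tail is trained at run time; the tap partitioning and lengths below are illustrative assumptions.

```python
import numpy as np

def hybrid_filter(reference, primary, w_static, n_adapt=4, mu=0.01):
    """Hybrid filter: fixed coefficients for time-invariant signal paths,
    plus a short adaptive tail for the changing part of the noise path."""
    n_static = len(w_static)
    w_a = np.zeros(n_adapt)                     # adaptive coefficients
    buf = np.zeros(n_static + n_adapt)          # shared delay line
    e = np.zeros(len(primary))
    for n in range(len(primary)):
        buf = np.roll(buf, 1)
        buf[0] = reference[n]
        y = w_static @ buf[:n_static] + w_a @ buf[n_static:]
        e[n] = primary[n] - y
        # Only the short adaptive part is updated, saving cycles:
        w_a += 2 * mu * e[n] * buf[n_static:]
    return e
```

The per-sample update cost then scales with the adaptive length only, while the static part contributes just a fixed FIR convolution.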
\noindent Therefore, the final result of this thesis shows that the approach of an error-driven optimization, utilizing the idea of a fixed threshold for the error signal, is a viable method to achieve a significant performance improvement, reducing the computational load of the \ac{DSP} core by over 62\% while only reducing the \ac{SNR}-Gain by roughly 12\%.\\ \\