A new computational approach is making waves in scientific computing by offering a more accurate and efficient way to evaluate a family of mathematical functions that are essential across numerous disciplines. These error-like functions, including the error function (erf), the complementary error function (erfc), and the Faddeeva function, are ubiquitous in fields such as physics, statistics, and applied mathematics. They describe phenomena like heat diffusion and probability distributions, but their numerical evaluation has long been challenging, especially for complex-valued arguments. The new method, detailed in a recent paper, leverages an exponentially convergent trapezoidal rule to compute these functions to double-precision accuracy at competitive speed, potentially impacting simulations and data analysis that rely on these calculations.
The researchers found that by applying the trapezoidal rule to an integral representation of the Faddeeva function, they derived a simple formula that achieves double-precision accuracy, meaning errors are kept within the limits of standard computer arithmetic. This formula, implemented in a publicly available C/C++ library called ERFLIKE, outperforms the widely used FADDEEVA PACKAGE in terms of both accuracy and evaluation speed for complex-valued arguments. Specifically, the ERFLIKE library maintains a relative error close to the double-precision epsilon (approximately 2.2e-16) over vast regions of the complex plane, whereas the FADDEEVA PACKAGE shows irregular error behavior, with accuracy oscillating between 1 and 100 times this epsilon in certain areas, as illustrated in Figure 2 of the paper.
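For context, the "double-precision epsilon" used as the accuracy yardstick is the machine epsilon of IEEE 754 binary64 arithmetic, 2^-52 ≈ 2.2e-16, and relative-error maps like the paper's Figure 2 plot |approx − exact| / |exact| in units of this quantity. A minimal Python illustration (not from the paper):

```python
import sys

# Machine epsilon for IEEE 754 double precision: the gap between
# 1.0 and the next representable double, i.e. 2**-52.
eps = sys.float_info.epsilon
print(eps)  # 2.220446049250313e-16

# Relative error of an approximation, expressed in units of eps
# (the quantity shown in accuracy maps like the paper's Figure 2).
def rel_err_in_eps(approx, exact):
    return abs(approx - exact) / abs(exact) / eps
```

An error of "1 to 100 times epsilon" thus means `rel_err_in_eps` values between roughly 1 and 100.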
The methodology centers on using the trapezoidal rule with pole contributions to approximate the Faddeeva function, which serves as a gateway to all other error-like functions through closed formulas. The researchers optimized the algorithm by selecting a step size h of approximately 0.5022 and truncating the infinite sums at 12 terms to target double-precision accuracy. They partitioned the complex plane into regions, labeled A through E in Figure 1, to apply different evaluation strategies, such as neglecting certain terms where possible to enhance efficiency. For instance, in region B, where the imaginary part of the argument is positive and the real part is large, the pole contribution can be safely omitted without sacrificing accuracy, speeding up computations.
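The core idea can be sketched in a few lines. The Python sketch below applies a symmetrized midpoint trapezoidal rule, plus a residue ("pole") correction, to the integral representation w(z) = (i/π) ∫ e^{-t²}/(z - t) dt, which is valid for Im z > 0, using the h ≈ 0.5022 and 12-term truncation quoted above. This is a classical formula in the same spirit as the paper's approach, not the ERFLIKE implementation itself; the remaining regions would be reached through symmetry relations, as the paper's region decomposition does.

```python
import cmath
import math

H = 0.5022   # step size reported in the paper for double precision
N = 12       # number of symmetrized quadrature terms

# Midpoint nodes t_k = (k + 1/2) h and Gaussian weights e^{-t_k^2};
# by symmetry only the positive nodes are needed.
NODES = [(k + 0.5) * H for k in range(N)]
WEIGHTS = [math.exp(-t * t) for t in NODES]

def faddeeva_upper(z):
    """Faddeeva function w(z) for Im z > 0: symmetrized midpoint
    trapezoidal rule plus the residue ("pole") correction."""
    # Trapezoidal sum for (i/pi) * Int e^{-t^2}/(z - t) dt, with nodes
    # paired as 1/(z - t) + 1/(z + t) = 2z/(z^2 - t^2): 12 terms total.
    s = sum(w / (z * z - t * t) for t, w in zip(NODES, WEIGHTS))
    trap = (2j * H * z / math.pi) * s
    # Pole contribution from the integrand's pole at t = z; |q| < 1
    # above the real axis. Where it is negligible (e.g. large |Re z|,
    # as in the paper's region B) it can be dropped for speed.
    q = cmath.exp(2j * math.pi * z / H)
    return trap + 2.0 * cmath.exp(-z * z) * q / (1.0 + q)
```

On the imaginary axis the sketch can be checked against the real-arithmetic identity w(iy) = e^{y²} erfc(y).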
Analysis of the performance measurements reveals that the ERFLIKE library not only achieves more consistent accuracy but also offers significant speed improvements in many cases. As shown in Figure 4, the evaluation time for the Faddeeva function ranges from about 30 nanoseconds in regions where no exponentials are needed to 150 nanoseconds in more complex areas, with speedups of up to four times over the FADDEEVA PACKAGE for arguments in the lower half of the complex plane. However, for real-valued arguments, the FADDEEVA PACKAGE remains faster, with evaluation times under 10 nanoseconds per call, as depicted in Figure 5, thanks to its optimized Chebyshev fits, which the trapezoidal rule cannot match without similar ad hoc tuning.
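Per-call figures in the tens of nanoseconds are typically obtained by averaging a tight loop over many calls. A sketch of such a micro-benchmark, timing the standard library's real-valued erfc purely as a stand-in (the paper benchmarks its own C/C++ routines):

```python
import math
import timeit

CALLS = 100_000
# Total wall-clock time for CALLS evaluations; erfc here is only a
# stand-in for a Faddeeva-function call.
total_s = timeit.timeit("math.erfc(1.234)", globals={"math": math}, number=CALLS)
per_call_ns = total_s / CALLS * 1e9
print(f"{per_call_ns:.1f} ns per call")
```

Real benchmarks would sweep arguments across the regions of the complex plane, since, as noted above, the cost depends on how many exponentials each region requires.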
The implications of this work are substantial for scientific computing, where error-like functions appear throughout simulations, statistical modeling, and engineering applications. By providing a more accurate and generally faster alternative, the ERFLIKE library could reduce errors and improve efficiency in large-scale calculations. The method's main theoretical advantage lies in its generality: unlike the FADDEEVA PACKAGE, which relies on fitted representations specific to double precision, the trapezoidal rule approach can be extended to arbitrary-precision computation, making it versatile for high-accuracy needs in research and industry.
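One way to see the generality claim: for trapezoidal rules of this type, the discretization error behaves roughly like e^{-π²/h²} and the Gaussian-tail truncation error like e^{-(Nh)²}, so both the step size and the number of terms can be read off from any target accuracy, not just double precision. A sketch under that heuristic error model (an assumption of this example, not a formula from the paper), which roughly reproduces the h ≈ 0.5 and 12 terms quoted above at 16 digits:

```python
import math

def trapezoid_params(digits):
    """Pick step size h and term count n for ~`digits` decimal digits,
    under the heuristic error model: discretization ~ exp(-pi^2/h^2),
    Gaussian-tail truncation ~ exp(-(n*h)^2)."""
    target = 10.0 ** (-digits)
    r = math.sqrt(math.log(1.0 / target))
    h = math.pi / r                  # exp(-pi^2/h^2) = target
    n = math.ceil(r / h)             # exp(-(n*h)^2) <= target
    return h, n

print(trapezoid_params(16))   # roughly (0.52, 12): close to the paper's choices
print(trapezoid_params(50))   # a finer rule for higher-precision arithmetic
```

A fitted Chebyshev representation, by contrast, would have to be re-derived from scratch for each new precision.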
Despite its advantages, the algorithm has limitations. It is slower than the FADDEEVA PACKAGE for real-valued arguments, as noted in the performance measurements. Additionally, accuracy degrades in regions where the Faddeeva function is ill-conditioned, such as near its zeros or outside the angular sector where the asymptotic expansion is valid, as discussed in Section 2.1 of the paper. The researchers also note a slight accuracy loss for odd functions, like the imaginary error function, near the origin, though this can be mitigated by switching to Maclaurin series there. These limitations suggest that while the new method is a significant improvement, it may not be universally superior in all scenarios, and further optimizations could be explored.
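To illustrate the Maclaurin fallback mentioned for odd functions near the origin, here is a hedged sketch (not ERFLIKE's code) of the imaginary error function erfi(x) = -i·erf(ix) via its series (2/√π) Σ x^(2n+1) / (n! (2n+1)), which avoids the cancellation that afflicts other evaluation routes at small |x|:

```python
import math

def erfi_maclaurin(x, terms=20):
    """Imaginary error function erfi(x) via its Maclaurin series
    (2/sqrt(pi)) * sum_{n>=0} x^(2n+1) / (n! * (2n+1)),
    intended for small |x| where the series converges rapidly."""
    s, term = 0.0, x          # term holds x^(2n+1) / n!
    for n in range(terms):
        s += term / (2 * n + 1)
        term *= x * x / (n + 1)
    return 2.0 / math.sqrt(math.pi) * s
```

Being a sum of same-sign terms for x > 0, the series is well conditioned near the origin, and oddness (erfi(-x) = -erfi(x)) is preserved exactly.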
About the Author
Guilherme A.
Former dentist (MD) from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.