Modifying Taylor polynomials for better accuracy: sin(x) approximations

By Joel K. Pettersson. Added 2021-08-04. Updated 2021-09-18.

Working on saugns leads to exploring various things. For audio synthesis, sine wave generation is a basic component. I decided to work on a simple alternative to my old wavetable (look-up table) approach, thinking of perhaps redesigning the oscillator(s) far more drastically later. A search for a fast sine wave function led, of course, to Taylor polynomials as one of the most common ideas, simple to try though not optimal. Further reading in turn inspired my own experiments with arriving at better polynomial approximations.


Problems with Taylor polynomials

For sin() and the degree 7 Taylor approximation, reducing the input range to +/- π is not enough. It sounds very bad, because the larger error near +/- π causes a small discontinuity at every change of cycle. The degree 9 polynomial has less, but still unacceptable, distortion for the same input range. With fuller range reduction, e.g. to +/- π/2, this is solved, though there is still much greater error for the extreme-end inputs, making Taylor polynomials suboptimal. (The range reduction is easy for input values from 32-bit integers, as used for phase counters in my oscillators. Integer wrap-around keeps all phase values in a range mapped to +/- π, and sine wave symmetry makes it easy to halve that before converting to a floating-point value.)
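
For illustration, here's a minimal sketch of such range reduction (not the actual saugns code): a 32-bit phase counter covers one cycle, so integer wrap-around maps all values to +/- π, and the symmetry sin(x) = sin(π - x) folds the outer quarters of the cycle inward before the conversion to a floating-point x in +/- π/2.

#include <stdint.h>

/* Sketch only: map a 32-bit phase counter (one full cycle per
 * wrap-around) to an x in [-PI/2, +PI/2] with sin(x) unchanged. */
static inline float phase_to_x(uint32_t phase) {
	int32_t p = (int32_t) phase; /* full signed range maps to -PI..+PI */
	if (p > INT32_MAX/2)         /* x > PI/2: fold using PI - x */
		p = (INT32_MAX - p) + 1;
	else if (p < INT32_MIN/2)    /* x < -PI/2: fold using -PI - x */
		p = INT32_MIN - p;
	return p * (3.14159265358979323846f / 2147483648.f);
}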

For the degree 7 polynomial, the remaining noise (difference from the accurate sine wave) with full range reduction was still visible with spectrum analysis tools, but faint enough that it probably wouldn't be heard. Still, I decided to try to improve accuracy, instead of simply settling for using the degree 9 polynomial (which had a clean-looking spectrum). Using the degree 9 polynomial is a bit slower than my old look-up-table sine wave (whereas degree 7 is a bit faster), so there's no point to it for oscillators. But I had a simple idea for experimenting, which turned out to work well.

Manual suboptimal improvements

Sometime in 2010, I had played around with graphing software and making my own polynomial approximations. Not very good ones, but I learned something on looking at my old notes. I had arrived at what was basically Taylor polynomials for a few other functions, through simple experimenting, but with a difference. I had reduced the size of the term of the highest order, and sometimes that of the next highest too, in order to minimize the error a bit further on in the curve.

It's trivial to try that with e.g. gnuplot. Enter functions and plot them along with sin(x) for comparison, since in this case it's all about that function. Add one function containing the Taylor polynomial of some chosen degree, then another tweaked copy, comparing their divergences from sin(x). Tweak/redefine and re-plot... For example, in the degree 7 Taylor polynomial, multiplying the x^7 term by 29.0/30.0 brings the curve much closer to where it should be at +/- π/2. For the degree 9 polynomial, instead adding the multiplier 44.0/45.0 for the x^9 term makes a great improvement.

How great is the improvement, looking beyond just how well the value matches at the end of the input range? The error figures given below were measured by subtracting glibc sinf() from the polynomial version tested, for a large enough sampling of values over the input range, and taking the largest absolute difference as the error of that version. (Note that this is a 32-bit floating point comparison.)
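
Here's a minimal sketch of that kind of measurement (not the exact test code used), applied to the degree 7 Taylor polynomial with the 29.0/30.0 tweak on the x^7 term mentioned above:

#include <math.h>
#include <stdio.h>

/* Sketch: sample x over +/- PI/2, subtract sinf() from the tested
 * polynomial, and keep the largest absolute difference as the error. */
int main(void) {
	const float PI_2f = 1.57079632679489661923f;
	float max_err = 0.f;
	for (int i = -10000; i <= 10000; ++i) {
		float x = PI_2f * i / 10000;
		float x2 = x * x;
		float y = x - x*x2*(1.f/6 - x2*(1.f/120 - x2*((29.f/30)/5040)));
		float err = fabsf(y - sinf(x));
		if (err > max_err) max_err = err;
	}
	printf("max abs error: %.11e\n", max_err);
	return 0;
}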

Such trial and error with a graph plot program has its limits. It works well enough to reduce error by an order of magnitude or so for a Taylor polynomial, without too much tedium, by reducing the size of higher terms slightly. But it gets tedious quickly if you put effort into getting the best result by scaling down several terms, and hopeless if you truly want to optimize for minimum maximal error across the whole curve instead of focusing mostly on one or two points on the curve.

Automating and improving the improvements

After reading more on function approximations, including articles with exhortations that error must be measurable for work of this kind to be workable, and the idea that Chebyshev approximations are way, way better than Taylor polynomials, I changed the approach. Sticking with the idea of modifying Taylor polynomials for now, I ended up writing a C program to do it better than I could by hand, and importantly, much faster.

In hindsight, if I had understood the Remez algorithm and its uses when starting out, I probably wouldn't have tried the same things (which soon enough shifted toward a less efficient search for minimax approximations). It resulted in something very usable for these kinds of simple-enough approximations, but not for more general purposes.

Part 1: Not bad for a start

The early program (later released, see below) compares a hardcoded polynomial unmodified and with scaling factor variations of the form n/(n+1) applied to its terms (looping through tests for many values of n). It picks the version of the polynomial tested which has the smallest maximum difference from good sine wave data. Finally, it writes a file which can be used to render a graph of the error curve with gnuplot.
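
In outline, that search amounts to something like the following sketch (not the actual polapt code), here for the x^7 term of the degree 7 polynomial only, reusing the kind of error measurement from the earlier sketch:

#include <math.h>
#include <stdio.h>

/* Sketch: maximum error against sin() over +/- PI/2 for the degree 7
 * Taylor polynomial with a scaling factor applied to the x^7 term. */
static double max_err_for(double scale7) {
	const double PI_2 = 1.57079632679489661923;
	double max_err = 0.0;
	for (int i = -10000; i <= 10000; ++i) {
		double x = PI_2 * i / 10000;
		double x2 = x * x;
		double y = x - x*x2*(1.0/6 - x2*(1.0/120 - x2*(scale7/5040)));
		double err = fabs(y - sin(x));
		if (err > max_err) max_err = err;
	}
	return max_err;
}

/* Sketch: try factors n/(n+1) and keep the one with the smallest error. */
int main(void) {
	double best_err = HUGE_VAL;
	long best_n = 0;
	for (long n = 1; n <= 1000; ++n) {
		double err = max_err_for((double) n / (n + 1));
		if (err < best_err) { best_err = err; best_n = n; }
	}
	printf("best factor %ld.0/%ld.0, max error %.11e\n",
	       best_n, best_n + 1, best_err);
	return 0;
}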

A few things became obvious. The numbers I had manually picked when experimenting with graphing software weren't as good as I had believed; other numbers close to the ones I had picked by hand had looked worse to me but actually minimized the maximum error better. Yet even a crude manual pick, of the kind anyone can make by playing with graphing software, easily gives an order of magnitude of improvement over unmodified Taylor polynomials when it's measured. It's strange that modified Taylor polynomials are largely unmentioned and not generally recommended over unmodified ones for limited input ranges.

The old version of the program lacks efficient searching for the best re-scaling numbers for all terms in the polynomial tested, using a linear search through integer ranges of n in multiple dimensions (which you have to wait for to complete). But it gets a decent result through the use of multiple passes: lower-dimensional passes tweaking only the higher-order terms are done quickly at first, the result of each applied before starting over with a fuller search, until the fullest search is done.

sin-taylor7-c3-errcurve.png
Difference between modified Taylor polynomial of degree 7 and sin(x) for x in range +/- π/2. Still a little worse than a Chebyshev curve with the same number of terms could be (such a curve could improve on the error in this one by roughly a third), but unlike such a Chebyshev approximation, does not apply any multiplier to the lowest term (which is simply x). In C code, it requires exactly the same operations as the unmodified Taylor polynomial of degree 7.

Following is output from the program which shows the three iterations and the key numbers attached to them (though the format may possibly do with improvement). Here, the meaning of the letters A, B, and C is that of multipliers in this formula for the polynomial: f(x) = x - A*x^3/3! + B*x^5/5! - C*x^7/7!. In a program, the inverse factorials and the added multipliers may be combined in the same floating point constants, for no performance loss.

Max.err. 1.56879425049e-04      End.err. -1.56879425049e-04
A==1.00000000000e+00    (1.00000000000e+00 * (0.0/0.0))
B==1.00000000000e+00    (1.00000000000e+00 * (0.0/0.0))
C==1.00000000000e+00    (1.00000000000e+00 * (0.0/0.0))
Max.err. 1.06096267700e-05      End.err. -1.06096267700e-05
A==1.00000000000e+00    (1.00000000000e+00 * (0.0/0.0))
B==1.00000000000e+00    (1.00000000000e+00 * (0.0/0.0))
C==9.68750000000e-01    (1.00000000000e+00 * (31.0/32.0))
(Applying selected coefficents.)
Max.err. 2.26497650146e-06      End.err. -2.02655792236e-06
A==1.00000000000e+00    (1.00000000000e+00 * (0.0/0.0))
B==9.98783454988e-01    (1.00000000000e+00 * (821.0/822.0))
C==9.46220930233e-01    (9.68750000000e-01 * (42.0/43.0))
(Applying selected coefficents.)
Max.err. 8.94069671631e-07      End.err. -8.94069671631e-07
A==9.99940144849e-01    (1.00000000000e+00 * (16706.0/16707.0))
B==9.97470993155e-01    (9.98783454988e-01 * (760.0/761.0))
C==9.31884249471e-01    (9.46220930233e-01 * (65.0/66.0))

The values for the last set above took a few minutes to find on my main laptop, while the rest was printed quickly. They could be used in a program which uses a Taylor degree 7 polynomial, though as mentioned below, there's a somewhat better option.
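
For instance, with the multipliers from the last pass folded into the inverse-factorial constants, the polynomial could be written like this in C (a sketch; the function name is made up):

/* Sketch: degree 7 modified-Taylor sine for x in [-PI/2, PI/2], using
 * the A, B, C values from the last pass above. The folded constants
 * mean exactly the same operations as the unmodified Taylor version. */
static inline float sin_t7_mod(float x) {
	const float c3 = 9.99940144849e-01f / 6;    /* A/3! */
	const float c5 = 9.97470993155e-01f / 120;  /* B/5! */
	const float c7 = 9.31884249471e-01f / 5040; /* C/7! */
	float x2 = x * x;
	return x - x*x2*(c3 - x2*(c5 - x2*c7));
}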

I had guessed that if 4 coefficients were changed (including the degree-1 x term) and not just 3, then something very similar to a Chebyshev approximation would result, but I didn't have the patience to wait hours at best for the old version of my program to grind out such values. Instead, I first worked on making it faster, which led to further changes.

Part 2: Further work and a true minimax result

The software, a small C program named polapt, is now on Codeberg. With further work, finding 3 variables (scaling values) takes up to a few seconds instead of a few minutes on my main laptop, and finding 4 takes a few minutes (which for a Taylor degree 7 starting point gave a true minimax result). To try different things in these versions, the C file needs editing followed by rebuilding the program with make. In terms of approximation theory, the program is still unsophisticated, and it wouldn't be useful for higher-degree searches without even greater reworking.

The big speed-up comes from doing fewer comparisons (where comparing in more dimensions means recursion). The program no longer searches for n/(n+1) ratios, but rather increasingly narrows down a floating-point value between lower and upper bounds. The search assumes a best possible value for each scaling factor, away from which the result grows worse, worse meaning a higher error level when comparing curves. The best value is looked for at the bottom of what is either a valley, a J-curve, or an extreme-end pit in the resulting error levels when trying different values.
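
Assuming a single such minimum between the bounds, the narrowing can be done along the lines of this sketch (not the actual polapt code; err_for() stands for whichever routine measures the maximum error with a given scaling factor applied):

/* Sketch: shrink the bracket around the lower of two interior test
 * points until the best scaling factor is pinned down (ternary search
 * over a unimodal error function). */
static double find_factor(double lo, double hi,
                          double (*err_for)(double factor)) {
	for (int i = 0; i < 100; ++i) {
		double m1 = lo + (hi - lo) / 3;
		double m2 = hi - (hi - lo) / 3;
		if (err_for(m1) < err_for(m2))
			hi = m2; /* minimum lies in [lo, m2] */
		else
			lo = m1; /* minimum lies in [m1, hi] */
	}
	return (lo + hi) / 2;
}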

The default floating-point precision for comparisons and the result is now double instead of single, making for a smoother error curve graph and a slightly better result for double precision code using the resulting coefficients. However, due to rounding errors in single precision, double precision coefficients may measure slightly worse in single precision than coefficients from a single precision build. Below, I'll only add the latest double precision results.

It's no longer necessary to run and apply several passes on top of each other, as values are narrowed down with greater precision in one search. The output below is still produced in that way, though; it neither helps nor hurts here.

sin-taylor7-d4-errcurve.png
Difference between minimax-ified Taylor polynomial of degree 7 (4 scaling factors applied) and sin(x) for x in range +/- π/2. It doesn't get better than this, except for small improvements in the last digits.

Below, the letters A to D are values to be inserted into this modified formula for the Taylor polynomial: f(x) = A*x - B*x^3/3! + C*x^5/5! - D*x^7/7!. Note the sequence of passes, where using the values of an earlier pass instead of a later one allows dropping the scaling factors which are then still 1.0. (Skipping the result of the last pass gives an update for the earlier modification on this page.)

Max.err. 1.56913759750e-04	End.err. -1.56913759750e-04
A==1.00000000000000000000	(1.00000000000000 * 1.00000000000000)
B==1.00000000000000000000	(1.00000000000000 * 1.00000000000000)
C==1.00000000000000000000	(1.00000000000000 * 1.00000000000000)
D==1.00000000000000000000	(1.00000000000000 * 1.00000000000000)
Max.err. 1.03953131998e-05	End.err. -1.03953131997e-05
A==1.00000000000000000000	(1.00000000000000 * 1.00000000000000)
B==1.00000000000000000000	(1.00000000000000 * 1.00000000000000)
C==1.00000000000000000000	(1.00000000000000 * 1.00000000000000)
D==0.96870437013495802603	(1.00000000000000 * 0.96870437013496)
(Applying selected coefficients.)
Max.err. 2.18992604917e-06	End.err. -2.18992604850e-06
A==1.00000000000000000000	(1.00000000000000 * 1.00000000000000)
B==1.00000000000000000000	(1.00000000000000 * 1.00000000000000)
C==0.99878745125110413028	(1.00000000000000 * 0.99878745125110)
D==0.94631178351642897262	(0.96870437013496 * 0.97688398307173)
(Applying selected coefficients.)
Max.err. 8.78684477845e-07	End.err. -8.78684476513e-07
A==1.00000000000000000000	(1.00000000000000 * 1.00000000000000)
B==0.99994083515986886823	(1.00000000000000 * 0.99994083515987)
C==0.99748390533962361104	(0.99878745125110 * 0.99869487155666)
D==0.93200606015849785102	(0.94631178351643 * 0.98488265325750)
(Applying selected coefficients.)
Max.err. 5.89131205153e-07	End.err. -5.89131199602e-07
A==0.99999661599039058046	(1.00000000000000 * 0.99999661599039)
B==0.99988967477352697077	(0.99994083515987 * 0.99994883658658)
C==0.99675900242734494228	(0.99748390533962 * 0.99927326856263)
D==0.92552840500237565369	(0.93200606015850 * 0.99304977141992)

Part 3: How low can one go?

Further work can't significantly improve on the above result for an approximation of the same complexity as Taylor degree 7. But for sine wave approximation, how good can less complex approximations get?

An approximation of the same complexity as Taylor of degree 5 is the simplest that can be made minimax-better than unmodified Taylor of degree 7, with an error a little less than half that of the latter. But when is such an approximation usable and useful? Sometimes it is worse for the error around +/- π/2 to be on the side of overshooting instead of undershooting, and overshooting is what happens here. And for some possible applications, like ramping or "sweeping" a parameter value from one boundary value to another, it is more important to make the endpoints match exactly than it is to minimize maximum error, though the latter is still second priority.

It turns out to be very simple to eliminate endpoint error (e.g. at +/- π/2) starting from a minimax approximation, at the cost of increasing the maximum error: simply rescale all the coefficients by the same value, if the polynomial is formulated in the nice and simple way those in this article are. The multiplier to use is 1/(1+E), where E is the non-absolute error (meaning the sign is included, as in the printouts from my program) at the rightmost (positive x-axis) endpoint. (Keep in mind that the two endpoints have equally large error but with differing sign for odd functions like this one.) The endpoint error is as large as the maximum error for a minimax approximation before the adjustment, and the adjustment near-doubles the maximum error, by basically tilting and stretching the error curve so that the ends land at zero.
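
As a sketch of that adjustment (made-up names, not code from the polapt program), for a degree 5 polynomial of the kind in the next example:

/* Sketch: multipliers A, B, C for the x, x^3/3!, x^5/5! terms. */
struct sin_poly { double a, b, c; };

static double sin_poly_eval(const struct sin_poly *p, double x) {
	double x2 = x * x;
	return x*(p->a - x2*(p->b/6 - x2*(p->c/120)));
}

/* Rescale all coefficients by 1/(1 + E), where E is the signed error
 * at the positive endpoint x == PI/2, so that the endpoint error
 * becomes zero. */
static void sin_poly_fit_endpoints(struct sin_poly *p) {
	const double PI_2 = 1.57079632679489661923;
	double e = sin_poly_eval(p, PI_2) - 1.0;
	double s = 1.0 / (1.0 + e);
	p->a *= s; p->b *= s; p->c *= s;
}

Conceptually, this corresponds to the "(Creating variation for fitted endpoints.)" step in the printouts further below.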

sin-taylor5-c3ep-errcurve.png
Difference between the approximation based on Taylor degree 5, adjusted to have practically no endpoint error (3 scaling factors applied), and sin(x) for x in range +/- π/2.

Here's a printout for such a sin(x) approximation made by modifying a Taylor approximation of degree 5, using yet another version of the program that takes the extra time to narrow down the last digits further. This time, the polynomial is: f(x) = A*x - B*x^3/3! + C*x^5/5!.

Max.err. 4.52484043988e-03      End.err. 4.52484043988e-03
A==1.00000000000000000000       (1.00000000000000 * 1.00000000000000)
B==1.00000000000000000000       (1.00000000000000 * 1.00000000000000)
C==1.00000000000000000000       (1.00000000000000 * 1.00000000000000)
Max.err. 3.93459627218e-04      End.err. 3.93459627218e-04
A==1.00000000000000000000       (1.00000000000000 * 1.00000000000000)
B==1.00000000000000000000       (1.00000000000000 * 1.00000000000000)
C==0.94815855880525299324       (1.00000000000000 * 0.94815855880525)
(Applying selected coefficients.)
Max.err. 1.14006884819e-04      End.err. 1.14006884819e-04
A==1.00000000000000000000       (1.00000000000000 * 1.00000000000000)
B==0.99647174318424061297       (1.00000000000000 * 0.99647174318424)
C==0.91605295472162327845       (0.94815855880525 * 0.96613899248657)
(Applying selected coefficients.)
Max.err. 6.77062480685e-05      End.err. 6.77062480685e-05
A==0.99969677332591416280       (1.00000000000000 * 0.99969677332591)
B==0.99403844937791019945       (0.99647174318424 * 0.99755809050987)
C==0.90172523370700511980       (0.91605295472162 * 0.98435928737442)
(Creating variation for fitted endpoints.)
Max.err. 1.34573846394e-04      End.err. 0.00000000000e+00
A==0.99962909219062180059       (0.99962909219062 * 1.00000000000000)
B==0.99397115132056594220       (0.99397115132057 * 1.00000000000000)
C==0.90166418540799339110       (0.90166418540799 * 1.00000000000000)

The maximum error, after near-doubling, is still a little better than that for an unmodified Taylor polynomial of degree 7, and there's practically no endpoint error.
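
Put into a C function for single precision use (a sketch with a made-up name), with the multipliers folded into the constants, the endpoint-fitted values above become:

/* Sketch: endpoint-fitted degree 5 sine approximation for x in
 * [-PI/2, PI/2], using the A, B, C values from the last pass above. */
static inline float sin_t5_ep(float x) {
	const float c1 = 0.99962909219062f;          /* A */
	const float c3 = 0.99397115132057f / 6.f;    /* B/3! */
	const float c5 = 0.90166418540799f / 120.f;  /* C/5! */
	float x2 = x * x;
	return x*(c1 - x2*(c3 - x2*c5));
}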

Degree 5 approximations seem to be the lowest practically usable for audio purposes, though the above approximation adds one notable distortion, a 5th harmonic at -84 dB or very slightly below that (with the fundamental frequency at 0 dB).

If you wonder about degree 3 polynomial approximations, which have been ignored above, I decided not to cover them at length because the result was so poor. For degree 3, the removal of endpoint error further backfires in a sense, in that endpoint undershoot is replaced by overshoot before the endpoint.

Part 4: A long doubling down on endpoint error removal

Returning to higher-degree approximations, the above approach to endpoint error removal also made sense to try with the degree 7 approximation. The maximum error for it remains quite low after near-doubling, so its endpoint error removal could generally be worthwhile for audio oscillator purposes (so as to remove peak amplitude error). A very slight overshoot does appear just before the endpoint, very close to it and also minuscule compared to the removed endpoint error (unlike for a degree 3 approximation).

sin-taylor7-d4ep-errcurve.png
Difference between the approximation based on Taylor degree 7, adjusted to have practically no endpoint error (4 scaling factors applied), and sin(x) for x in range +/- π/2.

To provide a more definite set of numbers once and for all, this time a higher-precision version of my program was also used for this approximation. (It uses the long double C type and a lower difference threshold for search comparisons, meaning more narrowing down of the last digits.) The below is as good as it gets using this approach.

Max.err. 1.56913759751e-04      End.err. -1.56913759751e-04
A==1.00000000000000000000       (1.00000000000000 * 1.00000000000000)
B==1.00000000000000000000       (1.00000000000000 * 1.00000000000000)
C==1.00000000000000000000       (1.00000000000000 * 1.00000000000000)
D==1.00000000000000000000       (1.00000000000000 * 1.00000000000000)
Max.err. 1.03953131998e-05      End.err. -1.03953131998e-05
A==1.00000000000000000000       (1.00000000000000 * 1.00000000000000)
B==1.00000000000000000000       (1.00000000000000 * 1.00000000000000)
C==1.00000000000000000000       (1.00000000000000 * 1.00000000000000)
D==0.96870437013497291792       (1.00000000000000 * 0.96870437013497)
(Applying selected coefficients.)
Max.err. 2.18992604846e-06      End.err. -2.18992604846e-06
A==1.00000000000000000000       (1.00000000000000 * 1.00000000000000)
B==1.00000000000000000000       (1.00000000000000 * 1.00000000000000)
C==0.99878745125124400705       (1.00000000000000 * 0.99878745125124)
D==0.94631178351879621346       (0.96870437013497 * 0.97688398307416)
(Applying selected coefficients.)
Max.err. 8.78684477233e-07      End.err. -8.78684477233e-07
A==1.00000000000000000000       (1.00000000000000 * 1.00000000000000)
B==0.99994083515990672684       (1.00000000000000 * 0.99994083515991)
C==0.99748390534086520983       (0.99878745125124 * 0.99869487155776)
D==0.93200606017454099619       (0.94631178351880 * 0.98488265327199)
(Applying selected coefficients.)
Max.err. 5.89131203356e-07      End.err. -5.89131203355e-07
A==0.99999661599039963469       (1.00000000000000 * 0.99999661599040)
B==0.99988967477373583847       (0.99994083515991 * 0.99994883658675)
C==0.99675900243116864394       (0.99748390534087 * 0.99927326856522)
D==0.92552840504249209451       (0.93200606017454 * 0.99304977144587)
(Creating variation for fitted endpoints.)
Max.err. 1.17564328092e-06      End.err. 0.00000000000e+00
A==0.99999720511995643916       (0.99999720511996 * 1.00000000000000)
B==0.99989026384029019816       (0.99989026384029 * 1.00000000000000)
C==0.99675958965334515255       (0.99675958965335 * 1.00000000000000)
D==0.92552895030047632540       (0.92552895030048 * 1.00000000000000)

As before, the A to D values apply to this modified Taylor polynomial expression: f(x) = A*x - B*x^3/3! + C*x^5/5! - D*x^7/7!.
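
Folded into constants for a C function (a sketch with a made-up name), the final endpoint-fitted values give:

/* Sketch: endpoint-fitted degree 7 sine approximation for x in
 * [-PI/2, PI/2], using the A to D values from the last pass above. */
static inline double sin_t7_ep(double x) {
	const double c1 = 0.99999720511995643916;        /* A */
	const double c3 = 0.99989026384029019816 / 6;    /* B/3! */
	const double c5 = 0.99675958965334515255 / 120;  /* C/5! */
	const double c7 = 0.92552895030047632540 / 5040; /* D/7! */
	double x2 = x * x;
	return x*(c1 - x2*(c3 - x2*(c5 - x2*c7)));
}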