
Who can tell me about the algorithms for computing pi?

Why calculate pi? The most primitive use of computers is to carry out complex calculations that humans cannot complete, and calculating pi is one such calculation. Although calculating pi itself has little practical significance, as a programming challenge for a computer enthusiast it is still very interesting. Calculating pi looks simple, but it also involves some useful mathematical knowledge.

The first kind of algorithm: arctan series expansion.

pi/4 = 4 arctan(1/5) - arctan(1/239)    (1)

arctan(x) = x - x^3/3 + x^5/5 - x^7/7 + ...    (2)

To calculate each term of (2), an ultra-high-precision real number must be divided by a small integer (5^2, 239^2, or 2k+1). Each such division over n digits takes O(n) time, and the number of terms required is also proportional to n, so the total time complexity of the algorithm is O(n^2). The advantage of this algorithm is that it is simple and needs only integer operations. In the program I wrote to calculate pi this way, I adopted some measures to improve the speed: the ultra-high-precision real numbers are accessed as arrays whose elements are 64-bit integers, each element storing 12 decimal digits; and the number of zeros at the head and tail of x^k (x = 1/5, 1/239) is estimated, so that only the non-zero part is calculated. In addition, there are many formulas similar to (1), but they are not commonly used. For example:

pi/4 = arctan(1/2) + arctan(1/3)
pi/4 = 8 arctan(1/10) - 4 arctan(1/515) - arctan(1/239)

The second kind of algorithm: pi-related series.

1/pi = (sqrt(8)/9801) sum(k=0..inf) [(4k)! (1103 + 26390k)] / [(k!)^4 * 396^(4k)]    (Ramanujan)

1/pi = (sqrt(10005)/4270934400) sum(k=0..inf) [(6k)! (13591409 + 545140134k)] / [(k!)^3 (3k)! (-640320)^(3k)]    (Chudnovsky)

The above two series (there are other similar series, but they are not commonly used) are much more complicated than arctan's Taylor series. Although they still converge linearly and the total time complexity is still O(n^2), their convergence is quite fast.
Each term of the Ramanujan series adds about 8 significant digits, and each term of the Chudnovsky series adds about 14. In this kind of algorithm, besides operations between ultra-high-precision real numbers (in array form) and small integers, there are also square-root and reciprocal operations on ultra-high-precision real numbers, which require the FFT (Fast Fourier Transform) described below.

The third kind of algorithm: arithmetic-geometric mean and iteration.

The arithmetic-geometric mean M(a, b) is defined as follows: a_0 = a, b_0 = b,

a_k = (a_{k-1} + b_{k-1}) / 2,  b_k = sqrt(a_{k-1} * b_{k-1})

M(a, b) = lim_{k->inf} a_k = lim_{k->inf} b_k. From a series of results in the theory of elliptic integrals (sorry, I do not understand the derivation), the following formula can then be obtained: with a_0 = 1, b_0 = 1/sqrt(2),

1/pi = {1 - sum(k=0..inf) [2^k (a_k^2 - b_k^2)]} / [2 M(a_0, b_0)^2]    (AGM)

According to this formula, a suitable iterative algorithm can be worked out. In the process of iteration, the number of significant digits grows exponentially with the number of iterations; that is, each iteration doubles the number of significant digits. The multiplication, division and square root of ultra-high-precision real numbers in this algorithm need the FFT, which is introduced below. Taking the time complexity of the FFT into account, the time complexity of the whole algorithm is about O(n log(n)^2). Besides (AGM), there are other iterative sequences with the same time complexity. For example, the following sequence converges to 1/pi with order 4: y_0 = sqrt(2) - 1, a_0 = 6 - 4 sqrt(2),

y_k = [1 - sqrt(sqrt(1 - y_{k-1}^4))] / [1 + sqrt(sqrt(1 - y_{k-1}^4))],  a_k = (1 + y_k)^4 a_{k-1} - 2^(2k+1) y_k (1 + y_k + y_k^2)

1/pi = lim_{k->inf} a_k    (Borwein)

FFT

As mentioned above, the second and third kinds of algorithms inevitably involve multiplication, division, square root and other operations on ultra-high-precision real numbers (multi-digit numbers accessed in array form). If multi-digit multiplication is calculated by the conventional method, the time complexity reaches O(n^2); using the FFT can greatly reduce the amount of calculation. Given complex arrays a[k] and b[k] (k = 0..n-1), the forward and inverse discrete Fourier transforms (DFT) are defined as follows (i = sqrt(-1)):

b = DFT_forward(a):  b[k] = sum(j=0..n-1) a[j] e^(-i * 2 pi * j * k / n)    (3)
a = DFT_inverse(b):  a[j] = (1/n) sum(k=0..n-1) b[k] e^(i * 2 pi * j * k / n)    (4)

The factor (1/n) in (4) can be attached to either formula. When all prime factors of n are small integers, and especially when n is an integer power of 2, redundant calculation can be avoided with the careful cooperation of an appropriate algorithm, and the time complexity of the discrete Fourier transforms (3) and (4) drops to O(n log(n)); this is called the Fast Fourier Transform (FFT). Please refer to related books for details. I wrote an FFT program (fft.cpp) for reference; in addition, there are mature FFT function libraries, such as FFTW, which can be used directly.

To calculate the product of two multi-digit numbers of n1 and n2 digits using the FFT, you can do this: allocate two arrays of length N (N >= n1 + n2; N = 2^m is best), fill in the two multi-digit numbers from the low digits to the high, and pad the high positions with 0. Perform a forward Fourier transform on each of the two arrays, multiply the corresponding elements of the two transformed arrays, and then perform an inverse Fourier transform on the products to get a result array. Because the Fourier transform is carried out in the complex domain, the final product is obtained by rounding the result array to integers and propagating carries. What deserves attention is the accuracy of the Fourier transform.
We know that real numbers are represented in computers by single- or double-precision floating-point numbers, which carry some error. When calculating multi-digit multiplication, n is often a very large number, and each element of the array is summed over many terms during the Fourier transform. How do we ensure that the rounding error does not exceed the allowable range because of this summation? In my opinion, double-precision real numbers must be used; owing to the statistical behavior of rounding, the error accumulated during the summation is generally not large and will usually not affect the correctness of the calculation. If it is necessary to guarantee correctness, I can think of two checking methods. The first is a modular check: for example, if the residues of the multiplier and the multiplicand modulo 17 are 8 and 6 respectively, then the residue of the product modulo 17 should be 14 (8 * 6 = 48, and 48 mod 17 = 14). The second is to check how far the floating-point values in the result array deviate from the nearest integers. If the deviation is only, say, 10^-3, we can consider a multiplication of this scale very safe; if the deviation reaches 0.5, the operation has certainly gone wrong; if the deviation reaches the order of 0.1, it is also dangerous, since another multiplication of similar size might exceed the precision. The reciprocal and square root of multi-digit numbers can be reduced to multiplication by Newton's iterative root-finding method. For example, to calculate x = 1/a, apply Newton's iteration to f(x) = 1/x - a; starting from an approximation x_0 ≈ 1/a, the following iteration sequence is obtained:

x_k = x_{k-1} - f(x_{k-1}) / f'(x_{k-1}) = 2 x_{k-1} - a x_{k-1}^2    (5)

To calculate x = sqrt(a), first compute 1/sqrt(a) by applying Newton's iteration to f(x) = 1/x^2 - a, starting from an approximation x_0 ≈ 1/sqrt(a):

x_k = x_{k-1} - f(x_{k-1}) / f'(x_{k-1}) = (3/2) x_{k-1} - (1/2) a x_{k-1}^3    (6)

and then sqrt(a) = a * (1/sqrt(a)). There are other, more complex iterative sequences that converge with higher order, which I will not cover here. It should be noted, however, that unlike (AGM), x_0 in (5) and (6) is only an approximation of 1/a or 1/sqrt(a), so the early iterations need not carry out multiplications at the full n-digit precision; the working precision can grow with the iterations, which reduces the amount of calculation.