Continued fraction factorization method

The factorization method we'll discuss in this post is called the continued fraction factorization method (CFRAC). It's quite an old method, but still pretty interesting, sharing many concepts and ideas with modern factorization methods.

Factoring a composite integer into its prime factors has many applications in number theory, especially in computing important arithmetic functions for large non-trivial inputs.

One such function is Euler's totient function `φ(n)`, which is practically impossible to compute for a large composite `n` when the prime factorization of `n` is not known.

This fact led to the creation of several public-key cryptosystems (such as the RSA algorithm), which secure communication online and rely on the assumption that it's hard to factorize a large integer `n` that is the product of two large random prime numbers.

The CFRAC method is a general-purpose integer factorization method, first described by D. H. Lehmer and R. E. Powers in 1931, and published as a computer algorithm by Michael A. Morrison and John Brillhart in 1975 as the first general-purpose factorization algorithm with (conjectured) subexponential running time.

# Algorithm

The CFRAC method tries to create a congruence of squares:

`x^2 ≡ y^2 mod n`

with the condition `x ≢ ±y mod n`.

Once such a congruence of squares is created, `n` can be factorized as:

`n = gcd(x + y, n) * gcd(x - y, n)`
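For illustration, this step can be sketched in a few lines of Python (not the article's own code; the values used below are taken from the worked example later in this post, where `n = 3053`, `x = 12155`, `y = 14`):

```python
from math import gcd

def factor_from_congruence(x, y, n):
    # Given x^2 ≡ y^2 (mod n) with x ≢ ±y (mod n),
    # gcd(x + y, n) and gcd(x - y, n) give nontrivial factors of n.
    return gcd(x + y, n), gcd(x - y, n)

p, q = factor_from_congruence(12155, 14, 3053)
print(p, q)      # 43 71
print(p * q)     # 3053
```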

This idea is also used in other modern factorization methods, such as the Quadratic Sieve, and is based on the older idea behind Fermat's factorization method, representing `n` as a difference of two squares:

`n = x^2 - y^2`

which leads to the factorization:

`n = (x - y) * (x + y)`
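Fermat's idea is easy to sketch in Python (a naive trial version for illustration, not an optimized implementation):

```python
from math import isqrt

def fermat_factor(n):
    # Search upward from ceil(sqrt(n)) for an x such that
    # x^2 - n is a perfect square y^2; then n = (x - y)(x + y).
    x = isqrt(n)
    if x * x < n:
        x += 1
    while True:
        y2 = x * x - n
        y = isqrt(y2)
        if y * y == y2:
            return x - y, x + y
        x += 1

print(fermat_factor(3053))  # (43, 71), found at x = 57, y = 14
```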

In the CFRAC method, continued fraction convergents `A_k/B_k ≈ sqrt(n)` are used, generating congruences `A_k^2 ≡ r_k mod n`, where `|r_k|` is always relatively small.

More precisely, `|r_k| <= 2 sqrt(n)`; these are the residues we will check for smoothness.

The continued fraction convergents of `sqrt(n)` can be computed with the following simple and efficient algorithm:

Initialization:

`a_0 = 2 floor(sqrt(n))`
`y_0 = floor(sqrt(n))`
`z_0 = 1`

Iteration process, with `k >= 1`:

`y_k = a_(k-1) * z_(k-1) - y_(k-1)`
`z_k = floor((n-y_k^2)/z_(k-1))`
`a_k = floor((floor(sqrt(n))+y_k)/z_k)`

Each `a_k`, with `k >= 1`, is a term in the continued fraction expansion for `sqrt(n)`:

`sqrt(n) = [floor(sqrt(n)); a_1, a_2, a_3, ...]`

which represents:

`sqrt(n) = floor(sqrt(n)) + 1/(a_1 + 1/(a_2 + 1/(a_3 + 1/...)))`
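The term recurrence above can be sketched in Python as follows (an illustrative translation, not the article's own code; `n = 3053` is the example used later in this post):

```python
from math import isqrt

def cf_sqrt_terms(n, count):
    # Terms a_1, a_2, ... of the continued fraction of sqrt(n),
    # using the y/z/a recurrence above (n must not be a perfect square).
    x = isqrt(n)
    y, z, a = x, 1, 2 * x
    terms = []
    for _ in range(count):
        y = a * z - y
        z = (n - y * y) // z
        a = (x + y) // z
        terms.append(a)
    return terms

print(cf_sqrt_terms(3053, 5))  # [3, 1, 15, 27, 1]
```

so `sqrt(3053) = [55; 3, 1, 15, 27, 1, ...]`, matching the convergents `55/1, 166/3, 221/4, ...` tabulated later in this post.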

For computing the `A_k/B_k` convergents, we have:

`E_0 = 1`
`E_1 = 0`

`F_0 = 0`
`F_1 = 1`

where for `k>=1`:

`(E_(k-1), E_k) = (E_k, a_k E_k + E_(k-1))`
`(F_(k-1), F_k) = (F_k, a_k F_k + F_(k-1))`

which gives us `A_k` and `B_k` as:

`A_(k+1) = E_k + floor(sqrt(n)) F_k`
`B_(k+1) = F_k`

The presented algorithm for computing the square root convergents of `n` can be efficiently implemented as:
func sqrt_convergents (n) {

    var x = n.isqrt
    var y = x
    var z = 1
    var a = 2*x

    var (e1, e2) = (1, 0)
    var (f1, f2) = (0, 1)

    loop {

        y = (a*z - y)
        z = floor((n - y*y) / z)
        a = floor((x + y)   / z)

        var A_k = (e2 + x*f2)
        var B_k = (f2)

        say (A_k / B_k)

        (f1, f2) = (f2, a*f2 + f1)
        (e1, e2) = (e2, a*e2 + e1)

        break if (z == 1)
    }
}
[Try it online]

(This algorithm can also be used in solving the Pell equation `x^2 - n y^2 = 1`, which has solutions of the form `A_k^2 - n B_k^2 = 1`, for some `k >= 1`.)
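As a hedged illustration of the Pell connection, the loop below (a Python translation of the recurrences above, not the article's own code) walks the convergents until a solution appears:

```python
from math import isqrt

def pell_solution(n):
    # Walk the convergents A_k / B_k of sqrt(n) until A^2 - n B^2 = 1.
    # n must be a positive non-square integer.
    x = isqrt(n)
    y, z, a = x, 1, 2 * x
    e1, e2 = 1, 0
    f1, f2 = 0, 1
    while True:
        y = a * z - y
        z = (n - y * y) // z
        a = (x + y) // z
        A = e2 + x * f2
        B = f2
        if A * A - n * B * B == 1:
            return A, B
        e1, e2 = e2, a * e2 + e1
        f1, f2 = f2, a * f2 + f1

print(pell_solution(2))     # (3, 2), since 3^2 - 2*2^2 = 1
print(pell_solution(3053))
```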

In the CFRAC method, one important practical detail is that `A_k` and `B_k` must always be kept small, by reducing `E_k` and `F_k` modulo `n` at each iteration. Otherwise, `A_k` and `B_k` grow quite large and negatively affect the performance of the method.

For each congruence `A_k^2 ≡ r_k mod n`, we attempt to factorize `r_k` over the factor base, which consists of a list of primes below a predefined bound `B`.

An optimal value for `B` is given by the following expression:

`B = floor(exp(sqrt(ln n ln ln n) / 2))`

The optimal number of primes in the factor base, in turn, should be roughly: `floor(exp(sqrt(ln n ln ln n))^(sqrt(2)/4))`.
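The bound is straightforward to compute; for example, in Python (an illustrative sketch, not the article's own code):

```python
from math import exp, log, floor

def smoothness_bound(n):
    # B = floor(exp(sqrt(ln n * ln ln n) / 2))
    return floor(exp((log(n) * log(log(n))) ** 0.5 / 2))

print(smoothness_bound(3053))        # 7 (the value used in the example below)
print(smoothness_bound(2**128 + 1))  # a much larger bound
```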

Once enough `B`-smooth values of `r_k` are found, Gaussian elimination (or another similar method) can be used to find a subset `S` of the congruences whose quadratic residues `r_k` multiply to a square:

`x^2 = prod_{k in S} A_k^2 mod n`
`y^2 = prod_{k in S} r_k`

# Example

This process can be better illustrated with an example.

Let `n = 3053`.

First we compute `B` as:

`B = floor(exp(sqrt(ln n ln ln n) / 2))`

which in our case is:

`B = 7`

The next step is creating the factor base by iterating over the primes `p <= B` and collecting those primes for which the Kronecker symbol `(n/p) = 1`.

It's generally a good idea to also allow `p=2` in the factor base. In our case, factor base = `{2, 7}`.
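This selection can be sketched in Python (an illustrative version, not the article's own code; Euler's criterion is used in place of a Kronecker symbol routine, which is equivalent for odd primes `p` not dividing `n`):

```python
from math import isqrt

def factor_base(n, B):
    # Primes p <= B for which n is a quadratic residue mod p,
    # tested via Euler's criterion; p = 2 is always allowed.
    fb = [2]
    for p in range(3, B + 1):
        if all(p % q for q in range(2, isqrt(p) + 1)):   # p is prime
            if pow(n, (p - 1) // 2, p) == 1:             # (n/p) = 1
                fb.append(p)
    return fb

print(factor_base(3053, 7))  # [2, 7]
```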

The next step is taking the product of all the primes in the factor base; this product will be used to recognize the `B`-smooth quadratic residues that factor over our factor base, using the following efficient algorithm:

  1.  Let `n` be the number to be tested.
  2.  Let `k` be the product of the primes in the factor base.
  3.  Compute the greatest common divisor: `g = gcd(n, k)`.
  4.  If `g` is greater than `1`, then `n = r g^e`, for some `e >= 1`.
    • If `r = 1`, then `n` is smooth over the factor base.
    • Otherwise, set `n = r` and go to step `3`.
  5.  If this step is reached, then `n` is not smooth.

In Sidef, this algorithm can be implemented as:

func is_smooth_over_prod(n, k) {     # k is the product of the factor-base primes

    var g = gcd(n, k)

    while (g > 1) {                  # while gcd(n, k) > 1
        n /= g while (g `divides` n) # remove any divisibility by g
        return true if (n == 1)      # smooth if n == 1
        g = gcd(n, k)                # take the gcd again and repeat
    }

    return false
}
[Try it online]

For `n = 3053`, the following table shows the square root convergents `A_k / B_k` along with the quadratic residues `A_k^2 ≡ r_k mod n`, where `|A_k^2 mod n| <= 2 sqrt(n)`:

+----------------+-----------+----------------+
|   A_k / B_k    | A_k mod n | (A_k)^2 mod n  |
+----------------+-----------+----------------+
|     55 / 1     |     55    | -28 = -2^2 * 7 |
|    166 / 3     |    166    |  79 = 79       |
|    221 / 4     |    221    |  -7 = -7       |
|   3481 / 63    |    428    |   4 = 2^2      |
|  94208 / 1705  |   2618    | -61 = -61      |
|  97689 / 1768  |   3046    |  49 = 7^2      |
+----------------+-----------+----------------+

Each quadratic residue `A_k^2 mod n` that factors over the factor base, along with `A_k mod n`, is stored into an array as a pair `[A_k mod n, |A_k^2 mod n|]`:

Q = (
    [  55, 28]
    [ 221,  7]
    [ 428,  4]
    [3046, 49]
)

Since each quadratic residue that passes the smoothness test is `B`-smooth, we can easily factorize it using trial division over the primes `p <= B`. From these factorizations we create a matrix of exponent vectors, where the number of columns is the number of primes in the factor base, while the number of rows should be at least one more than the number of columns, to ensure that a linear dependency exists.

A = [
    [2, 1],       # `28 = 2^2 * 7^1`
    [0, 1],       #  `7 = 2^0 * 7^1`
    [2, 0],       #  `4 = 2^2 * 7^0`
    [0, 2],       # `49 = 2^0 * 7^2`
]
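Building these exponent vectors amounts to plain trial division; a Python sketch (illustrative, not the article's own code):

```python
def exponent_vector(m, factor_base):
    # Trial-divide the B-smooth value m over the factor base and
    # return the exponent of each factor-base prime.
    vec = []
    for p in factor_base:
        e = 0
        while m % p == 0:
            m //= p
            e += 1
        vec.append(e)
    assert m == 1, "value was not smooth over the factor base"
    return vec

print([exponent_vector(r, [2, 7]) for r in [28, 7, 4, 49]])
# [[2, 1], [0, 1], [2, 0], [0, 2]]
```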

The key idea here is taking advantage of the following elementary identity:

`p^a * p^b = p^(a+b)`

and the fact that a square has only even exponents in its prime factorization (e.g.: `x^2`, `x^4`, `x^6`, ...), which allows us to work modulo `2` and efficiently find a linear dependency using a linear algebra algorithm, such as Gaussian elimination over a `GF(2)` matrix, where each exponent vector is a base-2 encoded integer.

This allows us to find a subset of exponent vectors such that, when all these vectors are added together entrywise, the resulting vector contains only even exponents. (In the general method, the sign of `r_k` is tracked as an extra `-1` column in the matrix; in our example the two negative residues `-28` and `-7` happen to pair up.)
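A minimal `GF(2)` elimination over bitmask-encoded rows can be sketched in Python as follows (an illustrative version, not the article's own code):

```python
def find_square_subset(exponent_rows):
    # Pack each row mod 2 into a bitmask; `combo` records which
    # original rows were XOR-ed together. When a row reduces to 0,
    # the product of the residues in `combo` has all-even exponents.
    basis = {}  # pivot bit -> (mask, combo)
    for i, row in enumerate(exponent_rows):
        mask = 0
        for j, e in enumerate(row):
            if e % 2:
                mask |= 1 << j
        combo = 1 << i
        while mask:
            pivot = mask & -mask
            if pivot not in basis:
                basis[pivot] = (mask, combo)
                break
            bmask, bcombo = basis[pivot]
            mask ^= bmask
            combo ^= bcombo
        else:
            # mask == 0: linear dependency found
            return [k for k in range(len(exponent_rows)) if combo >> k & 1]
    return None

print(find_square_subset([[2, 1], [0, 1], [2, 0], [0, 2]]))  # [0, 1]
```

For the matrix `A` above, it returns rows `0` and `1`, i.e. the residues `28` and `7`, matching the dependency found below.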

In our case, the first linear dependency found is:

`[2, 1] + [0, 1] = [2, 2]`

meaning that `28 * 7` is a square, namely `2^2 * 7^2`:

`y^2 = 28 * 7 = 14^2`
`x = 55 * 221 = 12155`

This means that `12155^2 ≡ 14^2 mod n`, allowing us to factorize `n` as:

`n = gcd(12155 + 14, n) * gcd(12155 - 14, n)`

and indeed:

`gcd(12155 + 14, n) = 43`
`gcd(12155 - 14, n) = 71`

concluding the factorization:

`n = 43 * 71`

# Remarks

Notice that for some integers the continued fraction expansion of `sqrt(n)` has a very short period, which does not allow the matrix to be populated with enough exponent vectors of quadratic residues; in this case, `n` is multiplied by a small integer `k > 1` and the expansion of `sqrt(kn)` is used instead.

In choosing a small `k > 1`, we may also want to test the Kronecker symbol `((kn) / p) = 1` for some small primes `p < B`, selecting the `k` for which `kn` is a quadratic residue modulo `p` for as many small primes `p` as possible.

In computing the convergents of `sqrt(n)`, it's also possible to use the round() function instead of the floor() function, which is a minor optimization that helps in finding `B`-smooth quadratic residues a bit faster:

`a_k = round((floor(sqrt(n))+y_k)/z_k)`

# Factorization of `2^128 + 1`

Back in the day, between 1970 and 1985, CFRAC was the main factorization method in use, and it was quite practical for numbers up to about `40` digits in size.

It was also the first method to find the prime factorization of the seventh Fermat number, `2^128 + 1`.

The factorization of `2^128 + 1` was found in 1970 and published in 1975 by Michael A. Morrison and John Brillhart in the paper that presented the CFRAC algorithm. The collection of `B`-smooth quadratic residues took about 90 minutes of machine time, spread over a period of seven weeks. Finally, when everything was put together, the CFRAC method successfully factorized the seventh Fermat number.

About 5 years later, in 1975, the improved algorithm was fast enough to factorize `2^128 + 1` in about 50 minutes, which was quite fast for 1975.

In 2018, after 40+ years, the CFRAC method (implemented in Perl, using the Math::GMPz interface to the GMP C library) factorizes the seventh Fermat number in just `3.5` seconds. :-)

Here's an example: using `k = 3` and `n = k*(2^128+1)`, with the `B`-smooth limit set to `B = 23153`, the algorithm finds `73` congruences of squares, each of which leads to the factorization of `2^128+1`. The first `3` congruences are:

`265234632660196684798995612022692300752^2 ≡ 330986462341823518901721714777083010082^2 mod n`

`635560106895891184256412687131050792261^2 ≡ 711438226987726490667905138610253816049^2 mod n`

`121201297055603208179572658649695033707^2 ≡ 626387237364064070444197550577938730293^2 mod n`

Nowadays, however, even faster factorization methods exist, such as the Quadratic Sieve (QS) and the Number Field Sieve (NFS).

The CFRAC method still has the merit of factoring relatively small numbers decently fast, while, at the same time, being easy to understand and implement.

# Implementations of CFRAC

We provide three implementations of the CFRAC method: one in the Perl programming language, one in the Sidef programming language and one in the Julia programming language. They can be used for factoring medium-sized integers, given as command-line arguments:

# Implementations of SIQS

As a bonus, we also include an implementation of the Self-Initializing Quadratic Sieve (SIQS), which is generally faster than CFRAC, but the code is a bit more complex:

# Implementations of ECM

As a final bonus, we include a very simple implementation of Lenstra's elliptic-curve factorization method (ECM), which factorizes `2^128 + 1` in just `1` second, using the parameters `a=100` and `p=8000`: