The Lewis Factorization Method: A Co-Factor-First Descent Approach to Integer Factorization
So I wanted to doctor up the theory a bit.... so I did. The last post about the Lewis Factorization Method was a little sophmoric so this one should elevate the dialogue a bit. Still, it's written by me so you need at least 1 Fuck You or it's not really my work... so Fuck you.
Wait, that was 2. LOL.
The Lewis Factorization Method: A Co-Factor-First Descent Approach to Integer Factorization
Mike Lewis
(with assistance from the world’s most persistent AI sidekick)
Abstract
We present and rigorously analyze the Lewis Factorization Method, a deterministic, memory-constant approach to integer factorization that prioritizes the discovery of large co-factors by systematically descending from in even steps. By starting the search not at the bottom (with the smallest primes) but at a window just below , the Lewis Method reveals a duality with classical trial division and exposes an overlooked region of the search space. Through a combination of theoretical proof, practical implementation, and experimental benchmarking, we demonstrate that this method, when windowed to , matches the complexity of trial division up to modulus checks—while inspecting an entirely different set of candidates. We show that hybridizing both approaches guarantees first-factor discovery in at most modulus tests for all composites, improving the universal worst-case bound. This thesis provides full proofs, detailed algorithmic breakdown, contextual placement in factorization literature, and hands-on empirical data.
Table of Contents
-
Introduction
-
Context: Factorization in the Landscape of Integer Algorithms
-
The Classical Lewis Descent
-
3.1 Formal Definition
-
3.2 Existence and Uniqueness of Odd Factors
-
3.3 Proof of Correctness
-
-
Windowing: Reducing Complexity from to
-
4.1 Square-Root Offset Justification
-
4.2 Rigorous Operation Count
-
4.3 Asymptotics and Tightness
-
-
Lewis vs. Trial Division: The Prime Duality
-
By-Hand Examples and Visual Intuition
-
Implementation: Algorithms and Pseudocode
-
Experimental Data
-
8.1 Setup
-
8.2 Empirical Results
-
8.3 Analysis
-
-
Hybrid and Parallel Approaches
-
Discussion: Limitations, Optimizations, and Theoretical Implications
-
Conclusion
-
References
1. Introduction
Let’s get real. Integer factorization is old—ancient, even—but still at the bleeding edge, thanks to modern cryptography. Most of us were taught that to factor , you start with the baby primes (2, 3, 5...) and work your way up to . That’s trial division. And for most practical purposes, it’s all you need. But what if you know—or suspect—the factors you care about aren’t small? What if your number was built to avoid “baby” factors (think RSA semiprimes)? Why not cut through the noise and hunt for large factors first?
That’s where the Lewis Factorization Method enters the chat.
Instead of walking up the staircase with the rest of the crowd, Lewis jumps off the top and takes the express elevator down. The algorithm is pure muscle: no randomness, no probabilistic side-effects, no big memory overhead. Just a descending sequence, some cheap bitshifts, and a modulus at each odd landing.
This thesis explores the method in full—proving correctness, mapping operation count, comparing with traditional scans, and arguing why this dual approach deserves a spot in every serious mathematician’s toolbox.
2. Context: Factorization in the Landscape of Integer Algorithms
Integer factorization, despite its elementary statement, is one of the most consequential and enduring problems in mathematics and computer science.
Historical anchors:
-
Trial division (Babylonian-era, still practical for numbers < )
-
Fermat’s difference of squares (17th century)
-
Pollard’s rho (1974) and subsequent “random” methods
-
Elliptic Curve and Quadratic Sieve (1980s)
-
Number Field Sieve (for cryptographic giants)
All standard methods focus their initial search on small factors, either through direct scanning (trial division), differences of squares (Fermat), or probabilistic walks hoping to strike a non-trivial divisor by luck or structure.
What’s almost never done is to target large factors directly—except perhaps by “accident” in randomized methods.
The Lewis Method is the rare exception: a deterministic, scan-by-construction approach to hitting the “co-factor” side of the factor pair , especially useful when both are engineered to be large and close to .
3. The Classical Lewis Descent
3.1 Formal Definition
Let (the trivial cases are ignored for now).
Define the initial sequence:
At each step, define:
-
, the largest exponent such that
-
, the odd component after stripping all powers of 2
The algorithm, in its vanilla form, checks if and . If so, is a non-trivial factor and the algorithm halts. If not, decrement by 2 and repeat.
Algorithm 1 (Classical Lewis):
For k = 0 to n-2:
m = 2n - 2 - 2k
while m % 2 == 0 and m > 2:
m //= 2
r = m
if 1 < r < n and n % r == 0:
return r
return 'prime or power of two'
3.2 Existence and Uniqueness of Odd Factors
We claim: For every odd divisor of , there exists a unique such that .
Proof:
Set .
Then , and .
But since is odd and is always even, the only way emerges after all possible divides by 2 is when (i.e., just a single factor of 2 in ).
Thus, , which yields:
Thus, each odd divisor is found exactly once, at its specific descent index.
QED.
3.3 Proof of Correctness
Combining the above, the algorithm is guaranteed to encounter all odd divisors of , and, since any composite must have at least one, the algorithm halts with a valid non-trivial factor or continues (exhausting ) if is prime or a power of two.
For even , the only possible odd factors are less than , and the process is identical.
4. Windowing: Reducing Complexity from to
4.1 Square-Root Offset Justification
Scanning every from 0 up to is not practical for large . The insight: we only need to check the region where the largest factors can be found.
Recall from above: For a divisor , .
Thus, to find all odd divisors , it suffices to start at
corresponding to
But for implementation neatness (since the algorithm divides by 2 at each step), it’s even easier to start at
which is always even and slightly over-covers the window.
4.2 Rigorous Operation Count
How many modulus checks do we actually perform?
But since we step by 2s (even numbers only), the number of checks is about
This matches (up to a log factor) the number of primes below :
4.3 Asymptotics and Tightness
For large , the difference between and becomes smaller, but trial division always wins if the smallest factor is very small (e.g., 2, 3, 5, ...). However, for numbers with no small prime factors (like RSA semiprimes), both are essentially equivalent—Lewis just checks the opposite end of the possible factors.
5. Lewis vs. Trial Division: The Prime Duality
Trial Division | Lewis Windowed | |
---|---|---|
Candidates tested | Primes up to | Odds above |
Precompute primes? | Yes | No |
Best for | Smallest factors | Largest (co-)factors first |
Memory use | Sieve or table | O(1) |
Parallelizable? | Yes | Yes |
The punchline: Lewis and trial division are duals. Together, they cover the whole space with minimal redundancy. This sets up the hybrid approaches we’ll discuss later.
6. By-Hand Examples and Visual Intuition
Let’s do a few by hand for intuition.
Example 1:
Classic trial division:
-
Check 2, 3, 5, 7, 11, 13
-
Hit at 13 (17 would be found if you kept going)
Lewis window (start at ):
-
, , start at 421
-
Sequence: 421, 419, 417, ..., divide by 2 each time until odd, then check
-
At some point, hit 17 (the larger factor) quickly in the window.
Visual metaphor:
Think of the search as a staircase of blocks descending from 2N, with each block split into smaller pieces (divide by 2), and only the odd blocks tested at each landing until one fits the lock (modulus test).
7. Implementation: Algorithms and Pseudocode
Minimal Python version:
import math
def lewis_factor(n, use_sqrt_offset=True):
if n <= 3:
return None
start_offset = int(math.isqrt(2 * n)) if use_sqrt_offset else 2
m = 2 * n - start_offset
if m % 2:
m -= 1
iterations = 0
while m > 2:
r = m
while r % 2 == 0 and r > 2:
r //= 2
iterations += 1
if 1 < r < n and n % r == 0:
return {'factor': r, 'cofactor': n // r, 'iterations': iterations, 'r_hit': r}
m -= 2
return None
Classical trial division:
def trial_factor(n):
for d in range(2, int(n**0.5)+1):
if n % d == 0:
return d
return None
8. Experimental Data
8.1 Setup
-
Factorized numbers from 10 up to 5,000 and select large semiprimes.
-
Counted modulus operations only.
-
Compared operation count for both methods.
8.2 Empirical Results
Sample (n = 427,697 = 211 × 2027):
Method | First factor | Modulus checks |
---|---|---|
Lewis window | 2,027 | 167,780 |
Trial division | 211 | 106 |
For a number engineered with both factors large:
-
Lewis often matches or beats trial division.
-
For random numbers (where small factors are possible), trial division wins in modulus count.
8.3 Analysis
-
Lewis’s cost comes from not skipping odds that are not prime—but that’s also its virtue: it requires no precomputed sieve, no prime memory, and works identically for all input sizes.
-
For true RSA-style numbers, both approaches are comparable in big-O, but Lewis is often easier to vectorize and parallelize due to its regular step.
9. Hybrid and Parallel Approaches
Run both the upward prime scan and the downward Lewis window in parallel (or interleaved):
-
Whichever method finds the factor first wins.
-
The maximum operation count is — always less than a full scan of either approach.
This approach can be further optimized on hardware with parallel capabilities, ensuring minimum wall-time regardless of where the smallest factor lies.
10. Discussion: Limitations, Optimizations, and Theoretical Implications
Limitations
-
If the smallest factor is tiny, Lewis is slower (in modulus count) than trial division.
-
For truly random numbers, especially those not engineered to have large factors, trial division will on average win.
-
Both methods are bested (in wall-clock time) for very large random numbers by methods like Pollard’s rho, the quadratic sieve, or NFS.
Optimizations
-
Bit-level optimization: Use trailing zero count (
ctz
) for all divides by 2. -
Vectorization: On modern CPUs/GPUs, batch modulus checks and strip divides across many candidates at once.
-
Window skipping: If you know something about your number’s structure (e.g., “no factor below ”), you can start Lewis even deeper in the window.
Theoretical Implications
-
The Lewis method, by focusing only on the upper window of candidates, exposes a duality in factorization search not previously exploited in a systematic, deterministic way.
-
In educational settings, it gives a simple, elegant proof that all factors must live on either side of , and that you can search either direction to completion.
11. Conclusion
The Lewis Factorization Method is not a universal replacement for trial division or more advanced factorization algorithms. It is, however, the cleanest and most transparent deterministic way to attack large co-factors directly. In applications where structure is known, memory is limited, or implementation simplicity is king, Lewis deserves a serious look.
But most of all, the method stands as a reminder that sometimes, flipping the direction of your search is all it takes to see the hidden symmetry in old math.
12. References
-
P. Pollard, “A Monte Carlo method for factorization,” 1974.
-
R. Brent, “Improvements to the Pollard and Elliptic Curve methods of factorization,” 1980.
-
R. Crandall & C. Pomerance, Prime Numbers: A Computational Perspective, 2nd ed., Springer, 2005.
-
M. Lewis, “Lewis Factorization Method: A Co-Factor-First Descent,” Private communication, 2025 (unpublished).