Building a Computer from Dirt and Rocks: A Journey from First Principles & Binary Energy Dynamics: Resolving the Information Loss Paradox in Black Holes
When people talk about computers, they sound like priests reciting a code no one else can read. Everything is hidden under layers—kernels, stacks, caches, quantum foam if you let them talk long enough. It’s impressive, sure, but it builds the wrong mythology. The machine isn’t that mysterious. It’s just the modern mask of a very old impulse: to trap thought in matter.
A computer is a physical argument. Someone once said, “if I push this and not that, I can make a pattern appear.” That’s all this has ever been. A chain of “if” and “then” baked into stone, then copper, then silicon. The words have gotten fancier, the voltages smaller, the loops tighter. But the logic hasn’t aged a day.
The strange thing about learning computing now is that it’s usually taught upside down. You start with syntax before you ever learn what a state is. You’re handed a glowing screen and told to “write a loop,” as if you already know what repetition feels like to a machine. The field buries the elegance under ceremony. You’re memorizing incantations before you even know what they summon.
It’s like walking into a cathedral to learn how to stack stones. All that beauty, but you never get to touch the foundation. You’re told about abstraction, about high-level languages and distributed architectures, but you’re never told the simplest truth: everything the machine does is just a dance of differences. One thing is not another. That’s the whole show.
The real wonder is that difference can remember itself. That’s what we call state. The fact that one moment can leave a mark that shapes the next. Once that was done—once we learned to make “change” stay still—everything else followed. You could count. You could predict. You could build a memory that didn’t die when you blinked.
Modern computing tries to overwhelm you with scale. Billions of transistors, trillions of operations per second, floating-point precision down to the breath of an electron. It’s meant to sound ungraspable. But you could strip all of it down and still have the same skeleton: a pattern that listens to itself. Cavemen did that when they drew constellations, or stacked rocks by a river to mark a season. They made maps of time. That’s all data ever is—memory stretched across a landscape.
So if computers feel difficult, it’s because we meet them at the top of their complexity, not their beginning. The screen hides the simplicity behind glass. But the same mind that could start a fire or carve a spear could build one, given enough boredom and curiosity. Because logic isn’t a product of industry—it’s a side effect of noticing.
Every new generation of students relearns this. They wrestle with syntax until it hurts, then one day they realize they aren’t fighting the machine—they’re fighting the way it’s been explained. Once you stop expecting it to be mystical, it becomes almost childlike. Flip, wait, remember. The machine doesn’t think; it echoes the smallest truths we ever found in the dirt.
And when that finally sinks in—when you see that every computer, no matter how fast or small, is just a sculpted sequence of “if this, then that”—the tension leaves. The awe stays, but the fear goes. You realize we’ve been building computers for as long as we’ve been aware of patterns. The only real invention was persistence: keeping the thought long enough to share it.
So, no, computing was never meant to be hard. We just let the explanations grow taller than the idea. Strip them away, and what’s left is ancient: the simplest distinction in the world, made permanent. A whisper of logic written in matter. Something even a caveman could have done—if he’d had a reason to remember.
Part 1: The Foundation - What is Computation?
Before we dig our first hole or place our first rock, we need to understand what we're actually building. A computer, at its most fundamental level, is a machine that manipulates information according to rules. The information doesn't care what physical form it takes—it could be voltages in silicon, beads on an abacus, or rocks in holes. What matters is that we can distinguish between states and transform those states predictably.
Let's start with the absolute simplest possible computer: a system that can store one bit of information.
Part 2: The Bit - Our First Hole
Dig a hole in the dirt. Make it about the size of your fist, just deep enough that you can clearly see whether there's a rock in it or not. This hole represents one bit of storage.
The rules are simple:
- Empty hole = 0
- Rock in hole = 1
Congratulations. You've just created one bit of memory. You can store exactly two possible states: rock or no rock, yes or no, true or false, 1 or 0.
This seems trivial, but it's profound. This single hole can answer one yes/no question. Is the gate open? Is the king alive? Did the scout find water? Place a rock for yes, leave it empty for no.
Now dig seven more holes in a row. You now have eight bits—one byte of storage. With eight holes, you can represent any number from 0 to 255.
How? Each hole represents a power of 2:
Hole position: [7] [6] [5] [4] [3] [2] [1] [0]
Value if rock: 128 64 32 16 8 4 2 1
Want to store the number 5? That's 4 + 1, so put rocks in holes 2 and 0:
[ ] [ ] [ ] [ ] [ ] [●] [ ] [●] = 5
Want to store 200? That's 128 + 64 + 8:
[●] [●] [ ] [ ] [●] [ ] [ ] [ ] = 200
You've just invented positional notation in base-2. This is how all digital computers store numbers, whether they're made of dirt or silicon.
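If you'd rather check the arithmetic than count rocks, here is a minimal Python sketch of the same idea: a row of holes is just a list of 1s and 0s, and the number falls out of the powers of two. (The function name is mine, purely for illustration.)

# A row of eight holes, most significant on the left: 1 = rock, 0 = empty.
def row_to_number(holes):
    value = 0
    for bit in holes:
        value = value * 2 + bit   # shift everything left, then add the new bit
    return value

print(row_to_number([0, 0, 0, 0, 0, 1, 0, 1]))   # 5
print(row_to_number([1, 1, 0, 0, 1, 0, 0, 0]))   # 200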
Part 3: Memory - A Grid of Holes
Now dig a grid: 8 rows by 8 columns = 64 holes. Each row is one byte. You now have 64 bits of storage—enough to store eight numbers from 0-255, or 64 yes/no answers, or eight letters of text (using ASCII encoding).
But there's a problem: how do you find a specific hole quickly? If someone says "give me the value in byte 5," you need a system.
Solution: Addressing
Number your rows 0 through 7. When someone asks for "byte 5," you go to row 5, read the rocks left-to-right, and convert to a number.
This is Random Access Memory (RAM). "Random access" means you can jump directly to any address without reading through all the previous ones. Your grid of holes is RAM.
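Here is a rough Python sketch of that addressing scheme, assuming the grid is kept as a list of rows; the helper names are invented for illustration, not any standard API.

# 8 rows of 8 holes each, all empty to start: 1 = rock, 0 = empty.
memory = [[0] * 8 for _ in range(8)]

def write_byte(address, number):
    # Lay out the rocks for this number, most significant hole first.
    memory[address] = [(number >> i) & 1 for i in range(7, -1, -1)]

def read_byte(address):
    # Jump straight to the requested row and convert the rocks to a number.
    value = 0
    for bit in memory[address]:
        value = value * 2 + bit
    return value

write_byte(5, 200)
print(read_byte(5))   # 200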
Part 4: The Problem of Permanence
There's an issue with our dirt computer: wind, rain, animals, or mischievous children can disturb the rocks. We need a way to make information more permanent and to process it without destroying it.
The Solution: Reading Without Disturbing
When you "read" a byte, you don't remove the rocks—you just look at them and write down what you see on a piece of bark or scratch it in the dirt beside you. This temporary workspace is like a register in a real CPU—fast, temporary storage for the number you're currently working with.
The Solution: Backup Storage
For permanent storage, you might carve notches in stones: one notch = 0, two notches = 1. These are your "hard drive"—slower to read/write than the dirt holes, but permanent. When you need to use the data, you copy it from carved stones into your dirt-hole RAM.
Part 5: The Arithmetic Logic Unit (ALU) - Actually Computing
So far, we've just built storage. Now let's build the part that actually computes: the ALU. We'll start with the most basic operation: addition.
Adding Two Numbers: The Procedure
Dig two new 8-hole rows labeled "INPUT A" and "INPUT B", and one row labeled "OUTPUT".
The Addition Algorithm:
We'll add bit by bit, right to left, keeping track of the carry.
Example: Add 5 + 3
Set up your input rows:
INPUT A: [ ] [ ] [ ] [ ] [ ] [●] [ ] [●] = 5
INPUT B: [ ] [ ] [ ] [ ] [ ] [ ] [●] [●] = 3
OUTPUT: [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] = ?
Start at the rightmost hole (position 0):
Position 0:
- A has rock (1)
- B has rock (1)
- 1 + 1 = 10 in binary (that's 0 with a carry of 1)
- OUTPUT position 0: leave empty (0)
- CARRY: 1 (remember this)
Position 1:
- A has no rock (0)
- B has rock (1)
- 0 + 1 + carry(1) = 10 in binary
- OUTPUT position 1: leave empty (0)
- CARRY: 1
Position 2:
- A has rock (1)
- B has no rock (0)
- 1 + 0 + carry(1) = 10 in binary
- OUTPUT position 2: leave empty (0)
- CARRY: 1
Position 3:
- A has no rock (0)
- B has no rock (0)
- 0 + 0 + carry(1) = 1
- OUTPUT position 3: place rock (1)
- CARRY: 0
Positions 4-7: All zeros, no carries, so leave empty.
Result:
OUTPUT: [ ] [ ] [ ] [ ] [●] [ ] [ ] [ ] = 8 ✓
5 + 3 = 8. It works!
The Addition Rules (The Logic)
You've just executed an algorithm. Let's formalize the rules you were following:
For each bit position (starting from the right):
- Look at bit from A
- Look at bit from B
- Look at carry from previous position
- Apply these rules:
- 0 + 0 + 0 = 0, carry 0
- 0 + 0 + 1 = 1, carry 0
- 0 + 1 + 0 = 1, carry 0
- 0 + 1 + 1 = 0, carry 1
- 1 + 0 + 0 = 1, carry 0
- 1 + 0 + 1 = 0, carry 1
- 1 + 1 + 0 = 0, carry 1
- 1 + 1 + 1 = 1, carry 1
These rules are hardwired into every computer's ALU. In silicon, they're implemented with transistor gates. In your dirt computer, they're implemented by you following the procedure.
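Those eight rules are small enough to run. Here is a minimal Python sketch of the same ripple-carry procedure, walking the two rows right to left; the layout and names are mine, not any hardware convention.

def add_rows(a, b):
    # a and b are 8-element lists, most significant hole first.
    result = [0] * 8
    carry = 0
    for i in range(7, -1, -1):            # start at the rightmost hole
        total = a[i] + b[i] + carry       # 0, 1, 2, or 3 "rocks"
        result[i] = total % 2             # the sum bit
        carry = total // 2                # the carry bit
    return result

print(add_rows([0, 0, 0, 0, 0, 1, 0, 1],    # 5
               [0, 0, 0, 0, 0, 0, 1, 1]))   # 3 -> [0,0,0,0,1,0,0,0] = 8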
Part 6: Automation - The Dream of the Mechanical Computer
Right now, you are the control unit. You're reading the instructions, looking at the holes, placing and removing rocks, following the rules. This works, but it's slow and error-prone.
The dream—realized in Charles Babbage's designs and eventually in electronic computers—is to make the rules themselves physical.
Mechanical Logic: The Rolling Rock Adder
Imagine this mechanical system:
Build a sloped channel system in the dirt:
- Three input channels (A, B, Carry-in) feed into a junction
- Each channel has a gate: if there's a rock in the corresponding input hole, the gate opens
- Rocks roll down open channels and meet at a junction
- The junction has a scale that sorts by weight:
- 0 rocks: nothing arrives, nothing rolls out (SUM 0, CARRY 0)
- 1 rock: it rolls out the "SUM" channel (SUM 1, CARRY 0)
- 2 rocks: too heavy; the scale tips, blocking the SUM channel and diverting both rocks out the "CARRY" channel (SUM 0, CARRY 1)
- 3 rocks: the scale tips the same way, and the extra rock spills back into the SUM channel (SUM 1, CARRY 1)
This is a mechanical full adder. The physical behavior of rolling rocks implements the addition rules automatically.
In practice, building this with dirt would be extremely difficult—you'd need precise slopes, channels, gates, triggers. This is why Babbage's Analytical Engine, though theoretically sound, was nearly impossible to build with 19th-century technology.
But the principle is clear: The logical rules can be embodied in physical mechanisms. In modern computers, transistors are the gates, voltage is the rolling rocks.
Part 7: XOR - The Different Detector
Let's implement another crucial operation: XOR (exclusive or).
XOR rules:
- 0 XOR 0 = 0
- 0 XOR 1 = 1
- 1 XOR 0 = 1
- 1 XOR 1 = 0
In words: Output 1 only if inputs are different.
Dirt implementation:
Create two input holes (A and B) and one output hole. Follow this procedure:
- Look at A and B
- If one has a rock and the other doesn't: place rock in OUTPUT
- If both have rocks or both are empty: leave OUTPUT empty
Why is this useful? XOR is the "sum without carry" part of addition. It's also used in hashing, encryption, error detection—anywhere you need to detect differences or mix information.
Example: Detecting Changes
Store a message in 8 holes (one byte). Store a copy in another 8 holes. Later, to check if the message was altered, XOR the two:
- If all outputs are empty (all 0s): the message is unchanged
- If any output has a rock: something changed
This is the basis of checksums and error detection.
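The same check takes only a few lines of Python, assuming each message is stored as an 8-bit row. This is a sketch of the idea, not any particular checksum standard.

def xor_rows(a, b):
    # Compare hole by hole: 1 wherever the two rows differ.
    return [x ^ y for x, y in zip(a, b)]

original = [0, 1, 0, 0, 1, 0, 0, 1]
copy     = [0, 1, 0, 1, 1, 0, 0, 1]    # one hole was disturbed

difference = xor_rows(original, copy)
print(difference)                       # [0, 0, 0, 1, 0, 0, 0, 0]
print("unchanged" if not any(difference) else "something changed")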
Part 8: Multiplication - Shift and Add
Now let's multiply. Remember: multiplication is just repeated addition with shifts.
Example: 5 × 3
5 in binary: 101
3 in binary: 011
The algorithm:
- Look at rightmost bit of multiplier (3): it's 1
- Add 101 (shifted 0 positions) to result
- Result so far: 101
- Look at next bit of multiplier: it's 1
- Add 101 (shifted 1 position left = 1010) to result
- Result: 101 + 1010 = 1111
- Look at next bit of multiplier: it's 0
- Add nothing
- Final result: 1111 = 15 ✓
Dirt implementation:
You need:
- Input A holes (multiplicand)
- Input B holes (multiplier)
- Multiple rows of working holes (for shifted versions)
- Output holes (accumulator)
The procedure:
- For each bit in B (right to left):
- If bit is 1: copy A into a working row, shifted left by the current position
- If bit is 0: skip
- Add all the working rows together (using the addition procedure repeatedly)
- Final sum is the answer
This is tedious with rocks and dirt, which is exactly why multiplication in real CPUs requires hundreds or thousands of transistors arranged in complex trees to do it fast. But the algorithm is the same.
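Here is the shift-and-add procedure as a short Python sketch, operating on plain integers instead of rows of holes; it is illustrative only, not how any particular CPU wires its multiplier.

def multiply(a, b):
    result = 0
    shift = 0
    while b:
        if b & 1:                  # rightmost bit of the multiplier is 1
            result += a << shift   # add the multiplicand, shifted left
        b >>= 1                    # move to the next bit of the multiplier
        shift += 1
    return result

print(multiply(5, 3))    # 15
print(multiply(12, 7))   # 84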
Part 9: The Control Unit - The Instruction Set
So far, you've been the brain—reading instructions, deciding what to do. Let's formalize this.
Create an instruction system:
Dig a special row called the INSTRUCTION register (8 holes). The pattern of rocks in this row tells you what operation to perform.
Instruction encoding:
00000001 = LOAD from address X to register A
00000010 = LOAD from address Y to register B
00000011 = ADD registers A and B, store in C
00000100 = STORE register C to address Z
00000101 = MULTIPLY A and B, store in C
... (more operations)
Dig another row called the PROGRAM COUNTER - this stores the address of the current instruction.
Dig a large grid: the PROGRAM MEMORY. Each row is one instruction.
Example program: Add two numbers
Row 0: 00000001 (LOAD from address 64 to register A)
Row 1: 00000010 (LOAD from address 65 to register B)
Row 2: 00000011 (ADD A and B, result in C)
Row 3: 00000100 (STORE C to address 66)
Row 4: 00000000 (HALT)
The execution cycle:
- Read the PROGRAM COUNTER (starts at 0)
- Go to that row in PROGRAM MEMORY
- Look at the rocks—that's your instruction
- Decode it: "00000001 means LOAD"
- Execute: perform the load operation
- Increment PROGRAM COUNTER (move to next row)
- Repeat
You're now executing a stored program. The instructions are data, stored the same way as the numbers they manipulate. This is the von Neumann architecture—the foundation of nearly all modern computers.
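To watch the cycle run without digging a single hole, here is a toy Python interpreter for a stripped-down version of this instruction set. The opcode names, registers, and addresses below are invented for illustration; they are not a real machine's encoding.

# Memory is one flat list of bytes; addresses 64 and 65 hold the inputs.
memory = [0] * 128
memory[64], memory[65] = 5, 3

# The program, one instruction per "row": (opcode, operand).
program = [
    ("LOAD_A", 64),    # copy memory[64] into register A
    ("LOAD_B", 65),    # copy memory[65] into register B
    ("ADD", None),     # C = A + B
    ("STORE_C", 66),   # copy register C into memory[66]
    ("HALT", None),
]

a = b = c = 0
pc = 0                               # the program counter
while True:
    opcode, operand = program[pc]    # fetch and decode
    if opcode == "LOAD_A":
        a = memory[operand]
    elif opcode == "LOAD_B":
        b = memory[operand]
    elif opcode == "ADD":
        c = a + b
    elif opcode == "STORE_C":
        memory[operand] = c
    elif opcode == "HALT":
        break
    pc += 1                          # increment: move to the next row

print(memory[66])   # 8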
Part 10: Evolution - From Manual to Mechanical to Electronic
Let's trace the evolution:
Stage 1: Manual Dirt Computer (What We've Built)
- Storage: holes with rocks
- Processing: human following rules
- Speed: one operation per minute
- Reliability: terrible (wind, rain, mistakes)
Advantages:
- Easy to understand
- Easy to debug (just look at the rocks)
- Cheap (dirt is free)
Limitations:
- Slow
- Error-prone
- Doesn't scale
Stage 2: Mechanical Computer (Babbage's Vision)
Replace human with mechanisms:
- Storage: gear positions (gear at position 0-9 represents digit)
- Processing: gears, levers, cams physically embody the rules
- Speed: one operation per second
- Reliability: better, but gears wear out, need lubrication
Example: The Difference Engine
Babbage's Difference Engine calculated polynomial tables:
- Input: turn cranks to set initial values
- Process: turn main crank, gears rotate, addition happens mechanically
- Output: read numbers from gear positions
It worked, but was enormous (thousands of parts) and expensive.
Stage 3: Electromechanical Computer (Relays)
Replace gears with electromagnetic switches (relays):
- Storage: relay positions (on/off)
- Processing: relays wired to implement logic
- Speed: 10-100 operations per second
- Reliability: much better
Example: Harvard Mark I (1944)
- 765,000 components
- 3,000 electromechanical relays
- 500 miles of wire
- Could multiply in 6 seconds
How relays work:
- Coil of wire with iron core
- When current flows through coil, iron becomes magnetic
- Magnetism pulls a metal switch closed
- Switch closing/opening controls another circuit
Relay as logic gate:
- Wire two relays in series: AND gate (both must close)
- Wire two relays in parallel: OR gate (either can close)
- Use relay to switch power on/off: NOT gate (inverter)
Build up from these: you get adders, multipliers, memory—everything we built with rocks.
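In Python terms, each relay is just a tiny function, and wiring in series or parallel becomes composition. A toy sketch (the naming is mine, purely illustrative) that rebuilds the full adder from those three gates:

def AND(a, b):
    return a & b        # two relays in series: both must close

def OR(a, b):
    return a | b        # two relays in parallel: either can close

def NOT(a):
    return 1 - a        # a relay wired to cut the circuit when it closes

def XOR(a, b):
    # The "different detector", built from the three gates above.
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

def full_adder(a, b, carry_in):
    # The same adder we built with rocks, now built from gates.
    sum_bit = XOR(XOR(a, b), carry_in)
    carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
    return sum_bit, carry_out

print(full_adder(1, 1, 0))   # (0, 1)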
Stage 4: Vacuum Tube Computer (First Electronic)
Replace relays with vacuum tubes:
- Storage: tube on/off states, later magnetic cores
- Processing: tubes wired as logic gates
- Speed: thousands of operations per second
- Reliability: tubes burn out frequently
Example: ENIAC (1945)
- 17,468 vacuum tubes
- 7,200 crystal diodes
- 1,500 relays
- 5 million hand-soldered joints
- Could do 5,000 additions per second
How vacuum tubes work:
- Glass tube with vacuum inside
- Heated cathode emits electrons
- Grid in middle controls electron flow
- When grid is negative: blocks electrons (off/0)
- When grid is positive: allows electrons (on/1)
Tubes implement the same logic gates as relays, but roughly 1,000× faster, because switching a beam of electrons involves no moving parts, unlike a relay's clanking armature.
Stage 5: Transistor Computer
Replace tubes with transistors:
- Storage: magnetic cores, then transistor flip-flops
- Processing: transistor logic gates
- Speed: millions of operations per second
- Reliability: solid state, very reliable
Example: IBM 1401 (1959)
- Used transistors instead of tubes
- Much smaller, cooler, more reliable
- Could do 200,000 additions per second
How transistors work (simplified):
- Three terminals (source, gate, drain) built into a sliver of silicon
- Gate voltage controls conductivity between source and drain
- High voltage on gate: conducts (on/1)
- Low voltage on gate: blocks (off/0)
Same logic as tube, but:
- Tiny (millimeters vs hand-sized tubes)
- Low power (milliwatts vs watts)
- Reliable (solid state vs hot fragile glass)
- Fast (nanoseconds vs microseconds)
Stage 6: Integrated Circuit Computer (Modern)
Put thousands, then millions, then billions of transistors on one chip:
- Storage: transistor-based RAM, solid state drives
- Processing: billions of transistors in CPU
- Speed: billions of operations per second
- Reliability: extremely high
Example: Modern Intel CPU (2024)
- 20+ billion transistors
- Multiple cores (multiple complete CPUs on one chip)
- Operates at 3-5 GHz (3-5 billion cycles per second)
- Built on "3-5 nanometer" process nodes (a naming convention; the actual features are tens of nanometers, still thousands of times thinner than a human hair)
But it's still doing the same thing our dirt computer did:
- Storing bits (rocks in holes → charges in transistors)
- Adding (following addition rules → XOR and AND gates)
- Following instructions (reading program rows → fetch-decode-execute cycle)
Part 11: Scaling - From 8 Bits to Billions
Our dirt computer has 64 bits of RAM (8 bytes). Let's scale that up.
Memory scaling:
- Our dirt computer: 8×8 grid = 64 holes = 8 bytes
- Early computer (1950): 1,024 bytes (1 KB)
- PC (1980): 64,000 bytes (64 KB)
- PC (1990): 4,000,000 bytes (4 MB)
- PC (2000): 256,000,000 bytes (256 MB)
- PC (2010): 4,000,000,000 bytes (4 GB)
- PC (2024): 32,000,000,000 bytes (32 GB)
That's 32 billion holes with rocks. Except they're not holes—they're capacitors, each holding a tiny electrical charge for a few milliseconds before needing refreshing.
CPU scaling:
Your dirt computer has:
- 2 registers (A and B)
- 1 ALU (you, following rules)
- 1 control unit (you, reading instructions)
A modern CPU has:
- 100+ registers
- Dozens of ALUs (can do multiple operations simultaneously)
- Complex control unit with:
- Branch prediction (guessing which instruction comes next)
- Out-of-order execution (doing instructions in non-sequential order for speed)
- Speculative execution (starting operations before knowing if they're needed)
- Cache management (keeping frequently-used data close)
But at the bottom, it's still:
- Read bits
- Route through gates
- Write bits
Part 12: The Software Layer - Programming Our Dirt Computer
Let's write a real program for our dirt computer.
Problem: Find the largest of three numbers
Data:
- Address 10: number A = 7 (00000111)
- Address 11: number B = 12 (00001100)
- Address 12: number C = 5 (00000101)
- Address 13: result (empty)
Program (in our instruction set):
Row 0: LOAD A from address 10 to register R1
Row 1: LOAD B from address 11 to register R2
Row 2: COMPARE R1 and R2 (sets a flag: is R1 > R2?)
Row 3: JUMP-IF-GREATER to row 6
Row 4: COPY R2 to R1 (R2 was bigger, so make R1=R2)
Row 5: (fall through to next)
Row 6: LOAD C from address 12 to R2
Row 7: COMPARE R1 and R2
Row 8: JUMP-IF-GREATER to row 11
Row 9: COPY R2 to R1 (R2 was bigger)
Row 10: (fall through)
Row 11: STORE R1 to address 13
Row 12: HALT
Execution:
Load 7 into R1, 12 into R2. Compare: 12 is bigger, so copy 12 to R1. Load 5 into R2. Compare: 12 is still bigger. Store 12 to result. Done.
Result: 12 ✓
This is programming. You've written an algorithm (find maximum) in machine code (our instruction set).
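If you want to watch the jumps happen, here is a Python sketch of the same program running on a toy version of our instruction set. The opcode names and the greater-than flag are inventions for illustration, not a real encoding.

memory = {10: 7, 11: 12, 12: 5, 13: 0}

program = {
    0: ("LOAD_R1", 10), 1: ("LOAD_R2", 11), 2: ("COMPARE", None),
    3: ("JUMP_IF_GREATER", 6), 4: ("COPY_R2_TO_R1", None), 5: ("NOP", None),
    6: ("LOAD_R2", 12), 7: ("COMPARE", None), 8: ("JUMP_IF_GREATER", 11),
    9: ("COPY_R2_TO_R1", None), 10: ("NOP", None),
    11: ("STORE_R1", 13), 12: ("HALT", None),
}

r1 = r2 = 0
greater = False
pc = 0
while True:
    op, arg = program[pc]
    if op == "LOAD_R1":
        r1 = memory[arg]
    elif op == "LOAD_R2":
        r2 = memory[arg]
    elif op == "COMPARE":
        greater = r1 > r2             # set the flag
    elif op == "JUMP_IF_GREATER":
        if greater:
            pc = arg                  # jump: skip the normal increment
            continue
    elif op == "COPY_R2_TO_R1":
        r1 = r2
    elif op == "STORE_R1":
        memory[arg] = r1
    elif op == "HALT":
        break
    pc += 1

print(memory[13])   # 12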
Higher-Level Languages
Real programmers don't write machine code (patterns of rocks/bits). They write in higher-level languages:
In Python:
numbers = [7, 12, 5]
result = max(numbers)
In C:
int a=7, b=12, c=5;
int max = a;
if (b > max) max = b;
if (c > max) max = c;
In Assembly (closer to machine code):
LOAD R1, [10]
LOAD R2, [11]
CMP R1, R2
JG skip1
MOV R1, R2
skip1:
LOAD R2, [12]
...
Each level is translated to the level below:
- Python → compiled to bytecode → interpreted by a virtual machine (which is itself machine code)
- C → compiled to assembly → assembled to machine code
- Assembly → assembled directly to machine code
Machine code = patterns of bits = patterns of rocks in our dirt computer.
Part 13: Complexity from Simplicity
Here's the profound thing: every program ever written, every website, every video game, every AI model, is ultimately just rocks in holes (or charges in capacitors, but same principle).
Your web browser:
- Millions of lines of code
- Translates to billions of machine instructions
- Each instruction: move bits, add bits, compare bits, jump to different instruction
- Each bit: one capacitor charged or not (one rock in hole or not)
ChatGPT (the AI you might be using to read this):
- 175 billion parameters (numbers)
- Each parameter: 16 or 32 bits
- Trillions of bits total
- All stored as charges in memory
- Processing: matrix multiplication = lots of multiply-adds = lots of XOR and AND gates
- Same gates we built with rocks
The universe's complexity is built from simple rules applied at scale.
- Physics: simple laws (F=ma, Maxwell's equations) → complex phenomena (weather, galaxies)
- Biology: simple rules (DNA replication, natural selection) → complex life
- Computation: simple operations (AND, OR, NOT, shift) → complex software
Our dirt computer demonstrates this perfectly:
- Simple: rock or no rock, yes or no
- Combined: 8 holes = 256 possibilities
- Organized: procedures for add, multiply, compare
- Programmed: sequences of instructions
- Result: can solve any computable problem (given enough holes and time)
Part 14: The Limits - What Our Dirt Computer Can't Do Well
Despite the theoretical power, our dirt computer has severe practical limits:
Speed:
- Placing rocks by hand: 1 operation/minute
- Modern CPU: 10 billion operations/second
- Speed difference: 600 billion times slower
Reliability:
- One misplaced rock = wrong answer
- No error correction
- Weather destroys data
Scale:
- To match 32 GB of RAM: need 256 billion holes
- That's a field of holes stretching miles
- Impractical to build, impossible to maintain
But the principles are identical. Silicon just lets us do it:
- Faster (electrons vs hands)
- Smaller (nanometers vs centimeters)
- More reliably (error correction, redundancy)
Part 15: The Philosophical Point - Substrate Independence
The most important lesson: computation doesn't care about the physical substrate.
The same program that adds 5+3 works on:
- Rocks in dirt holes
- Gears in Babbage's engine
- Relays in Harvard Mark I
- Tubes in ENIAC
- Transistors in modern CPUs
- Photons in optical computers
- Ions in quantum computers
- (Hypothetically) neurons in brains
The information is the thing. The physics just carries it.
This is why:
- Software can run on any compatible hardware
- Virtual machines work (simulating one computer on another)
- Emulators work (running old game consoles on modern PCs)
- Your code doesn't need to know if it's on Intel or AMD or ARM
The patterns matter, not the medium.
In our dirt computer:
- Pattern: rock in position 2 and position 0
- Meaning: the number 5
- Medium: rocks and dirt
In a real computer:
- Pattern: voltage in bit 2 and bit 0
- Meaning: the number 5
- Medium: transistors and silicon
Same pattern, same meaning, different medium.
Part 16: Building It For Real - A Practical Guide
If you actually wanted to build this (for education or demonstration):
Materials:
- Cardboard sheet (2 feet × 2 feet)
- Ruler and marker
- Small stones/pebbles (white and dark, for visibility)
- Egg carton or ice cube tray (pre-made holes)
Construction:
- Memory: Use egg cartons or an ice cube tray arranged as an 8×8 grid (64 holes = 8 bytes)
- Registers: Three separate 8-hole rows on cardboard
- Program memory: Another section of 8 rows
- Instruction pointer: One 8-hole row
- Carry flag: One hole (for tracking addition carries)
Operation:
- Load a program (place rocks in program memory rows according to instruction encoding)
- Set program counter to 0
- Execute fetch-decode-execute cycle by hand:
- Fetch: look at program memory row indicated by PC
- Decode: determine what instruction those rocks represent
- Execute: follow the procedure (add, load, store, etc.)
- Increment: move PC to next row
- Continue until HALT instruction
Example programs to try:
- Add two numbers
- Find maximum of three numbers
- Count from 0 to 10
- Multiply two numbers
- Check if a number is even (look at bit 0; see the one-liner after this list)
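That last one is a one-liner once you think in bits, since evenness lives entirely in bit 0. A quick Python check (the function name is mine):

def is_even(n):
    return (n & 1) == 0   # look only at bit 0: an empty hole means even

print(is_even(12), is_even(7))   # True False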
Educational value:
Students can:
- See every bit (rock) clearly
- Trace execution step-by-step
- Understand fetch-decode-execute
- Grasp stored-program concept
- Build intuition for how software becomes hardware operations
This is exactly what early computer scientists did with relays and switches to teach themselves.
Conclusion: From Dirt to Silicon, Same Principles
We've journeyed from a single hole in the dirt to a complete programmable computer. Along the way, we discovered:
- Information is physical: Bits need physical states (rock/no rock)
- Storage is addressing: Grid of holes with numbered locations = RAM
- Computation is procedure: Following rules transforms inputs to outputs
- Logic is composition: AND, OR, XOR combine to make adders, multipliers
- Programs are data: Instructions stored same way as numbers
- Control is sequential: Fetch, decode, execute, repeat
- Complexity emerges from scale: Simple operations × billions = powerful computation
- Substrate independence: Same algorithms work in dirt, gears, transistors
Modern computers are built on these exact principles, just implemented with different physics:
- Holes → transistors
- Rocks → electrical charges
- Your hands → control circuits
- Your brain reading instructions → instruction decoder
- Your following procedures → hardwired logic gates
The miracle of computation isn't in the complexity of the parts—it's in the power of simple operations repeated at massive scale.
A single transistor is simple: on or off. But 50 billion of them, organized into adders, multipliers, memory controllers, all synchronized to a 5 GHz clock, executing billions of instructions per second, can:
- Render a photorealistic game world
- Simulate weather patterns
- Train an AI to write poetry
- Stream video from across the planet
All built on the foundation we just laid: rocks in holes, yes or no, 1 or 0, the universal language of information.
If you hate squiggles, stop here.
No cliff notes, no next section—this is the end of the road for comfort. What comes next isn’t meant for casual reading. It’s the proof spine beneath everything built so far, and it’s dense enough to push most people out of the room.
This chapter doesn’t explain how anything works; it only shows that it does. Every line of math that follows exists for one purpose: to hold up the claim that the universe does not lose information, even when swallowed by its own gravity.
The setup is simple but ruthless. We start from the smallest act of division—carving a distinction, creating a bit—and track its energy imprint up through the hardest boundary physics allows: the event horizon. If the conservation laws hold there, they hold everywhere. If they fail there, every byte you’ve ever trusted collapses with them.
So this is the proving ground. And no, “infinite data” is not a gadget, not a trick of compression or storage—it’s a property that only becomes true at the black hole limit. Beyond that threshold, energy and information blur into one conserved field. That’s where our rules stop making sense and start being rewritten.
If equations make your eyes ache, step away now. This isn’t for decoding; it’s for anchoring. Past this point, it’s pure structure—Noether, Pauli, curvature, and the binary pulse of symmetry itself. Once you cross the line, you can’t skip ahead, because there is no “ahead” without this.
This is where the universe keeps its receipts.
So, if what you've read helped you understand computing better, wonderful. Scroll down real fast and just sample the mess ahead to see if it's your bailiwick... it's dense as hell, so you will not come away feeling better about spending all that money on college.
If it's too dense... I hope you enjoyed the read. If you continue, you've been warned.


