So you’ve got a freshly written block of JavaScript and it looks like a salad recipe written by a corporate lawyer using a phone autocorrect that only knows Excel formulas — but it works!

You could refactor it. Or you could address a half-page *mea culpa* to Future You, who will no doubt have to come back and slog through the masterpiece you just wrote when it finally breaks and ruins the Internet.

We all know why we should resist the urge to editorialize bad code with defensive comments. We know that most problems have definitive canonical solutions and that if we stick with known design paradigms and use well-established conventions, our code will contain many familiar patterns to guide others through. Our spidey senses are on fire when we write comments about *what our code does* because we know that the purpose of the code itself is to articulate what it does. We know that if someone reasonably familiar with the language is unable to follow our work, we should probably hunt for anti-patterns and refactor. We know that good comments add value by imbuing code with information that’s not already there… such as *why* we’re doing things a certain way.

But is the reverse also true? Do we know when comments are not just helpful but absolutely necessary? Say when our code relies on black magic and confirms the existence of the Illuminati? Some people certainly don’t.

Case in point: the infamous fast inverse square root function from *Quake III Arena*. Looking only at the code, do you understand *what* this C function does and *why* it works?

```c
float Q_rsqrt( float number )
{
	long i;
	float x2, y;
	const float threehalfs = 1.5F;

	x2 = number * 0.5F;
	y  = number;
	i  = * ( long * ) &y;                       // evil floating point bit level hacking
	i  = 0x5f3759df - ( i >> 1 );               // what the fuck?
	y  = * ( float * ) &i;
	y  = y * ( threehalfs - ( x2 * y * y ) );   // 1st iteration
//	y  = y * ( threehalfs - ( x2 * y * y ) );   // 2nd iteration, this can be removed

	return y;
}
```

In 2003, Dr. Chris Lomont, who holds degrees in physics, mathematics, and computer science, published a comprehensive analysis of this function. Christian Plesner Hansen provides a more code-centric overview, but for those of us who haven’t studied all of the things, Slashdot user arniebuteft gives a clear, concise, and accessible summary.

Two very cool things happen in the code. First, the line

```c
i = * ( long * ) &y; // evil floating point bit level hacking
```

takes the bits that represent an encoded floating point number and instructs the compiler to treat them as an integer value instead. *CoOoOoOoL, so it’s leveraging knowledge of IEEE 754 for some mysterious purpose!* — wait, what’s an IEEE 754?

Most of us know that the native language of computer processors is binary, but few of us can say why this is so. The design rationale behind the choice isn’t intuitively obvious and is rooted in information theory, the need for data resiliency, and the nature of electronic signals.

A signal is any function that “conveys information about the behavior or attributes of some phenomenon.” When you need a taxi, for instance, the manner in which you convey your need to a taxi driver (say, through a visual gesture or verbal cue) is a signal. There are two ways to encode information within signals: digital and analog. Digital signals are discrete: they encode information within a finite sequence of states. Analog signals are continuous: they encode information through analogous variation.

By altering current, voltage, and electromagnetic waveform, electricity can be used to transmit both digital and analog signals. Suppose we want to use an electronic circuit to transmit information about a random number, say 1 to 10. To create a digital signal, we need to encode the value of the random number into a *sequence of states*. A naive implementation could define two states, *on* and *off*, and transmit *on* states as one-second, 5-volt pulses. Using this system, the number 8 would be digitally encoded as an 8-second, 5-volt signal. To decode this transmission, we would only need to time it.
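A minimal sketch of this naive duration-based scheme — the function names are mine, and the “signal” is modeled as an array of one-second voltage readings:

```javascript
// Encode a number as a sequence of one-second, 5-volt "on" states.
function encodeDigital(n) {
  return Array(n).fill(5); // n seconds of 5-volt signal
}

// Decode by timing the transmission: one second per unit.
function decodeDigital(signal) {
  return signal.length;
}

console.log(encodeDigital(8)); // [5, 5, 5, 5, 5, 5, 5, 5]
console.log(decodeDigital(encodeDigital(8))); // 8
```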

To create an analog signal, we need to treat a property of the signal itself as *an analogue to* the number we wish to transmit. A naive implementation could achieve this by dividing the range of voltages, 5 volts, by the range of numbers, 10 numbers, and multiplying the result by the selected number. For instance:

```
Range of voltages            = 1-5 volts = 5 volts
Range of numbers             = the numbers 1-10 = 10 numbers
The number we want to encode = 8
Encoding algorithm           = ( 5 volts / 10 numbers ) × 8
Encoded transmission         = 4 volts
```

To decode this message, we’d reverse the algorithm: divide the range of numbers, 10 numbers, by the range of voltages, 5 volts, and multiply the result by the voltage driving the current:

```
Range of voltages    = 1-5 volts = 5 volts
Range of numbers     = the numbers 1-10 = 10 numbers
Detected voltage     = 4 volts
Decoding algorithm   = ( 10 numbers / 5 volts ) × 4 volts
Decoded transmission = the number 8
```
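The encode/decode pair above can be sketched in a few lines of JavaScript (constant and function names are mine):

```javascript
const VOLTAGE_RANGE = 5;  // volts
const NUMBER_RANGE = 10;  // the numbers 1-10

// Encode a number as a voltage: ( 5 volts / 10 numbers ) × n
function encodeAnalog(n) {
  return (VOLTAGE_RANGE / NUMBER_RANGE) * n;
}

// Decode a detected voltage: ( 10 numbers / 5 volts ) × v
function decodeAnalog(v) {
  return (NUMBER_RANGE / VOLTAGE_RANGE) * v;
}

console.log(encodeAnalog(8)); // 4 (volts)
console.log(decodeAnalog(4)); // 8
```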

Theoretically, we could encode *any* number using this algorithm. So given that analog signals seem more versatile than digital ones, why are digital signals the *lingua franca* of the computing age? Data resiliency, scalability, and engineering constraints.

In the analog signal, *any* interference and even the slightest measurement error results in data corruption. The digital signal, on the other hand, is extremely resilient to interference and mismeasurement because it only conveys information about two states. Even if we detect 4.5 volts instead of 5, we can safely round up. This is why songs on the radio sound like crap while streaming music maintains its fidelity. Since the purpose of a computer is to reliably perform mathematical calculations, data resiliency is a core requirement; this is the design rationale behind digital encoding. Moreover, it is easier to produce and scale circuitry that only needs to differentiate between two states; this is the design rationale behind binary systems.
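To see the difference, feed the same noisy 4.5-volt reading to each scheme — a sketch using the naive decoders described above (names are mine):

```javascript
// Digital: only two states matter, so we threshold at the midpoint voltage.
const digitalDecode = v => (v >= 2.5 ? 'on' : 'off');

// Analog: every volt of error maps directly onto the decoded value.
const analogDecode = v => (10 / 5) * v;

console.log(digitalDecode(4.5)); // 'on' — a half-volt of noise is absorbed
console.log(analogDecode(4.5));  // 9 — the same noise corrupts our 8 into a 9
```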

But how can two states convey information about massive numbers like 1,597,463,007? Or non-integer numbers like Pi? In 1985, the *Institute of Electrical and Electronics Engineers* adopted the IEEE 754-1985 standard specifically to provide canonical answers to such questions. The Intel 8087 math coprocessor, which ushered in the x86 computing era, implemented an early draft of what became this standard.
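JavaScript has no pointer casts, but two typed-array views over the same buffer let us peek at IEEE 754 bit patterns directly — a sketch (variable names are mine):

```javascript
const buf = new ArrayBuffer(4);
const f32 = new Float32Array(buf); // view the 4 bytes as a float
const u32 = new Uint32Array(buf);  // view the same 4 bytes as an unsigned integer

f32[0] = 1.5;
console.log('0x' + u32[0].toString(16)); // 0x3fc00000 — sign, exponent, and mantissa fields

// Non-integers like Pi can only be approximated by the nearest representable float:
f32[0] = Math.PI;
console.log(f32[0] === Math.PI); // false — some precision is lost in 32 bits
```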

The elegance of the C function above lies in how it leverages knowledge of IEEE 754 floating point numbers and binary data encoding to replace a CPU-expensive inverse-square-root calculation with a performant approximation. It achieves this by treating the bits that represent an IEEE-encoded float as though they represented an integer value, shifting them right by one, and then subtracting that integer from an unnamed hexadecimal constant, 0x5f3759df (1,597,463,007 in decimal notation).

```c
i = 0x5f3759df - ( i >> 1 ); // what the fuck?
```

Then, it takes the bits that represent the integer result of the above calculation and once again treats them as bits that represent a float. Upon analysis, the process is shown to approximate a linear interpolation of the inverse square root. Seriously, what the *actual* fuck? Not only does the code demonstrate a twisted knowledge of all-of-the-things, it translates into a 4x performance boost. To indulge in the sheer mindfuckery of it all, I JavaScripted the function to create annotated examples.

See the Pen 0x5f3759df – Quake III Arena’s Fast Inverse Square Root Function Explained by Mo Ismailzai (@moismailzai) on CodePen.
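If the embed doesn’t load, here is a minimal, unannotated sketch of the same idea, with typed arrays standing in for C’s pointer casts:

```javascript
function Q_rsqrt(number) {
  const buf = new ArrayBuffer(4);
  const f32 = new Float32Array(buf);
  const u32 = new Uint32Array(buf);

  const x2 = number * 0.5;
  f32[0] = number;
  let i = u32[0];               // evil floating point bit level hacking
  i = 0x5f3759df - (i >>> 1);   // what the fuck? (>>> keeps the shift unsigned)
  u32[0] = i;
  let y = f32[0];
  y = y * (1.5 - x2 * y * y);   // 1st iteration of Newton's method
  return y;
}

console.log(Q_rsqrt(4)); // ≈ 0.499 (the true inverse square root is 0.5)
```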

Now, if you read Lomont’s paper, he spoils the magic with equations that explain *why* subtracting the integer value of a right-shifted float from 0x5f3759df produces this result. What remains a mystery, however, is that Lomont’s work derives a constant that better fits a linear interpolation than the one used in the code. Yet when he compares the two numbers in practice, his theoretically derived, proof-backed constant produces *less accurate* approximations under Newton’s method than 0x5f3759df does. So how did the programmers who wrote this function arrive at the magic number?

We’ll never know because they didn’t comment their fucking code and now none of them can remember (Illuminati confirmed).

*Also published on Medium.*
