Mercurial Memories

In our age of gigahertz processors and gigabyte memories, it's hard to fathom what "state of the art" meant 50 years ago. LEO's entire memory held just 2,048 numbers or 17-bit instructions. Everything had to fit within that space: application programs, system software, device drivers, data. There was no swapping out pages to disk, because there was no disk. The original plan had called for magnetic-tape storage, but the balky tape decks didn't work until several years later, so all input and output had to be done with punched cards and perforated paper tape.

The memory units were not dynamic RAM chips or even the magnetic cores familiar to an earlier generation; instead, they were based on a long-forgotten—and unmourned—technology called mercury delay lines. A long tube filled with liquid mercury was fitted at each end with acoustic transducers, equivalent to a microphone and a loudspeaker, to create a sort of barometer wired for stereo. Bits to be stored were converted by one transducer into sonic pulses, which traveled through the mercury as sound waves, to be detected at the far end by the other transducer. The detected signal was amplified and sent back to the first transducer, so that a train of pulses was kept circulating constantly. To read any particular bit, you had to wait for it to come around to the detector, once every 500 microseconds. In all, there were 80 of these bulky and temperamental delay lines.
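The regeneration loop described above can be sketched in a few lines of Python. This is a toy illustration, not a faithful model of LEO's hardware; the class, its names, and the line length are my own assumptions.

```python
# Toy model of a mercury delay-line store: bits circulate in a fixed
# loop, and reading a word means waiting for it to come around to the
# detector -- the source of the average access delay.

class DelayLine:
    def __init__(self, n_bits):
        self.bits = [0] * n_bits   # pulses currently in the tube
        self.head = 0              # index of the bit now at the detector
        self.ticks = 0             # elapsed bit-times, to measure latency

    def tick(self):
        """Advance one bit-time: the detected pulse is re-amplified and
        fed back to the transmitting transducer, so the pulse train
        keeps circulating."""
        self.head = (self.head + 1) % len(self.bits)
        self.ticks += 1

    def _wait_for(self, addr):
        # Spin until the wanted bit reaches the detector.
        while self.head != addr:
            self.tick()

    def read(self, addr):
        self._wait_for(addr)
        return self.bits[addr]

    def write(self, addr, bit):
        self._wait_for(addr)
        self.bits[addr] = bit


line = DelayLine(1024)
line.write(700, 1)          # costs 700 bit-times of waiting
line.read(700)              # slot is already at the detector: no wait
```

The point the sketch makes is that access time depends on where the circulating bit happens to be, so on average a read waits half a recirculation period.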

LEO's logical and arithmetical circuits were built of vacuum tubes (or valves, as the British called them). More than 5,000 glowing tubes were mounted in 21 floor- to-ceiling racks. In photographs of the installation, the rows of tall racks, with ventilation ducts overhead and a raised floor for cabling underfoot, look remarkably like a modern Internet server farm. All that's missing are the brightly colored skeins of fiber-optic cable.

The speed of LEO's processor was determined mainly by the 500-microsecond access time of its acoustic memory. A basic operation, such as adding two numbers, took about 1,300 microseconds. Thus the machine could chug along at a few hundred additions per second, assuming it had nothing else to do. Simmons described this rate of calculation as "almost incredible speed."
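The arithmetic behind that rate is easy to check, using the figure quoted in the text:

```python
add_time_us = 1300                          # microseconds per addition
adds_per_second = 1_000_000 // add_time_us  # whole additions per second
print(adds_per_second)                      # -> 769
```

Roughly 770 additions per second: a few hundred, as the text says.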

Indeed, the CPU speed was more than adequate for Lyons's purposes; the real bottleneck in putting the machine to work was input and output. All electronic computers built up to this time—all three or four of them—were meant for CPU-intensive scientific or engineering tasks such as solving differential equations. The input was just a few numbers, which the computer might chew on for hours before it spit out a few more numbers as answers. The primarily clerical tasks envisioned for LEO were quite different. In processing a payroll, for example, the calculations amounted to just a few additions and subtractions for each employee, but thousands of employee records had to be read in and thousands of paychecks printed out. The challenge was building input/output gear that could keep up with the processor—even a processor running at half a megahertz.

The most fundamental I/O problem, which was deeply vexing to the LEO designers, is one that has totally slipped beneath our notice today. The LEO processor (like modern chips) operated on binary numbers, but people demanded decimal input and output. If the conversions had been done in software, they would have consumed 90 percent of the machine's capacity. Where pounds, shillings and pence couldn't be avoided, the routines would have to deal with base 12 and base 20 as well as base 10. After some false starts, the whole problem was solved by wiring up a special hardware unit just for base conversions.
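The mixed radices are the awkward part: pre-decimal British currency counted 12 pence to the shilling and 20 shillings to the pound. Here is a sketch of that conversion in Python, purely as an illustration of the arithmetic; LEO, as noted above, did this work in a special hardware unit.

```python
def pence_to_lsd(total_pence):
    """Convert an amount in old pence to (pounds, shillings, pence).
    12 pence = 1 shilling, 20 shillings = 1 pound."""
    shillings, pence = divmod(total_pence, 12)
    pounds, shillings = divmod(shillings, 20)
    return pounds, shillings, pence


print(pence_to_lsd(1000))   # -> (4, 3, 4), i.e. 4 pounds 3s 4d
```

Doing thousands of such conversions per payroll run in software, alongside binary-to-decimal conversion, is what would have eaten 90 percent of the machine's capacity.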

This article was originally published in eWeek on November 1, 2001.
