Mike Schaeffer's Blog

Articles with tag: history
January 20, 2023

As a child of the 80's, I had a front row seat to the beginning of what was then called personal computing. My elementary school got its first Apple around the time I entered kindergarten. That was also the time personal computers were starting to make inroads into offices (largely thanks to VisiCalc and Lotus 1-2-3). By modern standards these machines weren't very good. At the time they were transformative. They brought computing to places it hadn't been before, and gave access to entirely new sets of people. For someone with an early adopter's mindset, it was an optimistic and exploratory time. It's for this reason (and the fact it was my childhood) that I like looking back on these old machines. That's something I hope to do here in an informal series of posts. If there happen to be a few lessons for modern computing along the way, so much the better.

If you're reading this, you're probably familiar with retrocomputing. It's easy to go to eBay, buy some used equipment, and play around with a period machine from the early 80's. Emulators make it even easier. As much as I appreciate the movement, it doesn't quite provide the full experience of the time. To put it in perspective, an Apple //e was a $4,000 purchase in today's money. This is before adding disk drives, software, or a monitor. After bringing it home and turning it on, all you had was a black screen and a blinking prompt from Applesoft BASIC. If you needed help, you were limited to the manual, a few books and magazines at the local bookstore, and whoever else you happened to know. The costs were high, the utility wasn't obvious, and there wasn't a huge network of people to fall back on for help. It was a different time in a way retrocomputing doesn't quite capture.

My goal here is to talk about my own experiences in that time. What it was like to grow up with these machines, both in school and at home. It's one person's perspective (from a position of privilege) but hopefully it'll capture a little of the spirit of the day.

If you want a way to apply this to modern computing, I'd suggest thinking about the ways it was possible for these machines to be useful with such limited capabilities. I'm typing this on a laptop with a quarter million times the memory of an Apple //e. It's arguably surprising that the Apple was useful at all. But it was, and without much of the software and hardware we take for granted today. This suggests that we might have more ways to produce useful software systems than we think. Do you really need to take on the complexities of Kubernetes or React to meet your requirements? Maybe it's possible to bring a little of the minimalist spirit of 1983 forward, take advantage of the fact modern computers are as good as they are, and deliver more value for less cost.

Before I continue, I should start off by acknowledging just how privileged I am to be able to write these stories. I grew up in a stable family with enough extra resources to be able to devote a significant chunk of money to home computing. My dad is an engineer by training, with experience in computing dating back to the 60's. He was able to apply computing at his job in a number of capacities, and then had the desire and ability to bring it home. To support this, his employer offered financing programs to help employees buy their own home machines. For my mom's part, she taught third grade at my elementary school, which in 1983 (when I was in third grade) happened to be piloting a Logo programming course for third graders. Not only was I part of the course, my mom helped run the lab, and I often had free run after school to explore on my own. (At least one summer, when I was ten or eleven, I was responsible for setting up all the hardware in the lab for the upcoming school year.)

I didn't always see it at the time, but this was an amazingly uncommon set of circumstances. It literally set the direction of my intellectual and professional life, and is something I will always be thankful for. I am thankful to my parents, and also to the good fortune of the circumstances which enabled it to happen for us as a family. It could have been very different, and for most people, it was.

But before most of that, one of the first personal computers I was ever exposed to was my Uncle Herman's Timex Sinclair 1000. This was a Z80 machine, built in Clive Sinclair fashion - to the lowest possible price point. It was intended to be a machine for beginning hobbyists, and sold for $100. (In modern dollars, that's roughly the same as a low end iPad.) Uncle Herman had his TS1000 connected to a black and white TV and sitting on his kitchen table. It's the first and only time I've ever computed on an embroidered tablecloth.

The machine itself, as you might guess from the price, was dominated by its limitations. The first was memory. A stock 1000 had a total of 2KB of memory. KB. Not GB. Not MB. KB.

The second limitation of the machine was the keyboard. To save on cost, the keyboard was entirely membrane based. The keys were drawn on a sheet of flat plastic, didn't move when you pressed them, and offered no tactile feedback at all. The closest modern experience is the butterfly keyboard, over which Apple was recently sued and settled.

Fortunately for the machine, the software design had a trick up its sleeve that addressed both limitations at the same time. Like many other machines of the time, the 1000's only user interface was through a BASIC interpreter. When you plugged the computer in (there was no power switch), you were immediately dropped into a REPL for a BASIC interpreter that served as the command line interface. However, due to the memory limitations, the 1000 lacked space for a line editor. There wasn't enough memory in the machine to commit the bytes necessary to buffer a line of text character by character before parsing it into a more memory-efficient tokenized representation.

The solution to this problem was to allow users to enter BASIC code directly in tokenized form, without the need to parse text. Rather than typing the five characters PRINT and having the interpreter translate that to a one-byte token code, the user directly pressed a button labeled PRINT. The code for the PRINT button then emitted the one-byte code for that operation. This bypassed the need for both the string buffer and the parse/tokenize step.

Beyond the reduced memory consumption of this approach, it also meant you could enter PRINT with one keypress instead of five. This is good, given the lousy keyboard. There are also discoverability benefits. With each BASIC command labeled directly on the keyboard, it was easy for the beginner to see a list of the possible commands. The downside is that the number of possible operations is limited by the number of keys and shift states. (A problem shared by programmable pocket calculators of the time.)
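To make the idea concrete, here's a rough sketch in Python. The token values are invented for illustration (the real ZX81/TS1000 ROM has its own character and token codes), but the principle is the same: one labeled key stores one byte of program, with no text buffer and no parser in between.

```python
# Toy sketch of keyword-per-keypress entry. The token values below are
# invented for illustration; the real ROM uses its own codes.

KEYWORD_TOKENS = {
    "PRINT": 0xF5,   # one labeled key -> one stored byte
    "GOTO":  0xEC,
    "LIST":  0xF0,
}

def press_keyword_key(program: bytearray, keyword: str) -> None:
    """Append the one-byte token directly, skipping any text buffer or parse step."""
    program.append(KEYWORD_TOKENS[keyword])

program = bytearray()
press_keyword_key(program, "PRINT")   # a single press of the PRINT key
print(program.hex())                  # 'f5' -- one byte stored, not five characters
```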

Of course, the machine had other limitations too. Graphics were blocky and monochrome, and a lack of hardware forced a hard tradeoff between CPU and display refresh. It's easy to forget this now, but driving a display is a demanding task. Displays require continual refresh, with every pixel driven every frame. If this doesn't happen, the display goes blank. The 1000 was so short on hardware capacity that it forced a choice on the programmer. There were two modes for controlling the tradeoff between display refresh and execution speed. FAST mode gave faster execution of user code at the expense of display refresh: run your code and the display goes blank. If you wanted simultaneous execution and display, you had to select SLOW mode and pay the performance price of multiplexing too little hardware to do too much work.
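As a toy model of that tradeoff, imagine a single budget of CPU time that has to be split between redrawing the screen and running your program. The numbers below are purely illustrative (the real machine interleaves at video timing, not in neat slices), but they capture why FAST mode ran noticeably quicker at the cost of a blank screen.

```python
# Toy model of the FAST/SLOW tradeoff: one CPU has to both refresh the
# display and run user code. Slice sizes here are purely illustrative.

def run(steps: int, mode: str) -> None:
    refreshes, work = 0, 0
    for _ in range(steps):
        if mode == "SLOW":
            refreshes += 1   # most of each slice goes to drawing the screen
            work += 1        # a little is left over for user code
        else:                # FAST: skip refresh entirely, screen goes blank
            work += 4
    print(f"{mode}: screen refreshes={refreshes}, work units={work}")

run(1000, "SLOW")   # display stays up, program runs slowly
run(1000, "FAST")   # program runs several times faster, display is blank
```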

Despite these limitations, the machine did offer a few options for expansion. The motherboard exposed an edge connector on the back of the case. There were enough pins on this connector for a memory expansion module to hang off the back of the machine. 2K was easy to exhaust, so an extra 16K was a nice addition. The issue here is that the connection between the computer and the expansion module was unreliable. The module could rock back and forth as you typed, and the machine would occasionally fail completely when the CPU lost its electrical connection to the expansion memory.

The usual mitigation strategy for an unreliable machine is to save your work often. This is a good idea in general, and even more advisable when pressing any given key might disconnect your CPU from its memory and totally crash the machine. The difficulty here is that the Timex only had an analog cassette tape interface for storage. I never did get this to work, but it provided at least theoretical persistent storage for your programs. The idea here is that the computer would encode a data stream as an analog signal that could be recorded on audio tape. Later, the signal could be played back from the tape to the computer to reconstruct the data in memory.
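If you've never seen this done, here's a rough Python sketch of the general idea: turn each bit into a short burst of one of two tones and write the result out as audio. This is not the actual TS1000 tape format (which used timed pulse bursts), and the frequencies and function names are my own; it just illustrates how a data stream becomes something a cassette recorder can store.

```python
# Illustrative only: encode bytes as two audio tones and save as a WAV
# file. Not the real TS1000 tape format; frequencies are arbitrary.

import math
import struct
import wave

RATE = 8000                    # samples per second
BIT_SECONDS = 0.01             # duration of each encoded bit
FREQ_0, FREQ_1 = 1200, 2400    # tone for a 0 bit, tone for a 1 bit

def tone(freq: float) -> bytes:
    """One bit's worth of a sine tone, as 16-bit mono samples."""
    samples = int(RATE * BIT_SECONDS)
    return b"".join(
        struct.pack("<h", int(20000 * math.sin(2 * math.pi * freq * i / RATE)))
        for i in range(samples))

def save_to_tape(data: bytes, path: str) -> None:
    """Write each bit of each byte as a burst of one of the two tones."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)      # 16-bit samples
        w.setframerate(RATE)
        for byte in data:
            for bit in range(8):
                w.writeframes(tone(FREQ_1 if (byte >> bit) & 1 else FREQ_0))

save_to_tape(b'10 PRINT "HELLO"', "program.wav")
```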

This is not the best example of an old computer with a lot of utility. In fact, the closest analog to a Timex Sinclair 1000 might not have been a computer at all. Between the keystroke programming, limited memory, and flashing display, the 1000 was arguably closest in scope to a programmable pocket calculator. Even with those limitations, if you had a 1000, you had a machine you could use to learn programming. It was possible to get a taste of what personal computing was about, and decide about taking the next step.

December 21, 2018

I've lately run across several interesting small computer history sites. If you have any interest in small computing's emergence from 1980 to 1990 or so, these are worth a look.

In no particular order:

  • OS/2 Museum - Covers OS/2, but also gets into detail around PC architecture. Among other interesting bits, this is just one of several articles on A20 gate handling, and here's something on the IBM 8514/A.
  • DTACK Grounded - A newsletter written to promote Hal Hardenbergh's side business of attached Motorola 68000 processor boards. Mostly interesting for his commentary on then-current events leading up to the emergence and use of 32-bit microprocessors. Notably, this was written at the time of Intel's pivot from the iAPX 432 to the 80386. The commentary on the relative unreliability of DRAM is amusing too.
  • CRPG Addict - Not sure how he has the time, but the author of this blog has set himself the challenge of playing through and documenting every early CRPG from the late 70's well into the 90's.
  • The Digital Antiquarian - Critical commentary on early small computer gaming. Lots of details about how games came to be made and their content.
  • Retrocomputing Stack Exchange site - This is currently more like Netflix than anything else. Coverage is spotty, but that doesn't mean you can't find something interesting to read.
November 6, 2008

In an era in which customers are almost begging Microsoft not to discontinue Windows XP, I was surprised to see a recent news story on the end of life of Windows for Workgroups 3.11 (WfWG). If you're not completely up on the early history of Windows, WfWG 3.11 was released in August of 1993, and was the last of the major US-market versions of Windows without native Win32 support out of the box. It was also one of a series of Windows releases in the early 90's that turned Windows from 'the library you need to run Excel' into a legitimate platform for general purpose computing.

From its introduction in 1985 until the release of Windows 3.0 in 1990, Windows was almost entirely composed of the same basic core: DOS for file access and system startup, and a collection of three DLL's (KERNEL, GDI, and USER) for memory management, device-independent graphics, and the GUI widget library and window manager. Atop the core sat programs written to the Windows API. All of this ran sharing the single 20-bit segmented address space provided by x86 real mode, with 640K of usable memory. If you were lucky, you might have had a LIM/EMS board that allowed a few MB of extra memory to be addressed through a 64KB window at the top of the address space. If you were really lucky, you might have had an 80386 computer with a special program that let it pretend its extra memory worked like a LIM/EMS board. Needless to say, memory was tight, difficult to use, and dangerous to share between multiple programs.
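For anyone who hasn't had the pleasure, the arithmetic of real mode is simple enough to sketch in a few lines of Python: a 16-bit segment and a 16-bit offset combine into a 20-bit physical address, which is why the whole world fit in 1MB and DOS conventions left roughly 640K of it for programs. The specific addresses below are just examples.

```python
# Real-mode address arithmetic: segment * 16 + offset, truncated to 20 bits.

def real_mode_address(segment: int, offset: int) -> int:
    return ((segment << 4) + offset) & 0xFFFFF   # 20-bit wrap, as on an 8086

print(hex(real_mode_address(0x0000, 0x0000)))   # 0x0      bottom of memory
print(hex(real_mode_address(0x9FFF, 0x000F)))   # 0x9ffff  top of the 640K
print(hex(real_mode_address(0xE000, 0x0000)))   # 0xe0000  a typical spot for a
                                                # LIM/EMS 64KB page frame
```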

The solution to this memory problem was initially to be OS/2. OS/2 was the operating system part of IBM's vast (and doomed) PS/2 program to recapture the PC space back from clone vendors. Like DOS, it was done in partnership with Microsoft, but IBM took a much more active role in the design and development of OS/2 than they did with DOS. OS/2's most noteworthy feature was the fact that it was designed to run in 80286 'protected mode' rather than the 'real mode' of DOS and Windows. Protected mode, like its name implies, added memory protection between processes that made multi-tasking more reliable. Protected mode also widened the physical address space of the CPU from 20-bits to 24-bits, making it possible to directly address 16MB of memory without resorting to tricks like LIM/EMS paging. This was all good, but it was tempered by the fact that OS/2 was expensive to run and didn't run DOS programs very well, thanks to its choice of 80286 protected mode over 80386. The only software that could actually use the benefits of protected mode under OS/2 was OS/2-specific software, which nobody had.

By the time 1988 rolled around, PC's with the capability of addressing more than 1MB of memory had been around since 1984, and there still wasn't a viable mainstream operating system that took advantage of this capability. This is when Windows got its big break: David Weise at Microsoft figured out how to run Windows itself in Protected Mode, along with unmodified Windows programs. Running existing software in protected mode was something of a holy grail, and Dr. Weise's idea ultimately resulted in Windows 3.0, released in 1990 to heady acclaim. Windows 3.0 also included the V86 multitasker from the older Windows/386 product. This meant Windows 3.0 could do things OS/2 could not do, like run multiple DOS programs at the same time and run them in graphical windows on the desktop.

Windows 3.0 ended up being a runaway sales success, and after its release, the rest of the dominoes fell fairly quickly. Microsoft's partnership with IBM effectively ended, with IBM getting a source license to Microsoft products through the early 1990's. IBM ultimately used this license to develop a special version of Windows they bundled with OS/2 2.0 to let Windows programs run under OS/2 ("a better Windows than Windows" went the ad). Microsoft's own 32-bit OS/2 2.0 got dropped, and the work done on OS/2 NT (3.0) ultimately formed the basis for 1993's Windows NT and the Win32 API. The next version of 16-bit Windows, Windows 3.1, dropped support for real mode entirely, and as it evolved into Windows 95, more and more system services were moved into 32-bit code. This 16/32-bit hybrid version of Windows lasted until Windows Me. It was definitely baroque, and ended up notoriously unreliable, but its evolution from 256K 8088's to 128MB Pentiums is to my eye one of the more impressive examples of evolutionary software engineering. I don't miss using these versions of Windows, but it's easy to miss the 'brave new world' spirit they embodied.

June 12, 2007

I just saw this story on Oregon Trail over on Slashdot. I was born in 1975, which puts me in grade school in the early 1980's, the prime time for these early Apple ][ educational games. As much as I liked Oregon Trail, I liked Rocky's Boots even better. The premise of Rocky's Boots was that you had to use basic boolean logic circuits to build machines to solve particular tasks. You ran around a little maze using a very early 'point and click' style user interface to gather parts and put them together. Once you were done, you got to watch your creation do its thing. A fun little bit of trivia regarding Rocky's Boots is that it was developed by Warren Robinett, the creator of Atari 2600 Adventure, including the famous, ego-driven easter egg.
