Was it possible to write a novel on a BBC Micro 16kb/32kb memory era computer without expansions?

Retrocomputing Asked by NibblyPig on August 25, 2021

BBC Micro model B has 32k memory. An average book, like Mary Shelley’s Frankenstein, has about 350,000 characters in it. So you’d need over 10 times the memory to load it in, plus the software to edit it.

If people wanted to use a BBC Micro era computer to write a novel, how did they go about doing it?

Would it be a case of maxing out the expandable memory? The wikipedia article for the BBC Micro suggests it could support quite a large number of expansions, presumably if they are 32kb you would need around 10 to load the entire novel plus the software.

Or did word processing software use say, a floppy disk as a way to store the novel and load portions of it into memory? It looks like disks could be around 200kb for the micro, so multiple disks might work (plus, you’d need to store the novel offline anyway).

Or perhaps there was some kind of clever compression that would let you get more out of the memory?

11 Answers

Of course.

The CPU is not a bottleneck. In fact, some people used a simple computer based on the NES, connected to a printer, to process documents containing Chinese characters, and the NES only had a few KB of memory.

Sideways ROM/RAM with large banks controlled by an ASIC was very common in 1990s NES games and software. A similar setup is also possible on the BBC Micro.

But, is it an expansion?

Answered by Milowork on August 25, 2021

The problem with the idea of expanding the memory is that the 6502 only had 64K of address space and pretty much all of it was allocated to something: 32K was used for the RAM, 16K for the current "sideways ROM", 15¼K for the OS, ¼K for internal memory-mapped IO and ½K for the "1MHz bus" expansion interface. So it was not possible to simply expand the main user memory area, which meant that while memory expansion options existed they were of limited utility.

Sideways ROMs were all banked into the same 16K chunk of address space. The OS provided mechanisms that directed calls to the appropriate sideways ROM and handled the bank switching. The OS supported up to 16 sideways ROMs, though extra hardware was needed for more than four.
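As a rough illustration of how that banking worked, here is a minimal Python sketch (not BBC code): the 16K window at &8000-&BFFF and the ROMSEL latch at &FE30 are the real BBC addresses, but everything else is simplified for clarity.

    # Minimal sketch (in Python, not BBC code) of the sideways-ROM banking scheme:
    # 32K of RAM below &8000, a 16K paged-ROM window at &8000-&BFFF, and a latch
    # (ROMSEL, at &FE30 on the real machine) selecting which of 16 banks is visible.

    class SidewaysBus:
        def __init__(self):
            self.ram = bytearray(0x8000)       # 32K user/screen RAM
            self.os_rom = bytes(0x4000)        # 16K OS image (stub)
            self.banks = [bytes(0x4000)] * 16  # up to 16 sideways ROM/RAM images
            self.romsel = 0                    # currently selected bank

        def write(self, addr, value):
            if addr == 0xFE30:                 # ROMSEL latch: pick the visible bank
                self.romsel = value & 0x0F
            elif addr < 0x8000:
                self.ram[addr] = value         # only main RAM is writable here

        def read(self, addr):
            if addr < 0x8000:
                return self.ram[addr]
            if addr < 0xC000:                  # paged window: whichever bank is selected
                return self.banks[self.romsel][addr - 0x8000]
            return self.os_rom[addr - 0xC000]  # OS ROM (memory-mapped I/O omitted)

The point is that BASIC, a word processor and the filing system all occupy the same 16K of address space; the OS simply switches the bank before calling into whichever one it needs, so none of them consumes user RAM.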

It was common for the word processing software to be implemented as a "language" sideways ROM, meaning that the word processor did not take up space in user memory. So a machine used for word processing might have three sideways ROMs installed: one for BASIC (you could theoretically remove this but I doubt many people did), one for the word processor, and one for the disk (or network) filing system.

"Shadow ram" boards did exist, these split the screen memory off into a seperate bank from user memory allowing the higher screen resolution modes to be used without eating in to user memory. I'm not sure what if any word processor software supported them.

There were also "sideways RAM" boards which banked RAM into the sideways ROM space. As far as I can tell this was mostly used for soft-loading programs that were intended to run from sideways ROMs.

As far as I can tell, the way you would work with lots of text on such systems was to work in multiple files, saving and re-loading from disk as needed. Initially such saving and reloading was manual, but automated options apparently appeared later.

Answered by Peter Green on August 25, 2021

You've started from the assumption that one would load the whole novel into memory. That's a false assumption.

Performance is the #1 reason

If you got anywhere near 16,000 book characters in RAM at one time, the system started to suffer performance issues, which were very annoying and would tend to break workflow.

It can’t be overstated how annoying these would be.

Also, a person working fast and touch typing puts a burden on the CPU that is often proportional to document size. Now, the BBC has a keyboard buffer, which helps, but even it is finite and can be overrun - especially if every keystroke creates work that takes longer than an average keystroke! But much more critically, this makes the system balky and inefficient. I'm sure you've typed on a modern PC when system task load makes the word processor laggy - but that happens for moments. Now imagine it all the time.

So your solution was to save files chapter by chapter, and keep chapters sanely small.

Based on your question I'm not sure you realise this, but every system back to the Apple II ca. 1978 had some sort of DASD (direct-access storage) available for it - typically a floppy disk, sometimes a smart tape drive. You could store many files on it and access any of them at will, so it was easy to have a bunch of small files.
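To make the workflow concrete, here is a hypothetical sketch of the "one small file per chapter" habit in modern Python; the chapter marker, file names and the 16 KB budget are illustrative only, not anything a period word processor enforced.

    # Hypothetical sketch of the "one small file per chapter" workflow: split a
    # manuscript at chapter headings and keep each piece comfortably small.
    # The marker, file names and 16 KB budget are illustrative only.

    MAX_CHAPTER_BYTES = 16 * 1024   # illustrative budget, not a real hardware limit

    def split_into_chapters(manuscript: str, marker: str = "CHAPTER "):
        chapters, current = [], []
        for line in manuscript.splitlines(keepends=True):
            if line.startswith(marker) and current:
                chapters.append("".join(current))
                current = []
            current.append(line)
        if current:
            chapters.append("".join(current))
        return chapters

    def save_chapters(manuscript: str):
        for n, text in enumerate(split_into_chapters(manuscript), start=1):
            if len(text.encode()) > MAX_CHAPTER_BYTES:
                print(f"chapter {n} is getting big - consider splitting it again")
            with open(f"CHAP{n:02d}.TXT", "w") as f:
                f.write(text)

The real discipline was simply never letting any single file grow past what the editor handled comfortably.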

You could do it with basic/low-end storage too, like tape drives, but anybody who sank real time into writing would have anted up for the DASD.

Yes, you saw large documents; the classic example would be the release notes for software. You could open such documents and read them, but there was serious lag, to the point where typing even one word would be tedious. And that was fine, because they were largely intended to be read-only. Obviously the publisher didn't write them as one document, but as several which were merged into one for distribution.

Answered by Harper - Reinstate Monica on August 25, 2021

Stephen Fry describes here how he wrote a book on the BBC Micro, saving on cassette:

In 1982 I bought a BBC Acorn for £399. It came complete with a firmware programme called Wordwise which I adored and which, in my fond memory, was the best word processor ever. I used it to write the book (ie story and dialogue) of a stage musical, saving on cassette tape as I went along and finally outputting to a daisywheel printer. The show was enough of a hit to allow me to indulge my passion for computer gadgetry for the rest of my life.

(the BBC Acorn for £399 being the BBC model B)

It should be remembered that documents would still be exchanged (with the publisher etc) largely on paper, so the fact they had to be split up into multiple files on the computer was no problem - you simply collated all your printout pages and bound them together. If the publisher also had an electronic system for editing they could deal with the split files in the same way you did.

Eventually it all got collated at press time, but in 1982 there was little computer typesetting, so at some point in the process your manuscript was turned into printed form (the 'camera-ready' copy) and then photographed by the printing house for duplication.

There was no sharing of the finished version in electronic format (PDF, etc) where it needed to be all together electronically. Paper was king.

Answered by user1908704 on August 25, 2021

The first novel written on a microcomputer was probably Jerry Pournelle's portion of Oath of Fealty (cowritten with Larry Niven). At the time, Pournelle did most of his work on a Cromemco Z-2 with 64KiB RAM and CP/M (the machine is described in more detail in Pournelle's column in the July 1980 issue of Byte).

It's not much of a stretch to imagine that similar work could have been performed using a BBC B - it may only have half the RAM, but it has both OS and (optionally) a word processor in ROM, substantially reducing the amount of RAM that would be unavailable to store text, so the largest editable document would probably be about 75% or so of the size that Pournelle's system would have been able to handle.

The main problem would be convenient storage - the largest disk commonly used with the BBC (80-track 5 1/4") held 400KiB total, which is slightly less than the typical size of a novel (somewhere between 600KiB and 800KiB), so during editing you'd likely have to swap disks quite frequently. It's not clear what capacity drives Pournelle used, but there were larger formats available, so he may well have not had this problem.
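For a quick sanity check of those figures, a back-of-the-envelope sketch (the values are the approximate ones quoted above; the chapter count is purely illustrative):

    # Back-of-the-envelope check of the figures quoted above (approximate values
    # from the answer; the chapter count is purely illustrative).

    novel_chars   = 700 * 1024   # a typical novel: somewhere between 600 and 800 KiB
    disc_capacity = 400 * 1024   # one 80-track 5.25" disc as commonly used on the BBC

    discs_needed = -(-novel_chars // disc_capacity)            # ceiling division
    print(f"{discs_needed} discs to hold the whole book")      # -> 2

    chapters = 30                                                 # illustrative
    print(f"~{novel_chars // chapters // 1024} KiB per chapter")  # -> ~23 KiB, an easy fit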

Answered by occipita on August 25, 2021

BBC Micro model B has 32k memory. An average book, like Mary Shelley's Frankenstein, has about 350,000 characters in it. So you'd need over 10 times the memory to load it in, plus the software to edit it.

True, but only if you insist on having all of the text in RAM at all times.

If people wanted to use a BBC Micro era computer to write a novel, how did they go about doing it?

Using a text editor and following whatever concept it offered:

  • The simplest would be splitting it into chapters. One is never really working on the whole book anyway. It does require saving your changes each time you switch from one section to another. The practical chapter size would depend a lot on memory and the type of book.

Of course, on a bare BBC the only medium would be cassette, so saving and loading is rather slow and cumbersome. Investing in a floppy drive helps a lot to speed things up, which brings us to the next methods:

  • Some editors did offer fast switching between sections and

  • others offered a kind of virtual memory system, loading only the part of the document shown right now (plus maybe a few screens before and after). Think of it as a window into a text which is not in main memory but on second-level storage (see the sketch after this list). Text size was now only limited by media size.

  • Some even combined both, allowing a text to be split into sections residing on different drives/floppies while loading only part of each.
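Here is a minimal sketch of that windowing idea in modern Python, purely as an illustration; real editors of the time worked on raw disc sectors rather than a file object, and insertion and deletion (which change the file length) made them considerably more involved.

    # Minimal sketch of "virtual memory" editing: only a small window of the
    # document is ever held in RAM; moving around re-reads from backing storage.
    # Overwrite-only for simplicity; real insert/delete changes the file length.

    WINDOW = 4 * 1024   # illustrative in-memory window, far smaller than the file

    class WindowedDocument:
        def __init__(self, path):
            self.f = open(path, "r+b")
            self.offset = 0                    # file position of the window start
            self.buf = bytearray(self.f.read(WINDOW))

        def scroll_to(self, offset):
            """Flush the current window and load the one containing `offset`."""
            self.f.seek(self.offset)
            self.f.write(self.buf)             # write back any edits made so far
            self.offset = (offset // WINDOW) * WINDOW
            self.f.seek(self.offset)
            self.buf = bytearray(self.f.read(WINDOW))

        def poke(self, offset, byte):
            """Overwrite one byte, paging in the right window first if needed."""
            if not (self.offset <= offset < self.offset + len(self.buf)):
                self.scroll_to(offset)
            self.buf[offset - self.offset] = byte

        def close(self):
            self.f.seek(self.offset)
            self.f.write(self.buf)             # final write-back
            self.f.close()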

Would it be a case of maxing out the expandable memory?

More relevant than main memory might be the disk size. After all, it is of no help if one could edit a 300 KiB text but only 200 KiB could be stored on a floppy.

Or did word processing software use say, a floppy disk as a way to store the novel and load portions of it into memory? It looks like disks could be around 200kb for the micro, so multiple disks might work (plus, you'd need to store the novel offline anyway).

That is exactly what the 'better' editors offered.

Or perhaps there was some kind of clever compression that would let you get more out of the memory?

A few did go that way, but the savings aren't as great as one might think. Processors were slow, so only simple algorithms could be used, ending up with maybe 25% saved and thus not offering much gain. The Zork engine is a great example of what could be done back then.


Back then many people did use their machines for serious word processing, all the way from letters to whole books. I remember one guy writing his thesis on a CPC 464 using cassette storage. So yes, they were used in every imaginable way. Today's ease with LibreOffice and the like makes us forget that there was a time when writing a paper took dedication and skills well beyond the subject matter.


Answered by Raffzahn on August 25, 2021

It was common to install word processing software as a ROM into one of the spare "sideways ROM" sockets on the BBC Micro, in the same way as the DFS ROM needed to operate a floppy drive. WordWise and Inter-Word were two popular options. This left more RAM available for text than if the software were loaded into RAM from tape or disk; typically enough to write a chapter in.

From late 1984, WordWise Plus gained the ability to use a floppy disk as backing storage for editing a larger document. It also allowed having multiple documents open at once. This would effectively remove any practical limitation on the size of an individual chapter. However, the size of the document would still be limited to the capacity of one side of a floppy disk.

It was reasonably common to fit twin double-sided 80-track floppy drives to a BBC Micro, and these would collectively have enough capacity for a full novel, divided into a file per chapter. Also, the twin drives could be used to make backups.

Text compression algorithms were not in common use on 8-bit micros. About a 3:1 compression ratio could be expected from running DEFLATE on typical English text. However, DEFLATE was not invented until approximately 1990 (Katz's patent is dated 1991) and ran on a contemporary DOS PC. Before then, less effective algorithms such as LZW and basic Huffman coding were available. On an 8-bit CPU, compression algorithms would have been slow and rather memory-hungry, and thus not obviously worthwhile for a word-processing application.
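If you want a feel for that 3:1 figure, Python's zlib module (a DEFLATE implementation) reproduces it easily on plain English prose; the file name below is just a placeholder for whatever text you have to hand.

    import zlib

    # Rough check of the ~3:1 DEFLATE figure for English prose.
    # "frankenstein.txt" is a placeholder for any plain-text novel you have handy.
    with open("frankenstein.txt", "rb") as f:
        text = f.read()

    packed = zlib.compress(text, level=9)
    print(f"{len(text)} -> {len(packed)} bytes, "
          f"ratio {len(text) / len(packed):.2f}:1")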

Answered by Chromatix on August 25, 2021

Computer Concepts (now part of Xara) produced Wordwise Plus in 1984 on a 16 K EPROM. It allowed a document to use the entire space on an attached disk drive as virtual memory.

Answered by scruss on August 25, 2021

This answer is not specific to the BBC Micro, but is generally illustrative of editing technology on systems where document size is likely to exceed available memory. I'd allege that prior to virtual-memory systems, this style was the norm, since there was "never" as much memory as you needed.

I've edited fairly lengthy programs on a system where the core available to the editor (the 'amender') was only a handful of words larger than one disk block - as I recall, 640 words or 3840 characters; this had to hold the input version of the file and the output version at the same time.

You could imagine two separate buffers for input and output, but space was tight, so the input text sat at the top of the buffer and the output at the bottom. You edit the text sequentially, a line at a time. As you proceed through the file, lines are transferred from one to the other.

When you navigate off the end of the current input, the output is written out and the next block of input is read in.

There are of course complications in the amender for inserting more text than you've got space for, and perhaps if lines span block boundaries (I forget whether the latter was allowed, maybe there were no partial lines in blocks). But the user did not need to deal with it.

If you needed to 'go backwards' in the file, this could be accomplished by writing out the rest of the output and starting again with that as the new input file.

So, with one simple restriction -- forward motion only -- you can edit in practically no memory at all (by modern standards).
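The same forward-only scheme is easy to express in modern terms. A hedged sketch follows: the 3840-character block size is taken from the description above, while the file handling and the edit_line callback are purely illustrative.

    # Sketch of forward-only editing in a tiny buffer: the old file is read block
    # by block, an edit function sees one line at a time, and the result streams
    # straight out to the new file, so memory use stays near one block.

    BLOCK = 3840   # roughly the buffer size described above (640 words)

    def amend(in_path, out_path, edit_line):
        """edit_line(line) -> replacement text ('' deletes, may add extra lines)."""
        with open(in_path, "r") as src, open(out_path, "w") as dst:
            pending = ""
            while True:
                chunk = src.read(BLOCK)
                if not chunk:
                    break
                pending += chunk
                lines = pending.split("\n")
                pending = lines.pop()          # keep a partial last line for next block
                for line in lines:
                    dst.write(edit_line(line + "\n"))
            if pending:
                dst.write(edit_line(pending))

    # e.g. amend("old.txt", "new.txt", lambda ln: ln.replace("teh", "the"))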

A slightly more flexible editing arrangement puts the 'chunk' control into the hands of the user. Some amount of text is read into the available buffer space. The user can edit randomly within the buffer. When he's done with that buffer-full, he can move on to the next. This is sequential through the file, but randomly within a single buffer.

The venerable TECO, on many DEC systems, used this approach.

As usual, you quickly adapt your working style to the technological limits of the available tools. All of this was better than using a card punch.

Answered by another-dave on August 25, 2021

Since you're asking not (just) about the BBC Micro, but about computers of that era in general: More sophisticated word processors like WordStar, running on CP/M, were able to swap both code and text between RAM and disk, letting the user edit lengthy documents in the typical 64k of early-80s CP/M systems. This would, of course, be slow, and profit greatly from more RAM.

Nonetheless, as Brian wrote, you'd break down longer texts into separate files. This would continue for long after; MS Word had a special function for that until at least Word 97.

Answered by Michael Graf on August 25, 2021

You just made a file for each chapter, like sensible people do with current word processing!

It is very unusual to write something lengthy in a single document.

Answered by Brian Tompsett - 汤莱恩 on August 25, 2021
