Execution and bus access clock rates up to 10 MHz
Memory Management Unit supporting 512K bytes of memory (one megabyte for the HD64180 packaged in a PLCC)
I/O space of 64K addresses
12 new instructions including 8 bit by 8 bit integer multiply, non-destructive AND and illegal instruction trap vector
Two channel Direct Memory Access Controller (DMAC)
Programmable wait state generator
Programmable DRAM refresh
Two channel Asynchronous Serial Communication Interface (ASCI)
Two channel 16-bit Programmable Reload Timer (PRT)
One channel Clocked Serial I/O Port (CSI/O)
Programmable Vectored Interrupt Controller
As a consequence it was really popular in the 90s as an embedded processor, just when I was starting my career. This led to me writing thousands of lines of Z80 assembly. You could program it in C, but the compiler was useless at making stuff go fast.
One of those things I wrote was an LZ77 decompressor used in a satellite broadcast system. It took me about a week to write it, test it and optimise it. Quite a challenge! I remember optimising it around the LDIR instruction to copy memory.
The compressor was written in C and ran on the PCs of the day.
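For anyone curious about the LDIR connection: LDIR copies bytes one at a time in ascending order, which is exactly what an LZ77 back-reference copy needs when the match overlaps the output position. Here's a minimal C sketch of that core loop (illustrative only; the names and structure are assumptions, not the commenter's actual code):

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of an LZ77 back-reference copy. Like the Z80's LDIR, bytes
   are copied one at a time in ascending order, so an overlapping copy
   (distance < length) naturally repeats recently-emitted output --
   a cheap form of run-length expansion. memcpy() would NOT work here,
   because overlapping regions are the whole point. */
static void lz_copy(uint8_t *out, size_t pos, size_t distance, size_t length)
{
    uint8_t *dst = out + pos;
    const uint8_t *src = dst - distance;   /* back-reference into output */
    while (length--)
        *dst++ = *src++;                   /* byte-at-a-time, like LDIR */
}
```

With "ab" already emitted, a copy of length 4 at distance 2 extends the output to "ababab" -- the overlap replays the output as it grows.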
I would love to learn more about this. Does more "position-independent code" mean the linker has much less to do [0], or is there an actual difference in the code base for similar tasks?
In theory, the motivation for position independent code was to support the development and use of software libraries that could be "plugged in" to an application.
In practice, RAM was often limited to 16 KB; software reuse that I'm familiar with on a 6809 platform was at the source-code level and optimized by the programmer.
I remember editing and assembling, but not compiling or linking.
That said, I believe Motorola wrote some floating-point libraries.
I was a kid on a Tandy Color Computer, and the $49.95 EDTASM cartridge was a huge investment for our family. So my point of view could be way off... but the simplicity of the Color Computer with the design of the 6809 made programming delightful. (20 years later, my enjoyment in programming the Palm Pilot felt like that... although by then I could use C as a fancy macro assembler.)
Larger and later systems could use OS-9, which reasonably resembled UNIX and maybe supported a C compiler.
This wasn't the only way to skin the cat. Multi-pass compilers were another way.
Relocatable code could make more efficient use of memory, for instance not having to worry that your object code would end up crossing a page boundary after linkage.
Modern systems don’t worry about PIC code, they have virtual memory so everyone sees memory the same. The virtual memory system manages the relocation automatically.
OS/9 relies pretty much entirely on PIC code, that made the loader and multi-tasking easy.
Original MacOS also relied on PIC, for similar reasons, and it's partly why code segments were limited to 32k.
Then you have things like the original 8086. As long as you stick with the “tiny”/“small” memory models, everything was relative to the segment registers, so code and data could be moved easily.
In contrast you had systems like the Apple IIGS. The 65816 does not support PIC well, so code segments carry a relocation table that allows the segment loader to relocate code during transfer from disk. The creation of segment and relocation table is the job of the linker.
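The loader side of that scheme is simple in principle: each relocation entry names an offset within the segment where the linker stored a segment-relative address, and the loader adds the actual load address at load time. A hypothetical sketch in C (the real IIGS OMF format is considerably more involved; this just shows the principle, with little-endian 24-bit addresses as on the 65816):

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative relocation pass: for each fixup offset, patch the
   little-endian 24-bit address stored there by adding the segment's
   actual load address. After this pass the code runs in place. */
static void relocate(uint8_t *seg, uint32_t load_addr,
                     const uint32_t *fixups, size_t nfixups)
{
    for (size_t i = 0; i < nfixups; i++) {
        uint8_t *p = seg + fixups[i];
        uint32_t addr = (uint32_t)p[0]
                      | (uint32_t)p[1] << 8
                      | (uint32_t)p[2] << 16;
        addr += load_addr;                 /* rebase the address */
        p[0] = addr & 0xFF;
        p[1] = (addr >> 8) & 0xFF;
        p[2] = (addr >> 16) & 0xFF;
    }
}
```

The linker's job is producing the fixup list; the loader's job is this one pass during the transfer from disk.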
https://en.wikipedia.org/wiki/Address_space_layout_randomiza...
Position independent code (PIC) on the 6809 is pretty easy [1]. It does increase the code size a bit, but the resulting code can be placed anywhere in memory with no changes and still work. As mentioned, Motorola intended to sell a ROM with IEEE-754 floating point routines for the 6809 (as the MC6839) that was PIC. As far as I could tell, they never did sell the ROM, but they did provide it (with source) for anyone to use.
[1] Relative branches instead of absolute jumps, using the index registers to address memory, as well as addressing relative to the program counter. You can still do jump tables, but instead of a list of addresses, they're just a list of relative jump instructions. That type of thing.
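The "offsets instead of addresses" idea behind those relative jump tables can be shown in C with a data table (an illustrative analogue, not real 6809 code): entries store offsets from the table's own start, so the whole blob resolves as base + offset and can be copied anywhere without fixups.

```c
#include <stddef.h>

/* A position-independent lookup table: entries hold offsets relative
   to the table's own first byte, the way a 6809 jump table stores
   relative branch instructions instead of absolute addresses. */
static const unsigned char table[] = {
    2, 8,                       /* offsets of the two entries, from table[0] */
    'h','e','l','l','o', 0,     /* entry 0 at offset 2 */
    'w','o','r','l','d', 0,     /* entry 1 at offset 8 */
};

static const char *lookup(const unsigned char *base, int i)
{
    return (const char *)base + base[i];   /* base-relative, hence PIC */
}
```

Copy the blob to any other address and `lookup` still works, because nothing in it is absolute.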
I'm not familiar with the instruction sets of the 6809 but I could also see more compact opcodes, e.g. a JMP with a relative offset can be encoded smaller than JMP with an absolute address.
In modern terms PIC is used for ASLR and is therefore a security requirement. Some arches (I'm most familiar with arm64) are entirely designed around PIC and you need extra hoops to do anything in absolute terms.
Could have mentioned the use of the 6809 in the Radio Shack TRS-80 Color Computer and the Dragon in the UK. Using the TRS-80 tag on something not using a Z-80 never made sense.
Juggling? Ouch! These comments reveal an apparent unawareness of the 65816's long address modes, which offer four different ways of computing a full, 24-bit address. None of the long address modes involves the "data bank pointer" (Data Bank Register), which can more appropriately and less painfully be used for legacy code (6502) and other 64K-oriented contexts.
Two of the 65816's long address modes use three-byte indirect pointers in zero-page/Direct-Page, where any reasonable number of long pointers can be simultaneously available (in contrast to only DS ES CS SS). And segment override prefixes never come into the picture.
Finally, "hitting bank boundaries" is not the excruciating issue it's made out to be, because the indexed long modes transparently span said boundaries. And note that the 16-bit index is added to a fully specified 24-bit base (not a 16-bit base inflated to 20 by shifting zeros into the LSBs).
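The distinction in that last point is easy to state as arithmetic. A sketch of the two effective-address calculations (simplified; real hardware has more modes and details):

```c
#include <stdint.h>

/* 65816 long indexed mode as described above: a full 24-bit base plus
   a 16-bit index, with the carry propagating into the bank byte, so
   indexing transparently crosses bank boundaries. */
static uint32_t ea_long_indexed(uint32_t base24, uint16_t index)
{
    return (base24 + index) & 0xFFFFFF;
}

/* 8086 real-mode addressing, for contrast: a 16-bit segment inflated
   to 20 bits by shifting zeros into the low bits, then a 16-bit
   offset added, with the result wrapping at 1 MB. */
static uint32_t ea_8086(uint16_t segment, uint16_t offset)
{
    return (((uint32_t)segment << 4) + offset) & 0xFFFFF;
}
```

So a base near the top of bank $01 plus a small index lands cleanly in bank $02, whereas the 8086 sum wraps at the top of its 20-bit space.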
Otherwise an interesting article -- I enjoyed it.
My impression of the Z80 being clean and simple probably resulted from that book being so clearly written. It gave me a good enough understanding of how micros work that lasted until the more modern chips came out with things like pipelining. But I think that learning one of those old 8 bit chips would still be a great place to start for understanding things at a hardware level.
Dynamic memory refresh on chip was clever.
In 2025 I started programming 6502 assembly just for fun as an intellectual exercise (I did a TINY bit of x86 asm in the past) and MY GOD: this is so easy and so valuable to learn!
Programming the 6502 seems simpler than learning, let's say, a JS framework, or just about anything modern.
It's super fun, super easy and very rewarding.
I ended up designing my own ultra-RISC, stack-based machine with a uniform 32-bit fixed-length word (all instructions and data have exactly the same size), MMIO and other cool features. A 6502 on steroids.
I felt competent for the first time in a long time as a jobless programmer doing that :)
Vast, complex, changing APIs make programming unfun. Retro machines were fun because the things to learn were concise; the rest was up to your thinking.
Thank you!
I acquired a Z-80 softcard for my Apple ][ (for trying out CPM) and was flabbergasted by the expanded register set, the complexity of some instructions (e.g. DJNZ) and the fact it ran at 4MHz vs 1MHz for the 6502 (got a speed demon 65C02 card later). However I couldn't keep all instructions and timings in my head. Speedwise the 1MHz 6502 and 4MHz Z80 were on par. I preferred, however, the fact that I/O was memory mapped on the 6502.
This is a bit of an exaggeration; the 6502 was efficient, but not that efficient. While it's generally understood that the Z80 took 2x-4x as many clock ticks as the 6502 to execute instructions, in the real world its larger register set meant properly-written Z80 code could avoid expensive, slow round trips to memory.
Outside of artificial benchmarks, real world performance shows that the 6502 is roughly 2x as efficient per clock cycle as the Z80 [0], i.e. a 1 MHz 6502 is approximately equivalent to a 2 MHz Z80.
This is reflected in the computers of the day, i.e. TRS-80s were not being blown out of the water by Commodore PETs.
[0] https://github.com/soegaard/minipascal/blob/master/minipasca...
> the complexity of some instructions (e.g. DJNZ)
Well, of course the idea of DJNZ was to implement a very common pattern (decrement a register and jump (normally backwards) if the result was not zero) - this tended to simplify code rather than make it more complex.
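The C-level shape of that pattern is a decrement-and-test loop; on the Z80, `DJNZ label` folds the `DEC B` + `JR NZ` pair into one two-byte instruction. A trivial illustration (hypothetical example, using an 8-bit-style counter the way B is used):

```c
/* The DJNZ idiom in C terms: decrement the counter and loop while
   it is non-zero. Note it is a do-while -- DJNZ tests AFTER the
   decrement, so the body always runs at least once. */
static int sum_first_n(int n)
{
    int sum = 0;
    int b = n;              /* counter, playing the role of register B */
    do {
        sum += b;
    } while (--b != 0);     /* this pair is what DJNZ encodes */
    return sum;
}
```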
> However I couldn't keep all instructions and timings in my head.
I was never really interested in the timings, but I did get to the stage (not by conscious memorisation) of being able to assemble and disassemble Z80 code in my head, with some accuracy.
> I preferred, however, the fact that I/O was memory mapped on the 6502.
Many (most?) Z80 systems used memory mapped I/O. It's down to the hardware designer.
Same here.
I never got any fluency using EXX and the shadow registers - there were so few situations it was worth the effort. I always felt like I must be missing something.
The Z80 could do memory mapped IO as well of course (used at least in some arcade machines), but why waste valuable address space when there's an entire 64 KB of extra address space reserved for IO ;)
As others said, a 4MHz Z80 is clearly capable of outperforming a 1MHz 6502 as is evidenced by the many ZX Spectrum demos that show off 3D/plotting effects.