Notes From The Dork Web

On Countercomputing

I recently read MOS8502's countercomputing manifesto. I like it, particularly these parts:

We assert that the sole agency over the use and output of privately owned tools belongs with and only with the owners of those tools; we assert an inherent right to decide for ourselves what software will and won't run, and when, on computers we own and operate for our own private benefit, and the same right held collectively over shared computers.

We therefore resolve to create, under common purpose and shared ownership, a new standard for a computing device, which is designed to educate and empower the owner, rather than to expose and exploit them.

This kind of standard has arisen before, deliberately with the MSX and almost de facto rather than de jure with CP/M, the IBM PC and to some extent the Apple II. Outside of the commercial space lie projects like Varvara and UXN. I think it's a good idea, provided we don't simply attempt to reinvent the computing of the 70s-90s; at the same time we need usable tools for such a standard to be useful.

The Retro Zer0 And ZX20

Some years ago, my dear friend Saumil asked me if I could design and build a thank-you project for his teaching assistants. This resulted in the Retro Zer0, a single-board hand-solderable personal computer based on the ESP32, running a slightly modified version of Fabrizio DiVittorio's FabGL Altair 8800 emulator. It was supplied pre-built with mouse, keyboard and VGA cables, but it was the realization of something I'd wanted to do for some time: create a computer anyone can build from the components up without relying on big tech. It was the logical conclusion of what started with the HIDIOT ATTiny boards I built at 44CON to teach soldering to hackers.

The Retro Zer0 came with a ton of CP/M software. Using emulation, I was able to tweak resolution, add features and multi-session support, and even include WiFi support, all while maintaining CP/M compatibility. I don't have the SD card image anymore, but the 64-page manual is online.

The Retro Zer0 was meant to be the first stage towards a hand-buildable computer useful and capable enough for personal computing tasks, one that could come up for air and interact with other computers on a network based on principles of mutual aid rather than consumption. I switched to CP/M after quickly hitting BASIC's limits. It was around this time I started writing about Heirloom Computing on my old newsletter.

A couple of years later I added a basic and buggy 9P implementation at the ESP layer. This let me share ESP32 filesystems and virtualized devices as files. I could read PCM audio on one Retro Zer0 and write it to N:snd, and the audio would play on another system on the WiFi network. I had some very crude code execution capabilities too. Because this was managed at the ESP32 layer, the CP/M layer was completely unaware. The multi-session support acted as a kind of limited multi-tasking feature. Being able to remotely control sessions let me 'borrow' resources from another Retro Zer0.
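To give a flavour of how small the protocol layer is, here's a minimal sketch of encoding a 9P2000 version-negotiation message, the first message a client sends. This isn't the Retro Zer0's actual code (which I no longer have); it just illustrates the real 9P2000 wire format: every message is size[4] type[1] tag[2] followed by type-specific fields, all little-endian.

```python
import struct

def encode_tversion(msize: int, version: str, tag: int = 0xFFFF) -> bytes:
    """Encode a 9P2000 Tversion message (type 100).

    Wire format: size[4] type[1] tag[2] msize[4] version[s], where s is
    a 2-byte length prefix followed by UTF-8 bytes. Tversion uses the
    special NOTAG value 0xFFFF."""
    v = version.encode("utf-8")
    body = struct.pack("<BHI", 100, tag, msize) + struct.pack("<H", len(v)) + v
    # The leading size field counts itself plus the body.
    return struct.pack("<I", 4 + len(body)) + body

msg = encode_tversion(8192, "9P2000")
```

A protocol this compact is why it fits comfortably on an ESP32-class device: thirteen message pairs, all framed the same way.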

While it was crude, extremely buggy and without any security whatsoever, it was a paradigm built around collaborative sharing, centered on the user(s) and serving the user(s).

I spent some time working on biodegradable circuit boards and ink- and copper-tape-based conductivity to make things more DIY, but a combination of declining health and life events meant the ZX20 was ultimately shelved. None of this led me to a destination, but it showed me what was possible.

My Hypothetical Vision Of Countercomputing

If I were starting the ZX20 again, I'd draw inspiration from Amiga Exec to provide a lightweight pre-emptive multitasking core.

Personally, my ideal software stack would be a thin kernel/device layer with a simple virtualization layer to bring an initial volume of software to the platform. That could come from CP/M, PET, Apple II, or early DOS emulation. The purpose of this is to provide a large tool base. Without it, you have to implement all of the tools yourself. It's less about retrocomputing and more about a lightweight abstraction layer providing interaction between already available tools. There's no reason, for example, that UXN couldn't be included, but its tool pool is smaller than CP/M's.

A system constrained by emulated tools could be doomed to being a collection of museums, each static. The core OS needs to be able to run tools of its own, starting with automation. As we wouldn't be bound to commercial paradigms, I'd look at wedding AREXX's port concept to 9p namespaces, enabling powerful automation through simple scripting. Local apps would primarily be scripts, manipulating the filesystem to access exposed namespaces locally and across the network. If properly implemented, there would be no user-visible distinction, beyond latency, between resizing an image locally and resizing it across the network.
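A minimal sketch of what that wedding might look like: an AREXX-style message port where commands arrive as lines written to a ctl file in a 9p-like namespace. Everything here is hypothetical and invented for illustration (the port name, the resize verb, the file layout); the point is that a script writing "resize photo.pcx 320 200" to a path neither knows nor cares whether the handler is local or remote.

```python
class Port:
    """An AREXX-style command port. A real system would expose write_ctl
    as a synthetic file (e.g. /n/imgtool/ctl) served over 9p."""
    def __init__(self, name):
        self.name = name
        self.handlers = {}

    def command(self, verb):
        """Decorator registering a handler for a command verb."""
        def register(fn):
            self.handlers[verb] = fn
            return fn
        return register

    def write_ctl(self, line: str):
        """Simulate a client writing 'verb arg...' to the port's ctl file."""
        verb, *args = line.split()
        return self.handlers[verb](*args)

# Hypothetical image-tool port with one registered command.
imgport = Port("imgtool")

@imgport.command("resize")
def resize(path, width, height):
    # A real handler would drive the actual tool; here we just report.
    return f"resized {path} to {width}x{height}"

reply = imgport.write_ctl("resize photo.pcx 320 200")
```

Because the script's only interface is "write a line to a file, read a reply", mounting a remote machine's namespace makes its ports scriptable with zero changes to the script.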

Implementing 9p for a system with transient connectivity, using lightweight mutual authentication (whether through shared keys, pairing, or something like mutual TOFU), would enable collaborative orchestration of resources volunteered by others when needed, rather than through a commercial or centralized prism. Prioritizing a LAN or private-network setup creates a simple boundary, but a 9grid-style or GlobalTalk-style capability might be nice too.
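The TOFU half of that is simple enough to sketch. Trust-on-first-use means: the first time you see a peer, pin its key fingerprint; on every later connection, refuse if the fingerprint has changed. A minimal sketch, with invented peer names and a JSON pin store standing in for whatever a real device would persist:

```python
import hashlib
import json
import pathlib
import tempfile

class TofuStore:
    """Trust-on-first-use pin store: remember a peer's key fingerprint
    the first time it is seen, reject the peer if it later changes."""
    def __init__(self, path):
        self.path = pathlib.Path(path)
        self.pins = json.loads(self.path.read_text()) if self.path.exists() else {}

    @staticmethod
    def fingerprint(pubkey: bytes) -> str:
        return hashlib.sha256(pubkey).hexdigest()

    def check(self, peer_id: str, pubkey: bytes) -> bool:
        fp = self.fingerprint(pubkey)
        pinned = self.pins.get(peer_id)
        if pinned is None:
            # First sight: pin the key and trust it.
            self.pins[peer_id] = fp
            self.path.write_text(json.dumps(self.pins))
            return True
        # Every later sight must match the pin exactly.
        return pinned == fp

# Demo with hypothetical peer "zer0-a" and placeholder key bytes.
store = TofuStore(pathlib.Path(tempfile.mkdtemp()) / "pins.json")
first = store.check("zer0-a", b"key-one")    # first use: pinned
same = store.check("zer0-a", b"key-one")     # same key: accepted
changed = store.check("zer0-a", b"key-two")  # changed key: rejected
```

Mutual TOFU is just both sides running this check against each other's keys; pairing then reduces to bringing two devices into first-sight range at a moment you control.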

Those emulated systems with their applications become part of the exposed namespace. Hypothetically, there would be little stopping you from interacting with a CP/M session via the session port, extracting data and passing it to an Apple II or UXN session.

By basing networking on filesystems, network and filesystem concepts are largely unified. This would also let users directly or indirectly provide connectivity to tools that are unaware of it. As an alternative to full networking, tools like UUCP could provide interaction over networks without an IP stack, offering an offpunk-style experience if web-type experiences were genuinely desired.
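The UUCP idea boils down to store-and-forward: drop job files into a spool directory, and a transfer pass ships them whenever a link happens to exist. A minimal sketch of that pattern (file names, job format and the deliver callback are all invented for illustration, not UUCP's actual formats):

```python
import json
import pathlib
import tempfile

def enqueue(spool: pathlib.Path, dest: str, payload: bytes) -> pathlib.Path:
    """Queue a job file in the spool; it waits there until a link exists."""
    spool.mkdir(parents=True, exist_ok=True)
    job = {"dest": dest, "data": payload.decode("latin-1")}
    path = spool / f"job-{len(list(spool.iterdir()))}.json"
    path.write_text(json.dumps(job))
    return path

def flush(spool: pathlib.Path, deliver) -> int:
    """Deliver every queued job in order, deleting on success."""
    count = 0
    for p in sorted(spool.glob("job-*.json")):
        job = json.loads(p.read_text())
        deliver(job["dest"], job["data"].encode("latin-1"))
        p.unlink()
        count += 1
    return count

# Demo: queue two jobs while "offline", then flush when a link appears.
spool = pathlib.Path(tempfile.mkdtemp()) / "uucp"
delivered = []
enqueue(spool, "zer0-b", b"hello")
enqueue(spool, "zer0-b", b"world")
count = flush(spool, lambda dest, data: delivered.append((dest, data)))
```

Because the queue is just files in a directory, it composes naturally with the filesystem-as-network idea above: the spool itself could be an exposed namespace.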

That combination of mutual authentication, transient connectivity, offline sync, and AREXX-style control mapped to 9p-style filesystem namespaces opens up a voluntary, collaborative approach based on mutual trust and aid instead of commercial exploitation. It is the antithesis of enshittification. The rejection of enshittification is not enough. We must build systems where the concept of enshittification is rejected as a violation of the social good.

Chips have moved on from the ESP32 of 2020. That lightweight kernel layer should be easy to implement or emulate across SoCs, but the lower the requirements, the cheaper and easier it becomes to build. I don't envision a bicycle for the mind, but a forest of trees we plant for others.

There is much to be upset about in the world today, and a desire to disengage is understandable. But a world in which computing capability can be co-operatively built, maintained and shared for mutual benefit, one day without reliance on a brittle and exploitative supply chain, would be a little less upsetting. We don't need to fight, rise up or resist; we just need to build the tools we want without asking permission. And what could be more punk than that?

#centralization #countercomputing #philosophy #programming