ssh-workbench/docs/LEGACY_TEST_LAB.md
jima 7f4aa15830 Server-driven auth, 3-attempt retry, remember-on-success, legacy test lab doc
Auth flow rewritten to be fully server-driven: no pre-connect password
dialog. TCP + KEX + host key verification happen first; password prompt
only appears when the server actually asks via keyboard-interactive or
the new prompted-password fallback (for servers that only accept the
"password" SSH method). Up to 3 password attempts per connection,
matching OpenSSH's NumberOfPasswordPrompts default.

"Remember password" checkbox now functional: AuthPromptResult threads
the remember flag through the callback chain; SSHSession stashes the
typed password in pendingRememberPassword; TerminalService persists it
to CredentialStore only after session.connect() succeeds — wrong
passwords are never saved.

Removed dead pre-connect dialog code: PasswordDialog composable,
PasswordResult, TerminalDialogRequest.PasswordPrompt, and
passwordPromptHandler.

Added docs/LEGACY_TEST_LAB.md: self-contained 2100-line guide for
building a dedicated server with 56 historical/modern Unix systems
for terminal parser conformance testing (Docker, QEMU/KVM, SIMH,
polarhome, IBM PowerVS). Includes all Dockerfiles, compose.yml,
SIMH configs, systemd units, and helper scripts inline.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-06 19:50:18 +02:00


SSH Workbench — Legacy Test Lab

A self-contained guide for building a dedicated server that hosts 50+ historical and current Unix-like systems for terminal emulator conformance testing.

Target host: Linux Mint 20.2 (Ulyana, Ubuntu 20.04 base), 32 GB RAM, x86_64. Intended use: exercising SSH Workbench's VT52/VT100/VT220/xterm parser, keyboard system, SSH client, and SFTP browser against as many real-world systems as practical.


1. Purpose and rationale

1.1 What this lab is

A single physical or virtual Linux Mint server running a curated collection of "target" systems that SSH Workbench can connect to. Each target is a system that speaks a different dialect of the VT family — different termcap/terminfo, different default TERM, different ncurses era, different shell, different vi, different login program, different banner, different keepalive behavior, different TOFU host-key algorithm, different SFTP server.

The lab's job is to be a permanent, reproducible test fleet so that any change to SSH Workbench's terminal engine, keyboard encoder, SSH stack, or SFTP client can be exercised against a broad matrix of real systems in minutes instead of being validated against a single modern Debian host.

1.2 Why this matters for SSH Workbench specifically

SSH Workbench ships its own terminal engine ported from the TellNext C++ ANSICtrl engine, plus its own VT52 parser, plus its own xterm/VT220 layers. Historically the code has been tested almost exclusively against modern Linux servers (Duero), with vttest as the conformance yardstick. But vttest is a synthetic torture test. Real-world compatibility bugs tend to come from unexpected combinations:

  • Old curses libraries that still emit ESC ( 0 / ESC ( B charset designations that modern xterm rarely sends.
  • SCO tput and HP-UX stty emitting VT52 sequences to interactive programs even when TERM=xterm.
  • 2.11BSD vi using ESC Y direct cursor addressing because it was built for VT52.
  • AIX smitty issuing DECSCA selective-erase sequences.
  • Old Solaris login expecting 7-bit parity, which breaks on UTF-8 clients that assume 8-bit.
  • Old NetBSD ssh negotiating ssh-rsa with SHA-1 only, which modern SSH libraries have dropped.
  • OpenVMS DCL banners using DEC multinational charset (ESC ( A).
  • Ancient Minix using TERM=minix which is a real termcap entry with its own quirks.
  • NeXTSTEP using NeXT-specific escape sequences (its terminal claims to be "vt100" but isn't a faithful one).
  • Alpine BusyBox ash+vi using a minimal subset of ANSI that exposes bugs in parsers that assume modern xterm.

Any one of these can surface a parser crash, a rendering glitch, a dropped byte, or a wrong cursor position. The only reliable way to catch them is to actually connect to the real system.

1.3 Why a dedicated server

Running 50 test targets on a developer workstation competes with day-to-day work. A dedicated server lets the lab run 24/7, accept SSH connections from the Zebra/Tab 90/S23 during manual testing, and expose a stable set of ports that automated tests in scripts/test.py can target without fighting a moving IP.

32 GB RAM is enough for:

  • ~40 always-on Docker containers (each ~50-150 MB RSS) → ~4 GB
  • ~10 always-on SIMH/emulated historical systems → ~1 GB
  • ~6-8 concurrent libvirt/QEMU VMs of 1-2 GB each → ~12 GB
  • Host + Docker daemon + libvirt + caching → ~4 GB
  • Buffer → ~11 GB for peaks

Total: comfortable. The design is "containers and SIMH are always running, heavy VMs start on demand."

1.4 Out of scope

  • Performance benchmarking of SSH Workbench. This lab is for correctness, not throughput.
  • Automated visual regression. Visual verification stays on Zebra via scripts/test.py.
  • Anything that needs a physical serial line. Everything is network-attached.
  • Windows targets. SSH Workbench is Unix-first.

2. Host requirements and initial setup

2.1 Hardware

| Resource | Minimum | Recommended |
|----------|---------|-------------|
| CPU | 4 cores with VT-x/AMD-V | 8+ cores |
| RAM | 16 GB | 32 GB (our target) |
| Disk | 200 GB SSD | 500 GB SSD (images, VMs, ISOs, snapshots) |
| Network | 1 Gbps wired | Static IP on LAN, reachable from test devices |
| BIOS | VT-x / AMD-V enabled | IOMMU enabled (for PCI passthrough if ever needed) |

Verify virtualization is enabled:

egrep -c '(vmx|svm)' /proc/cpuinfo     # must be > 0
kvm-ok                                  # installed later; expect "KVM acceleration can be used"

2.2 Base OS preparation

Linux Mint 20.2 is Ubuntu 20.04 "Focal" under the hood. All apt package names below are Focal-compatible.

# System update
sudo apt update
sudo apt full-upgrade -y
sudo apt install -y curl wget git vim htop tmux net-tools iproute2 \
                    build-essential pkg-config unzip xz-utils bzip2 \
                    software-properties-common ca-certificates gnupg lsb-release \
                    rsync jq openssh-client openssh-server

# Enable SSH to the host itself (you will want this for remote admin)
sudo systemctl enable --now ssh

# Give the main user a sane shell environment
echo 'alias ll="ls -lah --color=auto"' >> ~/.bashrc

2.3 Firewall policy

The lab will expose dozens of ports. A permissive internal-LAN-only policy is the simplest sane choice:

sudo apt install -y ufw
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow from 192.168.0.0/16 to any           # entire LAN — adjust to your subnet
sudo ufw allow from 10.0.0.0/8 to any               # if you also use 10.x
sudo ufw allow 22/tcp                               # host ssh from anywhere (optional)
sudo ufw enable
sudo ufw status verbose

Do not expose this server to the public internet. Many of the systems in the lab are unpatched by design.

2.4 Storage layout

A clean layout makes backups, resets, and debugging much easier:

/srv/lab/                         # everything lives here
├── docker/                       # docker-compose projects
│   ├── linux-containers/
│   └── compose.yml
├── vms/                          # libvirt-managed VMs
│   ├── images/                   # qcow2 disks
│   └── iso/                      # install ISOs (cached)
├── simh/                         # SIMH emulator binaries + configs
│   ├── bin/
│   ├── images/                   # PDP-11/VAX disk images
│   └── configs/                  # .ini files
├── emulators/                    # other emulators (Previous, etc.)
│   └── previous/
├── scripts/                      # start/stop/status helpers
├── logs/                         # per-target logs
└── README.md                     # symlink to docs/LEGACY_TEST_LAB.md

Create it:

sudo mkdir -p /srv/lab/{docker,vms/{images,iso},simh/{bin,images,configs},emulators,scripts,logs}
sudo chown -R $USER:$USER /srv/lab

2.4.1 Choosing where the lab lives (LAB_ROOT)

Important — pick this before you do anything else. Every script, Dockerfile, systemd unit, and config in this document assumes a single root directory called LAB_ROOT. The default used throughout the document is /srv/lab, but you decide where it actually goes on your server. Change your mind later and you'll be rewriting paths in 15 places, so choose deliberately now.

The root directory ends up holding:

| Subdirectory | Typical size | What's in it |
|--------------|--------------|--------------|
| docker/ | 10-20 GB | Docker build contexts + Dockerfiles (images themselves live in /var/lib/docker/) |
| vms/images/ | 50-150 GB | qcow2 disk files — by far the largest consumer |
| vms/iso/ | 20-40 GB | Install ISOs (cached so you don't redownload) |
| simh/ | 1-2 GB | SIMH binaries + historical disk images |
| emulators/ | 1-2 GB | Previous, 9front kernels, etc. |
| scripts/ | <1 MB | The lab-* helpers |
| systemd/ | <1 MB | Unit files (copied to /etc/systemd/system/ on install) |
| logs/ | grows | Per-target test logs |
| Total | 80-200 GB | Plan for ≥200 GB of free space on whatever filesystem you pick |

Common choices and when to use each:

| Choice | Pros | Cons | Good when |
|--------|------|------|-----------|
| /srv/lab | FHS-correct for "site-specific data served by this system"; survives apt full-upgrade; clean permissions story | Requires sudo to create; not backed up by most home-directory backup tools | Default. Dedicated server with a large root partition. |
| /opt/lab | Also FHS-correct for "add-on software"; familiar to ops people | Same as /srv/lab | Slight preference if your server already uses /opt for everything else. |
| /home/$USER/lab | No sudo for anything; your normal backup scripts pick it up automatically; easy to rsync off-box | /home is often a small partition; user process limits can bite QEMU; systemd units need User= adjustments | Workstation doubling as a test host, or /home is on the big disk. |
| /data/lab, /mnt/bigdisk/lab | Decouples the lab from the root partition; trivial to move by remounting | You must mount the disk before docker compose up and before SIMH systemd units fire — add the mount to /etc/fstab with x-systemd.requires-mounts-for=... on the unit | Recommended if you have a separate big SSD/HDD for VM images. |
| /var/lib/lab | Matches Docker / libvirt convention | Harder to eyeball; backup policies for /var vary | Only if you've standardized on /var/lib for all service data. |
| $HOME/rmt-lab | Easiest to experiment with; no permission hoops | Tied to your login user | Quick evaluation before committing to a permanent layout. |

How LAB_ROOT is actually used:

  1. You set it once as an environment variable before running the extractor in section 14. The extractor rewrites every /srv/lab reference in the files it writes out so the installed lab uses your chosen path.
  2. The setup.sh bootstrapper reads $LAB_ROOT with a fallback to /srv/lab, so even if you skip the rewrite step, running LAB_ROOT=/opt/lab bash setup.sh works.
  3. The lab-* helper scripts do the same — they honor $LAB_ROOT from the environment first, falling back to /srv/lab. An /etc/profile.d/lab.sh snippet (written by setup.sh) exports the choice system-wide.
  4. Dockerfiles, compose.yml, and SIMH .ini configs only contain paths relative to the lab root (for the Dockerfiles/compose) or with /srv/lab literals (for the SIMH .ini files and systemd units). The extractor patches the literals at write time.
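
Items 1-4 boil down to one fallback and one substitution. A minimal sketch, assuming the extractor's rewrite really is a plain text substitution (this is illustrative, not the actual extractor code):

```shell
#!/bin/sh
# Resolve the lab root: honor $LAB_ROOT if set, else fall back to /srv/lab
# (the pattern setup.sh and the lab-* helpers are described as using).
LAB_ROOT="${LAB_ROOT:-/srv/lab}"

# Patch /srv/lab literals the way the extractor is described as doing:
# a plain substitution over the text being written out.
patch_paths() {
  sed "s|/srv/lab|${LAB_ROOT}|g"
}

printf 'ExecStart=/srv/lab/simh/bin/pdp11\n' | patch_paths
```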

Decision checklist — answer these now, before you run anything:

  1. Which filesystem has ≥200 GB free? → That's where LAB_ROOT should point.
  2. Is the user who will run the lab (jima, or whatever) in the docker, libvirt, kvm groups? → They need to be (setup.sh does this).
  3. Will LAB_ROOT be under /home? → Make sure /home isn't noexec mounted (some distros do this), otherwise SIMH binaries won't run. Check with mount | grep /home.
  4. Is the chosen path on an SSD? → Strongly recommended — VM install from ISO on spinning rust is painful.
  5. Do you want the test Docker user's home directories to survive a container rebuild? → If yes, also reserve a volume path under LAB_ROOT/docker/volumes/ — not configured by default.

Set it now:

# Pick ONE of these and export it in your shell. Add it to ~/.bashrc for persistence.
export LAB_ROOT=/srv/lab                # default
# export LAB_ROOT=/opt/lab              # alternate FHS location
# export LAB_ROOT=$HOME/lab             # home-directory-based
# export LAB_ROOT=/mnt/bigdisk/lab      # dedicated VM disk

echo "export LAB_ROOT=$LAB_ROOT" >> ~/.bashrc

Everything in section 14 below assumes $LAB_ROOT is set. If you forget, the extractor bails with an error; the scripts fall back to /srv/lab.

2.5 Hostname and DNS

Give the server a memorable name and add it to your LAN's DNS or /etc/hosts on every test device. The rest of this document assumes the lab is reachable as testlab.local at e.g. 192.168.1.50.

sudo hostnamectl set-hostname testlab

3. Tooling — install everything the lab needs

Install in this order. Each section is independent and idempotent.

3.1 Docker Engine (for Linux containers)

Mint 20.2 needs the upstream Docker repo because the distro package is outdated:

# Remove any distro-provided docker
sudo apt remove -y docker docker-engine docker.io containerd runc 2>/dev/null

# Add Docker's official GPG key and repo (for Ubuntu 20.04 "focal")
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
  | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
  https://download.docker.com/linux/ubuntu focal stable" \
  | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Allow current user to run docker without sudo
sudo usermod -aG docker $USER
newgrp docker    # or log out / log back in

# Verify
docker version
docker run --rm hello-world

3.2 QEMU / KVM / libvirt / virt-manager (for modern VMs)

sudo apt install -y qemu-kvm qemu-system-x86 qemu-system-sparc qemu-system-ppc \
                    qemu-system-mips qemu-system-arm qemu-utils \
                    libvirt-daemon-system libvirt-clients \
                    bridge-utils virtinst virt-manager ovmf

# Groups
sudo usermod -aG libvirt $USER
sudo usermod -aG kvm $USER
newgrp libvirt

# Verify KVM acceleration is available
sudo apt install -y cpu-checker
kvm-ok

# Start libvirt default network
sudo virsh net-start default
sudo virsh net-autostart default

# Quick smoke test
virsh list --all

qemu-system-sparc and qemu-system-ppc are included because some historical Solaris and AIX/Linux-on-POWER experiments may need them.

3.3 SIMH (historical DEC emulators — PDP-11, VAX, PDP-10…)

SIMH builds from source in ~2 minutes and installs a clean set of simh-* binaries:

sudo apt install -y libpcap-dev libvdeplug-dev libsdl2-dev libpcre3-dev \
                    libpng-dev zlib1g-dev libedit-dev

cd /srv/lab/simh
git clone https://github.com/open-simh/simh.git src
cd src
make -j$(nproc) pdp11 vax pdp10 altair altairz80 pdp1 pdp7 pdp8 pdp9 pdp15 \
                 hp2100 nova sigma ibm1130 i1401 eclipse
# Binaries end up in ./BIN/
mkdir -p /srv/lab/simh/bin
cp BIN/* /srv/lab/simh/bin/
export PATH=/srv/lab/simh/bin:$PATH
echo 'export PATH=/srv/lab/simh/bin:$PATH' >> ~/.bashrc

3.4 Vagrant (optional — simplifies BSD VM lifecycle)

Vagrant is optional but very convenient for booting OpenBSD/FreeBSD/NetBSD versions from pre-built boxes:

curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/hashicorp.gpg
echo "deb [signed-by=/etc/apt/keyrings/hashicorp.gpg] https://apt.releases.hashicorp.com focal main" \
  | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update
sudo apt install -y vagrant vagrant-libvirt

3.5 Previous (NeXTSTEP emulator)

sudo apt install -y libsdl2-dev libsdl2-image-dev cmake
cd /srv/lab/emulators
git clone https://github.com/probonopd/previous.git
cd previous
mkdir build && cd build
cmake ..
make -j$(nproc)
# Binary: ./src/Previous

NeXTSTEP install ISOs come from archive.org (NeXTStep 3.3 user+developer discs).

3.6 ISO cache and mirror

Create a single place where all install media lives so you don't re-download 30 GB of ISOs every time you rebuild a VM:

mkdir -p /srv/lab/vms/iso
cd /srv/lab/vms/iso
# Example seeding — add your own as needed
wget -c https://download.openbsd.org/pub/OpenBSD/7.4/amd64/install74.iso
wget -c https://cdimage.debian.org/cdimage/archive/3.1_r8/i386/iso-cd/debian-31r8-i386-netinst.iso
# …etc

3.7 Monitoring helpers

sudo apt install -y glances iotop sysstat lnav

glances will show you Docker containers + libvirt VMs + CPU/RAM at a glance.


4. Networking model

4.1 Strategy: host-port forwarding, one SSH port per target

The simplest model: every target — container or VM — exposes SSH on a unique port on the host. Test devices connect to testlab.local:<port>. No NAT, no bridges, no VLANs, no DNS games.

4.2 Port block allocation

Reserve ranges so there's never a collision:

| Range | Purpose |
|-------|---------|
| 22 | Host SSH |
| 2200-2299 | Modern Linux containers |
| 2300-2399 | Old Linux containers |
| 2400-2499 | BSD VMs (modern) |
| 2500-2599 | Commercial Unix VMs (Solaris variants) |
| 2600-2699 | SIMH historical systems |
| 2700-2799 | Special emulators (NeXTSTEP, Minix, Plan 9…) |
| 2800-2899 | Reserved for future |
| 2900-2999 | Telnet-only systems (many historical systems have no ssh) |
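
The allocation can be expressed as a tiny classifier, handy for sanity-checking a new target's port before adding it to the matrix. This is a hypothetical helper, not one of the lab-* scripts:

```shell
#!/bin/sh
# Map a host port to its reserved block (ranges mirror the table above).
port_block() {
  case "$1" in
    22)           echo "host ssh" ;;
    22[0-9][0-9]) echo "modern linux containers" ;;
    23[0-9][0-9]) echo "old linux containers" ;;
    24[0-9][0-9]) echo "bsd vms" ;;
    25[0-9][0-9]) echo "commercial unix vms" ;;
    26[0-9][0-9]) echo "simh historical" ;;
    27[0-9][0-9]) echo "special emulators" ;;
    28[0-9][0-9]) echo "reserved" ;;
    29[0-9][0-9]) echo "telnet-only" ;;
    *)            echo "unallocated" ;;
  esac
}

port_block 2601   # simh historical
```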

4.3 Telnet vs SSH

Historical systems (V7, 2.11BSD, Ultrix, NeXTSTEP, Minix 1.x) predate SSH by decades. They speak Telnet only. SSH Workbench already supports Telnet as a protocol, so these targets get tested on the Telnet code path — which is just as valuable. Allocate them in the 2900-2999 range to keep the distinction obvious.

4.4 Internal VM network

libvirt's default network is a NAT bridge virbr0 at 192.168.122.0/24. Each VM gets a DHCP address. We add hostfwd rules (or iptables) to map host port → VM:22. virt-install sets this up automatically when you pass --network network=default,model=virtio.

For a simpler approach, create a routed network so VMs get reachable from the LAN without port-forwarding at all. This is more convenient but requires LAN cooperation (or a static route). Start with NAT + port-forward, switch to routed only if you find yourself constantly mapping new ports.
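
If you do later switch, a routed libvirt network definition looks roughly like this. The subnet, bridge name, and DHCP range are assumptions; your LAN router still needs a static route pointing at the lab host for the chosen subnet:

```shell
# Define and start a routed libvirt network (sketch — adjust the subnet
# to one your LAN router can actually route to this host).
cat > /tmp/lab-routed.xml <<'EOF'
<network>
  <name>lab-routed</name>
  <forward mode='route'/>
  <bridge name='virbr1'/>
  <ip address='192.168.123.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.123.10' end='192.168.123.200'/>
    </dhcp>
  </ip>
</network>
EOF
virsh net-define /tmp/lab-routed.xml
virsh net-start lab-routed
virsh net-autostart lab-routed
```

VMs attached with --network network=lab-routed are then reachable from the LAN on their own addresses, no port-forwarding needed.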


5. System categories

The lab groups targets into 6 categories. Each category has its own lifecycle and management style.

Category A — Always-on Docker containers (≈25 targets)

Fastest to boot, lowest RAM cost, easiest to reset. Used for the "modern + recent legacy Linux" part of the matrix. All run headless with sshd exposed.

Category B — On-demand libvirt VMs (≈10 targets)

Heavier, richer, used for systems that genuinely need a full VM: BSD variants, Solaris, OpenVMS, Haiku, etc. Started only when a test run requires them.

Category C — Always-on SIMH emulated historical systems (≈8 targets)

PDP-11 and VAX era systems (V6, V7, 2.11BSD, 4.3BSD, Ultrix-11, Ultrix-32, RSX-11M). Each SIMH instance runs as a systemd service with its terminal multiplexer attached to a Telnet port in the 2600-2699 block (for example, attach dz 2601 — see section 7.3).

Category D — Specialty emulators (≈3 targets)

NeXTSTEP via Previous, Minix 3 via QEMU with special kernel args, Plan 9 via 9vx or QEMU.

Category E — Remote shell accounts (≈4 targets)

Not hosted locally at all: polarhome.com shell accounts on real AIX, HP-UX PA-RISC, HP-UX Itanium, IRIX, Tru64, OpenVMS. A hosts.yaml file in /srv/lab/ just records the connection details.

Category F — Cloud trials (1 target)

IBM Power Virtual Server free trial (AIX 7.x on POWER9/POWER10). Ephemeral — spun up for a test session, torn down afterward. Documented but not always running.


6. Master test matrix — the 50 systems

This is the target list. Each row has everything needed to connect and is referenced by the automation scripts.

Legend: Cat = category (A-F), Port = host-side TCP port, Proto = SSH/Telnet.

| # | System | Cat | Port | Proto | TERM | Why it's in the matrix |
|---|--------|-----|------|-------|------|------------------------|
| 1 | Debian 12 (bookworm) | A | 2201 | SSH | xterm-256color | Modern control — baseline |
| 2 | Debian 11 (bullseye) | A | 2202 | SSH | xterm-256color | Slightly older glibc + ncurses |
| 3 | Debian 10 (buster) | A | 2203 | SSH | xterm-256color | systemd transition era |
| 4 | Debian 9 (stretch) | A | 2204 | SSH | xterm-256color | Old OpenSSL, RSA default |
| 5 | Debian 8 (jessie) | A | 2205 | SSH | xterm | Pre-systemd-default, ncurses 5 |
| 6 | Debian 7 (wheezy) | A | 2206 | SSH | xterm | EOL, classic sysvinit |
| 7 | Debian 6 (squeeze) | A | 2207 | SSH | xterm | Old OpenSSH 5.5, pre-Ed25519 |
| 8 | Debian 5 (lenny) | A | 2208 | SSH | xterm | Ancient ncurses, old vim |
| 9 | Debian 4 (etch) | A | 2209 | SSH | xterm | First era with widely-deployed UTF-8 |
| 10 | Ubuntu 24.04 LTS | A | 2210 | SSH | xterm-256color | Current LTS, newest everything |
| 11 | Ubuntu 22.04 LTS | A | 2211 | SSH | xterm-256color | Previous LTS |
| 12 | Ubuntu 20.04 LTS | A | 2212 | SSH | xterm-256color | Common server target |
| 13 | Ubuntu 18.04 LTS | A | 2213 | SSH | xterm-256color | Bionic, still common |
| 14 | Ubuntu 16.04 LTS | A | 2214 | SSH | xterm | Xenial, ncurses 6 transition |
| 15 | Ubuntu 14.04 LTS | A | 2215 | SSH | xterm | Trusty, upstart init |
| 16 | Ubuntu 12.04 LTS | A | 2216 | SSH | xterm | Precise, very old |
| 17 | Ubuntu 10.04 LTS | A | 2217 | SSH | xterm | Lucid, bash 4.1 |
| 18 | CentOS 7 | A | 2218 | SSH | xterm-256color | RHEL 7 userland |
| 19 | CentOS 6 | A | 2219 | SSH | xterm | RHEL 6, old ncurses |
| 20 | CentOS 5 | A | 2220 | SSH | xterm | RHEL 5, ancient glibc |
| 21 | Fedora 40 | A | 2221 | SSH | xterm-256color | Bleeding edge |
| 22 | Alpine 3.19 | A | 2222 | SSH | xterm | musl + busybox (modern) |
| 23 | Alpine 3.10 | A | 2223 | SSH | xterm | Old busybox vi quirks |
| 24 | Alpine 3.4 | A | 2224 | SSH | xterm | Very old busybox |
| 25 | Arch Linux | A | 2225 | SSH | xterm-256color | Rolling, edge packages |
| 26 | openSUSE Leap 15 | A | 2226 | SSH | xterm-256color | SUSE userland |
| 27 | Slackware 14.2 | A | 2227 | SSH | xterm | Non-systemd traditional layout |
| 28 | Void Linux (musl) | A | 2228 | SSH | xterm-256color | runit + musl libc |
| 29 | FreeBSD 14.1 | B | 2401 | SSH | xterm-256color | Modern FreeBSD, csh default root shell |
| 30 | FreeBSD 10.4 | B | 2402 | SSH | xterm | Old FreeBSD, older ssh-rsa |
| 31 | OpenBSD 7.5 | B | 2403 | SSH | xterm-256color | pf firewall, strict OpenSSH |
| 32 | OpenBSD 6.7 | B | 2404 | SSH | xterm | Older, ksh oddities |
| 33 | NetBSD 10 | B | 2405 | SSH | xterm | wscons terminal |
| 34 | NetBSD 7 | B | 2406 | SSH | vt220 | Old NetBSD with real vt220 default |
| 35 | OpenIndiana Hipster | B | 2501 | SSH | xterm-256color | Modern illumos / OpenSolaris descendant |
| 36 | Solaris 11.4 CBE | B | 2502 | SSH | xterm-256color | Real Oracle Solaris, SVR4 quirks |
| 37 | Solaris 10 (x86) | B | 2503 | SSH | dtterm | CDE-era, tput surprises |
| 38 | OmniOS | B | 2504 | SSH | xterm-256color | Minimal illumos |
| 39 | Haiku (nightly) | B | 2505 | SSH | xterm | BeOS descendant, POSIX-ish |
| 40 | OpenVMS x86 9.2-2 | B | 2506 | SSH | vt100 | DCL, DEC multinational charset |
| 41 | 2.11BSD on PDP-11 | C | 2601 | Telnet | vt100 | Genuine VT52/VT100 target |
| 42 | Unix V7 on PDP-11 | C | 2602 | Telnet | dumb / vt52 | 1979 Unix, true VT52 era |
| 43 | Unix V6 on PDP-11 | C | 2603 | Telnet | dumb | 1975 Unix, minimal termcap |
| 44 | 4.3BSD on VAX | C | 2604 | Telnet | vt100 | Classic BSD on VAX |
| 45 | Ultrix-11 3.1 | C | 2605 | Telnet | vt100 | DEC Unix on PDP-11 |
| 46 | Ultrix-32 4.x | C | 2606 | Telnet | vt220 | DEC Unix on VAX |
| 47 | RSX-11M-PLUS | C | 2607 | Telnet | vt100 | DEC non-Unix OS, for parser stress |
| 48 | NeXTSTEP 3.3 | D | 2701 | Telnet | nextstep | NeXT-specific escape sequences |
| 49 | Minix 3.3 | D | 2702 | SSH | minix | Unique termcap entry, tiny microkernel |
| 50 | Plan 9 (9front) | D | 2703 | SSH | 9term | Rob Pike's post-Unix; rio drawing |
| 51 | polarhome AIX 7 | E | - | SSH | xterm | Real IBM POWER hardware (remote) |
| 52 | polarhome HP-UX 11i PA-RISC | E | - | SSH | hpterm | Real HP PA-RISC hardware (remote) |
| 53 | polarhome HP-UX Itanium | E | - | SSH | hpterm | Real Itanium (remote) |
| 54 | polarhome IRIX | E | - | SSH | iris-ansi | Real SGI MIPS (remote) |
| 55 | polarhome Tru64 | E | - | SSH | dtterm | Real DEC Alpha (remote) |
| 56 | IBM PowerVS AIX 7.3 | F | - | SSH | aixterm | IBM Cloud free trial |

That's 56 systems, a comfortable upper bound on a 32 GB box. Trim the matrix if you run out of headroom.


7. Per-category setup instructions

7.1 Category A — Docker Linux containers

Strategy: one docker-compose.yml that defines all containers. Each is a minimal base image with OpenSSH installed and a fixed password of test for user test. Root SSH disabled. No cross-container networking. All ports published on 0.0.0.0 so the LAN can reach them (the ufw policy from section 2.3 keeps that LAN-only).

Directory layout:

/srv/lab/docker/linux-containers/
├── compose.yml
├── build/
│   ├── base-sshd.debian/Dockerfile
│   ├── base-sshd.ubuntu/Dockerfile
│   ├── base-sshd.centos/Dockerfile
│   ├── base-sshd.alpine/Dockerfile
│   └── …
└── run.sh

Dockerfile template (build/base-sshd.debian/Dockerfile):

ARG TAG=bookworm
FROM debian:${TAG}
RUN apt-get update \
 && apt-get install -y openssh-server vim less curl tmux htop ncurses-term \
 && mkdir /var/run/sshd \
 && useradd -m -s /bin/bash test \
 && echo 'test:test' | chpasswd \
 && sed -i 's/#PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config \
 && sed -i 's/#PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config \
 && ssh-keygen -A
EXPOSE 22
CMD ["/usr/sbin/sshd","-D","-e"]

For distros where the distribution repos are offline (Debian < 8, Ubuntu < 16.04), you will need to swap sources.list to point at archive.debian.org / old-releases.ubuntu.com. Each Dockerfile in build/ handles its own quirks.
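
As a sketch of what that swap looks like (hostnames and extra apt knobs vary by release; archive_swap is a made-up name for illustration):

```shell
# Rewrite a sources.list to point at the Debian archive mirrors.
# Pass the file to patch — in a Dockerfile this is /etc/apt/sources.list.
archive_swap() {
  sed -i -e 's|deb\.debian\.org|archive.debian.org|g' \
         -e 's|security\.debian\.org|archive.debian.org|g' "$1"
}

# EOL releases also have expired Release files, so apt needs:
#   echo 'Acquire::Check-Valid-Until "false";' > /etc/apt/apt.conf.d/99archive
```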

compose.yml skeleton:

name: legacy-lab
services:
  deb12:
    build: { context: ./build/base-sshd.debian, args: { TAG: bookworm } }
    ports: ["2201:22"]
    restart: unless-stopped

  deb11:
    build: { context: ./build/base-sshd.debian, args: { TAG: bullseye } }
    ports: ["2202:22"]
    restart: unless-stopped

  # ... one entry per row in the matrix
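
Typing 28 near-identical stanzas by hand invites copy-paste port mistakes; a throwaway generator can emit them from (name, tag, port) triples instead. The helper below is a sketch, not shipped with the lab — its output gets pasted into compose.yml:

```shell
#!/bin/sh
# Emit one compose service stanza per "name tag port" triple.
gen_service() {
  printf '  %s:\n' "$1"
  printf '    build: { context: ./build/base-sshd.debian, args: { TAG: %s } }\n' "$2"
  printf '    ports: ["%s:22"]\n' "$3"
  printf '    restart: unless-stopped\n\n'
}

gen_service deb12 bookworm 2201
gen_service deb11 bullseye 2202
```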

Bringing it up:

cd /srv/lab/docker/linux-containers
docker compose build
docker compose up -d
docker compose ps

Resetting a single container:

docker compose restart deb12
# or fully re-build it
docker compose up -d --build deb12

7.2 Category B — libvirt VMs

Strategy: one VM per entry, stored as qcow2 in /srv/lab/vms/images/. Each has a static virsh name matching the matrix row. Most are installed once from ISO, then snapshotted for fast reset.

Example: install FreeBSD 14.1:

cd /srv/lab/vms/iso
wget -c https://download.freebsd.org/releases/amd64/amd64/ISO-IMAGES/14.1/FreeBSD-14.1-RELEASE-amd64-disc1.iso

virt-install \
  --name freebsd-14 \
  --memory 1024 \
  --vcpus 2 \
  --disk path=/srv/lab/vms/images/freebsd-14.qcow2,size=12,format=qcow2 \
  --cdrom /srv/lab/vms/iso/FreeBSD-14.1-RELEASE-amd64-disc1.iso \
  --os-variant freebsd14.0 \
  --network network=default,model=virtio \
  --graphics vnc,listen=127.0.0.1 \
  --noautoconsole

Finish install via virt-viewer freebsd-14. After install, enable sshd:

# Inside the VM, as root:
sysrc sshd_enable=YES
service sshd start
# Create 'test' user with password 'test'
pw useradd test -m -s /bin/sh
passwd test

Forwarding a host port to a VM:

libvirt's default NAT doesn't forward ports automatically. Two options:

Option 1 (simple): iptables DNAT rule via /etc/libvirt/hooks/qemu:

sudo mkdir -p /etc/libvirt/hooks
sudo tee /etc/libvirt/hooks/qemu > /dev/null <<'EOF'
#!/bin/bash
# $1=VM name, $2=event (start|stopped|reconnect)
case "$1:$2" in
  freebsd-14:start)
    iptables -t nat -A PREROUTING -p tcp --dport 2401 -j DNAT --to 192.168.122.31:22
    iptables -I FORWARD -p tcp -d 192.168.122.31 --dport 22 -j ACCEPT
    ;;
  freebsd-14:stopped)
    iptables -t nat -D PREROUTING -p tcp --dport 2401 -j DNAT --to 192.168.122.31:22
    iptables -D FORWARD -p tcp -d 192.168.122.31 --dport 22 -j ACCEPT
    ;;
esac
EOF
sudo chmod +x /etc/libvirt/hooks/qemu
sudo systemctl restart libvirtd

Keep a static DHCP reservation per VM so the 192.168.122.31 above stays stable:

virsh net-edit default
# Add <host mac="52:54:00:..." name="freebsd-14" ip="192.168.122.31"/>
virsh net-destroy default && virsh net-start default

Option 2 (even simpler): use a user-mode networking stack with qemu-system directly (-nic user,hostfwd=tcp::2401-:22) and skip libvirt entirely for the most port-heavy VMs. Mix and match — libvirt for interactive ones (Solaris, OpenVMS), raw QEMU for headless ones.

Snapshotting:

# After fresh install + sshd + test user:
virsh snapshot-create-as freebsd-14 clean "clean base state"
# To reset:
virsh snapshot-revert freebsd-14 clean

Per-VM quick reference (the ones that need special attention):

| VM | Special notes |
|----|---------------|
| OpenBSD | Install needs the console; after reboot set PermitRootLogin no and PasswordAuthentication yes in /etc/ssh/sshd_config. Add the test user via adduser. |
| NetBSD | useradd -m -G wheel test. Must enable sshd=YES in rc.conf. |
| Solaris 11.4 CBE | Requires Oracle SSO to download. VM needs 4 GB RAM minimum. Serial console works but CDE/GDM is heavy — install with --group solaris-small-server to skip graphical. |
| Solaris 10 | Install ISO from archive sources. Very slow installer. 1 GB RAM. Enable sshd with svcadm enable ssh. |
| OpenIndiana | Hipster ISO, standard OpenSolaris-style installer. pkg install openssh, svcadm enable ssh. |
| OmniOS | Minimal — pkg install ssh-server, then enable. |
| Haiku | Download the nightly anyboot image (a single file, no install — just boot it). Install OpenSSH with pkgman install openssh. |
| OpenVMS 9.2-2 | VSI delivers a pre-installed qcow2 image. Copy it to /srv/lab/vms/images/, define in virsh, boot. Enable SSH via TCPIP$CONFIG. |

7.3 Category C — SIMH historical systems

Strategy: a directory per system under /srv/lab/simh/ with the disk images, a SIMH .ini config, and a systemd service.

Example: 2.11BSD on PDP-11 via SIMH

cd /srv/lab/simh/images
mkdir 211bsd && cd 211bsd
# Download the ready-to-run image — see references at the end of the doc
wget https://www.tuhs.org/Archive/Distributions/UCB/2.11BSD/211bsd.tap.gz
gunzip 211bsd.tap.gz
# Fetch pre-built networked 2.11BSD disk from Will Senn's repo or TUHS

Create /srv/lab/simh/configs/211bsd.ini:

set cpu 11/70
set cpu 4M
set rl0 RL02
attach rl0 /srv/lab/simh/images/211bsd/rl0.dsk
set tm0 locked
attach tm0 /srv/lab/simh/images/211bsd/211bsd.tap
set dz lines=8
attach dz 2601
; boot console on its own port (2602 belongs to Unix V7 in the matrix)
set console telnet=2651
boot rl0

Create /etc/systemd/system/simh-211bsd.service:

[Unit]
Description=SIMH PDP-11 running 2.11BSD
After=network.target

[Service]
Type=simple
User=jima
WorkingDirectory=/srv/lab/simh/images/211bsd
ExecStart=/srv/lab/simh/bin/pdp11 /srv/lab/simh/configs/211bsd.ini
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Enable and start it:

sudo systemctl daemon-reload
sudo systemctl enable --now simh-211bsd
sudo systemctl status simh-211bsd
# Connect:
telnet testlab.local 2601

Repeat for: V6, V7 (PDP-11); 4.3BSD, Ultrix-32, RSX-11M-PLUS (VAX); Ultrix-11 (PDP-11). Disk images for the ancient Unix systems are distributed by The Unix Heritage Society under Caldera's permissive 2002 license; the DEC operating systems circulate under their own historical hobbyist-license terms.

SIMH resource cost: each emulated PDP-11 is a single-threaded userspace process using ~50-100 MB RAM. Eight of them cost <1 GB total.

7.4 Category D — Specialty emulators

NeXTSTEP 3.3 via Previous:

cd /srv/lab/emulators/previous/build
./src/Previous
# Configure a NeXT Cube with 64 MB RAM, attach NeXTStep 3.3 install ISO
# (from archive.org: archive.org/details/nextstep3-3dev)
# Install to a 500 MB disk image, enable network, set TERM=nextstep

NeXTSTEP predates SSH. Expose Telnet on port 2701 via the emulator's network config.

Minix 3.3 via QEMU:

cd /srv/lab/vms/iso
wget -c https://www.minix3.org/iso/minix_R3.3.0-588a35b.iso.bz2
bunzip2 minix_R3.3.0-588a35b.iso.bz2

qemu-img create -f qcow2 /srv/lab/vms/images/minix3.qcow2 4G
qemu-system-x86_64 \
  -m 256 -hda /srv/lab/vms/images/minix3.qcow2 \
  -cdrom /srv/lab/vms/iso/minix_R3.3.0-588a35b.iso \
  -boot d -nic user,hostfwd=tcp::2702-:22 -nographic
# Follow the installer (keyboard-driven ncurses installer)
# After install: pkgin install openssh

Plan 9 (9front) via QEMU:

cd /srv/lab/vms/iso
wget -c https://9front.org/iso/9front-10191.amd64.iso.gz
gunzip 9front-10191.amd64.iso.gz
qemu-img create -f qcow2 /srv/lab/vms/images/9front.qcow2 8G
qemu-system-x86_64 -m 1024 \
  -hda /srv/lab/vms/images/9front.qcow2 \
  -cdrom /srv/lab/vms/iso/9front-10191.amd64.iso \
  -boot d -nic user,hostfwd=tcp::2703-:22

Plan 9 is the oddest target in the matrix — rio is not a VT at all, but 9front ships an SSH server. Use TERM=9term and accept that some escape sequences simply won't render. That's the point of the test.

7.5 Category E — polarhome remote shell accounts

Pay ~$2 per host, create an account for each target, then store the connection details in /srv/lab/hosts.yaml:

# Remote targets (not hosted locally)
remote:
  - name: polarhome-aix7
    host: aix7.polarhome.com
    user: <your username>
    term: xterm
    notes: "Real IBM POWER hardware. Tends to be slow. Respect AUP."
  - name: polarhome-hpux-parisc
    host: hpux.polarhome.com
    user: <your username>
    term: hpterm
  - name: polarhome-hpux-ia64
    host: hpux-ia64.polarhome.com
    user: <your username>
    term: hpterm
  - name: polarhome-irix
    host: irix.polarhome.com
    user: <your username>
    term: iris-ansi
  - name: polarhome-tru64
    host: tru64.polarhome.com
    user: <your username>
    term: dtterm

Polarhome has been flaky in recent years — it is essentially a one-volunteer operation. Check that the hosts are reachable before committing $10. If it's down, buy a used HP J5000 PA-RISC box or an SGI O2 off eBay — both go for $50-150 and are an evening's setup.
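Before paying, a quick reachability probe saves the $10. A minimal sketch using bash's built-in /dev/tcp (no netcat dependency); `probe_ssh` is a hypothetical helper, hostnames are from the hosts.yaml excerpt above:

```shell
# probe_ssh HOST PORT — print "up" if a TCP connect succeeds within 3 s.
probe_ssh() {
  if timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo up
  else
    echo down
  fi
}

for h in aix7.polarhome.com hpux.polarhome.com irix.polarhome.com tru64.polarhome.com; do
  printf '%-28s %s\n' "$h" "$(probe_ssh "$h" 22)"
done
```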

7.6 Category F — IBM Power Virtual Server AIX

One-off manual process, not part of the always-on matrix:

  1. Register at https://cloud.ibm.com for the PowerVS free trial ($2,500 in credits, 90 days).
  2. Create a new Workspace in a Power VS region.
  3. Create an SSH key pair and upload the public key.
  4. Provision an AIX 7.3 instance (smallest shared-processor profile is fine — ~0.25 cores, 2 GB RAM, ~$0.05/hour).
  5. Get the public IP.
  6. Add to /srv/lab/hosts.yaml under an ephemeral: section with the expiry date.
  7. Deprovision when done — free trial credits burn continuously.

Log every session so the test result is preserved after the instance is destroyed.
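One low-tech way to satisfy the logging requirement is to wrap the connection in script(1), which writes a full typescript to the lab log tree. A sketch — the LOGDIR default and the <powervs-ip> placeholder are illustrative:

```shell
# Keep a permanent record of an ephemeral-instance session with script(1).
LOGDIR=${LOGDIR:-/tmp/lab-logs/ephemeral}      # real lab: /srv/lab/logs/ephemeral
mkdir -p "$LOGDIR"
logfile="$LOGDIR/aix73-$(date +%Y%m%d-%H%M%S).typescript"
# Real use:  script -q -c "ssh root@<powervs-ip>" "$logfile"
script -q -c "echo session-placeholder" "$logfile"
echo "logged to $logfile"
```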


8. Automation scripts

The lab is manageable by hand but painful without a few helpers. Put these in /srv/lab/scripts/:

8.1 lab-status — one-page status

#!/bin/bash
# /srv/lab/scripts/lab-status
echo "=== Docker containers ==="
docker compose -f /srv/lab/docker/linux-containers/compose.yml ps
echo
echo "=== libvirt VMs ==="
virsh list --all
echo
echo "=== SIMH systemd services ==="
systemctl list-units 'simh-*' --no-pager
echo
echo "=== Listening ports (lab range) ==="
ss -tlnp | grep -E ':2[2-9][0-9]{2}\b'

8.2 lab-up / lab-down

#!/bin/bash
# /srv/lab/scripts/lab-up
set -e
docker compose -f /srv/lab/docker/linux-containers/compose.yml up -d
for svc in $(systemctl list-unit-files 'simh-*.service' | awk '/simh-/ {print $1}'); do
  sudo systemctl start "$svc"
done
# Heavy VMs stay off by default — start on demand
echo "Lab up. Run lab-status to verify."

#!/bin/bash
# /srv/lab/scripts/lab-down
docker compose -f /srv/lab/docker/linux-containers/compose.yml down
for svc in $(systemctl list-unit-files 'simh-*.service' | awk '/simh-/ {print $1}'); do
  sudo systemctl stop "$svc"
done
for vm in $(virsh list --state-running --name); do
  virsh shutdown "$vm"
done

8.3 lab-connect <name> — shortcut

#!/bin/bash
# /srv/lab/scripts/lab-connect
# Usage: lab-connect deb12
declare -A TARGETS=(
  [deb12]="ssh://test@testlab.local:2201"
  [deb11]="ssh://test@testlab.local:2202"
  [211bsd]="telnet://testlab.local:2601"
  # ... generated from hosts.yaml
)
target=${TARGETS[$1]:-}
if [ -z "$target" ]; then echo "Unknown: $1"; exit 1; fi
proto=${target%%://*}
rest=${target#*://}
case $proto in
  ssh)    ssh -p "${rest##*:}" "${rest%:*}" ;;
  telnet) telnet "${rest%:*}" "${rest##*:}" ;;
esac
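The parameter-expansion parsing in lab-connect can be sanity-checked in isolation:

```shell
# Split an ssh:// pseudo-URL the same way lab-connect does.
target="ssh://test@testlab.local:2201"
proto=${target%%://*}   # scheme
rest=${target#*://}     # user@host:port
port=${rest##*:}
dest=${rest%:*}
echo "$proto $dest $port"   # → ssh test@testlab.local 2201
```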

8.4 lab-reset <name> — snapshot revert

#!/bin/bash
# /srv/lab/scripts/lab-reset
case $1 in
  deb*|ub*|centos*|fedora*|alpine*|arch|opensuse|slackware|void)
    docker compose -f /srv/lab/docker/linux-containers/compose.yml \
      up -d --force-recreate "$1" ;;
  freebsd*|openbsd*|netbsd*|solaris*|omnios*|openindiana*|haiku*|vms*)
    virsh snapshot-revert "$1" clean ;;
  simh-*)
    sudo systemctl restart "$1" ;;
esac
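The dispatch is easy to get wrong when adding targets. Pulling it into a helper makes it testable without touching Docker or libvirt — `reset_kind` is a hypothetical function covering the compose service names from 13.11:

```shell
# Classify a lab target name into its reset mechanism.
reset_kind() {
  case $1 in
    deb*|ub*|centos*|fedora*|alpine*|arch|opensuse|slackware|void) echo docker ;;
    freebsd*|openbsd*|netbsd*|solaris*|omnios*|openindiana*|haiku*|vms*|openvms*|minix*|9front) echo libvirt ;;
    simh-*) echo systemd ;;
    *) echo unknown ;;
  esac
}
reset_kind deb12      # docker
reset_kind simh-v7    # systemd
```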

8.5 Integration with scripts/test.py

The Python ADB test framework in this repo already knows how to connect the Zebra to an arbitrary host. Extend scripts/test.py with a --target flag that reads /srv/lab/hosts.yaml and auto-populates the profile name passed via --es profile to the launch intent. That way the same test suite runs against all 56 targets by iterating over the YAML.


9. Testing workflow

9.1 Developer loop

  1. Make a change to the parser / renderer / SSH stack.
  2. Run unit tests: ./gradlew testDevDebugUnitTest.
  3. Run the quick smoke matrix (modern Debian + 2.11BSD + OpenBSD): scripts/test.py --targets=deb12,211bsd,openbsd-7.5.
  4. If it touches charsets, VT52, or SGR: run the classics matrix (V6, V7, Ultrix-11, Ultrix-32, 4.3BSD, NeXTSTEP).
  5. If it touches SSH auth or kex: run the BSD + Solaris matrix.
  6. Full matrix before every release.

9.2 Things to watch for per target

Likely bugs by target type:

  • SIMH historical — dropped bytes during very slow emulation; UTF-8 assumptions where the system is 7-bit; missing DEC Special Graphics glyphs
  • Solaris 10/11 — dtterm charset switches; SGR 39/49 default-color confusion
  • OpenBSD — strict kex algorithm lists; older SSH Workbench libs may fail the handshake
  • FreeBSD (old) — same; ssh-rsa with SHA-1 has been dropped client-side
  • Alpine/busybox — minimal termcap; vi emits ANSI that diverges from modern ncurses
  • Haiku — non-Unix-but-POSIX; line-ending and locale surprises
  • OpenVMS — VT-native; expects a DEC keyboard; strict TERM=vt100
  • NeXTSTEP — non-standard color sequences; TERM=nextstep termcap is rare
  • Plan 9 — rio drawing; the parser should not crash on unexpected sequences

9.3 Bug-reporting convention

When a target misbehaves, capture:

  1. The target name from the matrix.
  2. The exact command you typed.
  3. A session recording (SessionRecorder writes to Download/SshWorkbench/recordings/).
  4. A screenshot of the terminal pane.
  5. The app debug log (sshworkbench_debug.txt).

Attach all five to an entry in docs/TODO.md with a link to the system row in this matrix.


10. Resource management

10.1 RAM budget (steady state, everything running)

Host Linux Mint + services .................  2.0 GB
Docker daemon + containerd ..................  0.5 GB
28 Linux containers × ~100 MB RSS ...........  2.8 GB
8 SIMH processes × ~80 MB ...................  0.6 GB
libvirt daemon + QEMU overhead ..............  0.5 GB
8 concurrent VMs × 1.5 GB avg ............... 12.0 GB
Filesystem cache .............................  4.0 GB
Buffer ...................................... ~9.6 GB
                                              -------
                                               32.0 GB
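To compare the budget against reality once things are running (a rough check — RSS double-counts pages shared between processes):

```shell
# Sum resident memory of the main lab process families, in GB.
lab_rss=$(ps -eo rss=,comm= | awk '
  /sshd|qemu|pdp11|vax|dockerd|containerd/ { s += $1 }
  END { printf "lab RSS: %.1f GB", s / 1048576 }')
echo "$lab_rss"
```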

10.2 When to split the lab

If you push beyond 60 always-on systems or start running multiple Solaris/OpenVMS concurrently, RAM pressure will force swapping. At that point either:

  • Upgrade to 64 GB (cheapest option).
  • Split the heavy VMs onto a second host and keep the primary for containers+SIMH.
  • Convert Solaris/OpenVMS to "cold storage" — start only when requested by a specific test run.

10.3 Disk budget

Docker overlays ............. ~15 GB (28 small images)
SIMH images ..................  ~1 GB
VM qcow2 disks .............. ~80 GB (10 VMs × 8 GB avg)
ISO cache ................... ~30 GB
Snapshots ................... ~20 GB
Logs / test results .........  ~5 GB
                              -------
                              ~150 GB minimum

Plan for 500 GB to have room to grow.

10.4 Backup strategy

  • Docker: all images rebuildable from Dockerfiles. Back up /srv/lab/docker/ (the Dockerfiles and compose.yml) to git. Don't back up the containers themselves.
  • VM qcow2: one cold-snapshot per VM after initial install, stored on an external disk. Delta snapshots every month.
  • SIMH images: the original install media is in /srv/lab/simh/images/ and never changes — back up once.
  • hosts.yaml: committed to this repo under scripts/lab/hosts.yaml.
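The monthly VM-image backup can be as simple as a per-month copy of the qcow2 directory. A sketch — `backup_vm_deltas` is a hypothetical helper, and the example paths are the lab defaults:

```shell
# Monthly VM-disk backup: full copy into a per-month directory.
# (Swap cp for `rsync -a --link-dest=<last month's dir>` to hard-link
#  unchanged disks instead of duplicating them.)
backup_vm_deltas() {   # usage: backup_vm_deltas SRC_DIR DST_ROOT
  local dst=$2/$(date +%Y-%m)
  mkdir -p "$dst"
  cp -a "$1"/. "$dst"/
}
# e.g.: backup_vm_deltas /srv/lab/vms/images /mnt/labbackup/vm-deltas
```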

11. Maintenance

11.1 Weekly

  • docker system prune -f to clear dangling layers.
  • virsh pool-refresh default.
  • Check systemctl --failed for dead SIMH services.
  • Review /srv/lab/logs/ for anything unexpected.

11.2 Monthly

  • docker compose pull && docker compose up -d for the tagged "latest" containers (Ubuntu 24.04, Debian 12, Arch, Fedora, Alpine latest). Do not update the old-version containers — they're pinned on purpose.
  • Snapshot-delta backup of VM images.
  • Verify polarhome accounts are still active.

11.3 On the next Mint LTS upgrade

When Mint 20.2 reaches EOL, upgrade the host, retest Docker and libvirt permissions, and rebuild SIMH from source.


12. Connecting from SSH Workbench

The whole point of this lab is the Android app. Once the lab is up:

  1. On the Mint server, verify testlab.local resolves on your LAN (via mDNS through avahi-daemon, or a /etc/hosts entry on each test device).
  2. On each test device (Zebra TC21, Tab 90, S23), create one SavedConnection per matrix row. Name them lab-deb12, lab-211bsd, etc.
  3. For Telnet targets (Category C and NeXTSTEP), set the protocol to Telnet.
  4. Import them in bulk using the Vault Export format: generate a .swb file on a dev machine that defines all 56 connections, then import it on each device.
  5. Save the test user's password (test/test for local targets; polarhome credentials per account).

Bulk vault pre-seeding: write a small Python script that emits a JSON payload matching VaultExportSerializer's expected format, then encrypt it with VaultCrypto via the JNI. Alternatively, just hand-enter all 56 once and export a .swb as the canonical lab seed. The second option is faster for a one-time setup.


13. Ready-to-deploy configuration files

This section is the entire "bundle" — copy each block verbatim into the path above it and you have a working lab. No external files required. Everything here has been aligned with the matrix in section 6.

13.1 Target directory layout

After running the installation procedure (section 14), /srv/lab/ looks like this:

/srv/lab/
├── docker/
│   └── linux-containers/
│       ├── compose.yml
│       ├── build/
│       │   ├── apt-modern/Dockerfile
│       │   ├── apt-archive/Dockerfile
│       │   ├── rpm-modern/Dockerfile
│       │   ├── rpm-vault/Dockerfile
│       │   ├── alpine/Dockerfile
│       │   ├── arch/Dockerfile
│       │   ├── opensuse/Dockerfile
│       │   ├── void/Dockerfile
│       │   └── slackware/Dockerfile
│       └── authorized_keys                    # optional: your SSH pubkey
├── simh/
│   ├── bin/                                   # compiled SIMH binaries
│   ├── src/                                   # SIMH git checkout
│   ├── configs/
│   │   ├── 211bsd.ini
│   │   ├── v7.ini
│   │   ├── v6.ini
│   │   ├── 43bsd.ini
│   │   ├── ultrix11.ini
│   │   ├── ultrix32.ini
│   │   └── rsx11m.ini
│   └── images/
│       ├── 211bsd/
│       ├── v7/
│       ├── v6/
│       ├── 43bsd/
│       ├── ultrix11/
│       ├── ultrix32/
│       └── rsx11m/
├── vms/
│   ├── iso/                                   # install ISOs
│   └── images/                                # qcow2 disks
├── emulators/
│   └── previous/                              # NeXTSTEP emulator
├── scripts/
│   ├── setup.sh                               # one-shot host bootstrapper
│   ├── lab-up
│   ├── lab-down
│   ├── lab-status
│   ├── lab-connect
│   ├── lab-reset
│   └── libvirt-hook-qemu                      # installed to /etc/libvirt/hooks/qemu
├── systemd/
│   ├── simh-211bsd.service
│   ├── simh-v7.service
│   ├── simh-v6.service
│   ├── simh-43bsd.service
│   ├── simh-ultrix11.service
│   ├── simh-ultrix32.service
│   └── simh-rsx11m.service
├── hosts.yaml                                 # master manifest
└── logs/

13.2 Dockerfile — apt-modern (Debian bookworm→stretch, Ubuntu noble→bionic)

Path: /srv/lab/docker/linux-containers/build/apt-modern/Dockerfile

# Works for: debian:bookworm, bullseye, buster, stretch
#            ubuntu:noble, jammy, focal, bionic, xenial
ARG BASE=debian:bookworm
FROM ${BASE}

ENV DEBIAN_FRONTEND=noninteractive
ENV LANG=C.UTF-8

# Install sshd + a minimal set of interactive terminal tools
RUN apt-get update \
 && apt-get install -y --no-install-recommends \
      openssh-server ca-certificates \
      bash vim-tiny less curl wget \
      tmux htop ncurses-term locales \
 && mkdir -p /var/run/sshd \
 && useradd -m -s /bin/bash test \
 && echo 'test:test' | chpasswd \
 && echo 'root:root' | chpasswd \
 && sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config \
 && sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config \
 && ssh-keygen -A \
 && rm -rf /var/lib/apt/lists/*

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D", "-e"]

13.3 Dockerfile — apt-archive (EOL Debian/Ubuntu via archive mirrors)

Path: /srv/lab/docker/linux-containers/build/apt-archive/Dockerfile

These images are pre-EOL distributions (Debian jessie/wheezy/squeeze, Ubuntu trusty/precise/lucid). Their main repos are offline; we point apt at the archive mirrors before installing anything.

# Works for:
#   debian:jessie, debian:wheezy, debian:squeeze  (pull via: debian/eol:jessie etc.)
#   ubuntu:trusty, ubuntu:precise, ubuntu:lucid
ARG BASE=debian/eol:jessie
FROM ${BASE}

ENV DEBIAN_FRONTEND=noninteractive
ENV LANG=C

# Rewrite sources.list to archive mirrors. This is safe for both Debian EOL
# and old Ubuntu releases — the sed matches either pattern.
RUN set -e; \
    if grep -q debian /etc/os-release 2>/dev/null || [ -f /etc/debian_version ]; then \
      if grep -q ubuntu /etc/os-release 2>/dev/null; then \
        sed -i -E 's|https?://[a-z.]*archive\.ubuntu\.com|http://old-releases.ubuntu.com|g; s|https?://security\.ubuntu\.com|http://old-releases.ubuntu.com|g' /etc/apt/sources.list; \
      else \
        sed -i -E 's@https?://(deb|security|httpredir|ftp)\.debian\.org@http://archive.debian.org@g; /-updates/d; /security/d' /etc/apt/sources.list; \
        echo 'Acquire::Check-Valid-Until "false";' > /etc/apt/apt.conf.d/99archive; \
      fi; \
    fi; \
    apt-get update || true; \
    apt-get install -y --no-install-recommends --force-yes \
      openssh-server ca-certificates \
      bash vim-tiny less curl wget \
      ncurses-term locales || \
    apt-get install -y --no-install-recommends \
      openssh-server bash vim less curl ncurses-term; \
    mkdir -p /var/run/sshd; \
    useradd -m -s /bin/bash test || adduser --disabled-password --gecos "" test; \
    echo 'test:test' | chpasswd; \
    echo 'root:root' | chpasswd; \
    sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config; \
    sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config; \
    ssh-keygen -A || true; \
    rm -rf /var/lib/apt/lists/*

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D", "-e"]
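The archive rewrite is the fragile part of this Dockerfile. The Debian branch can be exercised offline — a self-contained check; note that the sed uses `@` as its delimiter, because a plain `|` delimiter would collide with the alternation in the pattern:

```shell
# Rewrite a sample sources.list line the way the apt-archive Dockerfile does.
fix_debian_sources() {
  sed -E 's@https?://(deb|security|httpredir|ftp)\.debian\.org@http://archive.debian.org@g; /-updates/d; /security/d'
}
printf 'deb http://deb.debian.org/debian jessie main\n' | fix_debian_sources
```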

Caveats:

  • Ubuntu precise (12.04) and lucid (10.04) may not build directly: Docker Hub's ubuntu:precise tag has been removed. Use ubuntu/ubuntu:precise from a mirror, or rebuild from a debootstrap tarball (see 13.16 for the recipe).
  • Debian etch and lenny aren't on Docker Hub at all. For those, use iComputer7/ancient-ubuntu-docker-style debootstrap (see 13.16).

13.4 Dockerfile — rpm-modern (CentOS 7, Fedora)

Path: /srv/lab/docker/linux-containers/build/rpm-modern/Dockerfile

# Works for: quay.io/centos/centos:centos7, fedora:40, fedora:39
ARG BASE=fedora:40
FROM ${BASE}

RUN (dnf -y install openssh-server vim less curl tmux htop ncurses \
     || yum -y install openssh-server vim less curl tmux htop ncurses) \
 && useradd -m -s /bin/bash test \
 && echo 'test:test' | chpasswd \
 && echo 'root:root' | chpasswd \
 && sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config \
 && sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config \
 && ssh-keygen -A || /usr/bin/ssh-keygen -A || true

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D", "-e"]

13.5 Dockerfile — rpm-vault (CentOS 5/6 via vault.centos.org)

Path: /srv/lab/docker/linux-containers/build/rpm-vault/Dockerfile

# Works for: centos:6, centos:5 (both still pullable on Docker Hub with CentOS Vault mirrors)
ARG BASE=centos:6
FROM ${BASE}

# Point yum at vault.centos.org (the live mirror network is offline for EOL versions)
RUN set -e; \
    sed -i 's|^mirrorlist=|#mirrorlist=|g; s|^#baseurl=http://mirror\.centos\.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-Base.repo 2>/dev/null || true; \
    for f in /etc/yum.repos.d/*.repo; do \
      sed -i 's|mirror\.centos\.org|vault.centos.org|g; s|^#\?baseurl=|baseurl=|g; s|^mirrorlist=|#mirrorlist=|g' "$f"; \
    done; \
    yum clean all; \
    yum -y install openssh-server vim-enhanced less curl ncurses; \
    /usr/bin/ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key -N ""; \
    /usr/bin/ssh-keygen -t dsa -f /etc/ssh/ssh_host_dsa_key -N "" || true; \
    useradd -m -s /bin/bash test; \
    echo 'test:test' | passwd --stdin test; \
    echo 'root:root' | passwd --stdin root; \
    sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config; \
    sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D", "-e"]

13.6 Dockerfile — alpine

Path: /srv/lab/docker/linux-containers/build/alpine/Dockerfile

# Works for: alpine:3.19, alpine:3.18, alpine:3.10, alpine:3.4
ARG BASE=alpine:3.19
FROM ${BASE}

RUN apk add --no-cache openssh vim less curl tmux htop ncurses-terminfo bash \
 && adduser -D -s /bin/bash test \
 && echo 'test:test' | chpasswd \
 && echo 'root:root' | chpasswd \
 && ssh-keygen -A \
 && sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config \
 && sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D", "-e"]

Very old Alpine (3.4) may lack --no-cache and needs apk update first. If alpine:3.4 fails to build, add RUN apk update before the apk add.

13.7 Dockerfile — arch

Path: /srv/lab/docker/linux-containers/build/arch/Dockerfile

FROM archlinux:latest

RUN pacman -Sy --noconfirm openssh vim less curl tmux htop ncurses \
 && useradd -m -s /bin/bash test \
 && echo 'test:test' | chpasswd \
 && echo 'root:root' | chpasswd \
 && ssh-keygen -A \
 && sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config \
 && sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config \
 && pacman -Scc --noconfirm

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D", "-e"]

13.8 Dockerfile — opensuse

Path: /srv/lab/docker/linux-containers/build/opensuse/Dockerfile

FROM opensuse/leap:15.5

RUN zypper -n install openssh vim less curl tmux htop ncurses-utils \
 && useradd -m -s /bin/bash test \
 && echo 'test:test' | chpasswd \
 && echo 'root:root' | chpasswd \
 && ssh-keygen -A \
 && sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config \
 && sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config \
 && zypper clean -a

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D", "-e"]

13.9 Dockerfile — void (Void Linux / musl)

Path: /srv/lab/docker/linux-containers/build/void/Dockerfile

FROM ghcr.io/void-linux/void-linux:latest-full-x86_64-musl

RUN xbps-install -Syu xbps \
 && xbps-install -y openssh vim less curl tmux htop ncurses bash \
 && useradd -m -s /bin/bash test \
 && echo 'test:test' | chpasswd \
 && echo 'root:root' | chpasswd \
 && ssh-keygen -A \
 && sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config \
 && sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D", "-e"]

13.10 Dockerfile — slackware

Path: /srv/lab/docker/linux-containers/build/slackware/Dockerfile

Slackware on Docker Hub is maintained by Vincent Batts. The image does not ship openssh, so we install it via slackpkg after pointing it at a working mirror. Slackware is genuinely different from the rest of Linux — its /etc/rc.d/ init scripts still look like 1995.

FROM vbatts/slackware:14.2

RUN echo "http://mirrors.slackware.com/slackware/slackware64-14.2/" > /etc/slackpkg/mirrors \
 && slackpkg -batch=on -default_answer=y update \
 && slackpkg -batch=on -default_answer=y install openssh vim less curl ncurses \
 && /usr/bin/ssh-keygen -A \
 && useradd -m -s /bin/bash test \
 && echo 'test:test' | chpasswd \
 && echo 'root:root' | chpasswd \
 && sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config \
 && sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D", "-e"]

If the mirror is down at build time, the image build fails. That's fine — it's documentation that the lab includes Slackware; it doesn't have to be permanently online.

13.11 compose.yml — all Linux containers

Path: /srv/lab/docker/linux-containers/compose.yml

name: legacy-lab

x-common: &common
  restart: unless-stopped
  stop_grace_period: 5s

services:
  # ─────────────  Debian family (modern)
  deb12:
    <<: *common
    build: { context: ./build/apt-modern, args: { BASE: "debian:bookworm" } }
    ports: ["2201:22"]
  deb11:
    <<: *common
    build: { context: ./build/apt-modern, args: { BASE: "debian:bullseye" } }
    ports: ["2202:22"]
  deb10:
    <<: *common
    build: { context: ./build/apt-modern, args: { BASE: "debian:buster" } }
    ports: ["2203:22"]
  deb9:
    <<: *common
    build: { context: ./build/apt-modern, args: { BASE: "debian:stretch" } }
    ports: ["2204:22"]

  # ─────────────  Debian family (EOL / archive)
  deb8:
    <<: *common
    build: { context: ./build/apt-archive, args: { BASE: "debian/eol:jessie" } }
    ports: ["2205:22"]
  deb7:
    <<: *common
    build: { context: ./build/apt-archive, args: { BASE: "debian/eol:wheezy" } }
    ports: ["2206:22"]
  deb6:
    <<: *common
    build: { context: ./build/apt-archive, args: { BASE: "debian/eol:squeeze" } }
    ports: ["2207:22"]
  # deb5 (lenny) and deb4 (etch) — no usable Docker Hub image.
  # Build via debootstrap recipe in section 13.16 and import.
  # deb5:
  #   <<: *common
  #   image: local/debian-lenny:latest
  #   ports: ["2208:22"]
  # deb4:
  #   <<: *common
  #   image: local/debian-etch:latest
  #   ports: ["2209:22"]

  # ─────────────  Ubuntu family (modern)
  ub2404:
    <<: *common
    build: { context: ./build/apt-modern, args: { BASE: "ubuntu:noble" } }
    ports: ["2210:22"]
  ub2204:
    <<: *common
    build: { context: ./build/apt-modern, args: { BASE: "ubuntu:jammy" } }
    ports: ["2211:22"]
  ub2004:
    <<: *common
    build: { context: ./build/apt-modern, args: { BASE: "ubuntu:focal" } }
    ports: ["2212:22"]
  ub1804:
    <<: *common
    build: { context: ./build/apt-modern, args: { BASE: "ubuntu:bionic" } }
    ports: ["2213:22"]
  ub1604:
    <<: *common
    build: { context: ./build/apt-modern, args: { BASE: "ubuntu:xenial" } }
    ports: ["2214:22"]
  ub1404:
    <<: *common
    build: { context: ./build/apt-archive, args: { BASE: "ubuntu:trusty" } }
    ports: ["2215:22"]
  # ub1204 and ub1004 — same debootstrap story as deb5/4. Enable after 13.16.
  # ub1204:
  #   <<: *common
  #   image: local/ubuntu-precise:latest
  #   ports: ["2216:22"]
  # ub1004:
  #   <<: *common
  #   image: local/ubuntu-lucid:latest
  #   ports: ["2217:22"]

  # ─────────────  Red Hat family
  centos7:
    <<: *common
    build: { context: ./build/rpm-modern, args: { BASE: "quay.io/centos/centos:centos7" } }
    ports: ["2218:22"]
  centos6:
    <<: *common
    build: { context: ./build/rpm-vault, args: { BASE: "centos:6" } }
    ports: ["2219:22"]
  centos5:
    <<: *common
    build: { context: ./build/rpm-vault, args: { BASE: "centos:5" } }
    ports: ["2220:22"]
  fedora40:
    <<: *common
    build: { context: ./build/rpm-modern, args: { BASE: "fedora:40" } }
    ports: ["2221:22"]

  # ─────────────  Alpine family (musl + busybox)
  alpine319:
    <<: *common
    build: { context: ./build/alpine, args: { BASE: "alpine:3.19" } }
    ports: ["2222:22"]
  alpine310:
    <<: *common
    build: { context: ./build/alpine, args: { BASE: "alpine:3.10" } }
    ports: ["2223:22"]
  alpine34:
    <<: *common
    build: { context: ./build/alpine, args: { BASE: "alpine:3.4" } }
    ports: ["2224:22"]

  # ─────────────  Other distros
  arch:
    <<: *common
    build: { context: ./build/arch }
    ports: ["2225:22"]
  opensuse:
    <<: *common
    build: { context: ./build/opensuse }
    ports: ["2226:22"]
  slackware:
    <<: *common
    build: { context: ./build/slackware }
    ports: ["2227:22"]
  void:
    <<: *common
    build: { context: ./build/void }
    ports: ["2228:22"]
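When adding services, a host-port collision is the usual editing mistake. A quick check — `dup_ports` is a hypothetical helper:

```shell
# Print any host port mapped more than once in a compose file.
dup_ports() {
  grep -oE '"[0-9]+:22"' "$1" | sort | uniq -d
}
# Empty output means all host ports are unique:
# dup_ports /srv/lab/docker/linux-containers/compose.yml
```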

13.12 SIMH — systemd unit template

Path: /srv/lab/systemd/simh-211bsd.service (one file per SIMH system, adjust per instance)

[Unit]
Description=SIMH PDP-11 running 2.11BSD
After=network.target
Documentation=https://github.com/open-simh/simh

[Service]
Type=simple
# %i only expands in template units (name@instance.service); use a fixed user here.
# Create it once with: sudo useradd -r -d /srv/lab/simh -s /usr/sbin/nologin simh
User=simh
WorkingDirectory=/srv/lab/simh/images/211bsd
ExecStart=/srv/lab/simh/bin/pdp11 /srv/lab/simh/configs/211bsd.ini
Restart=on-failure
RestartSec=5
# SIMH is not signal-friendly — give it time to flush
KillSignal=SIGTERM
TimeoutStopSec=10

[Install]
WantedBy=multi-user.target

To install for all SIMH systems in one go (after images are in place):

sudo cp /srv/lab/systemd/simh-*.service /etc/systemd/system/
sudo systemctl daemon-reload
for s in simh-211bsd simh-v7 simh-v6 simh-43bsd simh-ultrix11 simh-ultrix32 simh-rsx11m; do
  sudo systemctl enable --now $s
done

13.13 SIMH — .ini configs

Path: /srv/lab/simh/configs/211bsd.ini

; 2.11BSD on emulated PDP-11/70, 4 MB RAM, RL02 disk, DZ8 terminal mux on :2601
set cpu 11/70
set cpu 4M
set rl0 RL02
attach rl0 /srv/lab/simh/images/211bsd/rl0.dsk
set dz lines=8
attach dz 2601
set console telnet=2651
boot rl0

Path: /srv/lab/simh/configs/v7.ini

; Unix V7 on PDP-11/70, 256KB RAM, RL02 disk
set cpu 11/70
set cpu 256k
set rl0 RL02
attach rl0 /srv/lab/simh/images/v7/v7_rl0.dsk
set dz lines=1
attach dz 2602
set console telnet=2652
boot rl0

Path: /srv/lab/simh/configs/v6.ini

; Unix V6 on PDP-11/40, 56KB RAM, RK05 disk
set cpu 11/40
set cpu 56k
set rk0 RK05
attach rk0 /srv/lab/simh/images/v6/unix_v6_rk.dsk
set dz lines=1
attach dz 2603
set console telnet=2653
boot rk0

Path: /srv/lab/simh/configs/43bsd.ini

; 4.3BSD on emulated VAX-11/780, 16 MB RAM
set cpu 16m
set rq0 RA81
attach rq0 /srv/lab/simh/images/43bsd/ra81_43bsd.dsk
set dz lines=8
attach dz 2604
set console telnet=2654
boot rq0

Path: /srv/lab/simh/configs/ultrix11.ini

; ULTRIX-11 v3.1 on PDP-11/73, 4 MB RAM, RL02
set cpu 11/73
set cpu 4m
set rl0 RL02
attach rl0 /srv/lab/simh/images/ultrix11/ultrix11_rl0.dsk
set dz lines=4
attach dz 2605
set console telnet=2655
boot rl0

Path: /srv/lab/simh/configs/ultrix32.ini

; ULTRIX-32 4.x on VAX-11/780, 16 MB RAM
set cpu 16m
set rq0 RA81
attach rq0 /srv/lab/simh/images/ultrix32/ra81_ultrix32.dsk
set dz lines=8
attach dz 2606
set console telnet=2656
boot rq0

Path: /srv/lab/simh/configs/rsx11m.ini

; RSX-11M-PLUS v4.6 on PDP-11/70
set cpu 11/70
set cpu 4m
set rl0 RL02
attach rl0 /srv/lab/simh/images/rsx11m/rsx11m_rl0.dsk
set dz lines=4
attach dz 2607
set console telnet=2657
boot rl0

Note: The disk image filenames above are conventional — you will get the actual files by downloading the "ready-to-run" tapes/disks from TUHS and extracting them. 2.11BSD has multiple pre-built images; start with Warren Toomey's 2.11BSD on SIMH kit. Adjust the attach path to match whatever the tarball actually contains.

13.14 libvirt port-forwarding hook

Path: /srv/lab/scripts/libvirt-hook-qemu (install to /etc/libvirt/hooks/qemu with chmod +x)

#!/bin/bash
# /etc/libvirt/hooks/qemu
# Args: $1=VM name, $2=operation (prepare|start|started|stopped|release|reconnect), $3=sub-op, $4=extra
#
# Edit the MAP array to add/remove VM → host-port forwarding rules.
# VM IPs must be reserved via the libvirt network DHCP (virsh net-edit default).

declare -A MAP=(
  [freebsd-14]="192.168.122.31:2401"
  [freebsd-10]="192.168.122.32:2402"
  [openbsd-75]="192.168.122.33:2403"
  [openbsd-67]="192.168.122.34:2404"
  [netbsd-10]="192.168.122.35:2405"
  [netbsd-7]="192.168.122.36:2406"
  [openindiana]="192.168.122.37:2501"
  [solaris-114]="192.168.122.38:2502"
  [solaris-10]="192.168.122.39:2503"
  [omnios]="192.168.122.40:2504"
  [haiku]="192.168.122.41:2505"
  [openvms-92]="192.168.122.42:2506"
  [minix3]="192.168.122.43:2702"
  [9front]="192.168.122.44:2703"
)

vm="$1"
op="$2"
entry="${MAP[$vm]:-}"
[ -z "$entry" ] && exit 0
ip="${entry%:*}"
port="${entry##*:}"

case "$op" in
  start)
    iptables -t nat -A PREROUTING -p tcp --dport "$port" -j DNAT --to-destination "$ip:22"
    iptables -I FORWARD -p tcp -d "$ip" --dport 22 -j ACCEPT
    ;;
  stopped|release)
    iptables -t nat -D PREROUTING -p tcp --dport "$port" -j DNAT --to-destination "$ip:22" 2>/dev/null
    iptables -D FORWARD -p tcp -d "$ip" --dport 22 -j ACCEPT 2>/dev/null
    ;;
esac
exit 0
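The MAP above only works if each guest IP is stable. Pin them in the default libvirt network's DHCP block (edit with virsh net-edit default). A sketch — the MAC addresses are per-VM and come from virsh domiflist:

```xml
<dhcp>
  <range start='192.168.122.100' end='192.168.122.254'/>
  <!-- one <host> entry per lab VM -->
  <host mac='52:54:00:aa:bb:31' name='freebsd-14' ip='192.168.122.31'/>
  <host mac='52:54:00:aa:bb:33' name='openbsd-75' ip='192.168.122.33'/>
</dhcp>
```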

13.15 Scripts — setup.sh (one-shot host bootstrapper)

Path: /srv/lab/scripts/setup.sh

#!/bin/bash
# setup.sh — one-shot lab bootstrapper for Linux Mint 20.2.
# Run once as a normal user with sudo privileges. Safe to re-run (idempotent).
#
# Respects LAB_ROOT environment variable. Default: /srv/lab.
# Example: LAB_ROOT=/opt/lab bash setup.sh
set -euo pipefail

LAB="${LAB_ROOT:-/srv/lab}"

info()  { printf '\e[1;36m[*]\e[0m %s\n' "$*"; }
ok()    { printf '\e[1;32m[+]\e[0m %s\n' "$*"; }
warn()  { printf '\e[1;33m[!]\e[0m %s\n' "$*"; }

info "Lab root: $LAB"
if [ ! -d "$LAB" ]; then
  warn "$LAB does not exist — creating (may prompt for sudo)"
  sudo mkdir -p "$LAB"
  sudo chown "$USER:$USER" "$LAB"
fi

# ─────────── 1. base packages
info "Updating package lists"
sudo apt update
sudo apt full-upgrade -y
info "Installing base tools"
sudo apt install -y curl wget git vim htop tmux net-tools iproute2 \
                    build-essential pkg-config unzip xz-utils bzip2 \
                    software-properties-common ca-certificates gnupg lsb-release \
                    rsync jq openssh-client openssh-server ufw \
                    libpcap-dev libvdeplug-dev libsdl2-dev libsdl2-image-dev \
                    libpcre3-dev libpng-dev zlib1g-dev libedit-dev cmake \
                    cpu-checker glances iotop sysstat

# ─────────── 2. docker
if ! command -v docker >/dev/null; then
  info "Installing Docker Engine"
  sudo install -m 0755 -d /etc/apt/keyrings
  curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
    | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
  echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu focal stable" \
    | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
  sudo apt update
  sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
  sudo usermod -aG docker "$USER"
  ok "Docker installed — log out and back in for group membership"
else
  ok "Docker already installed"
fi

# ─────────── 3. qemu / kvm / libvirt
info "Installing QEMU, KVM, libvirt, virt-manager"
sudo apt install -y qemu-kvm qemu-system-x86 qemu-system-sparc qemu-system-ppc \
                    qemu-system-mips qemu-system-arm qemu-utils \
                    libvirt-daemon-system libvirt-clients bridge-utils \
                    virtinst virt-manager ovmf
sudo usermod -aG libvirt "$USER"
sudo usermod -aG kvm "$USER"
sudo virsh net-autostart default 2>/dev/null || true
sudo virsh net-start default 2>/dev/null || true
if kvm-ok >/dev/null 2>&1; then ok "KVM acceleration available"; else warn "KVM acceleration NOT available — VMs will be slow"; fi

# ─────────── 4. layout
info "Creating directory layout under $LAB"
sudo mkdir -p "$LAB"/{docker/linux-containers/build,simh/{bin,src,configs,images},vms/{iso,images},emulators,scripts,systemd,logs}
sudo chown -R "$USER:$USER" "$LAB"

# ─────────── 5. firewall (LAN only)
info "Configuring UFW"
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow from 192.168.0.0/16 to any 2>/dev/null || true
sudo ufw allow from 10.0.0.0/8 to any 2>/dev/null || true
sudo ufw allow 22/tcp
sudo ufw --force enable || true

# ─────────── 6. SIMH
if [ ! -x "$LAB/simh/bin/pdp11" ]; then
  info "Building SIMH from source"
  if [ ! -d "$LAB/simh/src/.git" ]; then
    git clone https://github.com/open-simh/simh.git "$LAB/simh/src"
  fi
  (cd "$LAB/simh/src" && make -j"$(nproc)" pdp11 vax pdp10 altair altairz80)
  cp "$LAB/simh/src/BIN/"* "$LAB/simh/bin/"
  ok "SIMH built — binaries in $LAB/simh/bin/"
else
  ok "SIMH already built"
fi

# ─────────── 7. libvirt hook
if [ -f "$LAB/scripts/libvirt-hook-qemu" ]; then
  info "Installing libvirt QEMU hook"
  sudo install -m 0755 "$LAB/scripts/libvirt-hook-qemu" /etc/libvirt/hooks/qemu
  sudo systemctl restart libvirtd
fi

# ─────────── 8. systemd SIMH units
if ls "$LAB/systemd/simh-"*.service >/dev/null 2>&1; then
  info "Installing SIMH systemd units"
  sudo cp "$LAB/systemd/simh-"*.service /etc/systemd/system/
  sudo systemctl daemon-reload
  ok "SIMH units installed — enable individually after placing disk images"
fi

# ─────────── 9. Docker containers
if [ -f "$LAB/docker/linux-containers/compose.yml" ]; then
  info "Building Docker images (this will take a while)"
  (cd "$LAB/docker/linux-containers" && docker compose build --parallel) || \
    warn "Some images failed to build — check individually with: docker compose build <service>"
  (cd "$LAB/docker/linux-containers" && docker compose up -d) || true
fi

# ─────────── 10. PATH + system-wide LAB_ROOT export
if ! grep -q 'simh/bin' ~/.bashrc; then
  echo "export LAB_ROOT=$LAB" >> ~/.bashrc
  echo "export PATH=$LAB/simh/bin:$LAB/scripts:\$PATH" >> ~/.bashrc
fi
# Make LAB_ROOT visible to all login shells. (systemd units do NOT read
# this file; the extractor bakes the absolute path into them at write time.)
sudo tee /etc/profile.d/lab.sh >/dev/null <<EOF
export LAB_ROOT=$LAB
export PATH=$LAB/simh/bin:$LAB/scripts:\$PATH
EOF
sudo chmod +x /etc/profile.d/lab.sh

ok "Setup complete."
echo
echo "Next steps:"
echo "  1. Log out and back in so docker/libvirt groups apply."
echo "  2. Run 'lab-status' to see what is running."
echo "  3. For SIMH, place disk images under $LAB/simh/images/ and enable the systemd units."
echo "  4. For VMs, install from ISO via virt-install (see section 7.2)."
echo "  5. For polarhome, fill in $LAB/hosts.yaml (see section 13.17)."

13.16 Scripts — lab-up, lab-down, lab-status, lab-connect, lab-reset

Path: /srv/lab/scripts/lab-up

#!/bin/bash
# Start all always-on parts of the lab (containers + SIMH). Heavy VMs stay off.
# Honors LAB_ROOT environment variable; falls back to /srv/lab.
set -e
LAB="${LAB_ROOT:-/srv/lab}"
cd "$LAB/docker/linux-containers"
docker compose up -d
for svc in $(systemctl list-unit-files 'simh-*.service' --no-legend 2>/dev/null | awk '/simh-/ {print $1}'); do
  sudo systemctl start "$svc" || true
done
echo "Lab up ($LAB). Run 'lab-status' to verify."

Path: /srv/lab/scripts/lab-down

#!/bin/bash
# Stop everything gracefully.
# Honors LAB_ROOT environment variable; falls back to /srv/lab.
set +e
LAB="${LAB_ROOT:-/srv/lab}"
cd "$LAB/docker/linux-containers"
docker compose stop
for svc in $(systemctl list-unit-files 'simh-*.service' --no-legend 2>/dev/null | awk '/simh-/ {print $1}'); do
  sudo systemctl stop "$svc"
done
for vm in $(virsh list --state-running --name 2>/dev/null); do
  [ -n "$vm" ] && virsh shutdown "$vm"
done
echo "Lab down."

Path: /srv/lab/scripts/lab-status

#!/bin/bash
# One-page status view.
# Honors LAB_ROOT environment variable; falls back to /srv/lab.
LAB="${LAB_ROOT:-/srv/lab}"

printf '\e[1;36m=== Lab root: %s ===\e[0m\n' "$LAB"
printf '\e[1;36m=== Docker containers ===\e[0m\n'
(cd "$LAB/docker/linux-containers" && docker compose ps) 2>/dev/null || echo "(compose not configured)"

printf '\n\e[1;36m=== libvirt VMs ===\e[0m\n'
virsh list --all 2>/dev/null || echo "(libvirt not configured)"

printf '\n\e[1;36m=== SIMH systemd services ===\e[0m\n'
systemctl list-units 'simh-*' --no-pager --no-legend 2>/dev/null | awk '{printf "  %-24s %-8s %s\n",$1,$3,$4}'

printf '\n\e[1;36m=== Listening ports in lab range ===\e[0m\n'
ss -tlnp 2>/dev/null | awk '/:2[2-9][0-9][0-9] /{print "  "$4}' | sort -u   # spelled-out digits: mawk may not support {2} intervals

printf '\n\e[1;36m=== RAM / CPU ===\e[0m\n'
free -h | awk 'NR==1||NR==2'
uptime

Path: /srv/lab/scripts/lab-connect

#!/bin/bash
# Connect to a lab target by short name. Reads hosts.yaml.
# Usage: lab-connect deb12   →   ssh test@testlab.local -p 2201
# Honors LAB_ROOT environment variable; falls back to /srv/lab.
LAB="${LAB_ROOT:-/srv/lab}"
target="${1:-}"
if [ -z "$target" ]; then
  echo "Usage: lab-connect <name>"
  echo "Known names (from $LAB/hosts.yaml):"
  awk -F'[:,]' '/^[ \t]*- *\{/ { gsub(/[ "]/, "", $2); print "  " $2 }' "$LAB/hosts.yaml" 2>/dev/null
  exit 1
fi

# Parse hosts.yaml with awk (no yq dependency). Handles the inline
# "- { key: value, ... }" entry format used in 13.17. Results go into
# USER_/TERM_ so the login environment's USER and TERM are not clobbered.
eval "$(awk -v t="$target" -F',' '
  /^[ \t]*- *\{/ {
    name=""; host=""; port=""; proto=""; user=""; term=""
    for (i = 1; i <= NF; i++) {
      kv = $i
      gsub(/[{}" \t]/, "", kv)   # strip braces, quotes, whitespace
      sub(/^-/, "", kv)          # strip the leading list dash
      split(kv, a, ":")
      if (a[1] == "name")  name  = a[2]
      if (a[1] == "host")  host  = a[2]
      if (a[1] == "port")  port  = a[2]
      if (a[1] == "proto") proto = a[2]
      if (a[1] == "user")  user  = a[2]
      if (a[1] == "term")  term  = a[2]
    }
    if (name == t) {
      printf("HOST=%s\nPORT=%s\nPROTO=%s\nUSER_=%s\nTERM_=%s\n", host, port, proto, user, term)
      exit
    }
  }
' "$LAB/hosts.yaml")"

if [ -z "${HOST:-}" ]; then
  echo "Target not found: $target"
  exit 1
fi

PORT="${PORT:-22}"
PROTO="${PROTO:-ssh}"
USER_="${USER_:-test}"

case "$PROTO" in
  ssh)    exec env TERM="${TERM_:-xterm}" ssh -p "$PORT" "$USER_@$HOST" ;;
  telnet) exec env TERM="${TERM_:-vt100}" telnet "$HOST" "$PORT" ;;
  *)      echo "Unknown proto: $PROTO"; exit 1 ;;
esac
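
The inline - { key: value, ... } entries are simple enough to field-split on commas with plain awk. A standalone sanity check of that approach (hypothetical snippet, nothing here gets installed; the entry is copied from 13.17):

```shell
# Split one inline hosts.yaml entry on commas, then each field on ":".
entry='  - { name: deb12, host: testlab.local, port: 2201, proto: ssh, user: test, term: xterm-256color, cat: A }'
printf '%s\n' "$entry" | awk -F',' '
  {
    for (i = 1; i <= NF; i++) {
      kv = $i
      gsub(/[{}" \t]/, "", kv)   # strip braces, quotes, whitespace
      sub(/^-/, "", kv)          # strip the leading list dash
      split(kv, a, ":")
      if (a[1] == "host") host = a[2]
      if (a[1] == "port") port = a[2]
    }
    print host ":" port
  }'
# prints: testlab.local:2201
```

This is the same no-yq idea lab-connect relies on; anything fancier (quoted values containing commas, nested maps) would justify switching to a real YAML parser.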

Path: /srv/lab/scripts/lab-reset

#!/bin/bash
# Reset a single target to a clean state.
# Usage: lab-reset deb12
# Honors LAB_ROOT environment variable; falls back to /srv/lab.
set -e
LAB="${LAB_ROOT:-/srv/lab}"
name="${1:-}"
if [ -z "$name" ]; then echo "Usage: lab-reset <name>"; exit 1; fi

case "$name" in
  deb*|ub*|centos*|fedora*|alpine*|arch|opensuse|slackware|void)
    cd "$LAB/docker/linux-containers"
    docker compose up -d --force-recreate "$name"
    ;;
  freebsd*|openbsd*|netbsd*|solaris*|omnios|openindiana|haiku|openvms*|minix*|9front)
    virsh snapshot-revert "$name" clean
    virsh start "$name" 2>/dev/null || true
    ;;
  simh-*)
    sudo systemctl restart "$name"
    ;;
  *)
    echo "Unknown target: $name"; exit 1 ;;
esac
echo "Reset: $name"

Make everything executable:

chmod +x /srv/lab/scripts/*

13.17 hosts.yaml — master manifest

This is the single source of truth for connection details across the lab. lab-connect reads it, and you can also drive scripts/test.py --target <name> from it.

Path: /srv/lab/hosts.yaml

# SSH Workbench legacy test lab — master host manifest.
# Fields:
#   name  : short identifier used by lab-connect / lab-reset
#   host  : hostname or IP the target accepts connections on
#   port  : TCP port (22 for VMs using static DHCP + a libvirt NAT forward rule)
#   proto : ssh | telnet
#   user  : default login username
#   term  : recommended TERM environment variable for SSH Workbench
#   cat   : matrix category (A-F)
#   notes : free text

containers:                      # Category A — always on, in Docker
  - { name: deb12,    host: testlab.local, port: 2201, proto: ssh, user: test, term: xterm-256color, cat: A, notes: "Debian 12 bookworm" }
  - { name: deb11,    host: testlab.local, port: 2202, proto: ssh, user: test, term: xterm-256color, cat: A }
  - { name: deb10,    host: testlab.local, port: 2203, proto: ssh, user: test, term: xterm-256color, cat: A }
  - { name: deb9,     host: testlab.local, port: 2204, proto: ssh, user: test, term: xterm-256color, cat: A }
  - { name: deb8,     host: testlab.local, port: 2205, proto: ssh, user: test, term: xterm,          cat: A }
  - { name: deb7,     host: testlab.local, port: 2206, proto: ssh, user: test, term: xterm,          cat: A }
  - { name: deb6,     host: testlab.local, port: 2207, proto: ssh, user: test, term: xterm,          cat: A }
  - { name: ub2404,   host: testlab.local, port: 2210, proto: ssh, user: test, term: xterm-256color, cat: A }
  - { name: ub2204,   host: testlab.local, port: 2211, proto: ssh, user: test, term: xterm-256color, cat: A }
  - { name: ub2004,   host: testlab.local, port: 2212, proto: ssh, user: test, term: xterm-256color, cat: A }
  - { name: ub1804,   host: testlab.local, port: 2213, proto: ssh, user: test, term: xterm-256color, cat: A }
  - { name: ub1604,   host: testlab.local, port: 2214, proto: ssh, user: test, term: xterm,          cat: A }
  - { name: ub1404,   host: testlab.local, port: 2215, proto: ssh, user: test, term: xterm,          cat: A }
  - { name: centos7,  host: testlab.local, port: 2218, proto: ssh, user: test, term: xterm-256color, cat: A }
  - { name: centos6,  host: testlab.local, port: 2219, proto: ssh, user: test, term: xterm,          cat: A }
  - { name: centos5,  host: testlab.local, port: 2220, proto: ssh, user: test, term: xterm,          cat: A }
  - { name: fedora40, host: testlab.local, port: 2221, proto: ssh, user: test, term: xterm-256color, cat: A }
  - { name: alpine319, host: testlab.local, port: 2222, proto: ssh, user: test, term: xterm,          cat: A }
  - { name: alpine310, host: testlab.local, port: 2223, proto: ssh, user: test, term: xterm,          cat: A }
  - { name: alpine34, host: testlab.local, port: 2224, proto: ssh, user: test, term: xterm,          cat: A }
  - { name: arch,     host: testlab.local, port: 2225, proto: ssh, user: test, term: xterm-256color, cat: A }
  - { name: opensuse, host: testlab.local, port: 2226, proto: ssh, user: test, term: xterm-256color, cat: A }
  - { name: slackware, host: testlab.local, port: 2227, proto: ssh, user: test, term: xterm,          cat: A }
  - { name: void,     host: testlab.local, port: 2228, proto: ssh, user: test, term: xterm-256color, cat: A }

vms:                             # Category B — on-demand libvirt/QEMU
  - { name: freebsd-14,  host: testlab.local, port: 2401, proto: ssh, user: test, term: xterm-256color, cat: B }
  - { name: freebsd-10,  host: testlab.local, port: 2402, proto: ssh, user: test, term: xterm,          cat: B }
  - { name: openbsd-75,  host: testlab.local, port: 2403, proto: ssh, user: test, term: xterm-256color, cat: B }
  - { name: openbsd-67,  host: testlab.local, port: 2404, proto: ssh, user: test, term: xterm,          cat: B }
  - { name: netbsd-10,   host: testlab.local, port: 2405, proto: ssh, user: test, term: xterm,          cat: B }
  - { name: netbsd-7,    host: testlab.local, port: 2406, proto: ssh, user: test, term: vt220,          cat: B }
  - { name: openindiana, host: testlab.local, port: 2501, proto: ssh, user: test, term: xterm-256color, cat: B }
  - { name: solaris-114, host: testlab.local, port: 2502, proto: ssh, user: test, term: xterm-256color, cat: B }
  - { name: solaris-10,  host: testlab.local, port: 2503, proto: ssh, user: test, term: dtterm,         cat: B }
  - { name: omnios,      host: testlab.local, port: 2504, proto: ssh, user: test, term: xterm-256color, cat: B }
  - { name: haiku,       host: testlab.local, port: 2505, proto: ssh, user: test, term: xterm,          cat: B }
  - { name: openvms-92,  host: testlab.local, port: 2506, proto: ssh, user: SYSTEM, term: vt100,        cat: B }

simh:                            # Category C — SIMH emulated historical systems
  - { name: 211bsd,      host: testlab.local, port: 2601, proto: telnet, user: root, term: vt100, cat: C }
  - { name: v7,          host: testlab.local, port: 2602, proto: telnet, user: root, term: vt52,  cat: C }
  - { name: v6,          host: testlab.local, port: 2603, proto: telnet, user: root, term: dumb,  cat: C }
  - { name: 43bsd,       host: testlab.local, port: 2604, proto: telnet, user: root, term: vt100, cat: C }
  - { name: ultrix11,    host: testlab.local, port: 2605, proto: telnet, user: root, term: vt100, cat: C }
  - { name: ultrix32,    host: testlab.local, port: 2606, proto: telnet, user: root, term: vt220, cat: C }
  - { name: rsx11m,      host: testlab.local, port: 2607, proto: telnet, user: MCR,  term: vt100, cat: C }

emulators:                       # Category D — specialty emulators
  - { name: nextstep,    host: testlab.local, port: 2701, proto: telnet, user: me,   term: nextstep, cat: D }
  - { name: minix3,      host: testlab.local, port: 2702, proto: ssh,    user: test, term: minix,    cat: D }
  - { name: 9front,      host: testlab.local, port: 2703, proto: ssh,    user: glenda, term: 9term,  cat: D }

remote:                          # Category E — polarhome shell accounts
  - { name: polarhome-aix7,       host: aix7.polarhome.com,      port: 22, proto: ssh, user: <yours>, term: xterm,     cat: E }
  - { name: polarhome-hpux,       host: hpux.polarhome.com,      port: 22, proto: ssh, user: <yours>, term: hpterm,    cat: E }
  - { name: polarhome-hpux-ia64,  host: hpux-ia64.polarhome.com, port: 22, proto: ssh, user: <yours>, term: hpterm,    cat: E }
  - { name: polarhome-irix,       host: irix.polarhome.com,      port: 22, proto: ssh, user: <yours>, term: iris-ansi, cat: E }
  - { name: polarhome-tru64,      host: tru64.polarhome.com,     port: 22, proto: ssh, user: <yours>, term: dtterm,    cat: E }

ephemeral:                       # Category F — cloud trials, lifetime-limited
  - { name: ibm-powervs-aix73, host: TBD, port: 22, proto: ssh, user: root, term: aixterm, cat: F, notes: "Spin up as needed. Delete when done." }

13.18 Debootstrap recipe for the very old Debian/Ubuntu

Debian etch/lenny and Ubuntu lucid/precise do not exist as pullable Docker Hub images. Build them yourself with debootstrap + docker import:

sudo apt install -y debootstrap
# Example: Debian 5.0 "lenny"
sudo debootstrap --arch=amd64 --no-check-gpg \
  lenny /tmp/lenny-root \
  http://archive.debian.org/debian/
sudo tar -C /tmp/lenny-root -cpf - . | docker import - local/debian-lenny:base
sudo rm -rf /tmp/lenny-root

# Then build the sshd layer on top:
cat > /tmp/Dockerfile.lenny <<'EOF'
FROM local/debian-lenny:base
RUN echo 'deb http://archive.debian.org/debian lenny main' > /etc/apt/sources.list \
 && echo 'Acquire::Check-Valid-Until "false";' > /etc/apt/apt.conf.d/99archive \
 && apt-get update \
 && apt-get install -y openssh-server vim-tiny less \
 && mkdir -p /var/run/sshd \
 && useradd -m test \
 && echo 'test:test' | chpasswd \
 && ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key -N "" \
 && ssh-keygen -t dsa -f /etc/ssh/ssh_host_dsa_key -N ""
EXPOSE 22
CMD ["/usr/sbin/sshd","-D","-e"]
EOF
docker build -t local/debian-lenny:latest -f /tmp/Dockerfile.lenny /tmp

After that, the deb5 service in compose.yml can reference image: local/debian-lenny:latest. Repeat for etch (replace lenny everywhere); for Ubuntu, do the same with --include=ubuntu-minimal and the appropriate archive mirror (old-releases.ubuntu.com for EOL releases).
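
For reference, a sketch of the matching service stanza, following the shape of the other container entries; the port (2208) is an assumed free slot in the 22xx range, so check it against the compose.yml from 13.11 before pasting:

```yaml
  deb5:
    image: local/debian-lenny:latest
    hostname: deb5
    ports:
      - "2208:22"          # assumption: next free 22xx port
    restart: unless-stopped
```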

This is tedious. Only bother if you actually care about exercising the parser against a kernel 2.4 / bash 2.x era system.


14. Installation procedure

Once you have this document on the target server, the procedure is:

# ─────────── 0. DECIDE WHERE THE LAB LIVES
# Read section 2.4.1 first. Set LAB_ROOT to the directory you want.
# Everything below honors this variable; the extractor rewrites every
# /srv/lab reference in the doc content as it writes files out.
export LAB_ROOT=/srv/lab                  # <-- change this to your choice
# export LAB_ROOT=/opt/lab
# export LAB_ROOT=$HOME/lab
# export LAB_ROOT=/mnt/bigdisk/lab

# ─────────── 1. Copy the doc to the server
scp docs/LEGACY_TEST_LAB.md user@testlab.local:~/
ssh user@testlab.local
export LAB_ROOT=/srv/lab                  # (re-export on the server too)

# ─────────── 2. Create the lab root (may require sudo)
sudo mkdir -p "$LAB_ROOT"
sudo chown -R "$USER:$USER" "$LAB_ROOT"
cd "$LAB_ROOT"

# ─────────── 3. Extract every file from the doc into the lab root.
# The extractor finds every "Path: `<abs-path>`" marker followed by a fenced
# code block and writes the block to that path. It rewrites /srv/lab to
# $LAB_ROOT on the fly so paths embedded inside files (systemd units, SIMH
# .ini configs, scripts) point at your chosen location.

LAB_ROOT="$LAB_ROOT" python3 - ~/LEGACY_TEST_LAB.md <<'PY'
import os, re, sys, pathlib
lab_root = os.environ.get("LAB_ROOT", "/srv/lab").rstrip("/")
src = pathlib.Path(sys.argv[1]).read_text()
# Find "Path: `<path>`" headers immediately followed by a fenced code block.
pat = re.compile(r'Path: `([^`]+)`\s*\n+```[a-zA-Z0-9]*\n(.*?)\n```', re.S)
written = 0
for m in pat.finditer(src):
    doc_path = m.group(1)
    body = m.group(2)
    # Rewrite /srv/lab references in both the target path and file body
    # so the installed lab reflects the user's LAB_ROOT choice.
    target = doc_path.replace("/srv/lab", lab_root)
    body = body.replace("/srv/lab", lab_root)
    out = pathlib.Path(target)
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(body + "\n")
    # Make scripts executable
    if "/scripts/" in target or target.endswith(".sh") or "libvirt-hook" in target:
        out.chmod(0o755)
    print("wrote", out)
    written += 1
print(f"\n{written} files written under {lab_root}")
PY

# ─────────── 4. Run the bootstrap
bash "$LAB_ROOT/scripts/setup.sh"

# ─────────── 5. Log out and back in so docker/libvirt/kvm group
#              membership applies AND /etc/profile.d/lab.sh is sourced.

# ─────────── 6. Verify
lab-status
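
A quick way to audit which paths the extractor targets (useful before a re-run with a different LAB_ROOT) is to read just the Path markers. A hypothetical helper, not installed by setup.sh:

```shell
# List every path the extractor would write, rewritten to LAB_ROOT,
# without creating any files.
extractor_paths() {   # usage: extractor_paths <doc.md>
  grep -o 'Path: `[^`]*`' "$1" \
    | sed -e 's/^Path: `//' -e 's/`$//' \
    | sed "s|^/srv/lab|${LAB_ROOT:-/srv/lab}|"
}
# e.g.:  LAB_ROOT=/opt/lab extractor_paths ~/LEGACY_TEST_LAB.md
```

This is only a preview: the real extractor additionally requires a fenced code block after each marker, so the two lists can differ if a marker has no block.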

How LAB_ROOT propagates:

| Layer | Mechanism |
| --- | --- |
| Extractor | Reads $LAB_ROOT from the environment, rewrites all /srv/lab strings in file paths and file contents before writing |
| setup.sh | Reads ${LAB_ROOT:-/srv/lab} at the top |
| lab-up / lab-down / lab-status / lab-connect / lab-reset | Same fallback pattern — work with or without $LAB_ROOT set |
| Your shell | setup.sh writes export LAB_ROOT=... to /etc/profile.d/lab.sh so every future login shell sees it |
| systemd units | The SIMH unit files contain the absolute path — the extractor substitutes it at write time |

If you change your mind later: the fastest fix is mv-ing the lab directory and re-running the extractor with the new LAB_ROOT. systemd units will need a reload (sudo systemctl daemon-reload). Don't try to sed it manually — there are too many places.

At this point Category A (Docker) should be up; Categories B, C, D require you to download install media and follow the per-target instructions in section 7.

Incremental order — what to get working first:

  1. Docker containers (section 13.11). First win — half the matrix online in ~20 min.
  2. 2.11BSD on SIMH (section 13.13). Best VT52/VT100 target, 10 minutes.
  3. FreeBSD 14 + OpenBSD 7.5 VMs (section 7.2). Real BSDs. 30 min each.
  4. Solaris 11.4 CBE VM. 1-2 hours — Oracle installer is slow.
  5. polarhome accounts (section 7.5). ~$10, 15 minutes of account setup.
  6. Rest of the SIMH systems (V7, 4.3BSD, Ultrix). ~1 hour total.
  7. NeXTSTEP via Previous. 1-2 hours.
  8. Minix, Plan 9. 1 hour each.

Budget: a focused weekend gets you to ~40/56 targets. The polished "everything-green" state is a week of evenings.


15. References