When the WAL drive backing one or more OSDs dies, those OSDs go out of service. I ended up destroying the affected OSDs, replacing the SSD, and rebuilding the OSDs from scratch. I believe there is a way to keep the OSDs intact, replace only the WAL drive, and attach the new device back to the OSDs, but I haven't figured out how to do that.
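For what it's worth, ceph-bluestore-tool does have commands for attaching and migrating BlueFS volumes (bluefs-bdev-new-wal, bluefs-bdev-migrate), which is presumably the route for a planned swap while the old WAL is still readable; after an unplanned failure the WAL contents are gone, so it may not help. Untested here, with the OSD number and target device as placeholders:
systemctl stop ceph-osd@<osd-number>
ceph-bluestore-tool bluefs-bdev-new-wal --path /var/lib/ceph/osd/ceph-<osd-number> --dev-target /dev/<new-wal-device>
systemctl start ceph-osd@<osd-number>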
1. Take the OSD out of the cluster
ceph osd out <osd-number>
2. Stop the OSD
systemctl stop ceph-osd@<osd-number>
3. Purge the OSD from the cluster. This removes the OSD from the CRUSH map, deletes its authentication key, and removes it from the OSD map.
ceph osd purge <osd-number> --yes-i-really-mean-it
4. Remove the OSD entry from /etc/ceph/ceph.conf, if it exists (the OSD wasn't listed in my case). For example:
[osd.1]
host = {hostname}
5. Copy the modified file to all hosts in the cluster.
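Before pulling the failed drive, it's worth confirming the purge actually took: the removed OSDs should no longer appear in the CRUSH tree or the auth list, and the cluster should be recovering around them. For example:
ceph osd tree
ceph auth ls | grep osd
ceph -s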
Since the WAL drive had already failed, I just pulled it from the server without trying to unmount anything.
The issues come when formatting the OSD data drives so they can be rebuilt once the new WAL device is installed. LVM is still set up on them and has to be removed before they can be re-formatted.
For example, one of my OSD drives still shows as "in use" by a crypt device in lsblk:
sda 8:0 0 5.5T 0 disk
└─ceph--e2cc31e8--6454--406f--99aa--6fa60be82220-osd--block--74cbb77c--f5b2--4550--ab4e--af68d0afb581 253:2 0 5.5T 0 lvm
└─dsAnSO-6sZR-Dlz3-rLv9-ebPm-XkAV-H7hxxB 253:5 0 5.5T 0 crypt
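If it isn't obvious which LVs and crypt devices belong to which OSD, ceph-volume should still be able to map them from the LVM tags on the data drive (assuming those are still readable):
ceph-volume lvm list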
Close the crypt device
cryptsetup close dsAnSO-6sZR-Dlz3-rLv9-ebPm-XkAV-H7hxxB
Remove the LV
lvm lvremove /dev/ceph-e2cc31e8-6454-406f-99aa-6fa60be82220/osd-block-74cbb77c-f5b2-4550-ab4e-af68d0afb581
Remove the VG
lvm vgremove ceph-e2cc31e8-6454-406f-99aa-6fa60be82220
Remove the PV
lvm pvremove /dev/sda
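Alternatively, once the crypt device is closed, ceph-volume can remove the LVs, VG, and device labels in one step; I didn't test this against these encrypted OSDs, but it should be roughly:
ceph-volume lvm zap --destroy /dev/sda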
The drive should now be available to rebuild the OSD against the new WAL device.
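For the rebuild itself, something along these lines should recreate an encrypted BlueStore OSD with its WAL on the new SSD; /dev/nvme0n1p1 is just a placeholder for whatever partition (or LV) gets carved out on the replacement device:
ceph-volume lvm create --bluestore --dmcrypt --data /dev/sda --block.wal /dev/nvme0n1p1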