Gallery

VNS Matrix / Merchants of Slime

Live on June 30th: a digital archive for the Australian cyberfeminist collective VNS Matrix / Merchants of Slime.

’90s-period CRT phosphor colours, monospace fonts, highly structured and interlinked data, emerging from over a year of conversations and work with the Merchants of Slime. Deep adoration for Web 1.0 aesthetics, sliding into contemporary possibilities for accessibility, interaction, responsiveness, and clarity.

By far the largest project I’ve undertaken, handling archival data management, utter masses of PHP, JS, and CSS, and teasing out over months the design, aesthetic, and movement through hundreds of pages and thousands of media files – all while trying to keep it properly accessible, semantic, responsive, logical, even simple, while the phosphor burns the screen.

Heaps big thanks to Virginia Barratt and VNS Matrix for going, “Yeah, Frances is what we want.” And hectic reps to research assistant Clare Bartholomaeus for all the scanning and cataloguing.

Image

Slime is Live, Cunt

Phosphor burn digital archaeology slime archive for the 21st century, cunt.

Done.

Seems that keeping 3000 posts and 10,000 images updated takes about half a blog lifetime.

I moved from Movable Type to WordPress in 2009, and ditched ecto, the old blogging app, about the same time. Over the years, I wrote SQL queries, grepped the hell out of the database, redesigned the whole website (while keeping the same black and white aesthetic), recoded stuff, wrote some hella shonky redirections, and slowly went through all the posts turning images into galleries and using WordPress’ Featured Image, and then gave up on it all a couple of years ago before getting weirdly ‘inspired’ this weekend and doing 1000+ posts over the course of 2 days.

My database queries tell me all the galleries are now correct, and all the single images also. A stupid amount of work I hope I never have to do again, because I know my singular, obsessive focus will do it. Legit, my wrist is going “WTF, Frances, WTfuckingF,” and if I keep blogging like this, eventually maintenance will take longer than there are days in a year.
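The checks themselves are nothing fancy, whether straight SQL or WP-CLI run from the WordPress root. Something along these lines does the counting, though the table prefix and patterns here are stand-ins, not the actual queries:

# published posts still carrying old-style inline images without a gallery
wp db query "SELECT COUNT(*) FROM wp_posts WHERE post_status = 'publish' AND post_content LIKE '%<img%' AND post_content NOT LIKE '%[gallery%';"

# published posts with a Featured Image set
wp post list --post_type=post --post_status=publish --meta_key=_thumbnail_id --format=count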

Status

After many years of supernaut images being tiny with ragged right background captions, I’ve been slowly ditching it for supernaut teenage Instabanga big images. And finally I cleared out all the old styles and code, made new medium and large image galleries, redid the styles and scripts repeatedly, said goodbye to funky bodges. Kept the blazing deep pink tho’. ❌‼️

Hacking & Bodging a Git Hook + Vagrant + WP-CLI + Bash Local to Dev Database Transfer

Ever since I started using Git to push local website changes to a development server, I’ve been vaguely irritated about dealing with the database in the same manner. For a long time, I used interconnect/it’s Search Replace DB for this side of things, but I was always wondering if I could somehow increase laziness time by automating the process. One hungover Sunday, plus a couple of hours on Monday, and one hacked and bodged success.

This isn’t a “How to do yer Git + Vagrant + local to dev” thing, nor is it a copy-paste, “Works for me!” party. Nonetheless, provided you’re using git-push, and are comfortable with WP-CLI or the MySQL command-line, Bash, and generally thrashing small bits of code around, in principle it would work in any situation. And I do feel kinda dirty throwing all this into Git post-receive, but whatever, it seems to work.

So, here’s what I wanted to do:

  1. Do all my commits, and run git-push and throw all the file changes to the dev server.
  2. Combine that with a dump of the database, then get it to the dev server.
  3. Use something at the other end to import the database, and search-replace strings.
  4. Clean up after at both ends.

When I first looked into this, it seemed using the pre-commit Git hook was the most common approach, dumping the database and adding it to the commit. I didn’t want to do that, for a couple of reasons: I do a lot of commits, and the majority have no database component; I wasn’t looking to version control the database; all I wanted was to push the local database to dev along with the changed files. Looks like a job for the pre-push hook.

Earlier this year, I started using Vagrant, so the first issue was how to dump the database from there. I do commits from the local folder, rather than SSH-ing into the VM, so mysqldump is not going to work without first getting into the VM. Which brought its own set of weirdnesses, and this was the point when I decided to flop over to WP-CLI, the WordPress command-line tool.
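For the record, the ‘first getting into the VM’ part can be sketched a couple of ways, both run from the folder with the Vagrantfile; the paths, credentials, and database name here are placeholders, not my setup:

# option 1: plain mysqldump, run inside the box via vagrant ssh, needs the database credentials
vagrant ssh -c "mysqldump -u dbuser -p'dbpass' dbname | gzip -9 > /var/www/user/domain.tld/.database/database.sql.gz"

# option 2: WP-CLI, which reads the credentials out of wp-config.php for you
vagrant ssh -c "cd /var/www/user/domain.tld && wp db export --add-drop-table - | gzip -9 > .database/database.sql.gz"

The hook below ends up SSH-ing straight at the box’s IP instead, but the idea is the same.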

I often find solutions to this sort of thing are dependent on the combination of software and commands being used. I use mysqldump on its own all the time, but here, I needed to use Git to set the path for where the database would be dumped to — because git hooks are in a sub-directory of the git folder — and that, in combination with dumping the database inside the VM while within a Git command running from the local folder (yeah, probably should just do all my git via SSH), and hurling it at a remote server, means sometimes things that work in isolation get cranky. And this is a hack/bodge, so I went with:

  1. Set up paths for the database dump with Git, ’cos Git is running this show.
  2. SSH into the Vagrant box.
  3. WP-CLI dump the database to a gzipped file.
  4. SCP that up to the dev server.
  5. Delete all that on the local server, ’cos I’m tidy.

That’s half of it done. I’ve got my pushes working, the database file is up on the dev server, the local server is all cleaned up, so now it’s time for the other end.

In this case, I was doing it for a site on DreamHost, who conveniently give all kinds of fun command-line access, plus WP-CLI on their shared servers. Once Git has finished checking out the new file changes in post-receive, it’s time for frankly bodging it.

My current usual setup is a bare repository on the dev server, which checks out to the development website directory. This means neither the uploaded database, nor WP-CLI and the WordPress root, are in the same place as the running hook. No big deal, just use --path=. The next thing, though, is cleaning up post-import. Strings to be changed all over the place, like local URLs swapped to dev. And for that we have wp search-replace, which is an awful lot like Search Replace DB. At the dev end then:

  1. Set up paths again, this time it’s WP-CLI running the show.
  2. Unzip the database then import it.
  3. Do database stuff like search-replace strings, and delete transients.
  4. Delete that uploaded database file on the dev server, ’cos I’m tidy.

I was looking at all this late last night, all those repeating lines of ‘wp search-replace’ and I thought, “That looks like a job for an array.” Which led me down the tunnel of Bash arrays, associative arrays, “How can I actually do ‘blah’, ’cos bash seems to be kinda unwilling here?” and finally settling on not quite what I wanted, but does the job. Also, bash syntax always looks like it’s cursing and swearing.

The pre-push hook:

#!/bin/sh

# a pre-push hook to dump the database to a folder in the repo's root directory, upload it to the dev server, then delete when finished

echo '***************************************************************'
echo 'preparing to back up database'
echo '***************************************************************'

# set up some variables, to keep things more readable later on
# backup_dir is relative to git hooks, i.e. 2 directories higher, so use git to set it

ROOT="$(git rev-parse --show-toplevel)"
BACKUP_DIR="$ROOT/.database"
DB_NAME="database"

# check there is a database backup directory, make it if it doesn't exist, then cd to it

mkdir -p "$BACKUP_DIR"
cd "$BACKUP_DIR" || exit 1

# cos this is vagrant, first ssh into it. there will be a password prompt
# using EOF to write the commands in bash, rather than in ssh quotation marks

ssh -t vagrant@172.17.0.10 << EOF

# cd to the new databases folder. this is absolute, cos is vm and not local folder
cd "/var/www/user/domain.tld/.database" 

# then export the database with wp-cli and gzip it
wp db export --add-drop-table - | gzip -9 > $DB_NAME.sql.gz

# exit ssh
exit

# bail out of eof
EOF

# scp the backup directory and database to the dev server
scp -r "$BACKUP_DIR" user@domain.tld:~/

# remove that backup directory so it's not clogging up git changes
rm -r "$BACKUP_DIR"

echo '***************************************************************'
echo 'all done, finishing up git push stuff'
echo '***************************************************************'

The post-receive hook:

#!/bin/bash

echo '***************************************************************'
echo 'post-receive is working. checking out pushed changes.'
echo '***************************************************************'

# check out the received changes from local to the dev site
git --work-tree=/home/user/dev.domain.tld  --git-dir=/home/user/.repo.git checkout -f


# import the database with wp-cli
echo '***************************************************************'
echo 'starting database import'
echo '***************************************************************'

# setting up some paths
# on some webhosts, e.g. all-inkl, an alias to wp-cli.phar is required — uncomment and set if needed
# (in a bash script, aliases only expand if expand_aliases is switched on)
# shopt -s expand_aliases
# alias wp='/path/to/.wp-cli/wp-cli.phar'

# the path to wp-config, needed for wp-cli
WP_PATH="/home/user/dev.domain.tld/wordpress"
# database directory, created in git pre-push
DB_DIR="/home/user/.database"

# check there is a database directory
if [ -d "$DB_DIR" ]; then

	# then check it for sql.gz files
	DB_COUNT=$(ls -1 "$DB_DIR"/*.sql.gz 2>/dev/null | wc -l)

	# if there is exactly 1 database, proceed
	if [ "$DB_COUNT" -eq 1 ]; then

		# grab the db name from the file, so it isn't hardcoded
		DB_NAME=$(basename "$DB_DIR"/*.sql.gz)

		echo 'importing the database'
		echo '***************************************************************'

		# unzip the database, then import it with wp-cli
		gunzip < "$DB_DIR/$DB_NAME" | wp db import - --path="$WP_PATH"

		# clear the transients
		wp transient delete --all --path=$WP_PATH

		# run search replace on the main strings needing to be updated
		# make an array of strings to be searched for and replaced
		search=(
			"local.domain.tld:8443"
			"local.domain.tld"
			"/var/www/user/"
		)
		replace=(
			"dev.domain.tld"
			"dev.domain.tld"
			"/home/user/"
		)

		# loop through the array and feed each pair to wp search-replace
		for (( i=0; i < ${#search[@]}; ++i )); do
			wp search-replace --all-tables --precise "${search[i]}" "${replace[i]}" --path="$WP_PATH"
		done

		# any other wp-cli commands to run
		wp option update blogname "blog name" --path=$WP_PATH

		# delete the backup directory, so there's no junk lying around
		rm -rf $DB_DIR
	
	else
	
		echo 'database was not found'
		echo '***************************************************************'
	
	fi

else

	echo 'database folder was not found'
	echo '***************************************************************'

fi

echo '***************************************************************'
echo 'all done'
echo '***************************************************************'

What else? Dunno. It’s pretty rough, but it basically proves something I didn’t find an example of anywhere, all combined into one: that you can use git hooks to push the database and file changes at the same time, and automate the local-to-dev database transfer process. Is this the best way to do it? Nah, it’s majorly bodgy, it would have to be tailored for each server setup, and I’m not even sure doing such things in a git hook is advisable, even if it works. It does demonstrate that each step of the process can be automated — irrespective of how shonky your setup is — and provided you account for that and your own coding proclivities, there are multiple ways of doing the same thing.

(edit, a day later.)
I decided to throw this into ‘production’, testing it on a development site I had to create on a webhost I’m not so familiar with, but who do provide the necessities (like SSH and Let’s Encrypt). Two things happened.

First, WP-CLI didn’t work at all in the post-receive script, even though it did if I ran commands directly in Terminal (or iTerm2, as I’m currently using). After much messing about and trying a bunch of things, it turned out this was an issue of “has to be tailored for each server setup”, in this case adding an alias to wp-cli.phar.

Second, having a preference for over-compensation while automating, it occurred to me that I’d made some assumptions: that there’d only be one database file in the uploaded directory, and that hardcoding the filename — one of those “I’ll fix that later” things — had morphed into technical debt. So, feeling well competent in Bash today, I went with “make sure there’s actually a database folder, then check there’s actually a sql.gz file in it, and only one of them, then get the name of that file and use it as a variable”. I often wonder how much of this is too much, but trying to cover the more obvious possible bollocks seems reasonably sensible.

Both of these have been rolled into the code above. And as always, it occurs to me already there are better — ‘better’ — ways to do this, like in pre-push, piping the database directly to the dev server over SSH, or simultaneously creating a separate, local database backup, or doing it all in SQL commands.
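The piping one, for instance, might look roughly like this; a sketch only, untested, with placeholder hostnames and paths:

# sketch: dump inside the Vagrant box and stream it straight to the dev server, no local file, no scp
ssh vagrant@172.17.0.10 "wp db export --add-drop-table --path=/var/www/user/domain.tld -" \
  | gzip -9 \
  | ssh user@domain.tld "mkdir -p ~/.database && cat > ~/.database/database.sql.gz"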

5-Character Dev Environment

Messing with my .bash_profile this afternoon, post-diving into Laravel and Git (which I’ve been doing much of the last week), I realised I could boot my entire dev environment with 5 letters. Fewer, if I wanted.

So instead of going to the Dock, clicking each of the icons, going to each and faffing around, I could at least boot them all, and set off some commands in Terminal (or iTerm2, as I’m now using).

Weirdly, until Justine gave me an evening of command-line Git learning, and wanted my .bash_profile, “Like so,” I hadn’t realised you could do stuff like that, despite amusing myself with all manner of shell scripts. Now I know what’s possible, I’m over-achieving in efficient laziness.
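I won’t paste my actual .bash_profile, but the shape of it is roughly this: a function, so the whole environment boots on one 5-letter word. The apps and paths below are placeholders rather than my real setup:

# boot the dev environment with one word: devup
# everything here is illustrative: swap in your own apps, folders, and boxes
devup() {
	open -a "Sequel Pro"    # database GUI
	open -a "Transmit"      # S/FTP
	cd ~/Sites/project || return
	vagrant up              # boot the VM
	vagrant ssh             # and land inside it
}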

What’s missing is:

  • Opening multiple windows in iTerm2 or Terminal and running a different command in each (I don’t want to boot multiple instances of an app).
  • Setting off a menu action in an opened app, e.g. in Transmit going to my work drive.
  • Extending it to boot the environment and then a specific project, e.g. “devup laravel” would open my laravel installation in each of the apps, like opening the database in Sequel Pro; cd-ing to the laravel folder after automatic SSH-ing into my Vagrant box, and so on.

Some of these are probably uncomplicated, but this was a 30-minute experiment that turned out to be very useful.

Code Stupidity

Aside

I got sick of the tiny, Web 1.0 images everywhere here, a hangover from the earliest days of supernaut, so I decided — ’cos I like visuality & pix — to make small, big. I thought it would be easy. Little did I know I’d also be creating and adding to the pile of Technical Debt. So: most single images in the recent past are now huge-ified, 666px wide; recent image galleries which are not full of diverse image ratios are now evenly splitting the Number of the Beast. Older images and galleries should be retaining their previous diminutiveness, but who knows, 13 years of blog is difficult to homogenise. Mostly I got distracted with how to make portrait images not blow out of the available browser window space, which turned out to be a kinda traumatising process I didn’t quite pull off. Plus how to Lazy Load srcsets by preg_replacing the new WordPress caption shortcode. OMFG, Frances, WTF? All of which makes me think it might be time for yet another supernaut refresh. So much code. So many images. So much …

Website rsync Backups the Time Machine Way

Continuing my recent rash of stupid coding, after Spellcheck the Shell Way, I decided for Website rsync Backups the Time Machine Way.

For a few years now, I’ve been using a bash script I bodged together that does incremental-ish backups of my websites using the rather formidable rsync. This week I’ve been working for maschinentempel.de, helping get frohstoff.de‘s WooCommerce shop from Trabant to Hoonage. Which required repeated backing up of the entire site and database, and made me realise the shoddiness of my original backup script.

I thought, “Wouldn’t it be awesome, instead of having to make those stupid ‘backup.blah’ folders, to let the script create a time-stamped folder like Time Machine for each backup, and use the most recent backup as the rsync hard-link destination?” Fukken wouldn’t it, eh?

Creating time-stamped folders was easy. Using the most recent backup folder — which has the most recent date, and in standard list view on my Mac, is the last folder in the list — was a little trickier. Especially because once a new folder was created to back up into, the previously most recent one was now second to last. tail and head feels hilariously bodgy, but works? Of course it does.

Bare bones explaining: The script needs to be in a folder with another folder called ‘backups’, and a text file called ‘excludes.txt’.  Needs to be given chmod +x to make it executable, and generally can be re-bodged to work on any server you can ssh into. Much faster, more reliable, increased laziness, time-stamped server backups.

#!/bin/sh
# ---------------------------------------------------------------
# A script to manually back up your entire website
# Backup will include everything from the user directory up
# excludes.txt lists files and folders not backed up
# Subsequent backups only download changes, but each folder is a complete backup
# ---------------------------------------------------------------
# get the folder we're in
this_dir="$(dirname "$0")"
# set the folder in that to backup into
backup_dir="$this_dir/backups"
# cd to that folder
echo "******************"
echo "cd-ing to $backup_dir"
echo "******************"
cd "$backup_dir" || exit 1
# make a new folder with timestamp
time_stamp=$(date +%Y-%m-%d-%H%M%S)
mkdir "$backup_dir/${backuppath}supernaut-${time_stamp}"
echo "created backup folder: supernaut-${time_stamp}"
echo "******************"
# set link destination for hard links to previous backup
# this gets the last two folders (including the one just made)
# and then the first of those, which is the most recent backup
link_dest=$(ls | tail -2 | head -n 1)
echo "hardlink destination: $link_dest"
echo "******************"
# set rsync backup destination to the folder we just made
backup_dest=$(ls | tail -1)
echo "backup destination: $backup_dest"
echo "******************"
# run rsync to do the backup via ssh with passwordless login
rsync -avvzc --hard-links --delete --delete-excluded --progress --exclude-from="$this_dir/excludes.txt" --link-dest="$backup_dir/$link_dest" -e ssh username@supernaut.info:~/ "$backup_dir/$backup_dest"
echo "******************"
echo "Backup complete"
echo "******************"
#------------------------------------------------
# info on the backup commands:
# -a --archive archive mode; same as -rlptgoD (no -H)
# -r --recursive recurse into directories
# -l --links copy symlinks as symlinks
# -p --perms preserve permissions
# -t --times preserve times
# -g --group preserve group
# -o --owner preserve owner (super-user only)
# -D same as --devices --specials
# --devices preserve device files (super-user only)
# --specials preserve special files
# -v --verbose increase verbosity - can increment for more detail i.e. -vv -vvv
# -z --compress compress file data during the transfer
# -c --checksum skip based on checksum, not mod-time & size – SLOWER
# -H --hard-links preserve hard links
# --delete delete extraneous files from dest dirs
# --delete-excluded also delete excluded files from dest dirs
# --progress show progress during transfer
# --exclude-from=FILE read exclude patterns from FILE – one file or folder per line
# --link-dest=DIR hardlink to files in DIR when unchanged – set as previous backup
# -e --rsh=COMMAND specify the remote shell to use – SSH
# -n --dry-run show what would have been transferred
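Getting it running is about this much, assuming the script is saved as backup.sh (call it whatever) and passwordless SSH is already sorted:

# one-time setup, next to the script
mkdir backups          # time-stamped backups land in here
touch excludes.txt     # one file or folder pattern per line, e.g. logs/
chmod +x backup.sh     # make it executable

# then every backup is just
./backup.sh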

Gallery

Neo-Grotesk Crypto-Brutalist emilezile.com

End of March, right when I was finally throwing together my design portfolio (I swear I resisted, and now love having one), Emile asked if I might want to hurl together something for him. Something Web 1.0, something like we’d handcode in HTML in the late-’90s, not quite something MySpace in the days of its browser-crashing gif-frenzy inferno, but definitely something that would be in its lineage; something tuner Nissan Skyline, unassuming on the surface, but all Fast & Furious: Tokyo Drift when you pop the hood. Something Helvetica, Neo-Grotesk, what’s getting called Brutalist right now, though not traipsing behind a fashion; this is Emile, and looking through years of his work while putting his new website together, it’s clear he has a deep love and understanding of the aesthetic, and the art and philosophy underpinning it.

First things first:

Emile: I have two websites. Can we make them one?
Frances: Yes!
Emile: Can we do all these other things?
Frances: OMG Yes!

Lucky I’d just done my portfolio, cos that gave me the framework to build on without having to bodge together fifty different functions and stuff. Saved a few hours there, which we made good use of in timezone-spanning conversations on typography, aesthetics, and usability.

First off, getting all those years of blog posts and work projects into a single database / website / organism. I used the hell out of interconnect/it’s Search & Replace DB script, merging, shuffling, shifting, getting rid of old code; jobs that would take a week or more to do by hand, done in seconds. We’d pretty much sorted out structure and functionality in a couple of afternoons; for a website that looks so simple, it was most of two weeks’ diligent work, back-and-forth conversations, picking away at details (stripping and rebuilding, stancing, slamming, tuning … we are very good at turning all this into hoonage, especially with 24h Le Mans in the middle).

Obviously it had to be ‘Responsive’, look hella flush hectic antiseptic no matter what device, and for me (recently taking this stuff proper serious) it had to also be ‘Accessible’. I put those words in scare-quotes cos they’re kinda bullshit.

It occurred to me as I was finishing that for a website to be neither responsive nor accessible — for example, it looks crap if the screen size is too small or not ‘right’, or you can’t navigate with keyboard or screenreader — you have to actively remove this functionality. You have to break the website and override browser default behaviour. It’s a very active process to systematically remove basic functionality that’s been in web browsers since the beginning. You also have to actively not think, not empathise, intentionally not do or not know your job. Me, for probably all of my earlier websites.

The funny thing is, it’s not really any additional work to make sure basic responsive and accessible design / functionality is present; the process of testing it always, always, always brings up usability issues, things I haven’t thought of, little points that become involved discussions about expectations, interactivity, culture, philosophy. Like ‘left and down’ is back in time, and ‘right and up’ forward; 下个礼拜 / 上个礼拜. Next week / last week. Yet the character for ‘next’ is xià, down, less than, lower; and ‘last’ (in the sense of ‘previous’) is shàng, up, more than, higher. So how to navigate between previous and next posts or projects turns into an open-ended contextual exchange on meaning.

And ‘responsive’, ‘accessible’? Basic, fundamental web design. Not something tacked on at the end.

Back to the design. System fonts! Something I’ve not done in years, being all web-font focussed these days. Another trip through the wombat warren of devices, operating systems, CSS declarations. It’s crazy impressive how deep people go in exploring this stuff. Emile Blue! A bit like International Klein Blue, and a bit like Web / HTML 4.01 Blue. But not! We worked this in with a very dark grey and very slightly off-white, bringing in and throwing out additional colours, and managing in the end to sort out all the interaction visual feedback through combinations of these three — like the white text on blue background for blockquotes. Super nice.

As usual, mad props to DreamHost for I dunno how many years of hosting (it was Emile who said to me, “Frances. Use DreamHost.”), WordPress for running Emile’s old and new sites (and all of mine), and Let’s Encrypt for awesome and free HTTPS. And to Emile for giving me the pleasure of making the website of one of my favourite artists.

Emile’s new website is here: https://emilezile.com