Some musings on a simple, private cloud

Yesterday Google announced they were killing Reader, causing wailing, gnashing of teeth, and a general internet meltdown; it trended on Twitter longer and harder than the new pope. supernaut has been around for longer than all three (previous pope included) and will probably outlive the remaining two, which has given me a little to think about as I simultaneously bash the command line in search of data anti-impermanence.

Despite my current roll of 300-something feeds, which I use to stay barely coherent on everything from astronomy to China, theatre, black metal, and porn, I’m not especially sad about the demise of Reader: there was a time – pre-multi-device syncing – when it was unnecessary, and I hope the coming weeks will make that so again; monoculture leads to ecosystem collapse and so on. What it does denote is the precariousness of engaging with any platform that holds our data, and the urgent need for alternatives that don’t demand an open-source / GitHub mentality.

And considering I do the majority of my work in or around open-source and things that live on GitHub and other similar platforms, I can say that unless I am more useless than I can possibly imagine, the state of things is diabolical.

Some years ago, when my first webhost screwed up my domain registration and lost it, I moved (thanks to Emile) to DreamHost, where I’ve been signing up others ever since. As far as cheap, reliable, ethical webhosting goes, I think they’re about as good as it gets – yes, even with the occasional downtime and other problems; I think they take what they do seriously enough that things would have to go very wrong for me to decide to move.

Late last year they began to offer DreamObjects, which is “an inexpensive, scalable, reliable object storage service for web and app developers and tech-savvy individuals” – basically whopping great amounts of secure hard drive. My first thought was, “Ooo, I could back up my entire laptop to that! … if it were a bit cheaper …” – which it duly became: at 2¢ a gig, my 500 gig hard drive works out to under 10,-€ a month, so it’s something of a no-brainer in terms of, “Yes, this is affordable.”
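The back-of-the-envelope sum, for anyone checking (the 2¢-per-gig rate is DreamObjects’ quoted pricing; the rest is just arithmetic):

```shell
# Monthly cost of parking a 500 GB drive on DreamObjects
# at the quoted rate of 2 cents per GB per month.
size_gb=500
rate_cents_per_gb=2
monthly_cents=$((size_gb * rate_cents_per_gb))   # 1000 cents
echo "~\$$((monthly_cents / 100)) a month"       # prints: ~$10 a month
```

Ten dollars, which at current exchange rates sits comfortably under the 10,-€ mark.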

Affordable, but with a conditional clause: ‘tech-savvy’. And now we return to the open-source mentality. I’m not especially tech literate. I couldn’t write a bash script from scratch to save my life, but I can scrape things together from various similar examples, and usually my “repeat until unbroken” approach works, combined with trawling Stack Overflow for error-message solutions. So I approached DreamObjects with some fairly specific ideas of how I wanted to use it, and was prepared for a degree of gargling the command line.

Oh dear, did I gag.

Anyone who knows me and has a Mac has been pestered to do their backups (anyone using Linux has by definition already done this, and Windows users can sod off), and over the years I’ve tried everything: all manner of syncing, local and remote backups, encrypted sparse disc images, rsync from the command line via SSH, TimeMachine and SuperDuper! and more that I’ve forgotten. I’ve also attempted various “in the cloud” services, which I’ve avoided for three reasons:

First, the pricing is not commensurate with the real costs of storage (effectively free); second, I don’t trust my data with anyone – I like privacy and I’m determined to stick with it as a human right; third, as evinced by Google Reader, these things tend to die. Let’s forget for the moment the fourth: uploading 500 GB of data and keeping it synced requires a certain monomaniacal effort.
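That fourth reason deserves its own sum. Assuming a sustained 1 MB/s upstream – generous for a home connection, and purely my assumption – the initial upload alone works out to:

```shell
# Time to push 500 GB up an assumed 1 MB/s uplink, running flat out.
size_mb=$((500 * 1000))        # 500 GB in (decimal) megabytes
upstream_mb_per_s=1            # assumed sustained upload rate
seconds=$((size_mb / upstream_mb_per_s))
echo "$((seconds / 86400)) days $(( (seconds % 86400) / 3600 )) hours"
# prints: 5 days 18 hours
```

Nearly six days of continuously saturated uplink before the first backup even completes. Monomaniacal indeed.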

So, DreamObjects meets the first, and conditionally the third (they’ve been around longer than I’ve been on the internet), and for the critical second … they’ve been good so far.

And off I go into the land of “backing up your life to the cloud”. Thus far I’ve tried boto_rsync, boto, s3cmd, Duplicity, Duplicati, GoodSync, xTwin, DragonDisk, Cyberduck, ownCloud … and have currently, without much satisfaction, settled on s3cmd. Most of these are command-line tools which require building and installing and a degree of ‘tech-savvy’ way beyond the average person; others are proper apps but don’t make it easy. Most are also designed for Amazon S3, so using them with DreamObjects is a bit of a hack; they don’t allow for easy exclusion of files and folders, or are otherwise generally problematic, opaque, unintuitive, or just ugly.
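For what it’s worth, the bulk of the s3cmd hackery is just pointing it away from Amazon; a minimal `~/.s3cfg` along these lines did it for me – the host names are the DreamObjects endpoint as I recall it, so verify against DreamHost’s own documentation, and the keys are obviously placeholders for the ones from your DreamHost panel:

```ini
# ~/.s3cfg – minimal settings to aim s3cmd at DreamObjects instead of S3.
# access_key / secret_key come from the DreamHost panel (placeholders here);
# host_base is the DreamObjects endpoint as best I remember it.
[default]
access_key = YOUR_DREAMOBJECTS_KEY
secret_key = YOUR_DREAMOBJECTS_SECRET
host_base = objects.dreamhost.com
host_bucket = %(bucket)s.objects.dreamhost.com
```

After that, something like `s3cmd sync --exclude '.Trash/*' ~/ s3://mybucket/laptop/` does the actual pushing – which is exactly where the complaints above about exclusions and opacity come in.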

Which is to say: for most people I know on a Mac, TimeMachine is about as complex as they can handle or want to handle, and so backing up remotely needs to add at most one more straightforward step to this.

And this is the problem: none of what I tried is remotely simple. Even allowing that these tools were intended for S3 and I was hacking them to work on DreamObjects, the amount of suffering would cause most people to give up. The complexity of owning your own data – in all instances, from Reader to backups to Facebook and Twitter – is directly responsible for driving people to free platforms where everything – security, privacy, longevity – is secondary to advertisers.

I looked at ownCloud too, which is close to my idea of a self-owned private cloud, and even that requires messing around with PHP; really, the whole rhetoric of self-controlled data in the cloud is just one giant elitist, chauvinist wank unless my (dead) Turkish grandmother could do it herself, without feeling confused or stupid.

Admittedly, DreamObjects was also a test for me, to see how possible all this is, because I’d like to be able to tell my friends, “Hey, you can back up your laptop remotely for under 10 euros a month!” So I have spent an obsessive amount of time messing around, and apps like xTwin come closest to what I think is desirable. The crucial point, though, is that there isn’t a simple way to have remote backups that aren’t bound to some company like Amazon – just as there isn’t a simple way to manage RSS feeds that isn’t bound to Google Reader.

Given that cloud storage is so cheap now, having it as a ‘part’ of my or anyone’s computer is going to become normal – it’s about where mobile phones were in 1998 – and I think some imagination is necessary to prevent a situation where this part of me is owned by a corporation with a specific agenda to make money off me, my data, and my privacy.

Using the cloud should be transparent, and backing up to it as simple as clicking an icon – or setting it up once, after which it’s automatic. My public presence, be it on the equivalent of Twitter, Facebook, G+, or Reader, should be owned and controlled by me; that is to say, while the interface of these networks might be public in the way they are now, my data remains with me and is owned by me, and I am not a saleable commodity of a corporation.

Or perhaps to put it another way, if we are ever going to have true personal and private clouds and social networks, we need the demise of Reader and we need open source and other developers to make simple software that isn’t bound to these monoculture platforms.

(s3cmd just crashed, and I feel like throwing a few hours at ownCloud…)

6 Second Repeating Flight Lufthansa from Berlin Tegel

I’ve been looking at Google Maps a lot lately, planning out ride possibilities around Tegeler Forst / Jungfernheide, or deducing where I’ve been on my morning training ride – actually OpenStreetMap is much better, but sometimes I like the satellite view. Which led me to wondering if the old jet at the far south-west end of Flughafen Tegel was also on the satellite view.

Yes! And so were many other planes. I wondered too if any private jets might be lurking – god knows why I might want to go looking for those – which led to seeing a plane just past the eastern end of the runway, which only made sense if it was actually taking off. Oh, neat, the satellite data for Google Maps caught a plane taking off! But further back on the same runway is another plane.

Certainly not also taking off, because it was physically too close, and anyway there needs to be a minute or two between takeoffs because of turbulence; so perhaps it was landing. Pretty snug arrangement of arrival and departure, then.

But then there turns out to be another plane on the same runway, further back. And another. And another. Four on the same runway. Hmm, maybe taxiing to the other runway? And scrolling a bit further right, above the grass at the end of the runway – oh, another!

So being the quick thinker I am (har), I realise it’s all the same plane, a Lufthansa Boeing 737-500 (I spent half an hour looking at Lufthansa fleet planforms to work that out), captured repeatedly in the process of taking off as the satellite made the image tiles over Flughafen Tegel: further away near Kurt-Schumacher-Damm, Scharnweberstr, Cambridgerstr, and finally vanishing just after Holländerstr.

Anyway, it was amusing for me late last night after a week where I racked up nearly 60 hours of coding and became decidedly incapable of communicating outside of a screen.

Google – turning news into art

Google is becoming a bit like an Iain M. Banks Culture Mind, sucking in information at an incomprehensible rate and adding it to a database of unsearchable proportions – perfect terrain for art to make sense of. There have been lots of net-art projects recently that use Google’s ecosystem as a data source, and several of them focus exclusively on the news feeds, where relevance is the poor second child to quantity.

newsmap won the Ars Electronica International Competition for CyberArts this year.

Newsmap is an application that depicts the permanently changing map of Google news. A visualization algorithm produces a visual representation of the enormous quantity of information gathered by Google. Beyond that, the graphic depiction enables users to get a quick overview of breaking international news and, in doing so, provides background information on its distribution and how it is processed further, and sheds light on the mechanisms of news production on a global level.

TooGle is similar but with pictures.

TooGle is a parser focused on the net’s most famous search engine. When it runs, it gets the latest news from the biggest news aggregator and uses each word of the title to retrieve an image sequence from the largest container of images on the net. The result is a short movie that uses the words of the news, the images retrieved, and the log generated during the parsing.
The animated sequence is not time-based; due to a random algorithm, the original frames become ever more remixed.
After a while the result becomes an unreadable sequence, unreadable news… there will be news no more.

googlehouse makes information overload a house you can come home to. The house is constructed in real time, with walls and rooms drawn from images sucked from the Google image search engine.