Let’s Encrypt the Web, For Free

Getting an SSL/TLS certificate for your website has always been a hassle, as well as costly. Yet increasing the share of web traffic that is encrypted by default is important, both for website safety and for privacy (when you submit information to websites). The cost and hassle kept most non-commercial websites from using certificates. Until now: Let’s Encrypt makes it very easy to add certificates to your website. For free.

When I started using a VPS two years ago to serve as my cloud and as a Dropbox replacement, I needed a certificate to make sure the traffic to my cloud was encrypted. The VPS originally came with one, but it expired after a year. Since then I’ve used a renewing certificate from Comodo (the largest provider at the moment), which I got from my VPS provider as a lifetime service for a one-time payment. But for a range of other domains I use, both hosted on my VPS and in various hosting packages with a Dutch hosting provider, I never bothered getting an https certificate, because it was too much work and too expensive to keep up. There already were free certificates available, such as through the Israeli StartCom, which I used for one or two domains, but I never felt certain it was secure as a service (it turns out it’s small but 7th globally, and has received some serious criticism).

Symantec has a certificate problem...
Arranging and renewing certificates can be a pain, even if you’re Symantec, the world’s second-largest certificate provider. (image Lars K. Jensen, CC-BY)

Let’s Encrypt changes all that. They are strongly community driven, with support from among others the Electronic Frontier Foundation, and they are going the route of getting their root certificate independently recognized, to become a full certificate authority. Currently they use IdenTrust’s (5th globally) existing trusted root certificates, but the Let’s Encrypt root certificate has now been recognized by Mozilla, and they’re working to get it recognized by Google, Apple, Microsoft, Oracle et al. This would increase the independence of Let’s Encrypt. Let’s Encrypt says the growth rate of https traffic has quadrupled since the end of 2015, in part through their efforts. Their certificates are used on over 8 million websites now.

I’ve added a range of my own sites to those 8 million. For the domains on my own VPS that didn’t have valid certificates yet, installation was easy. I used SSLforFree to generate the Let’s Encrypt certificates, after providing proof that I have full control over the domains I seek to protect. Then I added the certificates to the domains through the WHM control panel of my server. Certificates are valid for 90 days, but I can set them to auto-renew, although I haven’t done that yet.
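Auto-renewal would typically be a scheduled job on the server. A minimal sketch, assuming the certbot client is installed (I used SSLforFree and WHM instead, so this crontab entry is an illustrative alternative, not what I actually run):

```text
# Hypothetical crontab entry: attempt renewal twice a day.
# certbot only renews certificates that are close to expiry,
# so running it often is harmless.
0 3,15 * * * certbot renew --quiet
```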

For the domains not hosted on my VPS, such as the one for this blog, I depend on my Dutch hosting provider (I don’t have root access to install certificates myself, although I have full control over the domains, such as their DNS settings). Luckily, they recently started offering auto-renewing Let’s Encrypt certificates (link in Dutch) as a free service for each of the domains you host with them, because they recognize the importance of secure web traffic. All it took was opening a ticket with them, listing the domains I was requesting certificates for. Within two hours eleven certificates were created and installed.

So, from now on you can get my blogpostings from https://zylstra.org/blog.

this blog now with https

On Agency pt. 2: The Elements of Networked Agency

Earlier this year I wrote the first posting of three about agency, and I started with describing how a key affordance is the distributedness that internet and digitisation bring. A key affordance we don’t fully use or realize yet.
I am convinced that embracing distributed technology and distributed methods and processes allows for an enormous increase in agency. A slightly different agency though: networked agency.

Lack of agency as poverty and powerlessness
Many people currently feel deprived of agency or even powerless in the face of the fall-out of issues originating in systems or institutions over which they have no influence. Things like the financial system and pensions, climate change impact, affordable urban housing, technology pushing the less skilled out of jobs, etc. Many people vaguely feel that much is wrong or close to failing, but see no apparent personal path of action in the face of it.

In response to this feeling of being powerless or without any options to act, there is fertile ground for reactionary and populist movements, which promise a lot but, as always, at best are incapable of delivering and at worst are a downright con or power play. Lashing out that way at least brings temporary emotional relief, but beyond that it only makes things worse.

In that sense creating agency is the primary radical political standpoint one can take.
Lack of agency I view as a form of poverty. It has never been easier to create contacts outside of your regular environment, and it has never been easier to tap into knowledge from elsewhere. There are all kinds of technologies, initiatives and emerging groups that can provide new agency, based on those new connections and knowledge resources. But they’re often invisible, have a barrier to entry, or don’t know how to scale. It means that many suffering from agency poverty actually have a variety of options at their fingertips, but without realizing it, or without the resources (be it time, tools, or money) to embrace them. That makes us poor, and poor people make poor choices, because other pathways are unattainable. We’re thirsty for agency, and luckily that agency is within our grasp.

Agency in the networked age is different in two ways
The agency within our grasp is however slightly different in two ways from what I think agency looked like before.

Different in what the relevant unit of agency is
The first way in which it is different is what the relevant unit of agency is.
Agency in our networked age, enabling us to confront the complexity of the issues we face, isn’t just individual agency, nor does it mean mass political mobilisation to change our institutions. Agency in a distributed and networked complex world comes from the combination of individuals and the social contexts and groupings they are part of, their meaningful relations in a context.

It sees both groups and small scale networks as well as each individual that is a node in them as the relevant units to look at. Individuals can’t address complexity, mass movements can’t address it either. But you and I within the context of our meaningful relationships around us can. Not: how can I improve my quality of life? Not: how can I change city government to improve my neighborhood? But: what can I do with my neighbours to improve my neighborhood, and through that my own quality of life?
There are many contexts imaginable where this notion of me and my relevant group simultaneously being the appropriate unit of scale for agency applies:

  • Me and my colleagues, me and my team
  • Me and my remote colleagues
  • Me on my street, on my block
  • Me in my part of town
  • Me and the association I am a member of
  • Me and the local exchange trading group
  • Me and my production coop
  • Me and my trading or buying coop
  • Me and my peer network(s)
  • Me and my coworking space
  • Me in an event space
  • Me and my home
  • Me in my car on the road
  • Me traveling multi-modal
  • Me and my communities of interest
  • Me and my nuclear family
  • Me and my extended (geographically distributed) family
  • Me and my dearest
  • Me and my closest friends

agency comes from both the individual and immediate group level (photo JD Hancock, CC-BY)

For each of these social contexts you can think about which impact on which issues is of value, what can be done to create that impact in a way that is ‘local’ to you and the specific social context concerned.

Different in how agency is constituted based on type of impact
Impact can come in different shades and varieties, and that is the second way in which my working definition of agency is different. Impact can be the result of striking power, where you and your social context create something constructively. Impact can take the form of resilience, where you and your social context find ways to mitigate the fall-out of events or emergencies propagating from beyond that social context. Impact can be agility, where you and your social context are able to detect, assess and anticipate emerging change and respond to it.

So agency becomes the aggregate of striking power, resilience and agility that you and your social context individually and collectively can deliver to yourself, by making use of the potential that distributedness and being networked creates.
Whether that is strengthening local community, acting locally on global concerns, increasing resilience, leveraging and sharing group assets, cooperatively creating infrastructure, creating mutual support structures, scaffolding new systems, or shielding against broken or failing systems: in short, building your own distributed and networked living.

Designing for agency
For each of those contexts and desired impacts you can think about and design the (virtual and real) spaces you need to create, the value you seek, the levels of engagement you can/should accommodate, the balancing of safety and excitement you desire, the balance you need between local network density and long distance connections for exposure to other knowledge and perspectives, the ways you want to increase the likelihood of serendipity or make space for multiple parallel experimenting, the way you deal with evolution in the social context concerned, and the rhythms you keep and facilitate.

The tools that enable agency
To be able to organize and mobilise for this, we need to tap into two types of enabling technology that help us embrace the distributedness and connectedness I described in part 1: the ‘techie’ technology, comprising hard- and software tools, and the ‘soft’ technology, consisting of social processes, methods and attitudes.
What types of technologies fit that description, and what those technologies need to be like to have low enough adoption thresholds to be conducive to increased agency, is the topic of part 3.

Which City To Live and Work for a Month in 2017?

In the past years Elmine and I have visited different cities for a longer time, to experience what it is like to live there. For a month, sometimes shorter, sometimes longer, we would stay in a city and work from there, seeking out local entrepreneurs, while also enjoying the local food, coffee, and art on offer. Exposing ourselves to a different environment, but not in a touristic capacity, provides inspiration and generates new insights and ideas. We spent extended stays in Vancouver, Copenhagen, Helsinki, Berlin, Cambridge and Lucca, and are now exploring which city to set up camp in during the summer or fall of 2017. As I did in 2013, I asked around for suggestions, this time on Facebook. I got a long list of responses, which likely makes filtering, and ultimately choosing, a project in itself.

For us, for a city to qualify as a candidate it needs to be in Europe (as we want to drive there by car, given we are bringing our young daughter plus all the gear that entails), needs to have something to offer in terms of culture, and food, and good places to hang out in, but above all needs to have a few communities around new tech, start-ups, or other topics that we are interested in. This is because we want to seek out new conversations and connections (such as when I organized the first Danish Data Drinks in Copenhagen in 2012).

Here are the (over 50!) suggestions we received, on a map:

Or see the list.

What would you like me to write more about?

Design Museum
Something to aspire to

A few years ago Elmine and I wrote a short e-book on how to organize an unconference as a birthday party (PDF linked on the right). Since then I’ve regularly entertained the idea of writing another e-book, but it never really happened. While I do have some topics I’d like to write about, I find my knowledge of them still too limited to come up with a narrative that shares anything worthwhile. There are also doubts (fears?) about what type of things would have a potential readership.

So this week I decided to ask:

What would you like to see me write more or more extensively about?

Already I got a range of responses, and it is an intriguing list. Some suggestions are about aspects of my own journey, others are about topics that I don’t know much (or anything) about, but where apparently there’s interest in my take on them. Some come close to topics I already want to write more about, but for which I feel I haven’t found an angle yet.

Here’s the list so far. More suggestions and thoughts are welcome.

  • Optimal unfamiliarity (a phrase I coined in 2004, initially to describe what mix of people makes a great event audience to be part of, but which has become a design principle in how I try to collect information and learn), suggested by Piers Young
  • An epistolary travel log novella (something that could arise from my 14 years of blogging about my travels and work), suggested by Georges Labreche
  • Open currencies (which Google tells me they have no meaningful results for, but which connects to my experience with LETS, and chimes with free currencies in p2p networks), suggested by Pedro Custodio
  • Moderating sessions with a mix of analog and digital tools (closely connected to my thoughts about fruitful information strategies in social contexts), suggested by Oliver Gassner
  • Fatherhood (as I became one 9 weeks ago, but I don’t think 9 weeks counts as experience), suggested by Dries Krens
  • Motivating others to act on open data (a large chunk of my work), suggested by Gerrit Eicker
  • Being a European in the digital age (which I strongly claim to be), suggested by Alipasha Foroughi
  • Convincing profit oriented organisations of the value of open access and responsible research (comes close to Gerrit’s point), suggested by Johnny Søraker
  • How and why I left my job (being employed by Dries mentioned above), suggested by Rob Paterson
  • The journey from my involvement in knowledge management and early blogging, to where I am now, and how it impacted the way Elmine and I arrange our lives (lots to unpack here!), suggested by Jon Husband (who, like Rob Paterson, has been part and witness of that journey over many years)
  • The proliferation of means of communication versus the quality of communication (for me this points to information strategies on focus, filtering etc.), suggested by Jos Eikhout
  • Personal information strategies and processes using open source tools (something I blogged often about in various shapes and forms), suggested by Terry Frazier, a fellow blogger on knowledge management back when I started blogging in 2002

Looking at who responded is already in a way a manifestation of some of the suggested topics (the journey, the information strategies, the optimal unfamiliarity, facilitating communities).

I can’t promise I’ll write about all of the things suggested, but I appreciate the breadth and scope of this list and the feedback I can unpack from it. More suggestions are very welcome.

Archiving Mail in MySQL with MAMP and Mailsteward

As I am moving out of Gmail, I had to find a way to deal with the 21GB mail archive from the past 12 years.

Google lets you export all your data from its various services, including email. After a day or so you get a download link that contains all your mail in one single file in MBOX format.

MBOX is a text format, so it can be searched, but that would only tell you that what you are looking for is somewhere in that 21GB file.

I could also import it into my mail client as a local archive, by dropping the MBOX file into Thunderbird’s Local Folders with Finder. That provides me with similar access and search capability as I had for all that mail in Gmail. However, if I want to do more with my archive, mine it for things, and re-use material by piping it into other workflows, having it in Thunderbird would not be enough.

Mailsteward puts MBOX into MySQL
So I searched for a way to more radically open my archive up to search. I came across DevonThink, but that seemed a bit overkill as it does so much more than merely digesting a mail archive, and as such provides way too much overlap with my Evernote. (Although I may rethink that in the future, if I decide to also move out of Evernote, as after Gmail it is my biggest third party service that contains lots of valuable information.) I looked for something simpler, that just does what I need, putting e-mail into sql, and that is how I found Mailsteward Pro.

There are three versions of Mailsteward, and I needed the Pro version, as it is the one that works with MySQL and can thus handle the volume of mail in my archive. It costs a one-time $99. Not cheap, but as I was paying Google for storage as well, over time it pays for itself.

Installing Mailsteward
When installing Mailsteward it assumes you already have a MySQL server running on your system. I use MAMP Pro on my laptop as a local web and MySQL server, on which I run various things locally, like a blog-based journal and a self-assessment survey tool. MAMP Pro is very easy to install.

You need to take the following steps to allow Mailsteward access to MySQL. In MAMP Pro you need to allow external access to MySQL, but only from within your own system (this basically means that applications other than MAMP can access the MySQL server).

Screenshot 2016-07-19 at 16.37.07

Then you create a new database via the phpMyAdmin that comes with MAMP. Mailsteward will populate it with the right tables. In my case I aptly named it mailarchives.

Screenshot 2016-07-19 at 10.48.16

Within Mailsteward you then add a connection, specifying the database you created, the right ports, etc. Note that the socket it requests isn’t an actual file on your system, but it does need to point to the right folder within the MAMP installation, which is the Applications/MAMP/tmp/mysql folder.

Screenshot 2016-07-19 at 08.41.51
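For reference, with MAMP’s defaults the connection values typically look something like this (these specific values are assumptions on my part, not taken from my actual setup; verify them in MAMP’s own MySQL panel):

```text
host:     127.0.0.1
port:     8889                                    (MAMP's default MySQL port)
socket:   /Applications/MAMP/tmp/mysql/mysql.sock
database: mailarchives                            (the database created above)
```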

Importing MBOX files
I first tested Mailsteward with my parents’ e-mail archive, which I kept after they passed away last year, to be able to find contact details of their friends. It imported fine. Then I tried to import my Gmail MBOX file. It turns out 21GB is too large for Mailsteward to handle in one go, as it eats up all memory on your Mac. I concluded that I needed to split my Gmail MBOX file into multiple smaller ones.

Luckily there is a working script on GitHub that chops MBOX files up into smaller ones, and it allows you to set the file size you want. I chopped the Gmail MBOX into 21 smaller files of 1GB each. These imported fine into Mailsteward. Mailsteward maintains tags and conversation threads.

To run the script, first open it in a text editor and change the file size limit to what you want (the default is 40MB, I changed it to 1GB). Then open Terminal and run the script by typing the following command, where the destination folder does not need to exist:

sudo php mbox_splitter.php yourarchivename.mbox yourdestinationfolder


That way you end up with a folder that contains all the smaller MBOX files:

Screenshot 2016-07-22 at 16.06.53
Using Mailsteward’s import feature you then add each of those files by hand (but luckily you only need to do that once).
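For those who’d rather not edit the PHP script, the same chopping can be sketched with Python’s standard library mailbox module. This is an illustrative alternative, not the script I actually used; the part-file naming is an arbitrary choice of mine:

```python
import mailbox
import os

def split_mbox(src_path, dest_dir, max_bytes=1_000_000_000):
    """Split one large MBOX file into numbered smaller MBOX files,
    each staying roughly under max_bytes. Returns the file count."""
    os.makedirs(dest_dir, exist_ok=True)
    part, size = 0, 0
    out = mailbox.mbox(os.path.join(dest_dir, f"part{part:03d}.mbox"))
    for msg in mailbox.mbox(src_path):
        raw_len = len(msg.as_bytes())
        # Start a new output file once the current one would exceed the limit
        if size and size + raw_len > max_bytes:
            out.flush()
            out.close()
            part, size = part + 1, 0
            out = mailbox.mbox(os.path.join(dest_dir, f"part{part:03d}.mbox"))
        out.add(msg)
        size += raw_len
    out.flush()
    out.close()
    return part + 1
```

Calling split_mbox("yourarchivename.mbox", "yourdestinationfolder") would then yield a folder of 1GB parts for Mailsteward to import, much like the PHP script does.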

Using the archive
Mailsteward allows you to search the archive through its rather simple and bland interface, but you can also tweak the MySQL queries it creates yourself. The additional advantage of having it in MySQL is that I can also access the archive with other tools to search it.

Screenshot: Mailsteward

Adding newer mail to the archive
Thunderbird allows me to export e-mail as MBOX files via the Import/Export add-on, which can then be added to the archive by Mailsteward. So that’s a straightforward operation. Likely I can automate it and schedule it to run every month.

How to leave Gmail

Leaving Gmail, a tough question
In the past two years I have been slowly reconfiguring my online routines to increase privacy safeguards, and bring more of my data under my own control, while avoiding making my work routines more difficult and thus less routine. How to create an e-mail workflow that does not rely on Gmail has been the hardest part of this effort. I think I now finally have figured out how to do it without loss of convenience, and hope to have made the switch after I finish exporting all e-mail data Google has from me.

mailinbox
After 12 years this will no longer be a familiar sight for me

Previous steps I took
Some things I already did to increase my control over my own data are:

That’s not to say I now use nothing but my own stuff; I am still a heavy user of various services, like Evernote for instance, or my Android phone. But my usage of third party services has become more varied and spread out, reducing the impact of losing any one of them.

Why I want to leave Gmail
The net is a distributed place, and our information strategies and routines should embrace that distributedness. In practice however we often end up in various silos and walled gardens, because they are so very convenient to use, although they actually decrease our own control and/or introduce single points of failure. If your Facebook account gets suspended can you still interact with others? If your Google account gets suspended, do you still know how to reach people? Using Gmail also means all of my stuff resides on servers falling under the not very privacy sensitive US laws.

Since July 2004, however, I have relied completely on Gmail. It is an easy way to combine the various e-mail addresses I use into one single inbox (or rather multiple inboxes, on the basis of follow-up actions), and it has great tagging, search and filtering, so that you never need to file anything or sort mail into folders. I have used Gmail as my central inbox for everything. Since 2004 I have accumulated about 770,000 emails in 249,000 conversations, for a total of 21GB. Gmail is therefore the largest potential single point of failure in my information processing.

The issues to solve
To wean myself off Gmail there were several things for which I needed a similarly smooth working alternative:

  • All the mail addresses I use need to come together into a single mailbox, and conversations need to be threaded
  • Availability across devices, and via webmail. Especially on the road I use my phone for quick e-mail triage, and as alternative for phone calls. Webmail is my general purpose access point on my laptop while traveling
  • Having access to my full mail archive for search and retrieval
  • Excellent tagging and filtering possibilities

The steps I took to leave Gmail
Finding a path away from Gmail took two realisations, one about process and one about technology.

Changing my process
Concerning process, I realized that Gmail allows me, or even invites me, to be very lazy in my e-mail processing routines. Because of the limitless storage I merely needed to be able to find things again (through the use of tags for instance), and never really needed to decide what to do with an e-mail.

This means, for instance, that lots of attachments only live on in my mailbox, without me adding them to relevant project documentation etc. Likely I have spent hours in the past years searching for slide decks in my mountain of e-mail, instead of spending half a minute once to store an attachment in a more logical place where I’m more likely to find it with desktop search, or serendipitously bump into it, and then throw the mail message out. So mail processing has to become a much less lazy process, with a few more active decisions in handling messages. E.g. attachments go into a project folder, contact info into contacts, bookkeeping-related messages to bookkeeping (no longer going through all mail tagged bookkeeping every quarter to do my taxes), and tasks and actions into my Things todo application. I already wrote several AppleScripts to let my todo app and Evernote talk to various other software packages (like Tinderbox), and it is now likely I will write a few more to automate mail message processing further (because I prefer to still keep my process as lazy as possible).
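To give an idea of what such automation could look like, here is a minimal Python sketch, using only the standard library, that pulls all attachments out of an exported MBOX file into a project folder. My actual scripts are AppleScript, so this function is hypothetical, not something from my workflow:

```python
import mailbox
import os

def save_attachments(mbox_path, dest_dir):
    """Copy every file attachment found in an MBOX archive into dest_dir,
    so documents live in the filesystem instead of only inside mail."""
    os.makedirs(dest_dir, exist_ok=True)
    saved = []
    for msg in mailbox.mbox(mbox_path):
        for part in msg.walk():
            fname = part.get_filename()
            if not fname:
                continue  # this part is not an attachment
            payload = part.get_payload(decode=True)
            if payload is None:
                continue  # e.g. a multipart container
            target = os.path.join(dest_dir, os.path.basename(fname))
            with open(target, "wb") as f:
                f.write(payload)
            saved.append(target)
    return saved
```

After which the original messages can be thrown out, with the slide decks and documents now findable via desktop search.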

Changing my tools
A second key realization was that my original reasons for staying within webmail had meanwhile been solved with better technology: it used to be that only Gmail provided the cross-device access to all my mail accounts simultaneously, something I could not easily do in 2004 with a desk/laptop mail client in combination with a mobile mail client. Now, with much broader IMAP support (not just by my software tools, but also by hosting companies) this is much easier, increasing the range of possible alternatives. Threading mail conversations is now also a more universal feature.

This allowed me to start using the Thunderbird mail client, including PGP encryption, on my laptop (I had never intensively used a mail client on my laptop before), in combination with the open source K-9 Android mail app (replacing the Gmail app for me), also with encryption options. Both allow tagging of messages, and Thunderbird allows filtering not just of incoming mail but also when sending and when archiving, which is really useful.

As an alternative to piping all my mail accounts into Gmail, I now use the real inboxes of those mail accounts where they’re originally hosted, and use IMAP to combine them into one user interface on my laptop and mobile. Those separate mailboxes do have lower storage limits (usually 500MB), so it is more likely I bump into limits, and that is the reason I need a much less lazy mail processing routine (especially concerning larger attachments), in which I regularly archive older mail.

Separately I also now use a different webmail provider, Protonmail in Switzerland, that comes with default encryption. I’ve attached a domain name to it (zylstra.eu).

The archiving issue
The above shows how, going forward, leaving Gmail can be done by changing process and tools, solving the one-inbox and multiple-device issues. That leaves the question of how to deal with the 21GB of mail archive from the past 12 years. Leaving it all in Gmail, and using that as an archive, might be a work-around for old mail, but doesn’t help me with future mail. I could add it as a local folder in the Thunderbird mail client, but that thought did not appeal to me and feels clunky. I find that I never use my mail archive from my mobile, so the archive does not need to be cloud based per se. So I opted to keep my mail archive local, by storing it in a MySQL database. This allows for query-based searches, and even text mining, without clogging up my mail client itself. Gmail can export your archive as a single MBOX file, and I used Mailsteward Pro to transform it into a MySQL database. (More on that set-up in the next posting, Archiving Mail in MySQL with MAMP and Mailsteward.) With the archive now locally stored, the database is backed up to both my NAS drive and my VPS.

What remains
With the basic set-up for leaving Gmail now in place, there is still work to be done over the coming months. Clearing out the archive at Gmail is one step, once I feel comfortable searching my new MySQL archive. Creating more filters in my mail client, and writing a few scripts to integrate my mail processing with the other tools I use, is another. There are also likely a whole bunch of things (accounts, subscriptions etc.) that use my Gmail address, which I will change as I go along.

My longtime blogging friend Roland Tanglao suggested mining my mail archive for things that could be published, contact data, and old ideas that can feed into my work now. This sounds appealing but needs some contemplation and then a plan. Having the archive in MySQL makes it a lot easier to come up with a plan though.

Beyond mail, there are of course more Google services I use heavily, especially Calendar, which is tied to my Gmail address. I could move that to my ownCloud as well. I will keep my Google account, as this isn’t about ditching Google but about reducing risks and taking more control. Apart from Calendar there are no other single points of failure in the way I use my Google account. Beyond Google, Evernote is another silo I’m heavily invested in, and the content I keep there is arguably more valuable to me than my Gmail. So that is a future change to think about and seek alternatives for.

Inbox 0 is for Losers
I reached Inbox -1 on Gmail once in 2009 🙂

[Find the outline and slides of my Koppelting session on leaving Gmail in the follow-up posting at https://tzyl.eu/leavegarden. You can use the shortlink https://tzyl.eu/gmail to refer to this posting.]

Near Future SF Reading List: Explore Emerging Future Together

Gogbot 2015: Google’s AI Dreams
The dreams of Google’s artificial intelligence

I read lots of science fiction, because it allows exploring the impact of science and technology on our society, and the impact of our societies on technology development, in ways and forms that philosophy of technology usually doesn’t. Or rather, SF (when it is not just the backdrop for some other story) is a more entertaining and accessible form of hermeneutic exercise, one that weaves rich tapestries including emotions, psychology and social complexity. Reading SF wasn’t always more than entertainment for me, but at some point I caught up with SF, or it caught up with me, when SF started to be about technologies I have some working knowledge of.

Bryan Alexander, an online peer and friend for well over a decade, likewise sees SF, especially near future SF, as a good way to explore emerging futures that already seem almost possible. He writes: “In a recent talk at the New Media Consortium’s 2016 conference, I recommended that education and technology professionals pay strong attention to science fiction, and folks got excited, wanting recommendations. So I’ve assembled some (below)“. His list is a group-sourced overview of recent near future SF books, with some 25 titles.

I know and have read half of the books on the list, and last night I loaded up my e-reader with the other half.

If you want to discuss those books keep an eye on Bryan’s blog, as you’re sure to get some good conversations around these books there.

Gogbot 2015: Google’s AI Dreams
The dreams of Google’s artificial intelligence

(photos taken during the 2015 Gogbot Festival, the yearly mash-up of art, music and technology into a cyberpunk festival in my home town Enschede)

Related: Enjoying Indie SF, March 2016

Original social media needs still unmet

My friend Peter Rukavina blogged how he will no longer push his blogpostings to Facebook and Twitter. The key reason is that he no longer wants to feed the commercial data-addicts that they are, and really wants to be in control of his own online representation: his website is where we can find him in the various facets he likes to share with us.

Climbing the Wall
Attempting to scale the walls of the gardens like FB that we lock ourselves into

This is something I often think about, without coming to a real conclusion or course of action. Yes, I share Peter’s sentiments concerning Facebook and Twitter, and how everything we do there just feeds their marketing engines. And yes, in the past two years I have purposefully taken various steps to increase my own control over my data, as well as build new and stronger privacy safeguards. Yet my FB usage has not been impacted by that; in fact, I know I use it more intensively than a few years ago.

Peter uses his blog differently than I do, in that he posts much more about all the various facets of himself in the same spot. In fact, that is what makes his blog so worthwhile to follow: the mixture of technology how-to’s and philosophical musings, very much integrated with the daily routines of getting coffee, helping out a local retailer, or buying a window ventilator. It makes the technology applicable, and turns his daily routines into a testing ground for it. I love that, and the authentic, real impact it creates where he lives. With my blog, I’ve always more or less only published things of professional interest, which, because I don’t talk about clients or my personal life per se, remain abstract thinking-out-loud pieces that likely provide little direct applicability. I use Twitter to broadcast what I write. In contrast, I use FB to also post the smaller, more personal things. If you follow me on Facebook you get a more complete picture of my everyday activities, and random samplings of what I read, like and care about beyond my work.

To me FB, while certainly exploiting my data, is a ‘safer’ space for that (or at least it succeeds in pretending to be), to the extent that it allows me to limit the visibility of my postings. The ability to determine who can see my FB postings (friends, friends of friends, public) is something I use intensively (although I don’t have my FB contacts grouped into different layers, as I could). Now I could post Tumblr-like on my own blog, but I would not be able to limit the visibility of that material (other than by virtue of no-one bothering to visit my site). That my own blog content is often abstract is partly because it is all publicly available. To share the other things I do, I would want to be able to determine their initial social distribution.

That, I think, is the thing I would like to solve: can I shape my publications and sharings in much the same way I shape my feed-reading habits, in circles of increasing social distance? This is the original need I have for social media, and one I have had for a very long time, basically since social media were still just blogs and wikis. Already in 2006 (building on postings about my information strategies in 2005) I did a session with Boris Mann at Brussels Barcamp on putting the social in social media front and center, where I listed the following needs, all centered around letting social distance and the quality of relationships play a role in publishing and sharing material:

  • tools that put people at the center (make social software even more social)
  • tools that let me do social network analysis and navigate based on that (as I already called for at GOR 2006)
  • tools that use the principles of community building as principles of tool design (an idea I had writing my contribution to BlogTalk Reloaded)
  • tools that look at relationships in terms of social distance (far, close, layers in between) and not in terms of communication channels (broadcasting, 1 to 1, and many to many)
  • tools that allow me to shield or disclose information based on the depth of a relationship, relative to the current content
  • tools that let me flow easily from one to another, because the tools are the channels of communication. Human relationships don’t stick to channels, they flow through multiple ones simultaneously and they change channels over time.

All of these are as yet unsolved in a distributed way, with the only current option being to lock myself into some walled garden, running up the cost of moving outside those walls with every single thing I post there. Despite the promise of the distributed net, we still end up in centralized silos, until the day our social needs are finally met in distributed ways by our social media tools.
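The circles-of-social-distance idea from the list above can be sketched minimally in code. This is my own illustration only: the four-layer model, the names, and the comparison rule are assumptions for the sake of the example, not the API of any existing tool.

```python
from dataclasses import dataclass
from enum import IntEnum


class Circle(IntEnum):
    """Layers of social distance, closest first."""
    CLOSE_FRIENDS = 1
    FRIENDS = 2
    ACQUAINTANCES = 3
    PUBLIC = 4


@dataclass
class Post:
    text: str
    audience: Circle  # the widest circle allowed to see this post


@dataclass
class Contact:
    name: str
    circle: Circle


def visible_posts(posts, contact):
    """A contact sees every post whose audience extends out to their circle."""
    return [p for p in posts if contact.circle <= p.audience]


posts = [
    Post("Thinking-out-loud piece", Circle.PUBLIC),
    Post("Family photos", Circle.CLOSE_FRIENDS),
]
peter = Contact("Peter", Circle.FRIENDS)
print([p.text for p in visible_posts(posts, peter)])
# -> ['Thinking-out-loud piece']
```

The point of the sketch is that visibility becomes a property of the relationship, not of the channel: the same feed serves each reader a different slice, which is exactly what a single public blog cannot do today.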

On the need for distributedness and self-reliance

I came across this Guardian article describing how an American author and artist found his Google account deleted, including his 14-year-old blog hosted on Google’s Blogger platform.

Screenshot of removed blog message

To me this incident is notable in a few ways.

  • The author concerned had his blog up for 14 years, and even used it to write and keep manuscripts, so clearly it was a key online asset to him.
  • For such a key asset, using a free service is a risk, as it provides no certainty of continuity.
  • Blogger, as a free service, comes with terms of service that allow Google to withdraw the service at any moment. You don’t have a ‘right’ to this service.
  • After the account was closed, it was impossible to actually contact Google to ask why and how, or whether it could be reinstated.
  • The author concerned feels he is being censored (which in a literal sense is impossible, as only governments can censor), although it is more likely the account was closed because of a breach of the terms of service (which are notoriously unevenly enforced on every platform).
  • The author didn’t keep back-ups.

All of this once again highlights the importance of embracing the distributedness of the internet. You have to make sure that you are not just a passive, consuming part of it, but that for the things that are important to you, you are willing to keep them under as much of your own control as possible. Your blog is only yours if you have control over the infrastructure it runs on. The same is true for e-mail, which in the case mentioned above was also lost: make sure you have full control over at least one domain name, at which you can also receive and send e-mail (you@yourdomain.tld).

This, in short, means you need to make sure you have a claim to the service you actually need. Blogger offers free hosting but can take it away. If you want your blog to persist, pay for hosting, and run it on a domain you control. I used Blogger when I started blogging in November 2002 (around the same time, in fact, as the artist’s blog that was deleted), but once I realized I was likely to continue writing, I moved it after a few months to a paid hosting package I could more fully control, and to a URL I acquired separately from the hosting, also under my full control. That doesn’t mean nothing can happen (my blog was hacked once), but it does mean I can recover from it.
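Keeping the back-ups that recovery depends on is easy to automate. As a minimal sketch (the paths, the dated-tarball naming, and the 30-archive retention are my own assumptions, not any hosting provider’s tooling), a small POSIX shell function you could run daily from cron:

```shell
#!/bin/sh
# Minimal back-up sketch: archive a blog's files into a dated tarball,
# keeping a limited number of older archives around.
backup_blog() {
    site_dir=$1      # directory your blog lives in, e.g. ~/www/blog
    backup_dir=$2    # where to collect the archives, e.g. ~/backups

    mkdir -p "$backup_dir"
    stamp=$(date +%Y-%m-%d)
    tar -czf "$backup_dir/blog-$stamp.tar.gz" \
        -C "$(dirname "$site_dir")" "$(basename "$site_dir")"

    # Simple retention: keep only the 30 most recent archives.
    ls -1t "$backup_dir"/blog-*.tar.gz 2>/dev/null | tail -n +31 |
        while read -r old; do rm -f "$old"; done
}

# Example: backup_blog "$HOME/www/blog" "$HOME/backups"
```

Copy the archive directory somewhere off the server as well; a back-up that lives only on the machine it protects doesn’t help when that machine is the thing that fails.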

The web was built in a distributed fashion. If you use it in a centralized way, by relying on large centralized services, you expose yourself to vulnerabilities. That is true for centralized free blogging platforms like Blogger.com or WordPress.com, and for all those other services such as Facebook, Flickr and whatnot. Don’t make yourself dependent; don’t put yourself in a position with a single point of failure.

Arsonists Walk Among Us

Playing politically on base emotions has consequences. Choice of words has consequences. It does not make the fearmongers and populists directly or criminally responsible, but it does come with moral responsibility. If you consistently fan emotional flames, you bear moral responsibility for the resulting sparks and ‘singular, unconnected’ fires. What British radio host James O’Brien says in the fragment embedded above about the UK is just as true in Germany, France, the Netherlands, Belgium, Hungary, Poland, Austria, etc. I share his deep frustration.

The arsonists walk among us, pretending to bring common sense and empathy, because “one should be allowed to say this after all, and high time too”. They don’t go by the names of Schmitz or Eisenring, but it doesn’t take Max Frisch to point them out. The arsonists walk among us, pretending it is some mythical Other that will take “what is Ours” and burn our house and institutions down. The arsonists walk among us, luring us with reactionary nostalgia for a country and a time that never existed. It will be those arsonists, however, who end up setting things alight, not any ‘Other’.

The question is how much of a Herr Biedermann I will be, you will be, we will be, before we learn to send the arsonists packing.

Do we even know anymore how to do that?

The Burning of the Houses of Parliament, October 16, 1834, by Turner
The Burning of the Houses of Parliament, Oct 16 1834, by J M W Turner. Image by Pete Jelliffe, CC-BY-SA