The Evolution and Role of My Agency Postings: Finding My Unifier

In the past weeks I finally wrote down the full overview of how I look at agency in our networked world, and the role of distributed technology in it (part 1, part 2, part 3). It had been a long time coming. Here is a brief overview of its origins, and why it matters to me.

Origins
I previously (in the past 18-24 months) wrote down parts of it in rants I shared with others, and as a Manifesto I wrote in January 2015 to see if I could start a hardware-oriented venture with several others. I rewrote it for draft research project proposals (the image below resulted from that in June 2015) that ultimately weren’t submitted, and as a project proposal that resulted in the experiment we will start in the fall, to see if we can turn it into a design method, which in itself will become an agency-inducing tool.

But the deeper origins are older, and suffused with everything I absorbed over time from my blogging network and the (un-)conference visits where those bloggers met, such as Reboot in Copenhagen. The first story I created around this was my 2008 presentation at Reboot 10, where I formulated my thoughts at the time on the type of attitudes, skills and tools we need in the networked age.
There I placed the new networked technology in the context of the social structures it is used in (and compared that to what came before) and what it means for people’s attitudes and skills to be able to use it in response to increased complexity. The bridge between ‘hard’ and ‘soft’ technology I mention in the three blogpostings on Agency, originates there.

The second story is my closing keynote speech at the SHiFT conference in Lisbon in 2010 (where we had to stay on for a week because of the Icelandic ash cloud closing down European airspace). I blogged the submitted talk proposal, and video and slides are also available. There I talked about doing things yourself as a literacy (where literacy in the Howard Rheingold sense implies not just a skill, but deploying that skill in the context of a community for it to be valuable), on the back of internet as our new infrastructure (an echo of Reboot 2008). I suggested that that socially embedded DIY was not just empowering in itself, but very necessary to deal with a complex networked world. Not just to be able to create value for yourself, but to be resilient in the face of ‘small world syndrome’ (the global networks finally making visible that we live on a finite world) and cascading failures that propagate at the speed of light over our networks, exposing us to things we would previously be buffered from or would have had time to prepare for. I proposed the term Maker Households as the unit where DIY literacy (i.e. skills plus community) and local resilience meet, to create a new abundance based on the technical tools and methods that the networked world brings us. I was much more optimistic then about how far those tools and methods had already lowered the barrier to entry, and merely pointed to the need to better learn to apply what is already there. I called upon the audience to use their skills and tools in the context of community, with the Maker Household as its local unit of expression. From those local units a new global economy could grow (as the root meaning of the word economy is household).

Since then these notions have been on my mind daily, but usually absorbed into everyday work. I registered the domain name makerhouseholds.eu with the intention of writing up my SHiFT talk into an e-book, but never sat down to do it and let everyday life get in its way. Over time I became ever more convinced of the importance of these notions, as incumbent institutions started to crumble more and general discontent kept rising. At the same time I realized more strongly that the needed technology was failing to create more agency beyond a circle of power-users, and that where broad adoption was taking place, it was because key affordances were being dropped in favor of ease of use and ease of business models. This became especially clear when, in 2014, I started to explore how to make myself less dependent on tools that provided convenience at the cost of exposing me to single points of failure in what should be networked and distributed, and realized how much work it is to make the tools work for you (like maintaining your own server, or leaving Gmail). That triggered the ranting I mentioned, solidifying my conviction that Maker Households should be about packaging technology in ways that make it easy for people to increase their agency, without compromising their resilience.

Personal importance: Agency as unifier
Why this long overview? Because it seems it led me to finally finding ways to express what unifies my work of the past almost 20 years. As a kid I felt everything was connected, although everyone seemed to want to put everything into discrete boxes. Internet and digitization made that connectedness real, and I’ve been fascinated with the potential and consequences of that ever since I first went online in 1989, over 25 years ago. That unifier has however been elusive to me, even as all my work has always been about making it possible for others to better understand their situation and, by using technology more purposefully, act together with their peers based on their own perceptions of needs and wants. That was what drove me towards the change management side of introducing technology in groups and organizations, and what drives my interest in dealing with complexity, informal learning networks, and the empowering aspects of various internet- and digitisation-driven technologies such as social media, digital maker machines, and open data. That unifier has often been elusive to my clients and peers as well. I regularly have people call me saying something like “I don’t understand what it is you do, but whenever I search for things I think might help, your name comes up, so I thought I’d better call you.” Increasing agency as a unifier, from which different areas of expressing it flow, may put that confusion to rest.

Agency, as unifier, also makes the ‘menu’ below the way for me to explore additional fields and activities.

Agency by Ton Zylstra

On Agency Pt 3: Technology Needs for Increased Agency

This is the last of three postings about how I see agency in our networked era.
In part 1 I discussed how embracing the distributedness that is the core design feature of the internet needs to be an engine for agency. In part 2 I discussed how agency in the networked era is about both the individual and the immediate group she’s part of, in the various contexts those groups exist in, and consists of striking power, resilience and agility. In this third part I will discuss what we need to demand from our technology.

My perception of agency more or less provides the design brief for the technology that can support it.

Agency as the design brief for technology
If distributed networks are the leading metaphor for agency, then technology needs to be like that too.

If agency is located in both the individual and the social context of an immediate group the individual is functioning in for a given purpose, then technology needs to be able to support both the individual and group level, and must be trustworthy at that level.

If agency consists of local striking power, resilience, and agility, then technology must be able to take in global knowledge and perspective, but also be independently usable, and locally deployable, as well as socially replicable.

If technology isn’t really distributed, then at least it should be easy to avoid it becoming a single point of failure for your and your group’s use case.

Two types of tech to consider
This applies to two forms of technology. The ‘hard’ technology, hardware and software, the stuff we usually call technology. But also the ‘soft’ technology, the way we organize ourselves, the methods we use, the attitudes we adopt.

Technology should be ‘smaller’ than us
My mental shorthand for this is that the technology must be smaller than us, if it is to provide us with agency that doesn’t ultimately depend on the benevolence of some central point of authority or circumstances we cannot influence. In 2002 I described the power of social media (blogs, wikis etc.), when they emerged and became the backbone for me and my peer network, in exactly those terms: publishing, sharing and connecting between publishers became ‘smaller’ than us, so we could all be publishers. We could run our own outlet, and have distributed conversations over it. Over time our blogs, or rather our writing, were supplanted by larger blogging platforms, and by the likes of Facebook. This makes social media ‘bigger than us’ again. We don’t decide what FB shows us, and breaking out of your own bubble (vital in healthy networks) becomes harder because sharing is based on pre-existing ‘friendships’ and discoverability has been removed. The erosion has been slow, but very visible, and not only to someone who was disconnected from it for six years.

  • Smaller than us means it is easy enough to understand how to use the technology, and possible to tinker with it.
  • Smaller than us means it is cheap (in terms of time, money and effort) to deploy and to replace.
  • Smaller than us means it is as much within the scope of control/sphere of trust of the user group as possible (either you control your tools, or your node and participation in a much wider distributed whole).
  • Smaller than us means it can be deployed limited to the user group, while tapping into the global network if/when needed or valuable.

Striking power comes from the ease of understanding how to use technology in your group, the ability to tinker with it, to cheaply deploy it, and to trust or control it.
Resilience comes from being able to deploy it limited to the user group, even if the wider whole falls down temporarily, and easily replace the technology when it fails you, as well as from knowing the exact scope of your trust or control and reducing dependency based on that.
Agility comes from being able to use the technology to keep in touch with the global network, and easily alter (tinker), replace or upgrade your technology.

Technology needs an upgrade
Most of the technology that could provide us with new agency, however, falls short of those demands, so currently doesn’t.

It is mostly not distributed but often centralized, or at best ‘hubs and spokes’ in nature, which introduces trust and control issues and single points of failure. Bitcoin’s ultimate centralization of the needed computing power in Chinese clusters is one example, Facebook’s full control over what it shows you is another.

It is often not easy to use or deploy, requiring strong skill sets even when it is cheap to buy or even freely available. To use the Liquid Feedback decision-making software, for instance, you need Unix admin skills to run it. To use cheap computing and sensing/actuating hardware like Arduino, you need both software and electronics skills. Technology might also still be expensive for many.

Technologies are currently often deployed either as a global thing (Facebook), or as a local thing (your local school’s activity board), whereas for agency the local, with the ability to tap into the global, is key (this is part of true distributedness), as is the ability to build the global out of the many local instances (like mesh networks, or The Things Network). Mimicking the local inside the centralized global is not good enough (your local school’s closed page on FB). We also need much more ability to make distinctions between local and global in the social sense, between social contexts.

There are many promising technologies out there, but we have to improve on them. Things need to be truly distributed whenever possible, allowing local independence inside global interdependence. Deploying something for a given individual/group and a given use needs to be plug and play, and packaging it like that will allow new demographics to adopt it.

The types of technology I apply this to
Like I said, I apply this to both ‘hard’ tech and ‘soft’ tech. All are technologies that are currently not accessible enough and underused, but that could provide agency on a much wider scale with some tweaks. Together they can provide the agency that broad swathes of people seem to crave, if only they could see what is possible just beyond their fingertips.

The ‘hard’ technologies where barriers need to come further down I am thinking about are:

  • Low cost open source hardware
  • Digital making
  • Low cost computing (devices or hosted)
  • (open) data and data-analysis
  • IoT (sensors and actuators)
  • Mesh networking
  • Algorithms
  • Machine learning
  • Blockchain
  • Energy production
  • Agrotech
  • Biotech

The ‘soft’ technologies where barriers need to come further down I am thinking about are:

  • Peer organizing, organisational structures
  • Peer sourcing
  • Open knowledge
  • Iterative processes and probing design
  • Social media / media production
  • Community building practices
  • Networked (mental) models
  • Workflow and decision making tools
  • Community currencies / exchanges
  • Hacking ethics
  • Ethics by design / Individual rights

Putting it all together gives us the design challenge
Putting the list of social contexts (Agency pt 2) alongside the lists of ‘hard’ and ‘soft’ techs, and the areas of impact these techs create agency towards, and taking distributedness (Agency pt 1) and reduced barriers as prerequisites, gives us a menu from which we can select combinations to work on.
If we take a specific combination of individuals in a social context, and we combine one or more ‘hard’ and ‘soft’ technologies while bringing barriers down, what specific impact can the group in that context create for themselves? This is the design challenge we can now give ourselves.

In the coming months, as an experiment, with a provincial library and a local FabLab, we will explore putting this into practice. With groups of neighbours in a selected city we will collect specific issues they want to address but don’t currently see the means to (using a bare bones form of participatory narrative inquiry). Together we will work to lower the barriers to technology that allows the group to act on an issue they select from that collection. A separate experiment doing the same with a primary school class is planned as well.

Agency map by Ton Zylstra

Let’s Encrypt the Web, For Free

Getting an SSL/TLS certificate for your website has always been a hassle, as well as costly. However, increasing the amount of default encrypted web traffic is important both in terms of website safety and in terms of privacy (when you submit information to websites). The cost and hassle kept most non-commercial websites from using certificates. Until now. Because now there is Let’s Encrypt, which makes it very easy to add certificates to your website. For free.

When I started using a VPS two years ago to serve as my cloud and as a Dropbox replacement, I needed a certificate to make sure the traffic to my cloud was encrypted. The VPS originally came with one, but that expired after a year. Since then I’ve added a renewing certificate from Comodo (the largest provider at the moment), which I got for a one-time payment as a lifetime service from my VPS provider. But for a range of other domains I use, both hosted on my VPS as well as in various hosting packages with a Dutch hosting provider, I never bothered getting an https certificate, because it was too much work and too expensive to keep up. There already were free certificates available, such as through the Israeli StartCom which I used for one or two domains, but I never felt certain it was secure as a service (it turns out it’s small but 7th globally, and has received some serious criticism).

Symantec has a certificate problem...
Arranging and renewing certificates can be a pain, even if you’re Symantec, the world’s second-largest certificate provider. (image Lars K. Jensen, CC-BY)

Let’s Encrypt changes all that. Because they are strongly community driven, amongst others with support from the Electronic Frontier Foundation, and because they are going the route of getting their root certificate independently recognized, becoming a full certificate authority. Currently they use IdenTrust’s (5th globally) existing trusted root certificates, but the Let’s Encrypt root certificate has now been recognized by Mozilla, and they’re working to get it recognized by Google, Apple, Microsoft, Oracle et al. This would increase the independence of Let’s Encrypt. Let’s Encrypt says the growth rate of https traffic has quadrupled since the end of 2015, in part through their efforts. Their certificates are used on over 8 million websites now.

I’ve added a range of my own sites to those 8 million. For the domains on my own VPS that didn’t have valid certificates yet, the certificates were easy to install. I used SSLforFree to generate the Let’s Encrypt certificates, after providing proof that I have full control over the domains I seek to protect. Then I added the certificates to the domains using the WHM control panel of my server. Certificates are valid for 90 days, but I can set them to auto-renew, although I haven’t done that yet.

For the domains not hosted on my VPS, such as this one for my blog, I depend on my Dutch hosting provider (as I don’t have root access to install certificates myself, although I have full control over the domains, such as their DNS settings). Luckily, they recently started offering auto-renewing Let’s Encrypt certificates (link in Dutch) as a free service for each of the domains you host with them, because they recognize the importance of secure web traffic. All it took was opening a ticket with them, listing the domains I was requesting certificates for. Within two hours eleven certificates were created and installed.

So, from now on you can get my blogpostings from https://zylstra.org/blog.

this blog now with https

On Agency pt. 2: The Elements of Networked Agency

Earlier this year I wrote the first posting of three about Agency, and I started with describing how a key affordance is the distributedness that internet and digitisation bring. A key affordance we don’t really fully use or realize yet.
I am convinced that embracing distributed technology and distributed methods and processes allows for an enormous increase in agency. A slightly different agency though: networked agency.

Lack of agency as poverty and powerlessness
Many people currently feel deprived of agency or even powerless in the face of the fall-out of issues originating in systems or institutions over which they have no influence. Things like the financial system and pensions, climate change impact, affordable urban housing, technology pushing the less skilled out of jobs, etc. Many vaguely feel there are many things wrong or close to failing, but without an apparent personal path of action in the face of it.

In response to this feeling of being powerless or without any options to act, there is fertile ground for reactionary and populist movements, which promise a lot but are, at best, incapable of delivering and, at worst, a downright con or power play. Lashing out that way at least brings a temporary emotional relief, but beyond that it only makes things worse.

In that sense creating agency is the primary radical political standpoint one can take.
Lack of agency I view as a form of poverty. It has never been easier to create contacts outside of your regular environment, it has never been easier to tap into knowledge from elsewhere. There are all kinds of technologies, initiatives and emerging groups that can provide new agency, based on those new connections and knowledge resources. But they’re often invisible, have a barrier to entry, or don’t know how to scale. It means that many suffering from agency poverty actually have a variety of options at their fingertips, but without realizing it, or without the resources (be it time, tools, or money) to embrace them. That makes us poor, and poor people make poor choices, because other pathways are unattainable. We’re thirsty for agency, and luckily that agency is within our grasp.

Agency in the networked age is different in two ways
The agency within our grasp is however slightly different in two ways from what I think agency looked like before.

Different in what the relevant unit of agency is
The first way in which it is different is what the relevant unit of agency is.
Agency in our networked age, enabling us to confront the complexity of the issues we face, isn’t just individual agency, nor does it mean mass political mobilisation to change our institutions. Agency in a distributed and networked complex world comes from the combination of individuals and the social contexts and groupings they are part of, their meaningful relations in a context.

It sees both groups and small-scale networks, as well as each individual that is a node in them, as the relevant units to look at. Individuals can’t address complexity, mass movements can’t address it either. But you and I within the context of our meaningful relationships around us can. Not: how can I improve my quality of life? Not: how can I change city government to improve my neighborhood? But: what can I do with my neighbours to improve my neighborhood, and through that my own quality of life?
There are many contexts imaginable where this notion of me & my relevant group simultaneously as the appropriate unit of scale to look at agency exists:

  • Me and my colleagues, me and my team
  • Me and my remote colleagues
  • Me on my street, on my block
  • Me in my part of town
  • Me and the association I am a member of
  • Me and the local exchange trading group
  • Me and my production coop
  • Me and my trading or buying coop
  • Me and my peer network(s)
  • Me and my coworking space
  • Me in an event space
  • Me and my home
  • Me in my car on the road
  • Me traveling multi-modal
  • Me and my communities of interest
  • Me and my nuclear family
  • Me and my extended (geographically distributed) family
  • Me and my dearest
  • Me and my closest friends

agency comes from both the individual and immediate group level (photo JD Hancock, CC-BY)

For each of these social contexts you can think about which impact on which issues is of value, what can be done to create that impact in a way that is ‘local’ to you and the specific social context concerned.

Different in how agency is constituted based on type of impact
Impact can come in different shades and varieties, and that is the second way in which my working definition of agency is different. Impact can be the result of striking power, where you and your social context create something constructively. Impact can take the form of resilience, where you and your social context find ways to mitigate the fall-out of events or emergencies propagating from beyond that social context. Impact can be agility, where you and your social context are able to detect, assess and anticipate emerging change and respond to it.

So agency becomes the aggregate of striking power, resilience and agility that you and your social context individually and collectively can deliver to yourself, by making use of the potential that distributedness and being networked creates.
Whether that is strengthening local community, acting locally on global concerns, increasing resilience, leveraging and sharing group assets, cooperatively creating infrastructure, creating mutual support structures, scaffolding new systems, or shielding against broken or failing systems: in short, building your own distributed and networked living.

Designing for agency
For each of those contexts and desired impacts you can think about and design the (virtual and real) spaces you need to create, the value you seek, the levels of engagement you can/should accommodate, the balancing of safety and excitement you desire, the balance you need between local network density and long distance connections for exposure to other knowledge and perspectives, the ways you want to increase the likelihood of serendipity or make space for multiple parallel experimenting, the way you deal with evolution in the social context concerned, and the rhythms you keep and facilitate.

The tools that enable agency
To be able to organize and mobilise for this, we need to tap into two types of enabling technology that help us embrace the distributedness and connectedness I described in part 1. The ‘techie’ technology, which is comprised of hard- and software tools, and the ‘soft’ technology, which consists of social processes, methods and attitudes.
What types of technologies fit that description, and what those technologies need to be like to have low enough adoption thresholds to be conducive to increased agency, is the topic of part 3.

Which City To Live and Work for a Month in 2017?

In the past years Elmine and I have visited different cities for a longer time, to experience what it is like to live there. For a month, sometimes shorter, sometimes longer, we would stay in a city and work from there, seeking out local entrepreneurs, while also enjoying the local food, coffee, and art on offer. Exposing ourselves to a different environment, though not in a touristic capacity, provides inspiration and generates new insights and ideas. We spent extended stays in Vancouver, Copenhagen, Helsinki, Berlin, Cambridge and Lucca, and are now exploring which city to set up camp in during the summer or fall of 2017. As I did in 2013, I asked around for suggestions, this time on Facebook. I got a long list of responses, which makes filtering and ultimately choosing likely a project in itself.

For us, for a city to qualify as a candidate it needs to be in Europe (as we want to drive there by car, given we are bringing our young daughter plus all the gear that entails), needs to have something to offer in terms of culture, and food, and good places to hang out in, but above all needs to have a few communities around new tech, start-ups, or other topics that we are interested in. This because we want to seek out new conversations and connections (such as when I organized the first Danish Data Drinks in Copenhagen in 2012).

Here are the (over 50!) suggestions we received, on a map:

Or see the list.

What would you like me to write more about?

Design Museum
Something to aspire to

A few years ago Elmine and I wrote a short e-book on how to organize an unconference as a birthday party (PDF linked on the right). Since then I’ve regularly entertained the idea of writing another e-book, but that never really happened. While I do have some topics I’d like to write about, I find my knowledge of those topics still too limited to come up with a narrative that shares anything worthwhile. There are also doubts (fears?) about what type of things would have a potential readership.

So this week I decided to ask:

What would you like to see me write more or more extensively about?

Already I have received a range of responses, and it is an intriguing list. Some suggestions are about aspects of my own journey, others are about topics that I don’t know much (or anything) about, but where apparently there’s interest in my take on them. Some come close to topics I already want to write more about, but feel I haven’t found an angle for yet.

Here’s the list until now. More suggestions and thoughts are welcome.

  • Optimal unfamiliarity (a phrase I coined in 2004, initially to describe what mix of people makes a great event audience to be part of, but which has become a design principle in how I try to collect information and learn), suggested by Piers Young
  • An epistolary travel log novella (something that could arise from my 14 years of blogging about my travels and work), suggested by Georges Labreche
  • Open currencies (which Google tells me they have no meaningful results for, but which connects to my experience with LETS, and chimes with free currencies in p2p networks), suggested by Pedro Custodio
  • Moderating sessions with a mix of analog and digital tools (closely connected to my thoughts about fruitful information strategies in social contexts), suggested by Oliver Gassner
  • Fatherhood (as I became one 9 weeks ago, but I don’t think 9 weeks counts as experience), suggested by Dries Krens
  • Motivating others to act on open data (a large chunk of my work), suggested by Gerrit Eicker
  • Being a European in the digital age (which I strongly claim to be), suggested by Alipasha Foroughi
  • Convincing profit oriented organisations of the value of open access and responsible research (comes close to Gerrit’s point), suggested by Johnny Søraker
  • How and why I left my job (being employed by Dries mentioned above), suggested by Rob Paterson
  • The journey from my involvement in knowledge management and early blogging, to where I am now, and how it impacted the way Elmine and I arrange our lives (lots to unpack here!), suggested by Jon Husband (who, like Rob Paterson, has been part and witness of that journey over many years)
  • The proliferation of means of communication versus the quality of communication (for me this points to information strategies on focus, filtering etc.), suggested by Jos Eikhout
  • Personal information strategies and processes using open source tools (something I blogged often about in various shapes and forms), suggested by Terry Frazier, a fellow blogger on knowledge management back when I started blogging in 2002

Looking at who responded is already in a way a manifestation of some of the suggested topics (the journey, the information strategies, the optimal unfamiliarity, facilitating communities).

I can’t promise I’ll write about all of the things suggested, but I appreciate the breadth and scope of this list and the feedback I can unpack from it. More suggestions are very welcome.

Archiving Mail in MySQL with MAMP and Mailsteward

As I am moving out of Gmail, I had to find a way to deal with the 21GB mail archive from the past 12 years.

Google lets you export all your data from its various services, including email. After a day or so you get a download link that contains all your mail in one single file in MBOX format.

MBOX is a text format, so it can be searched as-is, but that would only tell you that what you are looking for is somewhere in that 21GB file.

I could also import it into my mail client as a local archive, by dropping the MBOX file into the Local Folders of Thunderbird with Finder. That provides me with access and search capabilities similar to what I had for all that mail in Gmail. However, if I would like to do more with my archive, mine it for things, and re-use stuff by piping it into other workflows, having it in Thunderbird would not be enough.
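Even without a mail client or database, an MBOX archive can already be mined programmatically. As a minimal sketch of my own (not something Gmail or Thunderbird provides), Python’s standard mailbox module can scan the archive for messages matching a search term:

```python
import mailbox

def search_mbox(path, term):
    """Return (subject, sender) pairs for every message in the
    MBOX file at `path` whose Subject contains `term`."""
    hits = []
    for msg in mailbox.mbox(path):
        subject = msg.get("Subject", "") or ""
        if term.lower() in subject.lower():
            hits.append((subject, msg.get("From", "") or ""))
    return hits
```

For a 21GB archive this is slow and only covers simple queries, which is exactly why a real database becomes attractive; but for a quick lookup it needs no infrastructure at all.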

Mailsteward puts MBOX into MySQL
So I searched for a way to more radically open my archive up to search. I came across DevonThink, but that seemed overkill, as it does so much more than merely digest a mail archive, and as such overlaps too much with my Evernote. (Although I may rethink that in the future, if I decide to also move out of Evernote, as after Gmail it is my biggest third-party service that contains lots of valuable information.) I looked for something simpler, that just does what I need, putting e-mail into SQL, and that is how I found Mailsteward Pro.

There are three versions of Mailsteward, and I needed the Pro version, as it is the one that works with MySQL and thus can handle the volume of mail in my archive. It costs $99 one time, not cheap, but as I was paying for storage with Google as well, over time it pays for itself.

Installing Mailsteward
Mailsteward assumes you already have a MySQL server running on your system. I use MAMP Pro on my laptop as a local web and MySQL server, on which I run different things locally, like a blog-based journal and a self-assessment survey tool. MAMP Pro is very easy to install.

You need to take the following steps to give Mailsteward access to MySQL. In MAMP Pro, allow external access to MySQL, but only from within your own system (this basically means applications other than MAMP can access the MySQL server).

(screenshot of the MAMP Pro MySQL settings)

Then you create a new database via the phpMyAdmin that comes with MAMP. Mailsteward will populate it with the right tables. In my case I aptly named it mailarchives.

(screenshot of creating the database in phpMyAdmin)

Within Mailsteward you then add a connection, listing the database you created, the right port, etc. Note that the socket it asks for isn't an actual file on your system, but does need to point to the right folder within the MAMP installation, which is the Applications/MAMP/tmp/mysql folder.

(screenshot of the Mailsteward connection settings)

Importing MBOX files
I first tested Mailsteward with my parents' e-mail archive, which I kept after they passed away last year to be able to find contact details of their friends. It imported fine. Then I tried to import my Gmail MBOX file. It turns out 21GB is too large for Mailsteward to handle in one go, as it eats up all the memory on your Mac. I concluded that I needed to split my Gmail MBOX file into multiple smaller ones.

Luckily there is a working script on GitHub that chops MBOX files up into smaller ones, and it allows you to set the file size you want. I chopped the Gmail MBOX into 21 smaller files of 1GB each. These imported fine into Mailsteward, which maintains tags and conversation threads.

To run the script, first open it in a text editor and change the file size limit to what you want (the default is 40MB, I changed it to 1GB). Then open Terminal and run the script by typing the following command, where the destination folder does not need to exist:

sudo php mbox_splitter.php yourarchivename.mbox yourdestinationfolder


That way you end up with a folder that contains all the smaller MBOX files:

(screenshot of the folder with the split MBOX files)
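For what it's worth, the same chunking could be sketched in Python with the stdlib mailbox module; the chunk size and file naming below are my own choices, not those of the GitHub script:

```python
import mailbox
import os

def split_mbox(src, dest_dir, max_bytes=1_000_000_000):
    """Split an mbox file into chunks of roughly max_bytes each in dest_dir."""
    os.makedirs(dest_dir, exist_ok=True)
    chunk, chunk_no, size = None, 0, 0
    for msg in mailbox.mbox(src):
        raw = msg.as_bytes()
        # Start a new chunk when the current one would grow past the limit.
        if chunk is None or (size + len(raw) > max_bytes and size > 0):
            if chunk is not None:
                chunk.flush()
                chunk.close()
            chunk_no += 1
            chunk = mailbox.mbox(os.path.join(dest_dir, f"chunk_{chunk_no}.mbox"))
            size = 0
        chunk.add(msg)
        size += len(raw)
    if chunk is not None:
        chunk.flush()
        chunk.close()
    return chunk_no
```

Splitting on message boundaries like this keeps every chunk a valid MBOX file, which is what matters for the import.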
Using Mailsteward's import feature you then add each of those files, by hand (but luckily you only need to do that once).

Using the archive
Mailsteward allows you to search the archive through its rather simple and bland interface, but you can also tweak the MySQL queries it creates. The additional advantage of having it in MySQL is that I can also access the archive with other tools to search it.

(screenshot of the Mailsteward search interface)
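As a sketch of that "other tools" point: any SQL client or scripting language can query the archive. I don't know Mailsteward's exact schema, so the messages table and its columns below are assumptions, and sqlite3 stands in for MySQL so the pattern is easy to try out; against MAMP you would use a MySQL driver with the same query.

```python
import sqlite3

def search_archive(conn, term):
    """Return subjects of messages whose body contains term (hypothetical schema)."""
    cur = conn.execute(
        "SELECT subject FROM messages WHERE body LIKE ? ORDER BY subject",
        (f"%{term}%",),
    )
    return [row[0] for row in cur.fetchall()]
```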

Adding newer mail to the archive
Thunderbird allows me to export e-mail as MBOX files via the Import/Export add-on, which can then be added to the archive by Mailsteward. So that’s a straightforward operation. Likely I can automate it and schedule it to run every month.

How to leave Gmail

Leaving Gmail, a tough question
In the past two years I have been slowly reconfiguring my online routines to increase privacy safeguards, and bring more of my data under my own control, while avoiding making my work routines more difficult and thus less routine. How to create an e-mail workflow that does not rely on Gmail has been the hardest part of this effort. I think I now finally have figured out how to do it without loss of convenience, and hope to have made the switch after I finish exporting all e-mail data Google has from me.

(screenshot of my Gmail inbox)
After 12 years this will no longer be a familiar sight for me

Previous steps I took
Some things I already did to increase my control over my own data are:

That is not to say I use nothing but my own stuff now: I am still a heavy user of various services, like Evernote for instance, or my Android phone. But my usage of third-party services has become more varied and spread out, reducing the impact of losing any one of them.

Why I want to leave Gmail
The net is a distributed place, and our information strategies and routines should embrace that distributedness. In practice however we often end up in various silos and walled gardens, because they are so very convenient to use, although they actually decrease our own control and/or introduce single points of failure. If your Facebook account gets suspended, can you still interact with others? If your Google account gets suspended, do you still know how to reach people? Using Gmail also means all of my stuff resides on servers falling under the not very privacy-sensitive US laws.

Since July 2004 I have, however, relied completely on Gmail. It is an easy way to combine the various e-mail addresses I use into one single inbox (or rather multiple inboxes, on the basis of follow-up actions), and it has great tagging, search and filtering, so that you never need to file anything or sort into folders. I have used Gmail as my central inbox for everything. Since 2004 I have accumulated about 770,000 emails in 249,000 conversations, for a total of 21GB. Gmail is therefore the largest potential single point of failure in my information processing.

The issues to solve
To wean myself off Gmail there were several things for which I needed a similarly smooth working alternative:

  • All the mail addresses I use need to come together into a single mailbox, and conversations need to be threaded
  • Availability across devices, and via webmail. Especially on the road I use my phone for quick e-mail triage, and as an alternative to phone calls. Webmail is my general-purpose access point on my laptop while traveling
  • Having access to my full mail archive for search and retrieval
  • Excellent tagging and filtering possibilities

The steps I took to leave Gmail
Finding a path away from Gmail took two realisations, one about process and one about technology.

Changing my process
Concerning process I realized that Gmail allows me, or even invites me, to be very lazy in my e-mail processing routines. Because of the limitless storage I merely needed to be able to find things again (through the use of tags for instance), and never really needed to decide what to do with an e-mail.

This means for instance that lots of attachments only live on in my mailbox, without me adding them to relevant project documentation etc. I have likely spent hours in the past years searching for slide decks in my mountain of e-mail, instead of spending half a minute once to store an attachment in a more logical place, where I'm more likely to find it with desktop search or serendipitously bump into it, and then throwing the mail message out. So mail processing has to become a much less lazy process, with a few more active decisions in handling messages: attachments go into a project folder, contact info into contacts, bookkeeping-related messages to bookkeeping (no longer going through all mail tagged bookkeeping every quarter to do my taxes), and tasks and actions to my Things todo application. I already wrote several AppleScripts to let my todo app and Evernote talk to various other software packages (like Tinderbox), and it is now likely I will write a few more to automate mail message processing further (because I prefer to keep my process as lazy as possible).
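As an illustration of the kind of small automation I mean, here is a sketch in Python (rather than AppleScript) that pulls attachments out of an archived MBOX file into a folder of your choosing; the paths and the workflow around it are hypothetical:

```python
import mailbox
import os

def save_attachments(mbox_path, dest_dir):
    """Save every attachment found in an mbox file into dest_dir."""
    os.makedirs(dest_dir, exist_ok=True)
    saved = []
    for msg in mailbox.mbox(mbox_path):
        for part in msg.walk():
            filename = part.get_filename()
            if not filename:
                continue  # not an attachment
            payload = part.get_payload(decode=True)
            if payload:
                with open(os.path.join(dest_dir, filename), "wb") as fh:
                    fh.write(payload)
                saved.append(filename)
    return saved
```

In practice you would want to dedupe file names and filter per project, but the principle (decide once, file it where you will find it) is the same.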

Changing my tools
A second key realization was that my original reasons for staying within webmail had meanwhile been solved by better technology: it used to be that only Gmail provided cross-device access to all my mail accounts simultaneously, something I could not easily do in 2004 with a desktop or laptop mail client in combination with a mobile mail client. Now, with much broader IMAP support (not just in my software tools, but also at hosting companies), this is much easier, increasing the range of possible alternatives. Threading mail conversations is now also a more universal feature.

This allowed me to start using the Thunderbird mail client, including PGP encryption, on my laptop (I never used a mail client intensively on my laptop before), in combination with the open source K9 Android mail app (replacing the Gmail app for me), which also has encryption options. Both allow tagging of messages, and Thunderbird allows filtering not just of incoming mail but also when sending and when archiving, which is really useful.

As an alternative to piping all my mail accounts into Gmail, I now use the real inboxes of those mail accounts where they are originally hosted, and use IMAP to combine them into one user interface on my laptop and mobile. Those separate mailboxes do have lower storage limits (usually 500MB), so it is more likely I bump into limits, and that is why I need a much less lazy mail processing routine (especially concerning larger attachments), in which I regularly archive older mail.

Separately I also now use a different webmail provider, Protonmail in Switzerland, that comes with default encryption. I’ve attached a domain name to it (zylstra.eu).

The archiving issue
The above shows how, going forward, leaving Gmail can be done by changing process and tools, solving the one-inbox and multiple-device issues. That leaves the question of how to deal with the 21GB of mail archive from the past 12 years. Leaving it all in Gmail and using that as the archive might be a work-around for old mail, but doesn't help me with future mail. I could add it as a local folder to the Thunderbird mail client, but that thought did not appeal to me and feels clunky. I find that I never use my mail archive from my mobile, so the archive does not need to be cloud-based per se. So I opted to keep my mail archive local, by storing it in a MySQL database. This allows for query-based searches, and even text mining, without clogging up my mail client itself. Gmail can export your archive in a single MBOX file, and I used Mailsteward Pro to transform it into a MySQL database. (More on that set-up in the next posting, Archiving mail in mysql with MAMP and Mailsteward.) With the archive now stored locally, the database is backed up to both my NAS drive and my VPS.

What remains
With the basic set-up for leaving Gmail now in place, there is still work to be done over the coming months. Clearing out the archive at Gmail is one step, once I feel comfortable searching my new MySQL archive. Creating more filters in my mail client, and writing a few scripts to integrate my mail processing with the other tools I use, is another. There are also likely a whole bunch of things (accounts, subscriptions etc.) that use my Gmail address, which I will change as I go along.

My longtime blogging friend Roland Tanglao suggested mining my mail archive for things that could be published, contact data, and old ideas that can feed into my current work. This sounds appealing but needs some contemplation and then a plan. Having the archive in MySQL makes it a lot easier to come up with a plan though.

Beyond mail, there are of course more Google services I use heavily, especially Calendar, which is tied to my Gmail address. I could move that to my Owncloud as well. I will keep my Google account, as this isn't about ditching Google but about reducing risks and taking more control. Apart from Calendar there are no other single points of failure in the way I use my Google account. Beyond Google, Evernote is another silo I'm heavily invested in, and the content I keep there is arguably more valuable to me than my Gmail. So that is a future change to think about and seek alternatives for.

Inbox 0 is for Losers
I reached Inbox -1 on Gmail once in 2009 🙂

[Find the outline and slides of my Koppelting session on leaving Gmail in the follow-up posting at https://tzyl.eu/leavegarden. You can use the shortlink https://tzyl.eu/gmail to refer to this posting.]

Near Future SF Reading List: Explore Emerging Future Together

(photo from Gogbot 2015: the dreams of Google's artificial intelligence)

I read lots of science fiction, because it allows exploring the impact of science and technology on our society, and the impact of our societies on technology development in ways and forms that philosophy of technology usually doesn’t. Or rather SF (when the SF is not just the backdrop for some other story) is a more entertaining and accessible form of hermeneutic exercise, that weaves rich tapestries that include emotions, psychology and social complexity. Reading SF wasn’t always more than entertainment like that for me, but at some point I caught up with SF, or it caught up with me, when SF started to be about technologies I have some working knowledge of.

Bryan Alexander, a long-time online peer and friend for well over a decade, likewise sees SF, especially near-future SF, as a good way to explore emerging futures that already seem almost possible. He writes: "In a recent talk at the New Media Consortium's 2016 conference, I recommended that education and technology professionals pay strong attention to science fiction, and folks got excited, wanting recommendations. So I've assembled some (below)". His list contains a crowdsourced overview of recent near-future SF books, some 25 titles.

I know and have read half of the books on the list, and last night loaded up my e-reader with the other half.

If you want to discuss those books keep an eye on Bryan’s blog, as you’re sure to get some good conversations around these books there.

The dreams of Google’s artificial intelligence

(photos made during the 2015 Gogbot Festival, the yearly mash up of art, music and technology into a cyberpunk festival in my home town Enschede.)

Related: Enjoying Indie SF, March 2016

Original social media needs still unmet

My friend Peter Rukavina blogged how he will no longer push his blogpostings to Facebook and Twitter. The key reason is that he no longer wants to feed the commercial data-addicts that they are, and really wants to be in control of his own online representation: his website is where we can find him in the various facets he likes to share with us.

(image: Climbing the Wall, attempting to scale the walls of the gardens like FB that we lock ourselves into)

This is something I often think about, without coming to a real conclusion or course of action. Yes, I share Peter's sentiments concerning Facebook and Twitter, and how everything we do there just feeds their marketing engines. And yes, in the past two years I have purposefully taken various steps to increase my own control over my data, as well as build new and stronger privacy safeguards. Yet my FB usage has not been impacted by that; in fact, I know I use it more intensively than a few years ago.

Peter uses his blog differently from me, in that he posts much more about all the various facets of himself in the same spot. In fact that is what makes his blog so worthwhile to follow: the mixture of technology how-tos and philosophical musings, very much integrated with the daily routines of getting coffee, helping out a local retailer, or buying a window ventilator. It makes the technology applicable, and turns his daily routines into a testing ground for it. I love that, and the authentic and real impact it creates where he lives. I find that on my blog I've always more or less only published things of professional interest, which, because I don't talk about clients or my own personal life per se, remain abstract thinking-out-loud pieces that likely provide little direct applicability. I use Twitter to broadcast what I write. In contrast I use FB to also post the smaller, more personal things. If you follow me on Facebook you get a more complete picture of my everyday activities, and random samplings of what I read, like and care about beyond my work.

To me FB, while certainly exploiting my data, is a 'safer' space for that (or at least succeeds in pretending to be), to the extent that it allows me to limit the visibility of my postings. The ability to determine who can see my FB postings (friends, friends of friends, public) is something I use intensively (although I don't have my FB contacts grouped into different layers, as I could). I could post Tumblr-like on my own blog, but would not be able to limit the visibility of that material (other than by virtue of no one bothering to visit my site). That my own blog content is often abstract is partly because it is all publicly available. To share the other things I do, I would want to be able to determine their initial social distribution.

That is, I think, the thing I would like to solve: can I shape my publications and sharings in much the same way I shape my feed-reading habits, in circles of increasing social distance? This is the original need I have for social media, and one I have had for a very long time, basically since social media were still just blogs and wikis. Already in 2006 (building on postings about my information strategies in 2005) I did a session with Boris Mann at Brussels Barcamp on putting the social in social media front and center, where I listed the following needs, all centered around letting social distance and the quality of relationships play a role in publishing and sharing material:

  • tools that put people at the center (make social software even more social)
  • tools that let me do social network analysis and navigate based on that (as I already called for at GOR 2006)
  • tools that use the principles of community building as principles of tool design (an idea I had writing my contribution to BlogTalk Reloaded)
  • tools that look at relationships in terms of social distance (far, close, layers in between) and not in terms of communication channels (broadcasting, 1 to 1, and many to many)
  • tools that allow me to shield or disclose information based on the depth of a relationship, relative to the current content
  • tools that let me flow easily from one to another, because the tools are the channels of communication. Human relationships don’t stick to channels, they flow through multiple ones simultaneously and they change channels over time.

All of these are as yet unsolved in a distributed way, with the only current option being to get myself locked into some walled garden, running up the cost of moving outside those walls with every single thing I post there. Despite the promise of the distributed net, we still end up in centralized silos, until the day our social needs are finally met in distributed ways in our social media tools.