Through a reference by Julian Elvé, I read Doc Searls’ talk that he gave last October and has now published, Saving the Internet – and all the commons it makes possible.

Internet Open, image by Liz Henry, license CC BY-ND

First, he says of the internet as a commons:
In economic terms, the Internet is a common pool resource; but non-rivalrous and non-excludable to such an extreme that to call it a pool or a resource is to insult what makes it common: that it is the simplest possible way for anyone and anything in the world to be present with anyone and anything else in the world, at costs that can round to zero.

As a commons, the Internet encircles every person, every institution, every business, every university, every government, every thing you can name. It is no less exhaustible than presence itself. By nature and design, it can’t be tragic, any more than the Universe can be tragic.

He then lists nine currently visible enclosures of that commons, because enclosure is one of the affordances the internet provides.

See, the Internet is designed to support every possible use, every possible institution, and—alas—every possible restriction, which is why enclosure is possible. People, institutions and possibilities of all kinds can be trapped inside enclosures on the Internet.

  1. service provisioning, for example with asymmetric connection speeds. Asymmetry favours consumption over production. Searls singles out cable companies specifically for wanting this imbalance. I’ve been lucky from early on. Yes, until fiber to the home we had asymmetrical speeds, but I had a fixed IP address from the start and ran a web server under my desk from the mid ’90s until 2007, when I started using a hoster for this blog. I still run little experiments from my own server(s) at home. The web was intended to be both read and write, even at the level of a page you visited (in short, the web as online collaboration tool, somewhat like Google documents). I assume most people perceive the general web as read-only, even if they participate in silos where they do post things themselves.
  2. 5G wireless service, as a way for telcos to do what cable companies did before, in the form of content-defined packages. I am not sure this could play out this way in the Netherlands or the EU, where net neutrality is better rooted in law, and where, especially after the end of roaming charges in the EU, metered data plans have either become meaningless because unmetered plans are cheap enough, or are at least large enough to make e.g. zero-rating a meaningless concept. I suspect 5G could, however, mean households choose to no longer use a fixed internet subscription at home and do away with their own wifi networks, introducing a new dependency where your mobile and at-home access are the same thing and a single choke point.
  3. government censorship, with China the most visible actor in this space, but many countries aim to block specific services at least temporarily, and many countries and groups of countries are on the path to realising their own ‘data spaces’. While understandable, as data and networks are now strategic resources, it also carries the risk of fragmentation of the internet (e.g. Russia), motivated ostensibly by safety concerns but with a big dollop of wanting control over citizens.
  4. the advertising-supported commercial Internet. This is the one most felt currently: adtech that tracks you across your websurfing, not just in the silos you inhabit.
  5. protectionism, which Searls ties to EU privacy laws, which I find a very odd remark. While the GDPR could be better, it is a quality instrument with a rising floor, designed not to protect the EU market but to encourage global compliance with its standards. It is a way of shaping instruments the EU uses more often, and it has proven to be a successful export product. The cookie notices he mentions are a nuisance, but not the result of the GDPR; to my mind they are caused by interpreting the (currently under revision) cookie law in a deliberately cumbersome way. Even then, I don’t see how privacy regulation is protectionism, as it finds its root in human rights, not competition law.
  6. Facebook.org, or digital colonialism. This refers to the efforts by silos like Facebook to bring the ‘next billion’ online in a fully walled garden that is free of charge and presented as being the web, or worse, the internet itself. I’ve seen this in action in developing countries, and it’s unavoidable for most if not all, because it is the only way to access the power of agency that the internet promises when there is no way you can afford connectivity.
  7. forgotten past, caused by the focus on the latest and newest, while the old is not only forgotten but also actively lost as it gets taken offline. I think this is where strong opportunities are arising for niche search engines, and for search engines as a personal tool. You don’t need to build the next Google, or even be a market player, to meaningfully erode the position of Google search. For instance, it is quite feasible to have my own search engine that only searches the blogs I subscribe to and have subscribed to (I actually should build that). At the same time, there is a slow, steady and increasing effort to bring more of the old, just not the old web, online through the ongoing digitisation of physical archives and collections of artefacts. More of our past, our global cultural heritage, comes onto the web every day, and this is really still only the start.
  8. algorithmic opacity. This one is very much on the agenda across Europe currently, mainly as part of ethical discussions, and right now mostly centered on government transparency. The GDPR contains a clause that automated algorithmic decision making about people is not allowed. At the very least, having explainable algorithms, and transparency about their usage, is a likely emerging practice. The asymmetry of decision making also plays a useful role here. This one too is closely tied to human rights, which will help bring parties into the discussion that are not of the tech world. At issue with what we currently see of algorithms is that they are used over our heads, and not yet much as a personal tool, where they could increase our networked agency.
  9. the one inside our heads, where we accept the internet as it is presented to us by those invested in one or more of the above eight enclosures. Understanding what the internet is, and how it is a commons, is a public awareness need.
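The personal search engine mentioned under enclosure 7 can be prototyped with very little machinery. Below is a minimal sketch in Python of a tiny inverted index over blog posts; it assumes the posts have already been fetched from subscribed feeds, and the URLs and post texts are made-up examples:

```python
import re
from collections import defaultdict

def tokenize(text):
    """Lowercase and split on non-word characters."""
    return [t for t in re.split(r"\W+", text.lower()) if t]

class BlogIndex:
    """A tiny inverted index over blog posts from subscribed feeds."""

    def __init__(self):
        self.posts = []                # list of (url, title, text)
        self.index = defaultdict(set)  # token -> set of post ids

    def add(self, url, title, text):
        post_id = len(self.posts)
        self.posts.append((url, title, text))
        for token in tokenize(title + " " + text):
            self.index[token].add(post_id)

    def search(self, query):
        """Return URLs of posts containing every query term."""
        ids = None
        for token in tokenize(query):
            ids = self.index[token] if ids is None else ids & self.index[token]
        return [self.posts[i][0] for i in sorted(ids or [])]

# Hypothetical posts standing in for fetched feed items:
idx = BlogIndex()
idx.add("https://example.org/commons", "The internet as commons", "enclosure and agency")
idx.add("https://example.org/cooking", "Programming as home cooking", "small personal tools")
print(idx.search("personal tools"))  # only the second post matches
```

Feeding this from a list of OPML subscriptions and a feed parser would be the obvious next step; the point is that nothing about it requires Google-scale infrastructure.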

Go read the entire thing, in which Doc Searls describes what the internet is, how it connects to human experience, how it makes the hyper-local key again when there is a global commons encompassing everyone, and how it erodes and replaces institutions of the 20th century and earlier. He talks about how the internet “means we are all authors of each other”.

At the end he asks “What might be the best way to look at the Internet and its uses most sensibly?”, and concludes “I think the answer is governance predicated on the realization that the Internet is perhaps the ultimate commons”, and “There is so much to work on: expansion of agency, sensibility around license and copyright, freedom to benefit individuals and society alike, protections that don’t foreclose opportunity, saving journalism, modernizing the academy, creating and sharing wealth without victims, de-financializing our economies… the list is very long”.

I’m happy to be working on the first three of those.

English Walled Garden at Robert Allerton Park in Monticello, Illinois; image by Ron Frazier, license CC BY

(via Peter Rukavina and Roland Tanglao)

I really like this metaphor by Robin Sloan. I would never call myself a ‘real’ programmer, or really a programmer at all. Yet I’ve been programming things since I was 12: in BASIC during my school years; in assembly, Pascal and C++ at university; in Perl and VisualBasic in my early days at work (which included programming the first intranet applications for my then employer); and currently in PHP and AppleScript (to get my websites, tools and laptop to do the things I want). Except for some university assignments, all of that programming was and is because I want to, and done in spare time.

I too am the programming equivalent of a home cook (which coincidentally I also am).

Robin Sloan also hits on what irks me about the ‘everyone needs to learn to code’ call to action. “The exhortation “learn to code!” has its foundations in market value. “Learn to code” is suggested as a way up, a way out… offers economic leverage … [it] goes on your resume.”

People don’t only learn to cook so they can become chefs. Some do! But far more people learn to cook so they can eat better, or more affordably, or in a specific way…

The above is, I think, an essential observation. Eating better, more affordably or in a specific way translates to programming with the purpose of honing the laptop as your tool of trade and adapting it to your own personal workflows, making it support and work with your very own quirks. This is precisely what I find missing in some of the people I quiz about their tool use: the way they accept the software on their laptop as is, and don’t see it as something you can mould to your own wishes at all.

“And, when you free programming from the requirement to be general and professional and scalable, it becomes a different activity altogether, just as cooking at home is really nothing like cooking in a commercial kitchen.”

Removing the aura of ‘real programming’ from all and any programming except paid-for programming might just break this ‘I’m not a programmer, so I’d better accept the way software works as the vendor delivered it’ effect.

Me and Boris Mann cooking in his kitchen in Vancouver in 2008. Coincidentally, we connected through our desire to shape tools to our personal wishes. Programming as home cooks brought us together to, well, home cook. His blog is still a mix of cooking and programming, well over a decade on.

Liked An app can be a home-cooked meal (Robin Sloan)

For a long time, I have struggled to articulate what kind of programmer I am. I’ve been writing code for most of my life, never with any real discipline, but/and I can, at this point, make the things happen on computers that I want to make happen. At the same time, I would not last a day as a professional software engineer…
I am the programming equivalent of a home cook.

A client sent me a batch of e-mail messages with attachments, as part of collecting material for a study. The e-mail messages themselves were sent as attachments, as MSG files. This is an Outlook format that can’t easily be opened by other mail clients, not even by Outlook for Mac, online discussions suggest. To get at the attachments hidden in those MSG-formatted files I looked for a tool to help out.

I found two, and for each I needed the paid version to get at the attachments. One cost about a quarter of the other, so I opted for MSG Viewer Pro, not MSG Viewer for Outlook. There are a number of online tools, but I don’t want to risk client information that way. There are also one or more open source tools freely available, but it was not immediately obvious that I could access and save the attachments contained in those MSG files with them. In the end I was glad I opted for the cheapest option, as it turned out none of the attachments sent were relevant to my current work.
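This can also be scripted: the open-source `extract_msg` package on PyPI reads Outlook MSG files. A minimal sketch, assuming that package is installed and that its `Message`/attachment interface behaves as documented; the filename helper is my own, and the paths are made up:

```python
import os
import re

def safe_filename(name, fallback="attachment.bin"):
    """Strip path separators and control characters from an attachment name."""
    name = (name or "").strip()
    name = re.sub(r"[\\/\x00-\x1f]", "_", name)
    return name or fallback

def extract_attachments(msg_path, out_dir):
    """Save every attachment of one MSG file into out_dir.

    Assumes the extract_msg package (pip install extract-msg); its
    attachment objects expose longFilename/shortFilename and save().
    """
    import extract_msg  # third-party dependency; an assumption, not stdlib
    os.makedirs(out_dir, exist_ok=True)
    msg = extract_msg.Message(msg_path)
    for att in msg.attachments:
        att.save(customPath=out_dir,
                 customFilename=safe_filename(att.longFilename or att.shortFilename))

# Hypothetical usage over a batch of client files:
# for path in glob.glob("client-mails/*.msg"):
#     extract_attachments(path, "attachments")
```

Whether this beats a ten-euro viewer depends on how often the situation recurs, but it keeps client material off online converters entirely.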

Wendy Grossman makes a good point: encrypt, encrypt, encrypt as the way forward, while assuming all tech is ‘dirty’. It will also nicely raise the price of dragnet surveillance, pushing the three-letter outfits towards focusing on needles again, not ever larger haystacks.

Liked net.wars: Dirty networks

In other words, the essential question is: how do you build trusted communications on an untrusted network? The Internet’s last 25 years have taught us a key piece of the solution: encrypt, encrypt, encrypt. Johnson, perhaps unintentionally, has just made the case for spreading strong, uncrackable encryption as widely as possible. To which we can only say: it’s about time.

The conclusion of a report by the Norwegian consumer association, Forbrukerrådet, minces no words: adtech is systematically in breach of GDPR rules. The report’s title is Out of Control.

“The extent of tracking makes it impossible for us to make informed choices about how our personal data is collected, shared and used,” Finn Myrstad, director of digital policy at the Norwegian Consumer Council, is quoted as saying. This is a key issue: the GDPR demands meaningful consent, not just the token consent that sites and apps still often try to get away with. Earlier, a French ruling said much the same about a boilerplate consent form advocated by the IAB, and that form has since disappeared, or at least I no longer encounter it during my web surfing.

It reads as if the report is the basis for various GDPR complaints in multiple EU countries, so it will be interesting to see those progress through the system.

I’m very much in agreement with Doc Searls’ position that the GDPR is lethal to adtech.
I came across a nice illustration of the effect (ht Tomasino). Below is an image that shows what happens when you visit USAToday in its GDPR-compliant version and its non-GDPR version. Paul Calvano, who made the image, says: “The US site is 5.5MB and contains 835 requests loaded from 188 hosts. When loaded from France it’s 297KB, 36 requests and contains no 3rd party content.” The image shows what a striking difference that is:

As of our last all-hands meeting we have moved our company to using NextCloud on a server in a German data center. This is the second major step in improving our information hygiene in the company, after adopting RocketChat and leaving Slack.
I had already created the cloud last May, but we had not transitioned everyone in the company and all our work. That transition has now been made.

It allows us to avoid working with clients in cloud environments like Google Docs; it has OnlyOffice for online collaboration on documents; it allows us to avoid file-transfer services in favour of providing (time-limited, password-protected) download links from our own server; and it has integrated STUN/TURN support so we can do (video)conference calls from within our own environment. It’s a managed server/service for a few hundred euros per year. A key benefit is being able to nudge our clients towards routines less exposed to the data-hungry silos, and to show compliance with the (regularly inconsistent and differing) rules regarding which online services they do and don’t allow. Setting an example is in itself a benefit, given our work on transparent data governance, data ethics and accountability.
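Those time-limited, password-protected download links can even be created programmatically through Nextcloud’s OCS Share API. A sketch in Python using only the standard library; the server URL, username and app password are placeholders, and the endpoint and parameters are the ones documented for the Share API (shareType 3 is a public link):

```python
import base64
import urllib.parse
import urllib.request

def build_share_request(base_url, user, app_password, path, password, expire_date):
    """Build the OCS request that creates an expiring, password-protected public link.

    Endpoint and parameter names follow Nextcloud's documented Share API;
    base_url, user and app_password are placeholders for a real server.
    """
    data = urllib.parse.urlencode({
        "path": path,               # file or folder relative to the user's root
        "shareType": 3,             # 3 = public link
        "password": password,
        "expireDate": expire_date,  # YYYY-MM-DD
    }).encode()
    req = urllib.request.Request(
        base_url + "/ocs/v2.php/apps/files_sharing/api/v1/shares",
        data=data, method="POST")
    token = base64.b64encode(f"{user}:{app_password}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    req.add_header("OCS-APIRequest", "true")  # header required by the OCS API
    return req

# Hypothetical values; urllib.request.urlopen(req) would perform the actual call.
req = build_share_request("https://cloud.example.com", "ton", "app-password",
                          "/reports/study.pdf", "s3cret", "2020-03-01")
```

In practice the Nextcloud web interface covers this fine; the sketch just shows that the sharing routine is scriptable if it ever needs automating.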

In the coming weeks we’ll aim to get fully accustomed to our new working environment, but so far it has been pretty self-evident.

Screenshot from working with a colleague in OnlyOffice (content blurred obviously)

We are working our way through a list of things to improve our overall information hygiene, a discussion I started last spring. It involves changes at the company level (like Nextcloud and RocketChat) and changes at the individual level (helping colleagues with e.g. password management; we moved all of us onto the same password manager, which also includes the option to share passwords from a company account). It focuses on tools and technological measures, as well as on behaviour and work routines. And it looks at both laptop and mobile devices. I’ve created an ‘information hygiene ladder’ along those three dimensions, with a different level of information security at each rung that we can strive for. The upper end, the “I’m being targeted by a three-letter agency” stage, we’ll never address, I’m sure. But there is a wealth of opportunities to improve our information security before that extreme.