Through a reference by Julian Elvé I read Doc Searls’ talk, given last October and now published: Saving the Internet – and all the commons it makes possible.

Internet Open, image by Liz Henry, license CC BY ND

First, he says of the internet as a commons:
In economic terms, the Internet is a common pool resource; but non-rivalrous and non-excludable to such an extreme that to call it a pool or a resource is to insult what makes it common: that it is the simplest possible way for anyone and anything in the world to be present with anyone and anything else in the world, at costs that can round to zero.

As a commons, the Internet encircles every person, every institution, every business, every university, every government, every thing you can name. It is no less exhaustible than presence itself. By nature and design, it can’t be tragic, any more than the Universe can be tragic.

He then lists nine enclosures of that commons that are currently visible, because enclosure too is one of the affordances the internet provides.

See, the Internet is designed to support every possible use, every possible institution, and—alas—every possible restriction, which is why enclosure is possible. People, institutions and possibilities of all kinds can be trapped inside enclosures on the Internet.

  1. service provisioning, for example with asymmetric connection speeds. Asymmetry favours consumption over production. Searls singles out cable companies specifically for wanting this imbalance. I’ve been lucky from early on. Yes, until fiber to the home we had asymmetrical speeds, but I had a fixed IP address from the start and ran a web server under my desk from the mid ’90s until 2007, when I started using a hoster for this blog. I still run little experiments from my own server(s) at home. The web was intended to be both read and write, even at the level of a page you visited (in short, the web as an online collaboration tool, somewhat like Google documents). I assume most people perceive the general web as read-only, even if they participate in silos where they do post things themselves.
  2. 5G wireless service, as a way for telcos to do the same as cable companies did before, in the form of content-defined packages. I am not sure this could play out the same way in the Netherlands or the EU, where net neutrality is better rooted in law, and where, especially after the end of roaming charges in the EU, metered data plans have either become meaningless because unmetered plans are cheap enough, or are at least large enough to make e.g. zero-rating a meaningless concept. 5G could however mean that households choose to no longer have a fixed internet subscription at home and do away with their own wifi networks, I suspect, introducing a new dependency where your mobile and at-home access are one and the same thing, and a single choke point.
  3. government censorship, with China the most visible in this space, but many countries aim to block specific services at least temporarily, and many countries and groups of countries are on the path to realising their own ‘data spaces’. While understandable, as data and networks are now strategic resources, it also carries the risk of fragmentation of the internet (e.g. Russia), ostensibly motivated by safety concerns but with a big dollop of wanting control over citizens.
  4. the advertising-supported commercial Internet. This is the one most felt currently: adtech that tracks you across your web surfing habits, and not just inside the silos you inhabit.
  5. protectionism, which Searls ties to EU privacy laws, which I find a very odd remark. While the GDPR could be better, it is a quality instrument with a rising floor, designed not to protect the EU market but to encourage global compliance with its standards. It is a way of shaping instruments the EU uses more often, and one that has proven to be a successful export product. The cookie notices he mentions are a nuisance, but not the result of the GDPR; to my mind they are caused by interpreting the (currently under revision) cookie law in a deliberately cumbersome way. Even then, I don’t see how privacy regulation is protectionism, as it is rooted in human rights, not competition law.
  6. Facebook.org, or digital colonialism. These are the efforts by silos like FB to bring the ‘next billion’ online in a fully walled garden that is free of charge and presented as being the web, or worse, the internet itself. I’ve seen this in action in developing countries, and it’s unavoidable for most if not all, because it is the only way to access the power of agency the internet promises when there is no way you can afford connectivity.
  7. forgotten past, caused by the focus on the latest and newest, while the old is not only forgotten but also actively lost as it gets taken offline. I think this is where strong opportunities are arising for niche search engines, and for search engines as a personal tool. You don’t need to build the next Google, or even be a market player, to meaningfully erode the position of Google search. For instance, it is quite feasible to have my own search engine that only searches the blogs I subscribe, and have subscribed, to (I actually should build that; a minimal sketch follows after this list). At the same time there is a slow, steady and increasing effort to bring more of the old, just not the old web, online through the ongoing digitisation of physical archives and collections of artefacts. More of our past, our global cultural heritage, comes onto the web every day, and this is really still only the start.
  8. algorithmic opacity. This one is very much on the agenda across Europe currently, mainly as part of ethical discussions, and right now mostly centered on government transparency. The GDPR contains a clause giving people the right not to be subjected to purely automated decision making. At the very least, having explainable algorithms, and being transparent about their use, is a likely emerging practice. The asymmetry of such decision making also plays a useful role in this discussion. This one too is closely tied to human rights, which will help bring parties into the discussion that are not of the tech world. At issue with algorithms as we currently encounter them is that they are used over our heads, and not yet much as a personal tool, where they could increase our networked agency.
  9. the one inside our heads, where we accept the internet as it is presented to us by those invested in one or more of the above eight enclosures. Countering it requires public awareness of what the internet is, and of how it is a commons.
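
As an aside to point 7: a personal search engine over the blogs you follow needs surprisingly little machinery. Below is a minimal sketch in Python, not a finished tool: it assumes a plain text file feeds.txt with one feed URL per line (that file name, and the simple in-memory inverted index, are illustrative choices of mine), and it uses the third-party feedparser package.

```python
# A minimal personal search engine over subscribed blogs: fetch each feed,
# build an inverted index from words to posts, then query it locally.
# Requires the third-party 'feedparser' package (pip install feedparser).
import re
from collections import defaultdict

import feedparser


def build_index(feed_urls):
    """Map each lowercased word to the set of (title, link) posts containing it."""
    index = defaultdict(set)
    for url in feed_urls:
        feed = feedparser.parse(url)
        for entry in feed.entries:
            text = f"{entry.get('title', '')} {entry.get('summary', '')}"
            post = (entry.get("title", ""), entry.get("link", ""))
            for word in re.findall(r"\w+", text.lower()):
                index[word].add(post)
    return index


def search(index, query):
    """Return the posts that contain every word of the query."""
    words = query.lower().split()
    if not words:
        return []
    posts = set.intersection(*(index.get(w, set()) for w in words))
    return sorted(posts)


if __name__ == "__main__":
    with open("feeds.txt") as f:
        urls = [line.strip() for line in f if line.strip()]
    for title, link in search(build_index(urls), "commons"):
        print(title, "->", link)
```

This only searches what the feeds currently contain; a real version would persist the index and add each new post as it arrives, so the searchable archive grows over time.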

Go read the entire thing, in which Doc Searls describes what the internet is, how it connects to human experience, how it makes the hyper-local key again now that there is a global commons encompassing everyone, and how it erodes and replaces institutions of the 20th century and earlier. He talks about how the internet “means we are all authors of each other“.

At the end he asks “What might be the best way to look at the Internet and its uses most sensibly?”, and concludes “I think the answer is governance predicated on the realization that the Internet is perhaps the ultimate commons“, and “There is so much to work on: expansion of agency, sensibility around license and copyright, freedom to benefit individuals and society alike, protections that don’t foreclose opportunity, saving journalism, modernizing the academy, creating and sharing wealth without victims, de-financializing our economies… the list is very long“.

I’m happy to be working on the first three of those.

Walled garden, image by Ron Frazier, license CC BY

Earlier this week I wrote about how the European stock of IPv4 addresses has now been fully allocated. The IPv6 address space is vastly larger than IPv4’s: IPv4 addresses are 32 bits long, giving 2^32 possible addresses, while IPv6 addresses are 128 bits long, allowing 2^128.

We have an IPv6 address with our fiber to the home connection (currently 500Mbit symmetrical, which is actually a step down from the 1Gbit symmetrical we had before). I asked our provider what type of address allocation they use for IPv6: they allocate a (currently recommended) /48 block to us. A /48 IPv6 block contains 2^(128−48) = 2^80 addresses, while the total IPv4 address space is 2^32 addresses. So the address space available in our home is 2^16 (65,536) times larger than the square of the entire IPv4 address space (2^16 × 2^32 × 2^32 = 2^80). These are mind-bogglingly large numbers.
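
For those who like to see that arithmetic spelled out, a few lines of Python confirm the numbers (purely illustrative):

```python
# Sanity-check the IPv6 address arithmetic above (Python 3).
ipv4_total = 2 ** 32        # the entire IPv4 address space
block_48 = 2 ** (128 - 48)  # addresses in a single /48 allocation: 2^80

# One /48 block is 2^16 times the square of the whole IPv4 space.
assert block_48 == 2 ** 16 * ipv4_total ** 2
print(block_48 // ipv4_total ** 2)  # 65536
```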

RIPE announced yesterday that they’ve handed out the last IPv4 address ranges. RIPE is the organisation that allocates IP addresses for Europe, the Middle East and parts of Central Asia.

Time to step up the transition to IPv6: an address space that, even within the ranges currently designated for use (roughly an eighth of the total), easily allows for the allocation of some 4,000 /48 networks, each far larger than the entire IPv4 internet, to every person on the planet as an end user. IPv6 is what makes it possible to have a much wider range of objects in our environment be connected, such as ever more sensors in and around your home and in your city. It allows the level of data saturation needed to provide self-driving cars with rails of data, rather than having them depend only on their own onboard sensors.
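
As a rough check of that per-person figure, here is the back-of-the-envelope version in Python. It assumes each person gets /48-sized networks (the allocation size mentioned above) out of the 2000::/3 range, the eighth of the total space currently designated for global unicast use; the population figure is an approximation:

```python
# Back-of-the-envelope: /48 networks per person in the global unicast range.
global_unicast = 2 ** 128 // 8                   # 2000::/3 is one eighth of IPv6
networks_48 = global_unicast // 2 ** (128 - 48)  # /48 blocks in that range: 2^45
world_population = 7_700_000_000                 # approximate, late 2019

print(networks_48 // world_population)  # roughly 4,500 /48 networks per person
```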

Our home internet connection has both an IPv6 address and an IPv4 one, which allows us to reach both types of addresses (the two are different protocols, so you can’t visit one from the other unless you can use a translation service). I am unsure what size of range, if any, I’ve been allocated in the IPv6 space though. Will have to ask my provider.
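
As a small aside, you can see whether a given host is reachable over both protocols by checking which address families its DNS name resolves to. A quick sketch using only the Python standard library (the hostname is just an example):

```python
# Check which IP protocol versions a hostname resolves to (Python 3, stdlib).
import socket


def address_families(hostname):
    """Return the set of IP versions covered by the host's DNS records."""
    return {"IPv4" if family == socket.AF_INET else "IPv6"
            for family, *_ in socket.getaddrinfo(hostname, None)
            if family in (socket.AF_INET, socket.AF_INET6)}


print(address_families("www.example.com"))  # e.g. {'IPv4', 'IPv6'}
```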

Today in 1971, 48 years ago, RFC-287 was published, revising the Mail Box Protocol so that you could send messages to a mailbox at a different institution.

The potential utility for the mechanism was confirmed

Basically we’ve been struggling to get to inbox zero ever since. Of those 48 years, I’ve been using e-mail for 30, almost to the day. The RFC talks about sending messages directly to a printer, as well as to a computer for storage. In the early days I would print messages that were sent to me (also so they could be deleted from computer storage, and especially from the shared mailbox I had on a system), and kept them in a binder. When that binder was full, and I realised what it would mean going forward, I stopped printing mail. It bemuses me how regularly corporate e-mail signatures still ask me to reconsider before printing an e-mail. Over a quarter century later!

I know about this and other RFCs (Requests For Comments) because Darius Kazemi has a wonderful project this year, in which he reads one RFC per day in chronological order and writes about it. It is an early-internet archeology project slowly unfolding in my feed reader day by day, in honour of the 50th anniversary of the very first RFC, on April 7th 1969. In these RFCs the early protocols that formed the internet are discussed and born. It is fascinating how some of the names that come up are still remembered, while others aren’t. And there are paths that lead nowhere. It makes clear how much of human achievement consists of iterative, incremental steps in the dark, with people doing what seems plausible from their current standpoint.

Darius read this particular RFC on October 5th, and I wrote this posting on October 8th, setting it to publish today, November 17th, on its 48th anniversary, with the same timestamp as the original from 1971.

In reply to Tilde.club by Frank Meeuwsen

Yes, a tilde account, that’s a blast from the past. At my old study association we had the utelscin server (Universiteit Twente, ELektrotechniek SCINtilla). On that utelscin we also had ~ accounts to share stuff from active members. I also always came across them on the servers of other universities that you connected to via telnet. Next to the utelscin box (the servers sat on top of the cupboard in the association room) we also had the mail server Betty, short for ‘Betty Serveert e-mail’ (‘Betty serves e-mail’).

Very unsure what to think about Tim Berners-Lee’s latest attempt to, let’s say, re-civilize the web. A web that was lost somewhere along the way.

Now there’s a draft ‘contract for the web’, with nine principles, three each for governments, companies and citizens.

Its premise and content aren’t the issue. It reads “The web was designed to bring people together and make knowledge freely available. Everyone has a role to play to ensure the web serves humanity. By committing to this Contract, governments, companies and citizens around the world can help protect the open web as a public good and a basic right for everyone.”, and then goes on to call upon governments to see internet access as a core necessity and a human right that shouldn’t be censored, upon companies not to abuse personal data, and upon citizens to actively defend their rights, also by exercising them continuously.

There’s nothing wrong with those principles, I try to adhere to a number of them myself, and have been conveying others to my clients for years.

I do wonder however what this Contract for the Web is for, and what it is intended to achieve.

At the Contract for the Web site it says
Given this document is still in the process of negotiation, at this stage participants have not been asked to formally support or oppose the document in its current form.

Negotiation? What’s there to negotiate? Citizens will promise not to troll online if governments promise not to censor? If a company can’t use your personal data, it will no longer be an internet service provider? Who is negotiating, and on behalf of whom?
Formally support the contract? What does that mean? ‘Formal’ implies some sort of legal status?

There are of course all kinds of other initiatives that collect voluntary commitments by various stakeholders, but usually with a clear purpose. The Open Government Partnership, for instance, collects voluntary open government commitments by national governments. The countries you’d wish would actually embark on open government have, however, left the initiative or never joined; those that are active are a group of the willing (though not all of the willing), for whom OGP is a self-provided badge of good behaviour. It gives them an instrument to show their citizens they are trying, in ways that allow citizens to benchmark their government’s efforts, and it shields them against the notion that they’re not doing anything. It does not increase open government beyond what governments were willing to do anyway, but it does provide a clear process that helps build continuity, and lets members build on each other’s experience and good practices, reducing the overall effort needed to attain certain impacts.

Other initiatives of this type are more self-regulatory within a sector, with the purpose of staving off actual government regulation and the new legal liabilities that would come with it.

But what does the Contract for the Web aim for? How is it an instrument with a chance of having impact?
It says “this effort is guided by others’ past work on digital and human rights”, such as the Charter of Fundamental Rights of the EU and the EU GDPR. What does it bring beyond such heavy-lifting instruments, and how? The EU Charter is backed up by the courts, so as a citizen I have a redress mechanism. The GDPR is backed up by fines of up to 4% of a company’s global annual turnover or 20 million Euro, whichever is bigger.

How is it envisioned the Contract for the Web will attract more than those stakeholders already doing what the contract asks?
How is it envisioned it can be a practical instrument for change?

I don’t get a sense of clear purpose from the website. In the section on ‘how will this lead to change’, much is made first of voluntary commitments by governments and companies (i.e. a gathering of the willing, who would likely adhere to the principles anyway), and it then ends with “Ultimately it is about making the case for open, universal web that works for everyone“. I have difficulty seeing how a ‘contract’ is an instrument for ‘making a case’.

Why a contract? Declaration, compact, movement, convention, manifesto, agenda all come to mind, but I can’t really place Contract.

What am I missing?


Please sign at the dotted line, before you go online?
Image ‘untitled forms’ by See-ming Lee, license CC BY SA