Elmine, in her role as resident WordPress savant, pointed me to MainWP. MainWP is a tool that lets you manage updates of WordPress core and plugins from a single, separate WordPress instance.

That separate WordPress instance doesn’t need to be online and can be hosted locally. So I installed it on my laptop. That way there is no outward-facing attack surface that could put access to my 6+ sites that run WordPress at risk.

It turns out an added benefit is that I can also post to any of those sites from this local instance. The advantage is that I can draft postings offline on my laptop, and then push them to a website when done. That should help me write more, and with a lower threshold. It has a few drawbacks, as offline I don’t have access to some features I use regularly (post kinds and, more importantly, previews).

This post serves as a test, posting from my WordPress instance on localhost.

At the first This Happened in Utrecht, at the end of 2008, there was a presentation by Cultured Code, where I was impressed by their focus. Since then I have been using Things with great pleasure, even though I can’t use it on my Android.

Do take note: as far as I know it is still the case that you can’t put tasks in a sequence / make them dependent on each other (so that task 2 only surfaces in a context once task 1 is done). It assumes that all tasks can be done in parallel and have no ordering. This may be an issue if you use it for GTD. It doesn’t bother me personally.

Replied to a post by Frank Meeuwsen

I have been working in a trial of Things 3 for less than fifteen minutes and I can already feel I’m going to spend a pile of money on a to-do list app again. This is so well and intuitively put together. So far it strikes exactly the right balance between simplicity and advanced planning options. It is remarkable how you …

Based on my conversation with Boris Mann about Fission, and a visit to a Decentralised Web Meetup in Amsterdam featuring his Fission co-founder Brooklyn Zelenka, I started exploring the technology they work with and are building. The first step was getting access to, and understanding the ideas behind, IPFS.

What makes IPFS interesting

The IPFS about file (if you click that link, you’re visiting a file on IPFS from your browser) says a variety of things, but a few elements are key imo.

First, it is a peer-to-peer system, much like many we’ve seen before. When you download a file to your system it comes in bits and pieces from multiple other computers, somewhere in the network, that have that file available. Whatever route is easiest to get that file to you is the route taken.

Second there is a key difference in how file addresses work in IPFS, compared to the web or on your local drive. We are used to files having names, and addresses being a representation of the location of that file. The URL for this blog points to a specific server, where in a specific folder, a specific filename resides. That file returns the content. Similarly the address for a file on my drive is based on the folder structure and the name of the file. IPFS addresses files based on their content, and does so with a hash (a cryptographic representation of the content of a file).
Naming things based on a hash of their contents means that if the content of a file changes, the name changes too. For every file the content will match what it says on the tin, and versioning is built in.
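To make that concrete, here is a minimal sketch of content addressing using a plain SHA-256 hex digest. (IPFS itself uses multihashes over chunked file DAGs, so real IPFS addresses look different; this only illustrates the principle.)

```python
import hashlib

def content_address(data: bytes) -> str:
    """Address a blob by a hash of its content, not by a name or location."""
    return hashlib.sha256(data).hexdigest()

original = b"a picture of a cat"
tampered = b"definitely not a picture of a cat"

addr_original = content_address(original)
addr_tampered = content_address(tampered)

# Change the content and the address changes with it,
# so an address always refers to exactly one version of the content.
print(addr_original == addr_tampered)              # False: different content, different address
print(content_address(original) == addr_original)  # True: same content always yields the same address
```

Anyone holding the address can re-hash whatever they receive and verify it is exactly the content the address was derived from.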

Combine that with a peer-to-peer system, and you have a way of addressing things globally without being tied to location. You also have a way to ensure that whatever you find in a given file is exactly what was originally in the file. https://mydomain.com/catpicture.html may have had a cat picture at the start that later got replaced by malware, but you wouldn’t know. With earlier p2p systems for exchanging files, like Napster or BitTorrent, you always had to be careful about what it was you actually downloaded, because the content might be very different from what the name suggested. With IPFS those issues are done away with.

Current (location-based) addressing on the web is centralised (through domain registration and DNS), and decoupling addresses from locations, like IPFS does, allows decentralisation. This decentralisation is important to me, as it helps build agency and makes that agency resilient, because decentralisation is much closer to local-first principles.

Getting IPFS set up on my laptop

Boris was helpful in pointing the way for me on how to set up IPFS (and Fission). There is an IPFS desktop client, which makes it very easy. I installed that, and you then have a basic browser that shows which of your own files you are sharing, and which ones you are re-sharing. When you are looking at a file it also shows where it comes from.

I uploaded a PDF as a hello world message. In the screenshot above, the Qm… series of characters you see underneath the local file name helloworld.pdf is the hash that is used to identify the file across the IPFS network. If you ‘pin’ a file (or folder), you prevent it from being deleted from your cache, and it stays available to the wider network with that Qm… string as address. This also points to a drawback of hashed-content addressing: the addresses are not human-readable. But they’re usually intended for machines anyway (and otherwise, there’s maybe a use case for QR codes here).
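As an aside, the ‘Qm’ prefix is not random. These addresses are a so-called multihash (a tag byte for the hash algorithm, a length byte, then the digest) encoded in base58. A sketch of that encoding, below, shows where ‘Qm’ comes from. Note this is not how to compute the real IPFS address of a file: IPFS hashes the file’s internal DAG representation, not the raw bytes, so the output here will not match what `ipfs add` produces.

```python
import hashlib

# base58btc alphabet, as used for these Qm… strings (no 0, O, I, or l).
BASE58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58_encode(data: bytes) -> str:
    n = int.from_bytes(data, "big")
    out = ""
    while n > 0:
        n, rem = divmod(n, 58)
        out = BASE58_ALPHABET[rem] + out
    # Each leading zero byte is represented by the first alphabet character.
    return "1" * (len(data) - len(data.lstrip(b"\x00"))) + out

def multihash_address(data: bytes) -> str:
    # 0x12 says "sha2-256", 0x20 says "32-byte digest", then the digest itself.
    multihash = bytes([0x12, 0x20]) + hashlib.sha256(data).digest()
    return base58_encode(multihash)

addr = multihash_address(b"hello world")
print(addr)  # a 46-character string starting with "Qm"
```

Because every sha2-256 multihash starts with the same two bytes (0x12, 0x20), every such address base58-encodes to a 46-character string beginning with ‘Qm’.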

With IPFS set-up, I started playing with Fission. Fission builds on IPFS, to allow you to deploy apps or websites directly from your laptop. (“build and go live while on a plane without wifi”). It’s meant as tooling for developers, in other words not me, but I was curious to better understand what it does. More in a next post.

Through a posting of Roel’s I came across Rick Klau again, someone who, like me, was blogging about knowledge management in the early ’00s. These days his writing is on Medium, it seems.

Browsing through his latest posts, I came across this one about homebrew contact management.

Contact management is one area where until now I have mostly stayed away from automating anything.
First and foremost because of the, by definition, poor quality of the initial data you set it up with (I still have 11-year-old contact info on my phone because it is hard to delete, and it then gets put back due to some odd feedback loop in syncing).
Second, because of the risk of instrumentalising the relationships with others, instead of interacting for their own sake.
Third, because most systems I encountered depend on letting all your mail etc. flow through them, which is a type of centralisation / single point of failure I want to avoid.

There’s much in Rick’s post to like (even though I doubt I’d want to shell out $1k/yr to do the same), and there are things in there I definitely think are useful. He’s right when he says that being able to have a better overview of your network in terms of gender, location, diversity, background etc. is valuable. Not just in terms of contacts, but also in terms of information filtering, when you follow your contacts on several platforms.

Bookmarked to come up with an experiment. Timely also because I just decided to create a simple tool for my company as well, to start mapping the stakeholders we encounter. In Copenhagen last September I noticed someone using a 4-question page on her phone to quickly capture that she had met me, the context, and my organisation. When I asked, she said it was to have an overview of the types of organisations and roles of the people she encountered in her work, building a map, as it were, of the ecosystem. Definitely something I see the use of.

Handshakes and conversations is what I’m interested in, not marketing instruments. Image: Handshake by Elisha Project, license CC BY-SA

Earlier this week I wrote about how the European IPv4 addresses have now all been allocated. The IPv6 address space is vastly larger than IPv4’s. IPv4 has 2^32 possible addresses, as they are 32 bits long. IPv6 addresses are 128 bits long, allowing 2^128 addresses.

We have an IPv6 address with our fiber-to-the-home connection (currently 500Mbit symmetrical, which is actually a step down from the 1Gbit symmetrical we had before). I asked our provider what type of address allocation they use for IPv6. They allocate a (currently recommended) /48 block to us. A /48 IPv6 block contains 2^(128−48) = 2^80 addresses. The total IPv4 address space is 2^32 addresses. So we actually have an available address space at home that is 2^16 (65,536) times larger than the square of the total IPv4 address space (2^16 × 2^32 × 2^32 = 2^80). These are mind-bogglingly large numbers.
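The arithmetic is easy to double-check with integers:

```python
# Address-space arithmetic for a /48 IPv6 allocation.
ipv6_bits = 128
ipv4_total = 2 ** 32                            # the entire IPv4 address space
block_size = 2 ** (ipv6_bits - 48)              # addresses in one /48 block: 2^80

# The /48 block is 2^16 times the *square* of the whole IPv4 space.
assert block_size == 2 ** 16 * ipv4_total * ipv4_total

print(2 ** 16)      # 65536
print(block_size)   # 1208925819614629174706176 addresses, in one home allocation
```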

RIPE announced yesterday that they’ve handed out the last IPv4 address ranges. RIPE is the organisation that allocates IP addresses for Europe, the Middle East and Central Asia.

Time to step up the transition to IPv6, an address space that, even at the currently available ranges (about 20% of the total), easily allows for the allocation of some 4,000 /48 network blocks to every person on the planet as end users. IPv6 is what makes it possible to have a much wider range of objects in our environment be connected, such as ever more sensors in and around your home and in your city. It allows the level of data saturation that is needed to provide self-driving cars with rails of data, and not have them depend on just their own onboard sensors.
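A rough back-of-the-envelope version of that per-person figure, under my own assumptions: that the usable range is the currently delegated global unicast block 2000::/3 (one eighth of the space), that end users get /48 allocations, and a world population of about 8 billion.

```python
# How many /48 networks could every person on the planet get?
total_48_blocks = 2 ** 48                 # number of /48 prefixes in the full IPv6 space
available_blocks = total_48_blocks // 8   # assumption: only 2000::/3 (1/8th) is in use
world_population = 8_000_000_000          # assumption: ~8 billion people

per_person = available_blocks // world_population
print(per_person)  # on the order of 4,000 /48 networks per person
```

Each of those /48 networks is itself vastly larger than the entire IPv4 internet, which is why the absolute numbers stop being intuitive.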

Our home internet connection has both an IPv6 address and an IPv4 one, which allows us to reach both types of addresses (you can’t otherwise visit one from the other, as they’re different protocols, unless you use a translation service). I am unsure what type of range, if any, I’m allocated in the IPv6 space though. I will have to ask my provider.