RIPE announced yesterday that they’ve handed out the last IPv4 address ranges. RIPE is the organisation that allocates IP addresses for Europe, the Middle East and parts of Central Asia.

Time to step up the transition to IPv6, an address space that even at the currently available ranges (about 20% of the total) easily allows for the allocation of some 4000 IP addresses to every person on the planet as end-users. IPv6 is what makes it possible to have a much wider range of objects in our environment be connected. Such as ever more sensors in and around your home and in your city. It allows the level of data saturation needed to provide self-driving cars with rails of data, rather than have them depend on just their own onboard sensors.
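As a back-of-the-envelope check of that figure, here is a quick sketch. The assumptions are mine, not RIPE’s: each end-user is handed a /48 prefix (a common end-site allocation size) and roughly 20% of the 128-bit space is usable.

```python
# Rough sanity check of the "thousands per person" claim.
# Assumptions (mine): each end-user gets a /48 prefix, and about
# 20% of the 2^128 IPv6 address space is available for such use.
total_space = 2 ** 128
available = total_space // 5             # ~20% of all IPv6 addresses
slash48_prefixes = available // 2 ** 80  # a /48 leaves 80 bits per prefix
world_population = 8_000_000_000
per_person = slash48_prefixes // world_population
print(per_person)  # on the order of several thousand /48s per person
```

Depending on the exact share of the space and the population figure you plug in, this lands in the low thousands, which is the ballpark of the claim above.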

Our home internet connection has both an IPv6 address and an IPv4 one, which allows us to reach both types of addresses (you can’t otherwise visit one from the other as they’re different protocols, unless you can use a translation service). I am unsure what type of range, if any, I’ve been allocated in IPv6 though. Will have to ask my provider.

With my company we have now fully moved from Slack to Rocket.Chat. We’re hosting our own Rocket.Chat instance on a server in an Amsterdam data center.

We had been using Slack since 2016, both for ourselves and with some network partners we work with. We never invited (government) clients in, because we couldn’t guarantee the location of the data shared. At some point we passed the free tier’s limits, meaning we’d have to upgrade to a paid plan to regain access to our full history of messages.

Rocket.Chat is an open source alternative that is offered as a service, but can also be self-hosted. We opted for a Rocket.Chat-specific package with OwnCube. It’s an Austrian company, but our Rocket.Chat instance is hosted in the Netherlands.

Slack offers a well-functioning export of all your data, and Rocket.Chat can easily import Slack archives, including user accounts, channels and everything else.

With the move complete, we now have full control over our own data and access to our entire history. The cost of hosting (11.50 / month) is less than what Slack would charge for just 2 users when paid annually (12.50 / month), while we have 14 users. That works out to a cost saving of over 85%. Adding users, such as clients during a project, no longer means higher costs either, and it will remain a better deal than Slack as long as there’s more than one person in the company.
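The arithmetic behind that percentage, as a quick sketch. The per-user price of 6.25/month is inferred from the 12.50/month for 2 users mentioned above.

```python
# Quick check of the cost comparison above. The 6.25 per user per
# month is inferred from the post's 12.50/month for 2 users figure.
users = 14
slack_monthly = users * 6.25   # 87.50/month for 14 users on Slack
rocketchat_monthly = 11.50     # flat fee for the self-hosted instance
saving = 1 - rocketchat_monthly / slack_monthly
print(f"{saving:.0%}")  # prints 87%, i.e. over 85% saved
```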

We did keep the name ‘slack’ as the subdomain on which our self-hosted instance resides, to ease the transition somewhat. All of us switched to the Rocket.Chat desktop and mobile apps (Elmine from Storymines helped with navigating the installs and activating the accounts for those who wanted some assistance).

Visually, and in terms of user experience, it’s much the same as Slack. The only exception is the creation of bots, which requires some server-side wrangling I haven’t looked into yet.

The move to Rocket.Chat is part of a path to more company-wide information hygiene (e.g. we now make sure all of us use decent password managers, with the data hosted on EU servers, and the next step is running our own cloud, e.g. for collaborative editing with clients and partners) and more information security.

It’s rather pleasing to see that ‘link rot’ and dead connections have been part of the internet since the very start, as shown in RFC-315, reporting on a week’s worth of server status reports in February 1972, as read by Darius Kazemi. Of course the underlying notion of the internet as a distributed system is that if a node fails, the rest can continue to work. This was fundamental to ARPANET. What’s pleasing to me is the fact that such robustness was needed from the start, that over half of servers could be ‘dead’ and the network still be seen to exist. That intermittence was not just a theoretical requirement but an everyday aspect in practice. Intermittence has been on my mind a lot in the past few weeks.

BredaPhoto

I like the notion of cards that @visakanv describes, and of threading them into a bigger whole.

What would be ideal, I think, is if all information could be represented as “cards”, and all cards could be easily threaded. Every book, every blogpost, every video, even songs, etc – all could be represented as “threaded cards”. Some cards more valuable than others.

In a way, a lot of what I’ve been trying to do with my personal knowledge management, notetaking, etc is to assemble an interesting, coherent, useful thread of threads of threads, of everything I care about. A personal web of data, with interesting trails and paths I can share with others.

I have a huge, sprawling junkyard mess of Workflowy notes, Evernote cards, Google keep cards, Notes, blogposts, etc etc ad infinitum. Buried in there are entire books worth of interesting + useful information. But it suffers from bad or non-existent threading, constrained by memory.

I too have a mountain’s worth of snippets, pieces and half sentences, and a much lower stack of postings and extended notes. Interesting stuff doesn’t get shared, because I envision a more extensive, more ‘complete’ write-up that then more often than not never happens. The appeal to PKM above is key here for me. I agree with Neil, who pointed me to the posting above, that the world isn’t just cards; fragmentation isn’t everything, because synthesis and curation are important. However, keeping that synthesis in a fully different channel than the ‘cards’ from which it is built, or rather not having the cards in the same place so that both don’t exist in the same web of meaning, seems less logical. It’s also a source of hesitance, a threshold to posting.

BredaPhoto

Synthesis and curation presume smaller pieces, like cards. Everything starts out as miscellaneous, until patterns stand out, as small pieces get loosely joined.
I don’t know why @visakanv talks of threading only in the context of Twitter; it’s almost like he’s reinventing tags (tags are a key organising instrument for me). To me threading sounds a bit like a trail of breadcrumbs, showing from which elements something was created. Or like cooking, where the cards are the list of ingredients, resulting in a dish, and dishes resulting in a dinner or a buffet.

Posting more ‘cards’ and snippets strikes me as a useful approach for this space (both the blog part and the wiki part), and a way to bring more from other channels and tools in here.

BredaPhoto

(I took the photos during Breda Photo Festival, of Antony Cairns’ IBM CTY1 project, which consists of photos printed on IBM punch cards and held together with pins.)

This is a somewhat worrying development: the entire .org registry of domain names has been sold to a private equity investor. That basically spells out just one way forward: extraction and rent-seeking. As this step immediately follows ICANN lifting the price-increase caps that were in place, earlier this year (against the advice of US competition authorities, it appears), and the buyer is a newly established entity, it seems to have been created for just that purpose.

“Price hikes in 3, 2, 1, ….” seems to be the consensus.

As this site’s domain is part of the .org TLD (when I registered it in the spring of 2003, .org was the one non-country TLD on which ‘zylstra’ was available), I briefly looked into my options to defend against price gouging. My domain name renews on May 3rd 2020, in just under 6 months’ time. I should be able to renew the domain 60 days before it expires, so by early March, in 4 months. Then I will be able to renew for a 5-year period at once. Which, if it precedes a price hike, means I buy myself a few years extra before needing to make a decision.

On a more fundamental level, I am surprised that maintaining a TLD registry is left entirely to market forces by ICANN like this. The Dutch national TLD registry, for instance, is maintained by a non-profit foundation. Whoever runs a TLD registry has a monopoly by default, and the cost of leaving for existing domain holders is very substantial. Combining that monopoly and lock-in with private investors, whose first commandment is not maintaining the general public service that a domain registry (not: registrar) constitutes, is worrisome.

E.g. this site has been on this domain name for 16.5 years. This domain name is for all intents and purposes my online unique identifier, and I definitely use it as such. Now, for me personally, moving the entire thing isn’t extremely bothersome in itself. It would sadly cause a major chunk of link rot, but moving to e.g. zylstra.eu, which I also have, could be done without much consequence for myself should the cost of zylstra.org rise uncomfortably. It would however also mean moving my de facto online identity, which is likely to cause confusion in my networks.

That identity confusion and brand damage will be of an entirely different level if you’re a very established NGO, brand or non-profit on a .org domain. E.g. the World Bank, WordPress or Wikipedia (which coincidentally spells WWW), all hosted on .org. For them leaving is much harder, and they’ll likely go along with whatever pricing model gets introduced. If only because, after a move, someone else will most likely pick up the old high-recognition domain for spoofing and phishing, you’ll stay put whatever the cost.

It smells like something that should be of interest to competition authorities everywhere.

Bookmarked Breaking: Private Equity company acquires .Org registry (Domain Name Wire | Domain Name News)

Ethos Capital, led by former ABRY Partners Managing Partner, buys .Org registry. I thought this might happen. And now it has. Fresh off ICANN’s blunder letting Public Interest Registry set whatever price it wants for .org domain names, Internet Society (ISOC) has sold the .org registry Public Interest Registry (PIR) to private equity company Ethos …

It seems sharing playlists is no longer an innocent behaviour, nor is playing YouTube with the sound on in the presence of automated speech recognition like Google’s, Amazon’s and Apple’s cloud-connected microphones in your living room: “CommanderSongs can be spread through Internet (e.g., YouTube) and radio.”

The easiest mitigation of course is not having such microphones in your home in the first place. Another is running your own ASR, with the 95% of standard commands handled locally on the device. Edge-first, like Candle proposes, not cloud-first.

Bookmarked CommanderSong: A Systematic Approach for Practical Adversarial Voice Recognition (arXiv.org)

The popularity of ASR (automatic speech recognition) systems, like Google Voice, Cortana, brings in security concerns, as demonstrated by recent attacks. The impacts of such threats, however, are less clear, since they are either less stealthy (producing noise-like voice commands) or requiring the physical presence of an attack device (using ultrasound). In this paper, we demonstrate that not only are more practical and surreptitious attacks feasible but they can even be automatically constructed. Specifically, we find that the voice commands can be stealthily embedded into songs, which, when played, can effectively control the target system through ASR without being noticed. For this purpose, we developed novel techniques that address a key technical challenge: integrating the commands into a song in a way that can be effectively recognized by ASR through the air, in the presence of background noise, while not being detected by a human listener. Our research shows that this can be done automatically against real world ASR applications. We also demonstrate that such CommanderSongs can be spread through Internet (e.g., YouTube) and radio, potentially affecting millions of ASR users. We further present a new mitigation technique that controls this threat.