A team of people, including Jeremy Keith, whose writings are part of my daily RSS infodiet, has been doing some awesome web archeology. Over the course of five days at CERN, they recreated the browser experience as it was 30 years ago with the (fully text based) WorldWideWeb application for the NeXT computer.

Hypertext’s root, the CERN page in 1989

This is the kind of page I visited before inline images were possible.
The cool bit is that it allows you to see your own site as it would have looked 30 years ago. (Go to Document, then Open from full document reference, and fill in your URL.) My site looks pretty good, which is not surprising as it is very text-centred anyway.

Hypertexting this blog like it’s 1989

Maybe somewhat less obvious, but of key importance to me in the context of my own information strategies and workflows, as well as in the dynamics of the current IndieWeb efforts, is that this is not just a way to view a site: you can also edit the page directly in the same window. (See the sentence in all capitals in the image below.)

Read and write, the original premise of the WWW

Hypertext wasn’t meant as viewing-only, but as an interactive way of linking together documents you were actively working on. Current wikis come closest. But I also use Tinderbox, for instance, a hypertext mind-mapping, outlining and writing tool for Mac, which incorporates this principle of linked documents and other elements that can be changed as you go along. This seamless flow between reading and writing is something I feel we need very much for effective information strategies. It is present in the Mother of all Demos, it is present in the current thinking of Aaron Parecki about his Social Reader, and it is a key element in this 30-year-old browser.

Will you help us organise it? We are going to organise an IndieWebCamp in Utrecht, an event to promote the use of the Open Web and to work together on practical improvements to our own sites. We are still looking for a suitable date and venue in Utrecht, so your help is very welcome.

On the Open Web you decide for yourself what you publish, what it looks like, and with whom you have conversations. On the Open Web you decide for yourself who and what you follow and read. The Open Web was always there, but over time we have all more or less become locked into the silos of Facebook, Twitter, and all the others. Their algorithms and timelines now determine what you read. It can be done differently. Build your own site, where others can't barge in because they want to generate advertising revenue. Keep up your own news sources, without someone else's algorithm locking you into a bubble. That is the IndieWeb: your content, your relationships, with you in the driver's seat.

Frank Meeuwsen and I have long been part of the internet and that Open Web, but we also spend, or spent, a lot of time in web silos like Facebook. By now we are both active 'returnees' to the Open Web. Last November we were together at IndieWebCamp Nürnberg, where some twenty people discussed with each other and actively worked on their own websites. Some programmed advanced things, but most, like myself, did small things (such as removing a link to the author of postings on this site). Small things are often hard enough already. On the train ride back to the Netherlands we quickly agreed: there should be an IndieWebCamp in the Netherlands too. In Utrecht then, this spring.

To quote Frank:

Do the ideas of the open web and the IndieWeb appeal to you? Do you want to get to work on a site of your own that stands more free from the influence of social silos and data tracking? Do you want a news supply that is no longer primarily fed by algorithms and polarising loudmouths? Then we welcome you to two days of IndieWebCamp Utrecht.

Let us know if you want to be there.
Let us know if you can help find a venue.
Let us know how we can help you with your steps onto the Open Web.

You are invited!

Donald Clark writes about the use of voice tech for learning. I find I struggle enormously with voice. While I recognise several aspects put forward in that posting as likely useful in learning settings (auto transcription, text to speech, oral traditions), there are others that remain barriers to adoption for me.

For taking in information as voice. Podcasts are mentioned as a useful tool, but they don't work for me at all. I get distracted after about 30 seconds. The voices drone on, and there's often tons of fluff as the speaker is trying to get to the point (often a lack of preparation, I suppose). I don't have the moments in my day that I know others use to listen to podcasts: walking the dog, sitting in traffic, going for a run. Reading a transcript is very much faster, also because you get to skip the bits that don't interest you, or reread sections that do. You can't do that when listening, because you don't know when an uninteresting segment will end, or when it might segue into something of interest. And then you've listened to the end and can't get those lost minutes back. (Videos have the same issue, or rather, I have the same issue with videos.)

For using voice to ask or control things. There are obvious privacy issues with voice assistants, having always-active microphones around for one. Even if they are supposed to only fully activate upon the wake-up word, they get triggered by false positives. And they don't distinguish between me and other people they maybe shouldn't respond to. A while ago I asked around in my network how people use their Google and Amazon microphones, and the consensus was that most settle on a small range of specific uses. For those uses there should be no need for cloud processing of what those microphones record in your living room; they should be able to be handled locally, with only novel questions or instructions being processed in the cloud. (Of course that's not the business model of these listening devices.)
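A minimal sketch of what that local-first handling could look like, purely as an illustration of the idea (the command set, function names and routing are all invented here, and none of this reflects how the actual assistants are built):

```python
# Hypothetical sketch: handle a small set of routine voice commands locally,
# and only fall back to cloud processing for anything unrecognised.

LOCAL_INTENTS = {
    "turn on the lights": "lights_on",
    "turn off the lights": "lights_off",
    "what time is it": "tell_time",
    "set a timer for ten minutes": "timer_10m",
}

def send_to_cloud(transcript: str) -> str:
    # Placeholder for the (privacy-sensitive) cloud round-trip.
    return f"sent to cloud for interpretation: {transcript!r}"

def handle_command(transcript: str) -> str:
    """Route a transcribed voice command: local first, cloud only as fallback."""
    intent = LOCAL_INTENTS.get(transcript.strip().lower())
    if intent is not None:
        # Routine request: resolve on the device, nothing leaves the living room.
        return f"handled locally: {intent}"
    # Novel request: only now would audio or text be sent to a cloud service.
    return send_to_cloud(transcript)

if __name__ == "__main__":
    print(handle_command("Turn on the lights"))
    print(handle_command("What's the weather in Utrecht tomorrow?"))
```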

A very different factor in using voice to control things, or for instance to dictate, is self-consciousness. Switching on a microphone in a meeting usually has a silencing effect. For dictation, I won't dictate text to software at a client's office, for example, or in public (like on a train). Nor will I talk to my headset while walking down the street. I might do it at home, but only if I know I'm not distracting others around me. In the cases where I did use dictation software (which nowadays works remarkably well), I found it clashes with my thinking and formulation. Ultimately it's easier for me to shape sentences on paper or on screen, where I see them take shape in front of me. When dictating it easily descends into meaninglessness, and it's impossible to structure. Stream-of-thought dictation is the only bit that works somewhat, but that needs a lot of cleaning up afterwards. Judging by all the podcasts I have sampled over the years, this happens to more people when confronted with a microphone (see the paragraph above). Maybe it's different for something more prepared, like a lecture or presentation, but those types of speech have usually been prepared in writing, so there is likely a written source for them already. In any case, dictation never saved me any time. It is of course very different if you don't have the use of your hands. Then dictation is your door to the world.

It makes me wonder how voice services are helping you. How do they save you time or effort? In which cases are they more novelty than effective?

Alan Levine recently posted a description of how to add to your blog an overview of postings from previous years published on the same date as today. He turned it into a small WordPress plugin, allowing you to add such an overview using a shortcode wherever in your site you want it. It was something I had on my list of potential small hacks, so it was a nice coincidence that my feedreader presented me with Alan's posting on this. It has become 'small hack' 4.

I added his WP plugin, but it didn't work like the examples he provided: the overview was missing the years. It turns out a conditional in the loop that should compare against each posting's year was only ever given the current year, so the condition was never fulfilled. A simple change in how the year of an older posting is fetched fixed it, and that fix has now been added to the plugin.
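The plugin itself is PHP; purely to illustrate the intended logic, and where the bug sat, here is a rough sketch in Python, with the post data and the function name invented for the example:

```python
from datetime import date

# Illustrative only: each post is a (published_date, title) pair.
posts = [
    (date(2009, 2, 20), "The real time web"),
    (date(2008, 2, 20), "Storm kept me from home"),
    (date(2019, 1, 5), "Unrelated post"),
]

def posted_today(posts, today=None):
    """Group posts published on today's month/day by their own year."""
    today = today or date.today()
    by_year = {}
    for published, title in posts:
        if (published.month, published.day) == (today.month, today.day):
            # The bug was the equivalent of using today.year here instead of
            # published.year, so no older post ever matched its own year heading.
            by_year.setdefault(published.year, []).append(title)
    return dict(sorted(by_year.items(), reverse=True))

print(posted_today(posts, today=date(2019, 2, 20)))
# {2009: ['The real time web'], 2008: ['Storm kept me from home']}
```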

In the right-hand sidebar you now find a widget listing postings from earlier years, and you can see the same on the page ‘On This Blog Today In’. I am probably my own most frequent reader of the archives, and having older postings presented to me like this adds some serendipity.

From today's historic postings, the one about the real-time web is still relevant to me in how I would like a social feed reader to function. And the one about a storm that kept me away from home I still remember (ah, when Jaiku was still a thing!).

Adding these old postings is as simple as adding the shortcode ‘postedtoday’:

No posts were found published on February 20
Replied to Noise cancelling for cars is a no-brainer
We’re all familiar with noise cancelling headphones. I’ve got some that I use for transatlantic trips, and they’re great for minimising any repeating background noise...It doesn’t surprise me, therefore, to find that BOSE, best known for its headphones, are offering car manufacturers something similar

Noise cancelling in cars isn't a no-brainer, I think. When I first got my noise cancelling headphones and had put them to good use on trains and airplanes, I tried to use them while driving in my car as well. I took them off again rather quickly, once I noticed that I actually use the car's noises as feedback, e.g. for shifting gears, judging road conditions, and other things. With noise cancelling active I felt that part of my sensorium was cut off. It would take replacing those observations by ear with ones by other senses, actively rewiring entrained behaviour. For passengers it's likely a different thing.

A while ago Peter wrote about energy security, and how having a less reliable grid may actually improve it.

This is the difference between tightly coupled and loosely coupled systems. Loosely coupled systems can show more robustness, because failing parts will not break the whole. That also allows for more resilience: you can locally fix the things that fell apart.

It may clash, however, with our current expectation of having electricity 24/7. Because of that expectation we don't spend much time on being clever about our timing and usage of energy. A long time ago I provided training to a group of some 20 Iraqi water provision managers, as part of the rebuilding efforts after the US invasion of Iraq. They had all kinds of issues obviously, and often issues arising in parallel. What I remember, connected to Peter's post, is how they described the way Iraqi citizens had adapted to the intermittent availability of electricity and water. How they made things work, at some level, by incorporating that intermittent availability into their routines. When there was no electricity they used water for cooling, and vice versa, for instance. A few years ago at a Border Sessions conference in The Hague, one speaker talked about resilience and intermittent energy sources too. He mentioned that historically Dutch millers had dispensation from attending church on Sundays if it was windy enough to mill.

The past few days a discussion has been taking place in Dutch newspapers about local solar energy plans that can't be implemented because the grid operators can't deal with the input. Now this isn't necessarily true; it is more the framing that comes with the current always-on macro-grid. Tellingly, any mention of micro-grids or local storage is absent from that framing.

In a different discussion with Peter Rukavina and with Peter Bihr, it was mentioned that resilience is, and needs to be, rising on the list of design principles. It’s also the reason why resilience is one of three elements of agency in my networked agency thinking.

Line 'Em Up
Power lines in Canada, photo Ian Muttoo, license CC BY SA

Today I made my first Open Street Map edit. Open Street Map is a global map created by its users (and incorporating lots of open government geographic data). My first edit was triggered by Peter Rukavina's call to action. He wrote how he wants to add or correct Open Street Map data for a location whenever he mentions that location or business in his blogposts, and he calls upon others to do the same.

I don't think I mention locations such as restaurants often, or even at all, in my blog, so it's an easy enough promise for me to make. However, I did read and copy the steps Peter describes. First, installing Alfred on my laptop. Alfred is basically a workflow assistant. I know Peter uses it a lot; I had looked at it before, and until now concluded that the Mac's standard Spotlight interface and Hazel work well enough for me. But the use case he describes, quickly searching a map through Alfred, made sense to me: it's a good way to make Open Street Map my default map search, forgoing Google Maps. So I installed Alfred and made a custom search that uses Open Street Map (OSM).

The next step was seeing if there was something small I could do in OSM. Taking a look at the map around our house, I checked the description of the nearest restaurant and realised most metadata (such as opening hours, cuisine, etc.) was missing. I registered my account on OSM and proceeded to add the info. As Peter mentions, such edits immediately get passed on to applications making use of OSM. One of those applications is a map layer showing restaurants that are currently open, and my added opening hours showed up there immediately.
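For a sense of what that metadata looks like: OSM stores it as plain key/value tags on the node. The keys below are standard OSM tagging keys, but the values are made up for illustration and are not my actual edit.

```python
# Illustrative only: OSM describes a node with simple key/value tags.
# Keys like amenity, cuisine and opening_hours are standard OSM tag keys;
# the values here are invented, not the real restaurant's data.
restaurant_tags = {
    "amenity": "restaurant",
    "name": "Example Restaurant",
    "cuisine": "italian",
    "opening_hours": "Tu-Su 17:00-22:00",   # OSM opening_hours syntax
    "website": "https://example.com",
}

for key, value in restaurant_tags.items():
    print(f"{key}={value}")
```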

My first edit also resulted in being contacted by an OSM community member, as they usually review the early edits any new user makes. It seems I inadvertently did something wrong regarding the address (OSM in the Netherlands makes use of the government data on addresses, the BAG, and I entered an address by hand. As it came from a pick-up list I assumed it was sourced from the BAG, but apparently not). So that's something to correct, once I find out how to do that.

[UPDATE: The fix was simple to do. The issue was that in the Netherlands the convention is to add metadata about a business to its corresponding address node (not to a separate node, unless there are multiple businesses at the same address). So the restaurant node I amended should not have been there. I copied all the attributes (tags) over to the address node, and then deleted the original node I had edited. The information about the restaurant is now available from the address node itself. If you follow the link to the earlier node, you will now see it says that I deleted it.

I think it's also great that within minutes of my original edit I had a message from a long-time community member, Eggie. He welcomed me and pointed me to some resources on good practice and conventions, before providing some constructive criticism and nudging me in the right direction. Not by fixing what I did wrong, but by explaining why something needed improvement, linking to where I could find out how to fix it myself, and saying that if I had any questions I could message him. After my correction I messaged him to check whether everything was up to standard now, which he acknowledged, ending with 'happy mapping'. This is the kind of welcome and guidance that healthy communities provide. My Wikipedia experiences have been different, I must say.
/UPDATE]

Darius Kazemi, Mozilla fellow, artist, and coder promoting the distributed web, has launched the 365 RFCs project. For each day of 2019 he will post and discuss an RFC, a Request for Comments of the Network Working Group, starting with the very first RFC, published in April 1969, 50 years ago, and continuing to RFC 365 (published in July 1972).

Darius writes “In honor of [the 50th] anniversary [of RFC1], I figured I would read one RFC each day of 2019, starting with RFC 1 and ending with RFC 365. I’ll offer brief commentary on each RFC. I’m interested in computer history and how organizations communicate so I think this should prove pretty interesting even though RFCs themselves can be legendarily dry reading (the occasional engineering humor RFC notwithstanding).”

I think it's good to bring the early internet (or ARPANET) history to attention, because I think having a basic understanding of how the internet works is a civic requirement for the 21st century. So add Kazemi's project to your RSS reader (here's the feed), and follow three years of internet history in 2019. (Found via Frank Meeuwsen and Jeremy Keith.)

Replied to Sticking With WP 4.x For Now by Ton Zijlstra (Interdependent Thoughts)
With WordPress 5.0 Gutenberg now launched, I think I will wait until the dust settles a little bit. Most of the few plugins I use haven’t been updated to WP5 yet, and some of its authors write how Gutenberg breaks them. For now I’ll stick with WP4...

Well, that ‘for now’ was rather short-lived. Thinking I was updating a plugin, I accidentally pushed the button to update WP itself. So now I'm at WP5 regardless of my original intentions. I quickly installed the Classic Editor. My site still works, it seems, but it may be that some of the plugins now don't. Hopefully I'll be able to sort things out.

Replied to Some quick quotes on #edu106 and the power of #IndieWeb #creativity #edtechchat #mb by Greg McVerry
....fun to figure out everything I wanted to do with my website,....gained a sense of voice...,...I’m so tired of all the endless perfection I see on social media......my relationship with technology changed....

After my initial posting on this yesterday, Greg shares a few more quotes from his students. It reminds me of the things both teachers and students said at the end of my 2008 project at Rotterdam University of Applied Sciences. There, a group of teachers explored how to use digital technology, blogs and the myriad of social web tools to both support their own learning and change their teaching. The sentiments expressed are similar, if you look at the quotes in the last two sections (change yourself, change your students) of my 2009 posting about it. What jumps out most for me is the sense of agency, the power that comes from discovering that agency.

Next week it will be 50 years since Doug Engelbart (1925-2013) and his team demonstrated all that has come to define interactive computing. Five decades on, we still haven't turned everything in that live demo into routine daily things: the mouse, video conferencing, word processing, outlining, drag and drop, digital mind mapping, real-time collaborative editing from multiple locations. In 1968 it was all already there, and in 2018 we are still catching up with several aspects of that live-demonstrated vision. Doug Engelbart and his team ushered in the interactive computing era to “augment human intellect”, and on the 50th anniversary of The Demo a symposium will ask what augmenting the human intellect can look like in the 21st century.


A screenshot of Doug Engelbart during the 1968 demo

The 1968 demo was later named ‘the Mother of all Demos’. I first saw it in its entirety at the 2005 Reboot conference in Copenhagen, where Doug Engelbart had a video conversation with us after the demo. To me it was a great example, not merely of prototyping new tech, but most of all of proposing a coherent and expansive vision of how different technological components and human networked interaction and routines can together be used to create new agency and new possibilities. To ‘augment human intellect’ indeed. That to me is the crux: to look at the entire constellation of humans, our connections, routines, methods and processes, and our technological tools, and at achieving our desired impact. Others may easily think I'm a techno-optimist, but I don't think I am. I am generally an optimist, yes, but to me what is key is our humanity, and creating tools and methods that enhance and support it. Tech as tools, in context, not tech as a solution on its own. It's what my networked agency framework is about, and what I try to express in its manifesto.

Paul Duplantis has blogged about where the planned symposium, and more importantly where we in general, may take the internet and the web as our tools.

Doug Engelbart on video from Calif.
Doug Engelbart on screen in 2005, during a video chat after watching the 1968 Demo at Reboot 7

This is an interesting article on how the drop in the Bitcoin (BTC) to US dollar exchange rate may mean that a 51% attack on the Bitcoin network is getting easier. According to the article, 90% of mining capacity has gone offline, as it is no longer profitable at the current BTC price. It argues that if you buy just part of that now worthless (because single-purpose) equipment cheaply, you can effectively double the active mining capacity of the network in a way that gives you more than 51% of it (at least temporarily). Such an entity would then be able to influence the ledger.
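A quick back-of-the-envelope version of that argument, using the article's rough figures rather than any precise measurement:

```python
# Rough arithmetic behind the 51% argument, using the article's approximate figures.
peak_capacity = 100.0                  # arbitrary units of hash rate at the BTC price peak
still_online = 0.10 * peak_capacity    # ~90% of miners switched off as unprofitable

# Buy (part of) the idle, now-cheap single-purpose hardware: enough to slightly
# exceed what is still online, roughly doubling the network's active capacity.
attacker = still_online * 1.05

new_total = still_online + attacker
attacker_share = attacker / new_total
print(f"Attacker share of active mining capacity: {attacker_share:.1%}")
# -> about 51.2%, i.e. enough to out-mine the rest of the network
```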

Of course the big Asian elephant in the room that is left unmentioned is that such a 51% attack is likely to have taken place already. As the article itself states, ‘most miners’ were in China, where you can now get all that mining equipment cheap ‘by the pound’. As most mining was already running on a handful of Chinese superclusters, and given what we know about the data-driven authoritarian model China geopolitically pursues, the conclusion is rather obvious: the 51% threshold had been reached in China already. So it's not an emerging 51% attack risk; it's just that there may now be a window of opportunity for somebody else to do it.

I still wonder in which instances blockchain is actually useful, meaning cases where a distributed database/ledger AND a transparent log of transactions AND a permanent immutable record are all needed. But where that is the case, I am convinced it is, other than maybe for public records, not needed and even risky (see above) to run it on a global network or platform, as then others, not invested in your actual use case, may have influence on its validity and stability.

It makes much more sense to me to have use-case-specific blockchains, where the needed computing nodes are distributed across the network of people invested in that specific use case. For instance, I can easily imagine a local currency or exchange trading system (LETS) using a blockchain, but only if it is run by the members of that LETS for the members of that LETS, meaning a small computing node attached to your home router as part of your membership contribution.
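Purely as a toy illustration of what such a member-run ledger amounts to technically (an append-only, hash-chained log), here is a minimal sketch; nothing like a production blockchain, and not any existing LETS software:

```python
import hashlib
import json
import time

# Toy sketch of the append-only, hash-chained ledger a small LETS could run on
# its members' own nodes. Illustration only: no consensus, networking or signatures.

def block_hash(block):
    """Hash a block's contents (everything except its own stored hash)."""
    content = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(content, sort_keys=True).encode()).hexdigest()

def make_block(transactions, previous_hash):
    """Bundle transactions and link them to the previous block's hash."""
    block = {
        "timestamp": time.time(),
        "transactions": transactions,   # e.g. {"from": "ton", "to": "frank", "amount": 5}
        "previous_hash": previous_hash,
    }
    block["hash"] = block_hash(block)
    return block

def verify(chain):
    """An altered transaction or broken link anywhere invalidates the chain."""
    for prev, current in zip(chain, chain[1:]):
        if current["previous_hash"] != prev["hash"]:
            return False
    return all(block_hash(b) == b["hash"] for b in chain)

# Genesis block plus one exchange between two hypothetical members.
chain = [make_block([], previous_hash="0" * 64)]
chain.append(make_block([{"from": "ton", "to": "frank", "amount": 5}], chain[-1]["hash"]))
print(verify(chain))   # True

chain[1]["transactions"][0]["amount"] = 500   # someone rewrites history...
print(verify(chain))   # False: the tampering is immediately detectable
```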

Technology needs to be ‘smaller’ than us, run and controlled by the socially cohesive group that uses it for some specific purpose; otherwise it is more likely to undermine agency than to provide it.