What is an #indiewebcamp? Frank Meeuwsen explains our expectations for #indieweb camp Utrecht (#iwcutrecht). Will you be there on 18 and 19 May? https://diggingthedigital.com/wat-is-indiewebcamp/
It seems I can now indeed post to my site from Quill and use the syndication list, having solved the authorization header issues I had from running PHP as FastCGI.
[UPDATE: Indeed, testing the IndieAuth plugin now gives positive results, as opposed to when I tried this at IndieWebCamp last fall.]
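For reference, the usual cause of this problem is that Apache does not pass the Authorization header on to PHP when PHP runs as CGI/FastCGI. A minimal sketch of the common workaround, assuming an Apache setup with mod_rewrite enabled (your server configuration may differ):

```apacheconf
# .htaccess: re-expose the Authorization header to PHP running as FastCGI
RewriteEngine On
RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
```

On Apache 2.4.13 and later, `CGIPassAuth On` in the relevant directory context achieves the same thing.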
Will you help us organise? We are going to organise an IndieWebCamp in Utrecht, an event to promote the use of the Open Web, and to work with each other on practical improvements to your own site. We are still looking for a suitable date and venue in Utrecht, so your help is very welcome.
On the Open Web you decide what you publish, what it looks like, and with whom you have conversations. On the Open Web you decide for yourself who and what you follow and read. The Open Web was always there, but over time we have all become more or less locked into the silos of Facebook, Twitter, and all the others. Their algorithms and timelines now determine what you read. It can be done differently. Build your own site, where others don't get to interfere because they want to generate advertising revenue. Keep your own news sources, without someone else's algorithm locking you into a bubble. That is the IndieWeb: your content, your relationships, with you in the driver's seat.
Frank Meeuwsen and I have been part of the internet and that Open Web for a long time, but we also spend/spent a lot of time in web silos like Facebook. By now we are both active 'returnees' to the Open Web. Last November we attended IndieWebCamp Nürnberg together, where some twenty people discussed their own websites and actively worked on them. Some programmed advanced things, but most, like myself, did small things (such as removing a link to the author of postings on this site). Small things are often hard enough. On the train back to the Netherlands we quickly agreed: there should be an IndieWebCamp in the Netherlands too. In Utrecht, this spring.
To quote Frank:
Do the ideas of the open web and the IndieWeb appeal to you? Do you want to work on a site of your own that stands more free from the influence of social silos and data tracking? Do you want a news supply that is no longer primarily fed by algorithms and polarising loudmouths? Then we welcome you to two days of IndieWebCamp Utrecht.
Let us know if you want to be there.
Let us know if you can help find a venue.
Let us know how we can help you with your steps onto the Open Web.
You're invited!
Dries Buytaert, the originator of the Drupal CMS, is pulling the plug on Facebook. He has made the same observation I did: reducing FB engagement leads to more blogging. A year ago he set out to reclaim his blog as a thinking-out-loud space, and now, a year on, he quits FB.
I’ve seen this in a widening group of people in my network, and I welcome it. Very much so. At the same time, though, I realise that mostly we’re returning to the open web. We were already there for a long time before the silos’ Sirens lured us in, silos started by people who, like us, knew the open web. For us the open web has always been the default.
Returning to the open web is in that sense not a difficult step to make. Yes, you need to overcome the FOMO induced by the silo’s endless scrolling timeline. But after that withdrawal it is a return to things still retained in your muscle memory. Dusting off the domain name you never let lapse anyway. Repopulating the feed reader. Finding some old blogging contacts again and, like in the golden era of blogging, triangulating from their blog rolls and published feeds to new voices, and subscribing to them. It’s a familiar rhythm that never was truly forgotten. It’s comforting to return, and in some ways a privilege rather than a risky break from the mainstream.
It makes me wonder how we can bring others along with us. The people for whom it’s not a return, but striking out into the wilderness outside the walled garden they are familiar with. We say it’s easy to claim your own space, but is it really if you haven’t done it before? And beyond the tech basics of creating that space, what can we do to make the social aspects of that space, the network and communal aspects easier? When was the last time you helped someone get started on the open web? When was the last time I did? Where can we encounter those that want and need help getting started? Outside of education I mean, because people like Greg McVerry have been doing great work there.
As a long-time netizen it is easy to forget that for many now online, their complete experience of the internet is within the web silos. I frequent silos too, but I’ve always kept a place well outside of them, for over two decades now. When you’ve never ‘played outside’, building your own space beyond the silos can be an eye-opener. Greg McVerry pointed to the blog of one of his students, who described the experience of stepping outside the silos (emphasis mine):
The fact that I now have a place where I can do that, where I can publish my thoughts whenever I want in a place open for people to read and to not be afraid of doing so, is liberating. I’ve always wanted a space online to call my own. I’m so tired of all the endless perfection I see on social media. My space, “Life Chapter by Chapter” is real. It’s me, personified by a website. And though this post is not digitally enhanced in any way, I love it because it’s representative of the bottom line of what I’ve learned in EDU 106. I’m my own person on this site, I’m not defined by Instagram, Facebook, or Twitter. I can post what I want, when I want, how I want. It’s a beautiful thing.
That’s a beautiful thing, indeed. Maybe this is the bit that Frank Meeuwsen and I need to take as the key to the story when writing a book, as Elja challenged us today (in Dutch).
Next week it will be 50 years since Doug Engelbart (1925-2013) and his team demonstrated all that has come to define interactive computing. Five decades on we still haven’t turned everything in that live demo into routine daily things. From the mouse, video conferencing, word processing, outlining, drag and drop, and digital mind mapping, to real-time collaborative editing from multiple locations: in 1968 it was all already there. In 2018 we are still catching up with several aspects of that live-demonstrated vision. Doug Engelbart and his team ushered in the interactive computing era to “augment human intellect”, and on the 50th anniversary of The Demo a symposium will ask what augmenting the human intellect can look like in the 21st century.
A screenshot of Doug Engelbart during the 1968 demo
The 1968 demo was later named ‘the Mother of all Demos’. I first saw it in its entirety at the 2005 Reboot conference in Copenhagen, where Doug Engelbart had a video conversation with us after the demo. To me it was a great example, not merely of prototyping new tech, but most of all of proposing a coherent and expansive vision of how different technological components and human networked interaction and routines can together be used to create new agency and new possibilities. To ‘augment human intellect’ indeed. That to me is the crux: to look at the entire constellation of humans, our connections, routines, methods and processes, and our technological tools, and how they achieve our desired impact. Others may easily think I’m a techno-optimist, but I don’t think I am. I am generally an optimist, yes, but to me what is key is our humanity, and creating tools and methods that enhance and support it. Tech as tools, in context, not tech as a solution on its own. It’s what my networked agency framework is about, and what I try to express in its manifesto.
Paul Duplantis has blogged about where the planned symposium, and more importantly we in general, may take the internet and the web as our tools.
Some links I thought worth reading the past few days
- A brief overview of how the GDPR and the EU PSI Directive interplay: The PSI directive and GDPR – European Data Portal
- Some discussion of how blockchain and the GDPR go together. It says it’s not about the tech but about the use case implementation, which underlines my position that the GDPR is a quality assurance tool, not a gate with a sign saying ‘forbidden’: EU Blockchain Forum says blockchain, GDPR compatible – Ledger Insights
- A statement like “Therefore, our primary focus is to get millions of Q members registered” makes this initiative sound very spammy and pyramid-like, banking as they do on FOMO. Having everyone wait for whatever they plan until they have millions of users is an odd way of getting those users. Why not offer something of value now, so that it brings users in? Anyway, I have an account, and you are invited. More info on: Initiative Q
- An old posting, though still worth reading (about the need for your own webspace, if only to tinker), mostly bookmarked because of the still very useful video of Amber Case outlining the reasoning behind the IndieWeb (independent web): The IndieWeb, Revolution, and Other Reasons You Should Learn to Code
- In terms of privacy it really is not a good idea to use smart home devices that have a centralised web service / data store behind them: Smart Home Surveillance: Governments Tell Google’s Nest To Hand Over Data 300 Times
Last weekend during the Berlin IndieWeb Camp, Aaron Parecki gave a brief overview of where he/we is/are concerning the ‘social reader’. This is of interest to me because for as long as I have been reading RSS, I’ve been doing by hand what he described doing more automatically.
These are some notes I made watching the live stream of the event.
Compared to the algorithmic timelines of FB, Twitter and Instagram, that show you what they decide to show you, the Social Reader is about taking control: follow the things you want, in the order that you want.
RSS readers were and are like that. But RSS reading never went past linear reading of all the posts from all your feeds in reverse chronological order. No playing around with how those feeds are presented to you, and no possibility to take actions from within the reader based on the things you read (sharing, posting, bookmarking, flagging, storing etc.): there are no action buttons in your feed reader, other than ‘mark as unread’ or ‘archive’.
In the IndieWeb world, publishing works well Aaron said, but reading has been an issue (at least if it goes beyond reading a blog and commenting).
That’s why he built Monocle and Aperture. Aperture takes in all kinds of feeds: RSS, JSON feeds, Twitter, and even scripts pushing material to it. These are grouped in channels. Monocle is a reader on top of that, which presents those channels in a nice way. Then he added action buttons to it, like reply etc. Those actions you initiate directly in the reader, and they always post to your own site. The other, already existing IndieWeb building blocks then send it to the original source of the item you’re responding to. See Aaron’s posting from last March with screenshots, “Building an IndieWeb Reader“, to get a feeling for how it all looks in practice.
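The reader/server split Aaron describes is specified as Microsub: the reader asks the server for channels and their timelines over a simple HTTP API. A minimal sketch of that interaction, where the endpoint URL and access token are placeholders you would get from your own setup:

```python
import json
import urllib.request

def fetch_channels(endpoint, token):
    """Ask a Microsub server (like Aperture) for its list of channels."""
    req = urllib.request.Request(
        endpoint + "?action=channels",
        headers={"Authorization": "Bearer " + token},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_channels(resp.read())

def parse_channels(body):
    """The server answers with JSON: {"channels": [{"uid": ..., "name": ...}, ...]}."""
    return [c["name"] for c in json.loads(body)["channels"]]
```

A reader like Monocle then fetches each channel's timeline the same way, with `?action=timeline&channel=<uid>`.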
The power of this set-up is that it separates the layers: how you collect material, how you work on that material, and how you present content. It looked great when Aaron demo’d it when I met him at IWC Nürnberg two weeks earlier.
For me, part of the actions I’d like to take are definitely outside the scope of my own website, or at the very least outside the public part of my website. See what I wrote about my ideal feed reader. Part of the automation of actions I’d want to point to different workflows on my own laptop, for instance to feed into desk research, material for client updates, and things like that.
I’m interested in running things like Aperture and Monocle locally, but a first step is exploring them in the way Aaron provides them to test drive. Aperture works fine, but I can’t yet get Monocle to work for me. This is, I guess, the same issue I ran into two weeks ago with my site not supporting authorisation headers.
Today at 14:07, sixteen years ago, I published my first blogpost. The first few months I posted on Blogger, but after six months, having decided that my blog was no longer just an experiment, I moved to my own domain, where it has resided since. First it was hosted on a server I ran from my home; later I moved to a hosting package for more reliability.
Interestingly, in that first blogpost only the links to personal domains still work; all the others have since become obsolete. Radio Userland no longer exists, nor does the Knowledge Board platform that I mention and even refer to as a place to find out more about me. In my first blogpost I also link to an image that was hosted on my server at home, using the subdomain name my internet provider gave me back then. That provider was sold in 2006 and the subdomain no longer exists either. Blogger itself does still exist and even keeps my old Blogger.com blog alive. But Google has of course shown frequently that they can and do kill services at short notice, or suspend your account.
The only original link in that first posting that still works is the one to David Gurteen’s blog hosted on his own domain gurteen.com, and his blogpost actually preserves some of the things I wrote at the now gone Knowledge Board. Although the original link to Lilia’s blogpost on Radio Userland no longer works, I could repair it because she moved to her own domain in the same week I launched my blog. The link to Seb’s Radio Userland site has been preserved in archive.org. Which goes to show: if you care about your own data, your own writing, your own journal of thoughts, you need to control the way your creative output can be accessed online. Otherwise it’s just a bit of content that serves as platform fodder.
So in hindsight my very first blogpost is a ringing endorsement of the IndieWeb principle of staying in control of your stuff. That goes further than having your own domain, but it’s a key building block.
Last year the anniversary of this blog coincided with leaving Facebook and returning to writing in this space more. That certainly worked out. Maybe I should use this date to yearly reflect on how my online behaviours do or don’t aid my networked agency.
As a first step to better understand the different layers of adding microformats to my site (what is currently done by the theme, what by plugins etc.), I decided to start with: what is supposed to go where?
I made a post-it map on my wall to create an overview for myself. The map corresponds to the front page of my blog.
Green is content, pink is h- elements, blue u- elements, and yellow p- elements, with the little square ones covering dt- elements and rel attributes. All this is based on the information provided on the microformats wiki (http://microformats.org/wiki/Main_Page), and not on what my site actually does. So the next step is comparing what I expect to be there based on this map with what is actually there. This map is also a useful step towards adding microformats to the u-design theme myself.
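To illustrate the kind of markup the map describes, here is a minimal h-entry sketch (the URLs and names are placeholders, not taken from my actual theme), combining the h-, u-, p- and dt- classes:

```html
<article class="h-entry">
  <h1 class="p-name">
    <a class="u-url" href="https://example.com/2018/11/a-post">A post title</a>
  </h1>
  <div class="p-author h-card">
    <a class="u-url" href="https://example.com">Author Name</a>
  </div>
  <time class="dt-published" datetime="2018-11-04T10:00:00+01:00">4 November 2018</time>
  <div class="e-content">
    <p>The body of the post goes here.</p>
  </div>
</article>
```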
This weekend IndieWebCamp Berlin is taking place. I attended the Nuremberg edition two weeks ago. I am remotely following the livestream for a bit this morning, while working on a few website-related things in the spirit of IndieWeb.
Putting the livestream up on the wall, to more easily follow what is demo’d on screen, while keeping my standing desk screen(s) focused on the work in front of me.
From the recent posting on Mastodon currently lacking a long tail, I want to highlight a specific notion, which is why I am posting it here separately: the notion that a long tail in tool usage is a measure of distribution, and as such a proxy for networked agency. [A long tail is defined as the bottom 80% of certain things making up over 50% of a ‘market’. The 80% least-sold books in the world make up more than 50% of total book sales. The 80% smallest Mastodon instances, on the other hand, account for less than 15% of all Mastodon users, so there is no long tail.]
To me being able to deploy and control your own tools (both technology and methods), as a small group of connected individuals, is a source of agency, of empowerment. I call this Networked Agency, as opposed to individual agency. Networked also means that running your own tool is useful in itself, and even more useful when connected to other instances of the same tool. It is useful for me to have this blog even if I am its only reader, but my blog is even more useful to me because it creates conversations with other bloggers, it creates relationships. That ‘more useful when connected’ is why distributed technology is important. It allows you to do your own thing while being connected to the wider world, but you’re not dependent on that wider world to be able to do your own thing.
Whether a technology or method supports a distributed mode is, in other words, an important feature to look for when deciding whether to use it. Another aspect is the threshold to adoption of such a tool. If it is too high, people are unlikely to use it, and the actual distribution will be very low, even if in theory the tool supports it. Looking at the distribution of usage of a tool is then a good measure of its success. Are more people using it individually or in small groups, or are more people using it in a centralised way? That is what a long tail describes: at least 50% of usage takes place in the 80% of smallest occurrences.
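That check is easy to make concrete. A small sketch, where the instance sizes are made-up illustrations, not real Mastodon or book-market data:

```python
def long_tail_share(sizes):
    """Fraction of total usage accounted for by the smallest 80% of instances."""
    ordered = sorted(sizes)              # smallest instances first
    cutoff = int(len(ordered) * 0.8)     # index marking the bottom 80%
    return sum(ordered[:cutoff]) / sum(ordered)

# A long tail: the bottom 80% carries over half of all usage
books = [2] * 80 + [5] * 20
print(long_tail_share(books))            # ≈ 0.62, a long tail

# Concentrated usage: a few big instances dominate, no long tail
instances = [1] * 80 + [50] * 20
print(long_tail_share(instances))        # ≈ 0.07, no long tail
```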
In June I spoke at State of the Net in Trieste, where I talked about Networked Agency. One of the issues raised there in response was about scale, as in “what you propose will never scale”. I interpreted that as a ‘centralist’ remark, and not a ‘distributed’ view, as it implied somebody specific would do the scaling. In response I wrote about the ‘invisible hand of networks‘:
“Every node in a network is a scaler, by doing something because it is of value to themselves in the moment, changes them, and by extension adding themselves to the growing number of nodes doing it. Some nodes may take a stronger interest in spreading something, convincing others to adopt something, but that’s about it. You might say the source of scaling is the invisible hand of networks.”
In part it is a pun on the ‘invisible hand of markets’, but it is also a bit of hand-waving, as I didn’t actually have precise notions of how that would need to work at the time of writing. Thinking about the long tail that is missing in Mastodon, and thus about Mastodon not yet building the distributed social networking experience it is intended for, allows me to make the ‘invisible hand of networks’ a bit more visible, I think.
If we want to see distributed tools get more traction, that really should not come from a central entity doing the scaling; that will create counter-productive effects. Most of the Mastodon promotion comes from the first few moderators, who as a consequence now run large de-facto centralised services, where 77% of all participants are housed on 0.7% (25 of over 3,400) of servers. ‘In networks, smartness needs to be at the edges’ goes the adage, and that means that promoting adoption needs to come from those edges, not the core: to extend the edges, to expand the frontier. In the case of Mastodon that means the outreach needs to come from the smallest instances towards their immediate environment.
Long tail forming as an adoption pattern is a good way then to see if broad distribution is being achieved.
Likely elements in promoting from the edge, forming the ‘invisible hand of networks’ that does the scaling, are I suspect:
- Show and tell: how one instance of a tool has value to you, how connected instances have more value
- Being able to explain core concepts (distribution, federation, agency) in contextually meaningful ways
- Being able to explain how you can discover others using the same tool, that you might want to connect to
- Lower thresholds of adoption (technically, financially, socially, intellectually)
- Reach out to groups and people close to you (geographically, socially, intellectually), that you think would derive value from adoption. Your contextual knowledge is key to adoption.
- Help those you reach out to set up their own tools, or if that is still too hard, ‘take them in’ and allow them the use of your own tools (so they at least can experience if it has value to them, building motivation to do it themselves)
- Document and share all you do. In Bruce Sterling’s words: it’s not experimenting if you’re not publishing about it.
An adoption-inducing setting: Frank Meeuwsen explaining his steps in leaving online silos like Facebook, Twitter, and doing more on the open web. In our living room, during my wife’s birthday party.
Today version 2.6 of Mastodon was released. It now has built-in support for “rel=me”, which allows verification, meaning that I can show a link to my site on my Mastodon profile and prove that both are under my control.
Rel=me is something you add to a link on your own site, to indicate that the page or site you link to also belongs to you. The page you link to needs to link back to your site to make it reciprocal. This is machine readable, and allows others to establish that different pages are under the control of the same person or entity.
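As a minimal sketch (with placeholder URLs, not my actual addresses), the reciprocal pair looks like this:

```html
<!-- On your own site: -->
<a rel="me" href="https://example.social/@you">My Mastodon profile</a>

<!-- On the profile page it points to: -->
<a rel="me" href="https://example.com">My website</a>
```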
On my own site I use ‘rel=me’ in the About section in the right-hand column. First, if you check the HTML source of my page, you’ll see that I declare this site (zylstra.org/blog) as my primary site, by making it the only link that has a ‘u-uid’ class (uid stands for unique id). It also has rel=”me”, meaning that my relationship with the linked site is that it is me:
<a class="u-url url u-uid uid" rel="me" href="https://www.zylstra.org/blog">
Further down in that About segment you find other links, to my Mastodon and Twitter profiles. Those links also carry rel=me, saying that my Mastodon profile is also me, and similarly that a specific Twitter profile is also me (I maintain other Twitter profiles as well, but they’re not me; they are my company etc.).
To close the loop that allows verification that this is true, both my Mastodon profile and my Twitter profile need to link back to my site in a way that machines can check. For Twitter that is easiest: it has a specific place in a user profile for a website address, like in the image below.
In Mastodon I can add multiple URLs to my profile, but there was no way to explicitly say in my Mastodon profile that a specific link is the one that represents my online identity. Now I can add a rel=me link in my Mastodon profile, so that my website and my Mastodon profile link to each other in a verifiable way, proving both are under my control. As you can see in the image below, it was already available on a single instance for testing purposes (the green mark signifies verification with the linked site), and with today’s release it is available to all Mastodon instances.
So how is verifying that the same person controls different pages useful? It can show that another Twitter profile with my name is not me, because there’s no two-way link between that profile and my site. If you have multiple rel=me references, it becomes harder for others to fake specific parts of your online identity. Further, it allows additional functionality, like logging in on a different site using credentials from another site you control. It also makes it possible to map networks, and to do discovery. Site X links with rel=me to profile X on Twitter. There X follows Y, and Y’s profile says that site Y is under her control. Now I know that the authors of site X and site Y are somehow connected. If I’m following site X, I may find it interesting to also regularly read site Y.
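Checking for such links is mechanical. A small sketch of the parsing half of a verifier, using only the Python standard library (a full check would fetch both pages and confirm each one's rel=me links point at the other):

```python
from html.parser import HTMLParser

class RelMeParser(HTMLParser):
    """Collects the href of every <a> or <link> element carrying rel="me"."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag not in ("a", "link"):
            return
        attributes = dict(attrs)
        rel_values = (attributes.get("rel") or "").split()
        if "me" in rel_values and attributes.get("href"):
            self.links.append(attributes["href"])

def rel_me_links(html_text):
    """Return all rel=me target URLs found in an HTML document."""
    parser = RelMeParser()
    parser.feed(html_text)
    return parser.links
```

Verification then means: fetch page A, collect its rel=me links, fetch each target, and check that it links back to A.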
As soon as my Mastodon instance has been updated to the latest version, which will likely be sometime today, I will add the rel=me in my Mastodon profile, making the link between this site and that profile verifiable.
[UPDATE] It now works on my Mastodon instance:
So, I’m posting this using Quill, a Micropub client that I am running on my own laptop (see the steps I took). I intend this as a step towards being able to draft postings offline (which I am used to doing, usually in a text editor or Evernote), as well as posting without using the WordPress back-end.
Aim: run Quill locally, to write draft posts offline (and later maybe see if I can store drafts locally).
(I run MAMP PRO on my Mac; I also run a local WordPress install, with all IndieWeb plugins enabled and the Sempress theme.)
I downloaded Quill and installed it in http://localhost:8888/quill.
The installation instructions mention using Composer to install a range of dependencies. I did not know what that was, so I had to Google around to find out it is a tool for installing PHP dependencies. I followed the instructions at https://getcomposer.org/download/ to install Composer on my Mac.
Then I could call the URL http://localhost:8888/quill/public/index.php without problems.
However, it doesn’t load images correctly, and links don’t work as they are relative to http://localhost:8888/ and not http://localhost:8888/quill/public.
Aaron, who created Quill, told me Quill expects to run at the root of a domain.
So I added a host quill.test on port 80 in my MAMP set-up, with /public as its root folder. Now Quill loads fine and URLs work.
To get it to work with MySQL on my laptop I added a database called quill. I first created a new user, but that didn’t work, so I used an existing root user instead. I also had to run this SQL query to create a table in the database that Quill uses.
After that it worked fine. Next up: thinking about how I’d like to change Quill as an offline tool for preparing postings. I also want to experiment with using it to post to different blogs.