This is a handy little webclipper that grabs a page and downloads it as markdown. Just yesterday evening I thought about making something that simply grabs the content of a page and stores it in an inbox folder on my laptop, so I don’t have to copy-paste things myself. But it already exists. The clipper has settings too, so you can for instance have the URL you clipped from incorporated in the saved markdown file.
For years I was an active user of Delicious, the social bookmarking service. I started using it in 2004, a year after its launch, and stopped in 2015. By then the service had been repeatedly sold, and many of its useful social features had been deprecated. It’s one of those great services Yahoo bought and then never did anything with. As I described in a post on bookmarking strategies last year, Delicious was originally useful because it showed you who else had bookmarked the same thing as you, and with which tags. It allowed me to find other people with similar interests, and especially if they used very different tags than I did for a page, they would be outside my own communities and networks (as ‘tribes’ gravitate to a shared idiom). I’d then start following the blogs of those other people, as a way of widening my ‘very large scale antenna array’ of feed reading. Tags were pivots for triangulation. Delicious was one of those tools that were really social software, as opposed to a social media platform with its now all too common self-reinforcing toxicity.
The current owner of Delicious is Pinboard, and according to Wikipedia the Delicious site was officially made inactive last August. That became obvious when visiting my Delicious profile in the past weeks (on the original de.licio.us URL, not the later delicious.com), as it would regularly result in an internal server error. Today I could access my profile.
My delicious profile
I decided to download my Delicious data, 3851 bookmarks.
After several attempts resulting in internal server errors, I ended up on the export screen, which has options to include both notes and tags.
Delicious export screen
The resulting download is an HTML file (delicious.html), which at first glance looked disappointing when opened: it showed neither tags nor the date of bookmarking, just the description. Losing most of that context would make the list of bookmarks rather useless.
My delicious html export
However, when I took a look at the source of the HTML file, I found that thankfully tags and dates are included as data attributes of the bookmarks. The HTML is nicely marked up with DT and DD tags too, so it will be no problem to parse this export automatically (see the sketch below the screenshot).
My delicious html export source showing data attributes
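As an illustration, a minimal parsing sketch in PHP, assuming the export follows the usual Netscape bookmark format that the source view shows, with TAGS and ADD_DATE attributes on each link (the notes in the DD elements are left out here):

<?php
// Minimal sketch: list each bookmark in the Delicious export with its
// date, tags and description. Assumes TAGS and ADD_DATE attributes as
// seen in the source view; DOMDocument lowercases attribute names.
$doc = new DOMDocument();
libxml_use_internal_errors(true); // the export is not strict (X)HTML
$doc->loadHTMLFile('delicious.html');
libxml_clear_errors();

foreach ($doc->getElementsByTagName('a') as $a) {
    $url   = $a->getAttribute('href');
    $tags  = $a->getAttribute('tags');           // comma-separated tag list
    $added = (int) $a->getAttribute('add_date'); // unix timestamp
    $title = trim($a->textContent);
    printf("%s\t%s\t%s\t%s\n", date('Y-m-d', $added), $url, $tags, $title);
}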
My original notion was to import all bookmarks with their tags and notes as backdated blog entries here. But randomly clicking on a range of links tells me that many of those bookmarks no longer resolve to an active web page, or redirect to some domain-squatting spam outfit. So bringing the bookmarks ‘home’ into my site isn’t useful.
As the export includes tags, I can still mine the list for bits of utility though. The collection contains a wide variety of open data usage examples I collected over the years, which is of interest as a historical library that I could try to match against the Internet Archive, using the bookmarking dates. Most other stuff is no longer of interest, or was ephemeral to begin with, so I won’t bother bringing that ‘home’. I will add the Delicious export to the other exports of Twitter and Facebook on my NAS drive and in the cloud, as an archive. I have now removed my profile from the Delicious website (after several attempts to overcome internal server errors), and it is verifiably gone.
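Matching those bookmarked open data examples against the Internet Archive could look something like the sketch below, which uses the Wayback Machine availability API to find the snapshot closest to the bookmarking date (the helper name and example values are mine, for illustration):

<?php
// Sketch: find the Wayback Machine snapshot closest to the date a URL
// was bookmarked, via the Internet Archive availability API.
function waybackSnapshot(string $url, string $timestamp): ?string {
    $api = 'https://archive.org/wayback/available?url=' . urlencode($url)
         . '&timestamp=' . $timestamp; // YYYYMMDD
    $body = @file_get_contents($api);
    $json = $body ? json_decode($body, true) : null;
    return $json['archived_snapshots']['closest']['url'] ?? null;
}

// e.g. a bookmark saved on 12 May 2008 (illustrative URL)
echo waybackSnapshot('http://example.com/opendata', '20080512') ?? 'no snapshot found';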
I made a tiny tool for myself today, to make it easier to display both HTML and PHP code on this site (if you just input it as is, it will get filtered, as it would be unsafe otherwise).
It’s a small webform that lives as a shortcut in my browser bar:
The form calls itself when submitted, and if there is some input it encodes it by calling the PHP function htmlentities twice (a single pass would show the right output, but not what is needed to get that output). The result is shown above the form. Maybe I’ll add a step to put the result on the clipboard instead, which would save another step.
Originally I ran it on an internet-accessible webserver, but I have since moved it to the webserver I run locally on my laptop (making it both safer and independent of having internet access).
<?php echo "<pre>".htmlentities(htmlentities($_POST["input_tekst"], ENT_QUOTES))."</pre>"; ?>
This makes it possible to document code changes much better in my site. Such as my recent language adaptations.
The code snippet above has of course been made with this form.
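For completeness, a minimal sketch of what such a self-calling form can look like. Only the double htmlentities line is from the actual tool shown above; the form markup around it is my illustration:

<?php
// Self-calling form: on submission, show the double-encoded input.
// Encoding twice means that after the browser decodes the entities once
// for display, what you see (and can copy into a post) is the
// single-encoded text.
if (!empty($_POST["input_tekst"])) {
    echo "<pre>".htmlentities(htmlentities($_POST["input_tekst"], ENT_QUOTES))."</pre>";
}
?>
<form method="post" action="">
  <textarea name="input_tekst" rows="8" cols="60"></textarea><br>
  <input type="submit" value="Encode">
</form>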
It doesn’t mention Brewster Kahle’s 2014 exploration of the same notion.
SF writer Charlie Stross referenced Kahle when he called corporations a 19th-century form of ‘slow AI’.
Because “corporations are context blind, single purpose algorithms”.
I like that positioning of organisations as slow AI and single purpose algorithms. For two reasons.
First, because it refocuses us on the fact that organisational structures are tools. When those tools get bigger than us, they stop serving us. And it points the way to how we always need to think about AI as tools, with their smallness as a design principle.
Second, because it nicely highlights what I said earlier about ethics futurising. Futurising is when ethical questions are tied to AI in ways that put them well into the future, while not paying attention to how those same ethical issues play out in your current context. All the ethical aspects of AI we discuss apply just as much to our organisations and corporations, but we simply assume they were considered at some point in the past. They weren’t.
Microsoft’s acquisition of Skype hasn’t worked out well for the product itself, judging by the level of sighs and complaints I hear whenever Skype is mentioned. So I was glad when longtime blogging connection Phil Wolff pointed me to Appear.in as an alternative. He said he’d been using it for a year or so instead of Skype.
Appear.in seems very easy to use, and no account is needed. Simply create a sharable link, send it to your conversation partners, and you’re all set to talk with up to 4 people. The paid version allows up to 12 people in one call. I intend to use this more from now on.
Appear.in is a Norwegian company, started in 2013 as an intern project at Telenor, which is still a minority shareholder, according to TechCrunch’s Crunchbase.
People often ask me how I stay informed, and how I always seem to know even about smaller initiatives around the topics I work on. Part of the answer is what I call ‘Radar’. With Radar I automatically collect all the Twitter messages that mention keywords I am interested in, and detect the web addresses they mention. Those web addresses are evaluated by type (is it a blog, a video, a general site, a presentation, a photo?) and counted by how often they are mentioned.
Radar then presents me with overviews of all URLs mentioned on Twitter in the past day or week for the keywords I follow. This way I find not just the ‘big’ websites, but also the smaller events, initiatives and discussions that are mentioned by smaller communities. Next to URLs, Radar also tracks who is mentioning certain topics, which basically gives me a list of suggestions of who to maybe follow on Twitter, or whose profile I may want to look at to see if they also blog about the topics I am interested in.
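To give an idea, here is a rough sketch of the kind of aggregation Radar does. This is my reconstruction for illustration, not the actual scripts; tweets.txt stands in for the collected Twitter messages, and the type heuristic is deliberately crude:

<?php
// Rough sketch of Radar-style aggregation: pull URLs out of collected
// tweet texts, classify them with a simple host-based heuristic, and
// count how often each URL is mentioned.
function urlType(string $url): string {
    $host = (string) parse_url($url, PHP_URL_HOST);
    if (strpos($host, 'youtube') !== false)    return 'video';
    if (strpos($host, 'slideshare') !== false) return 'presentation';
    if (strpos($host, 'flickr') !== false)     return 'photo';
    return 'site'; // fallback; a real version would also detect blogs etc.
}

$tweets = file('tweets.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
$counts = [];
foreach ($tweets as $tweet) {
    if (preg_match_all('#https?://\S+#', $tweet, $m)) {
        foreach ($m[0] as $url) {
            $url = rtrim($url, '.,)'); // strip trailing punctuation
            $counts[$url] = ($counts[$url] ?? 0) + 1;
        }
    }
}

arsort($counts); // most-mentioned first
foreach (array_slice($counts, 0, 20, true) as $url => $n) {
    echo "$n\t" . urlType($url) . "\t$url\n";
}

A real version would also expand shortened links (t.co and the like) before counting, so that the same page shared through different shorteners is counted once.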
What comes out of my Radar then may get added to my feedreader, or to my bookmark collection, or to my notes collection in Evernote. Radar is the serendipity antenna that scoops up a wide variety of things. To me, whatever is being mentioned on Twitter is like the froth on the waves: it is not all that meaningful by itself, but it shows me where there is movement and energy of interaction. That points me to the places and people that make up the wave below the froth, which is where the significant info is.
Radar started out as a bunch of PHP scripts I wrote myself, which ran on my laptop and which I started manually in sequence. My coding skills aren’t all that great though, so ultimately I asked Flemming Funch to clean things up for me. That meant he coded the scripts from scratch, with only my original outline of what I wanted remaining. Now it runs permanently on my VPS, with a basic web front-end for me to explore the output (see screenshots).