Bookmarked Research Rabbit

Research Rabbit is a tool that, given an academic paper you are already familiar with, can suggest related material and provide access to it. It does this by looking for material from the same authors, by following the references, and by looking at the topics. This can speed up the discovery phase quite a lot, I think. (It may also further increase the amount of material you haven’t looked at yet but which sounds relevant, thus feeding the collector’s fallacy.)

I’ve created an account. It can connect to Zotero, where you may already have a library of papers you are interested in (if you use Zotero with an account; I used Zotero standalone until now, but have added a Zotero account and storage subscription to sync with Research Rabbit).

Looks very useful. HT to Chris Aldrich for pointing, in Hypothesis, to a blogpost by Dan Alloso which mentioned Research Rabbit.

Yesterday I had a conversation with Andy Sylvester about the tools I use for my personal process of taking in information, learning and working. He posted our conversation today as episode 8 in his ‘thinking about tools for thought’ podcast series. In the ‘show notes’ he links to my series on how I use Obsidian that I wrote a year ago. It is still a good overview of how I use markdown files for PKM and work, even if some details have changed in the year since I wrote it. I also mentioned Respec, which is how I directly publish a client website on a series of EU laws from my notes through GitHub. And not in Andy’s show notes but definitely worth a mention is Eastgate’s Tinderbox.

This is a handy little webclipper that grabs a page and downloads it in markdown. Just yesterday evening I was thinking about making something that simply grabs the content of a page and stores it in an inbox folder on my laptop, so I don’t have to copy and paste things myself. But it already exists. The clipper has settings so that things like the URL you clipped from are incorporated in the saved markdown file.
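For comparison, a minimal sketch of the do-it-yourself version I had in mind could look like the PHP below. Everything in it is an assumption (the file name, the inbox path), and strip_tags only yields plain text rather than proper markdown, which is exactly why the existing clipper is the better option:

<?php
// clip.php — a naive page grabber (a sketch, not the webclipper from the post).
// Fetches a URL and saves a plain-text version, with the source URL prepended,
// into an inbox folder. Proper HTML-to-markdown conversion is left out.
if ($argc < 2) {
    exit("Usage: php clip.php <url>\n");
}
$url  = $argv[1];
$html = file_get_contents($url);
if ($html === false) {
    exit("Could not fetch $url\n");
}

$inbox = __DIR__ . '/inbox';
if (!is_dir($inbox)) {
    mkdir($inbox);
}

$file = $inbox . '/' . date('Ymd-His') . '.md';
file_put_contents($file, "Source: $url\n\n" . strip_tags($html));
echo "Saved to $file\n";
?>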

For years I had been an active user of Delicious, the social bookmarking service. I started using it in 2004, a year after its launch, and stopped using it in 2015. By then the service had been repeatedly sold, and much of its useful social features had been deprecated. It’s one of those great services Yahoo bought and then never did anything with. As I described in a posting on bookmarking strategies last year, Delicious was useful originally because it showed you who else had bookmarked the same thing as you, and with which tags. It allowed me to find other people with similar interests, and especially if they used very different tags than I did for a page, they would be outside my own communities and networks (as ‘tribes’ will gravitate to a shared idiom). I’d then start following the blogs of those other people, as a way of widening my ‘very large scale antenna array’ of feed reading. Tags were pivots for triangulation. Delicious is one of those tools that were really social software, as opposed to a social media platform with its now all too common self-reinforcing toxicity.

The current owner of Delicious is Pinboard, and according to Wikipedia the Delicious site was officially made inactive last August. That became obvious when visiting my Delicious profile in the past weeks (on the original de.licio.us url, not the later delicious.com), as it would regularly result in an internal server error. Today I could access my profile.

My delicious profile

I decided to download my Delicious data, 3851 bookmarks.

After several attempts resulting in internal server errors, I ended up on the export screen, which has options to include both notes and tags.

Delicious export screen

The resulting download is an HTML file (delicious.html), which at first glance looked disappointing when opened: it showed neither tags nor the date of bookmarking, just the description. Losing most of that context would make the list of bookmarks rather useless.

My delicious html export

However, when I took a look at the source of the HTML file, I found that thankfully tags and dates are included as data attributes of the bookmarks. The HTML is nicely marked up with DT and DD tags too, so it will be no problem to parse this export automatically.

My delicious html export source showing data attributes
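As a sketch of how that parsing could go, the PHP below walks the export with DOMDocument. It assumes the attribute names commonly seen in this kind of bookmark export (ADD_DATE as a Unix timestamp, TAGS as a comma-separated list); check the export’s source for the exact names. Notes live in a DD element following each bookmark, which this sketch skips.

<?php
// Sketch: parse the Delicious HTML export. Assumes each bookmark is an A tag
// with HREF, ADD_DATE and TAGS attributes, as seen in the export's source.
$doc = new DOMDocument();
libxml_use_internal_errors(true); // the export is not strict XHTML
$doc->loadHTMLFile('delicious.html');

foreach ($doc->getElementsByTagName('a') as $link) {
    $bookmark = [
        'url'   => $link->getAttribute('href'),
        'title' => trim($link->textContent),
        'tags'  => explode(',', $link->getAttribute('tags')),
        'date'  => date('Y-m-d', (int) $link->getAttribute('add_date')),
    ];
    print_r($bookmark); // or write to CSV, a blog import, etc.
}
?>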

My original notion was to import all bookmarks, with their tags and notes, as backdated blog entries here. But randomly clicking on a range of links tells me that many of those bookmarks no longer resolve to an active web page, or redirect to some domain-squatting spam outfit. So bringing the bookmarks ‘home’ into my site isn’t useful.
As the export includes tags, I can mine the list for bits of utility though. The collection contains a wide variety of open data usage examples I collected over the years, which is of interest as a historical library that I could try to match against the Internet Archive, using the bookmarking dates (a sketch of such a lookup follows below). Most other stuff is no longer of interest, or was ephemeral to begin with, so I won’t bother bringing that ‘home’. I will add the Delicious export to the other exports of Twitter and Facebook on my NAS drive and in the cloud as an archive. I have now removed my profile from the Delicious website (after several attempts to overcome internal server errors; it is now verifiably gone).
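That matching could use the Wayback Machine’s availability API, which returns the snapshot closest to a given timestamp. A minimal sketch, with a made-up example URL and date:

<?php
// Sketch: find the Wayback Machine snapshot closest to the bookmarking date,
// via the Internet Archive's availability API (archive.org/wayback/available).
function wayback_snapshot(string $url, string $date): ?string {
    $query = http_build_query([
        'url'       => $url,
        'timestamp' => date('YmdHis', strtotime($date)),
    ]);
    $json = file_get_contents('https://archive.org/wayback/available?' . $query);
    if ($json === false) {
        return null;
    }
    $data = json_decode($json, true);
    return $data['archived_snapshots']['closest']['url'] ?? null;
}

// Hypothetical example: a bookmarked URL with its saved date from the export.
echo wayback_snapshot('http://example.com/opendata', '2008-05-12') ?? 'no snapshot found';
?>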

I made a tiny tool for myself today, to make it easier to display both HTML and PHP code on this site (if you just input it as-is it will get filtered, as it would be unsafe otherwise).

It’s a small webform that lives as a shortcut in my browser bar:

It calls itself when submitted, and if there is some input it encodes it by calling the PHP function htmlentities twice (otherwise it would just show the right output, but not what is needed to get that output). The result is shown above the form. Maybe I’ll add a step to put the result on the clipboard instead; that would save another step. I initially ran it on an internet-accessible webserver, but have since moved it to my laptop’s local webserver (making it both safer and independent from having internet access).

<?php echo "<pre>".htmlentities(htmlentities($_POST["input_tekst"], ENT_QUOTES))."</pre>"; ?>

This makes it possible to document code changes on my site much better, such as my recent language adaptations.

The code snippet above has of course been made with this form.
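For completeness, here is a minimal sketch of what such a self-calling form could look like. Only the double htmlentities line (with its input_tekst field name) is from the snippet above; the surrounding form is an assumption:

<?php
// Sketch of a self-submitting encoder form. If input was posted, encode it
// twice: once to neutralise the markup, and once more so the page displays
// the entities themselves instead of rendering them.
if (!empty($_POST["input_tekst"])) {
    echo "<pre>" . htmlentities(htmlentities($_POST["input_tekst"], ENT_QUOTES)) . "</pre>";
}
?>
<form method="post" action="">
    <textarea name="input_tekst" rows="10" cols="60"></textarea>
    <input type="submit" value="Encode">
</form>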

The P2P Foundation reposts an article by Jeremy Lent from late 2017 on how corporations are artificial intelligences.

It doesn’t mention Brewster Kahle’s 2014 exploration of the same notion. SF writer Charlie Stross referenced Kahle when he called corporations a 19th-century form of ‘slow AI’, because “corporations are context blind, single purpose algorithms”.

I like that positioning of organisations as slow AI and single purpose algorithms, for two reasons.
First, because it refocuses us on the fact that organisational structures are tools. When those tools get bigger than us, they stop serving us. And it points the way to how we always need to think about AI as tools, with their smallness as a design principle.
Second, because it nicely highlights what I said earlier about ethics futurising. Futurising is when ethical questions are tied to AI in ways that put them well into the future, while ignoring how those same ethical issues play out in your current context. All the ethical aspects of AI we discuss apply just as much to our organisations and corporations, but we simply assume that this was considered at some point in the past. It wasn’t.

Your Slow AI overlords looking down on you, photo Simone Brunozzi, CC-BY-SA