When we send a link to someone, we are trusting that whoever is behind that link will serve that someone the content we actually saw. Give or take an ad or two.
The same goes when we bookmark a link for later retrieval: we are trusting that the entity will be serving that content at whatever point in the future we may want it.
This may not be a huge problem if the page is merely a list of dead baby jokes [6]. They are objectively funny, of course. But you can also get along without them.
But what about formal, scientific texts we may depend on, which cite web sources as part of their material? Usually, they refer to a source by link and date of retrieval. That is not of much use unless the actual source and/or the render the authors saw at that time is also available.
That may not always be the case.
Take care of your shelf
"No worries, the Wayback Machine has me covered."
Yes. But no. The Wayback Machine is a (thus far) centralized entity that depends on a few idealists and donations to keep going. If they can no longer keep going, they depend on passing the buck to someone else. If that someone is evil, they may take it and rewrite history to suit themselves. If that someone else cannot be found, it becomes a garbage collection blip on Bezos' infrastructure monopoly dashboard. [1]
That aside, sources like Wayback Machine are like libraries. Libraries are, of course, essential. Not only because they serve as one of the pillars of democracy, providing free access to knowledge for everyone. They are also essential because it's simply not very practical for you to pre-emptively own all the books that you may at some point want to read. Let alone ask around in your neighborhood if they happen to have a copy (although a crowd-sourced library app sounds like a fun decentralization project to explore).
You may, however, want to keep a copy of the books you depend on, and the ones you really like. Just to make really sure you have them available when you want them. Then, if some New Public Management clowns get the chance to gut public infrastructure where you live, or someone starts a good old-fashioned fascist book burning, you have yourself covered.
A lack of friction
Yes, stuff may disappear on the web. Just as books may.
On the web that stuff can get rewritten. Books may be rewritten, too. [2] But previous editions of a book will still exist as independent physical objects until they degrade, as long as nothing is actively done to them. The same goes for data on storage media: if it is not renewed or copied within its lifetime, it may degrade until it becomes unreadable. And of course, it may also simply be deleted. Without so much as the smell of smoke.
That's the difference that seems to leap out when using this imagery. How easy it is to change and destroy stuff at scale on the web compared with the real world of books. And how inconspicuously it can happen, without anyone noticing. And for those who notice, it is very hard to prove what has changed, unless you have a copy. [5]
So what can we do? Copies and proofs are definitely keywords here. Fortunately, copying is what computers are all about. And making a cryptographic proof of what you see is easy enough these days, too. The tricky bit is to build credibility around that proof. But let's stick our head in the sand and start with the easy part and see where it takes us.
Look, no head
A good start is to dump the document source to disk, then calculate and store its checksum.
In many cases this will be insufficient, though, as many sites populate the DOM through scripts, either in part or in full. Since the use case here is a human vouching for the fact that what they see is what they get, a human-readable aid is needed. In other words, we need to render the page in order to store what we actually see.
Printing to PDF from the browser is an option, but that is really difficult to automate. Fortunately, modern browser engines provide command line access to rendering. Since I mostly use Brave Browser these days, we'll use headless Chromium here.
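As a quick illustration, a bare-bones render might look like the line below. I'm assuming a reasonably recent Chromium here; without an explicit filename, --print-to-pdf writes to output.pdf in the current directory.

$ chromium --headless --print-to-pdf https://example.com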
In addition to the source, its sum and the rendering, we should also include a copy of the response headers for good measure.
Thus, we end up with something like this.
#!/bin/bash
f=${WEBSHOT_OUTPUT_DIR:-/tmp}
url="$1"
title="$2"
>&2 echo "using outdir $f"
set +e

# prepare
d=$(TZ=UTC date +%Y%m%d%H%M)
t=$(mktemp -d)
pushd "$t"

# store raw outputs
echo "$url" > url.txt
curl -s -I "$url" > headers.txt
curl -s -X GET "$url" > contents.txt
z=$(sha256sum contents.txt)
echo "$z" > contents.txt.sha256
h=$(echo -n "$z" | awk '{ print $1; }')
if [ -z "$title" ]; then
	title=$h
fi
>&2 echo "using title $title"

# rendered snapshot
chromium --headless --print-to-pdf "$url"
n=${d}_${h}
mv output.pdf "$n.pdf"

# store result
mkdir -p "$f/$title"
tar -zcvf "$f/$title/$n.tar.gz" *

# clean up
popd
set -e
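Assuming the script is saved as webshot.sh (a name I'm just picking for illustration), a run could look something like this:

$ WEBSHOT_OUTPUT_DIR=~/webshots ./webshot.sh "https://example.com/some-article" some-article

If all goes well, that should leave a timestamped tarball under ~/webshots/some-article/ containing the url, headers, source, checksum and PDF render.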
What does this mean
Let's sum up what information we've managed to store with this operation.
- We have a copy of the unrendered source. It may or may not include all the information we want to store.
- We have a fingerprint of that source.
- We have a copy of the headers we were served when retrieving the document. [3]
- We have an image copy of what we actually saw when visiting the page.
- We have a date and time for retrieval (file attributes).
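As a sanity check, the stored fingerprint can be re-verified against the stored source at any later point. A minimal sketch, assuming the tarball produced above has been extracted into the current directory:

$ tar -zxvf <snapshot tarball>
$ sha256sum -c contents.txt.sha256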
To link the headers together with the visual copy, we could sum the header file and the image file as well, put those sums together with the content sum in a deterministic order, and calculate the sum of those sums. E.g. [4]
$ cp contents.txt.sha256 sums.txt
$ sha256sum headers.txt | awk '{ print $1; }' >> sums.txt
$ sha256sum <pdf file> | awk '{ print $1; }' >> sums.txt
$ sha256sum sums.txt | awk '{ print $1; }' > topsum.txt
If we now sign this sum, we are confirming that for this particular resource:
"This was the source. These were the headers for that source. This is how that source, served in that manner, looked to ME at the time."
Proving links in this post
This post makes use of several external links to articles. So as a final step, let's eat our own dogfood and add proofs for them.
Clicking on the "image" links, we see that thanks to the recent ubiquity of cookie nag boxes screaming "accept all" at you, those very boxes are now blocking the content we want to get at. So more work will be needed before automation gets us all the way there. But it's a start.
[1] Early 2021 survey puts Amazon at one-third of the global market share. https://www.statista.com/chart/18819/worldwide-market-share-of-leading-cloud-infrastructure-service-providers/
[2] In 2011, a controversy arose around Astrid Lindgren's Pippi Longstocking books. Echoing those viewpoints, the books were edited in 2015, and some allegedly "racist" content was altered. The rabid right-wing media later spun a false tale of mass purges of Pippi books out of a single Swedish library's decision to throw out copies of the original "racist" versions. Case in point: a public statement in which the library tries to justify its actions is no longer available on its website, and has to be retrieved via the Wayback Machine. https://web.archive.org/web/20170710185409/https://www.botkyrka.se/arkiv/nyhetsarkiv/nyheter-startsida/2017-07-10-angaende-uttalanden-av-journalisten-janne-josefsson-om-bibliotek-botkyrka.html (in Swedish)
[3] Well, actually, not quite. The headers and the contents were fetched with two separate requests to the same URL. Using a single request would improve the script.
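For example, curl's -D/--dump-header option writes the response headers to one file while the body goes to another, all from a single request:

$ curl -s -D headers.txt -o contents.txt "$url"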
[4] We use the hex representation for clarity here. A proper tool would convert the hex values to bytes before calculating the sum over them.
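Something along these lines, as a sketch rather than part of the script above; xxd -r -p turns the concatenated hex digests back into raw bytes before summing:

$ xxd -r -p sums.txt | sha256sum | awk '{ print $1; }' > topsum.txt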
[5] Another important difference is that a book does not need to be interpreted by a machine in order to make sense to a human. Availability of tooling is definitely an equally important topic in this discussion. However, this post limits its focus to the data itself.
[6] How ironic; that site is gone. And I don't have a snapshot. What a shame. I've changed the link to some different baby jokes. The original url was https://dead-baby-joke.com.