QZ qz thoughts
a blog from Eli the Bearded

Manual restore of Firefox sessions

Recently I needed to downgrade my Firefox install from 87 to 84 due to a bug. With multiple windows open, I was intermittently finding that some events were being sent to the wrong window. For example, I would open a new (private) window and start to navigate to a new page, only to discover that mouse clicks and other events were still being sent to the original window. I started seeing this in FF86, and FF87 did not fix it. By that point the bug was testing my patience, and worse, FF87 seemed to have new (unrelated) bugs specifically with one site I use.

So downgrade.

It's been a long time since some Firefox bug has been serious enough to force me to install a different version, particularly a downgrade. So I was surprised to find that Firefox now marks every profile with a browser version and does not let older browsers use profiles from newer ones. Originally, profile changes that broke compatibility were somewhat infrequent, and downgrades were usually easy to do.

There are three things I really wanted to preserve:

  1. My open tabs (actual state in the tab, not as critical).
  2. My (few) bookmarks. Of 40 or so bookmarks, about 30 I had created (the others shipped with Firefox), and about half of those I still want.
  3. My preferences.

These take varying degrees of effort to recover from the files in a profile. Session tabs ended up being the most complicated, so I'll work backwards through that list.

Changed preferences are stored one-per-line in prefs.js in the profile directory. The chief complication is the number of entries. Many of them are related to extensions or printing and can just be ignored. Many more are related to internal settings like toolkit.telemetry.* or most of the browser.* ones.
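Picking out the hand-set preferences is mostly a filtering job. A rough sketch of the triage, where the sample prefs.js is made up and the excluded prefixes are just my guesses at the noisy ones:

```shell
# tiny made-up stand-in for a real prefs.js, which runs to hundreds of lines
cat > prefs.js <<'EOF'
user_pref("browser.startup.page", 3);
user_pref("extensions.webcompat.enabled", true);
user_pref("toolkit.telemetry.reportingpolicy.firstRun", false);
user_pref("network.cookie.lifetimePolicy", 2);
user_pref("print.print_bgcolor", false);
EOF

# keep the user_pref lines, then drop the noisy prefixes
grep '^user_pref' prefs.js |
    grep -v -e '"extensions\.' -e '"print' \
            -e '"toolkit\.telemetry' -e '"browser\.'
```

Only the network.cookie line survives that filter; adjust the prefix list to taste.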

The bookmarks are available in compressed JSON files in the bookmarkbackups profile subdirectory. The compression format is a bit of an oddball. The actual compression type is not so rare, LZ4, but apparently the Mozilla implementation is slightly non-standard. There are a lot of tools out there which can apply the standard LZ4 algorithm to the .jsonlz4 files Firefox makes. I used lz4jsoncat by Andi Kleen. Here ls -rt finds the most recent backup file to operate on.

lz4jsoncat $(ls -rt bookmarkbackups/*.jsonlz4) | jq . | grep '"uri"'

If you are unaware, jq is a JSON Query tool. In the simple usage there, the query is for the whole structure and functions as a JSON pretty printer. Why not use jq to slice and dice the uri entries out? Because they exist in several groups (probably for bookmark folders) and grep was way faster than figuring out the correct jq query to use.
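For completeness, jq's recursive descent operator `..` can do the same hunt without spelling out the nesting; here it runs against a made-up fragment shaped like the bookmark JSON:

```shell
# .. visits every value in the tree; .uri? // empty keeps only the uri fields
echo '{"children":[{"uri":"https://example.com/"},{"children":[{"uri":"https://example.net/"}]}]}' |
    jq -r '.. | .uri? // empty'
```

The `?` suppresses errors from trying `.uri` on arrays and strings, and `// empty` drops the nulls from objects that lack the field.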

But that brings us to the remaining item of interest for me: the URLs of all my tabs in the session data. There are several files in the sessionstore-backups profile subdirectory. In my inspection they all appeared to be valid sessions from different times, and recovery.jsonlz4 seemed to be the most recent one. No need to resort to time sorted file lists here.

The JSON structure holds a lot of data, from cookies to apparently thumbnail images of the page (base64 encoded for safely storing in JSON). Getting just tab URLs was not easily done with grep, and I needed to work out a jq query. There is a windows[] array with an entry for each window, inside that there is a tabs[] array with a state for each tab, and inside that an entries[] array with the history for each tab. The history is a stack with latest entry first.

lz4jsoncat sessionstore-backups/recovery.jsonlz4 |
    jq -r '.windows[].tabs[].entries[0].url'

For each item ([]) in windows array,
  for each item ([]) in tabs array within it,
    for the first ([0]) item in entries array within it,
      print the url field.
The -r makes the output "raw", which is to say without quotes.

Not as neat and clean as actual session recovery, but a lot better than trying to recover my two dozen tabs from memory.

vi and tags

The vi editor, and the significant vi-clones, as well as Emacs and other editors, support a thing called a "tags file". It's essentially a set of bookmarks, or a book's "Index" section, for text files.

The intended use is you run a program that indexes your source code and creates the tag entries for you. For example, with ctags you can run it and have it scan source code in dozens of languages (not just C-like ones, such as Java and Go, but also Postscript, Fortran, and others) and produce a file that tells the editor how to find function and variable declarations. (For Emacs, use the etags program to similar effect.)

The tags functionality is very useful when editing code. I keep tags files in source directories for all of my multi-year projects. You can begin an edit session by running vi -t pickle to open the editor at the file and line where pickle() is defined. Say inside pickle() you find a call to spices() and want to know what it does: position your cursor on the word, hit <ctrl-]>, and you jump there. Return to where you were with a :pop. Without the keyword handy to <ctrl-]> upon, you can instead :tag spices as well.

For people using vim as their vi of choice, it might help to know that the entirety of the :help system is built using tags, just slightly tweaked for where to look for the tags file. If you know how to use help in Vim, you know how to use tags. And :help tags can probably teach you something you didn't already know about them.

The tags files themselves are a little more mysterious. Normally people don't create or edit them by hand, but you can, or you can write programs that create them for your own special needs.

The basic format, and all I'll cover here, is a text file with three tab-separated columns. The first column is the tag name, eg pickle. The second column is the file name, relative to where the tags file is located. The third column is an ex-mode movement within the file. Usually the movement is a search, something like /^int pickle(recipe_t rec)$/ that will unambiguously find a single line in the file. But line numbers also work just fine. And search with a line number offset works for niche needs, eg /^int pickle(.*)$/+2 to start out on the variable declarations if your syntax looks like the following sample. With :set scrolloff=3 to put some space above the cursor, that may be more useful for you.

#include "brine.h"

int pickle(recipe_t rec)
{
    long     cuke;
    int      salt;
    spices_t spicing = spices(rec);

    /* ... */
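A tags file matching that sample might be built like this. The file names brine.c and spices.c are made up, the three columns are separated by literal tab characters, and the entries are kept in sorted order (vim wants that for its binary search):

```shell
# build a two-entry tags file for the pickle/spices example;
# brine.c and spices.c are hypothetical file names
printf 'pickle\tbrine.c\t/^int pickle(recipe_t rec)$/+2\n'  > tags
printf 'spices\tspices.c\t/^spices_t spices(recipe_t rec)$/\n' >> tags
cat tags
```

With that file in place, vi -t pickle would land on the variable declarations thanks to the +2 offset.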

In traditional vi any command you could put on a : line would work in the movement column, including things like :! rm *. I know Vim has tightened that, and I believe the other vi-clones have as well. If Vim doesn't like any movement in a tags file, it will ignore the whole file.

One trick I have found useful is programmatically generated tags files created with a wrapper program. Rather than have a giant file with all tags, I have a database that I can query; the wrapper generates a tags file (with a single entry) and invokes vim -t main to jump to the exact file and line for me. Forming a query this way can be easier than remembering a specific tag keyword, and it does not involve a large flat file with huge redundancy.
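My actual wrapper isn't shown here, but a minimal sketch of the idea might look like the following, with a flat file db.tsv standing in for the real database and "main" as the arbitrary fixed tag name:

```shell
#!/bin/sh
# demo row for the stand-in index: keyword, file, movement (tab separated)
printf 'canning\tsrc/brine.c\t/^int pickle(/\n' > db.tsv

# look up the first match for the query word ($1)
match=$(grep -i -m 1 -- "${1:-canning}" db.tsv) || exit 1
file=$(printf '%s\n' "$match" | cut -f2)
move=$(printf '%s\n' "$match" | cut -f3)

# write a one-entry tags file; a real version would then jump there with
# something like: exec vim +'set tags=querytags' +'tag main'
printf 'main\t%s\t%s\n' "$file" "$move" > querytags
cat querytags
```

The point is that the query ("canning" here) can be anything the index understands, while the editor only ever sees one well-formed tag.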

Deja Google News Groups

Those of us who still read Usenet proper have probably all seen instances of 10+ year old threads getting a new post by someone who found them on Deja News or its Google successor. Eg, last month I saw this reply to a twenty-five-year-old post.

Newsgroups: comp.editors
Date: Fri, 26 Feb 2021 19:21:33 -0800 (PST)
Injection-Info: google-groups.googlegroups.com; [...]
Message-ID: <42d18ee7-712f-41f4-b4af-dba9988b192an@googlegroups.com>
Subject: Re: Maximum line length in Vi
From: Tejasvi S Tomar <tstomar@outlook...>

On Monday, April 17, 1995 at 12:30:00 PM UTC+5:30, Paul Fox wrote:
> G. Ioannou (gi...@cus.cam.ac.uk) wrote:
> : ed
> : Vim
> : Vile
> : They all worked fine with lines over 2000 characters in length, but I
> : don't know the exact limit.
> lines on vile, at least, are limited by the size of an int.
> anyone know what it is in vim?
> paul
> ---------------------------------
> paul fox, p...@foxharp.boston.ma.us (arlington, ma)

It seems there isn't any limit. https://www.oreilly.com/library/view/learning-the-vi/9780596529833/ch15s10.html

Anyway, today I encountered a new twist on this. Instead of replying to a decade(s)-old post through Deja Google, someone instead hunted down my Wikipedia user page and answered a question of mine about Usenet II (dead for at least ten years) on the User Talk page. Really, I had no pressing care about why 4gh was used for the "Distribution" header. I knew it was a reference to some sci-fi book, but didn't really care more than that. And if I did care, the Usenet II page at Wikipedia has had the answer for a while: Usenet II, diff=prev, oldid=181389883

(The same person who added the reference in 2008 is the person who answered me today on User Talk.)

It makes me wonder if people used to (or maybe still do) get the problem of people writing letters about long ago published stuff after finding it in a library.

C. R. Boffo
123 Main St
Small Town
14 January 1958
Mark Twain
345 Taylor St
San Francisco
Dear Sir,

I am writing to you in regards to your piece about the jumping frog trainer that lived in Calaveras County. I don't know if Jim Smiley is still training frogs for use in jumping contests, but he should be aware that California law now has mandates about frogs kept for jumping contests. I would also like to say that feeding lead to frogs as that stranger did to Dan'l is just cruel.

Yours sincerely,
C. R. Boffo

Any relation of "C. R. Boffo" to the bufo "lick for high" toads (https://en.wikipedia.org/wiki/Colorado_River_toad) is surely coincidental.

Namecheap, Grr

Yesterday qaz.wtf was briefly unavailable. The domain registration expired. This happened for a number of reasons, some mine, some I'm going to blame on Namecheap.

First, what I did wrong: I put an overly strong spam filter rule in place that was marking all mail from them as spam. In particular, the rule was indiscriminate about source when the domain was (a) using registrar-services.com for DNS and (b) using DNS with a SOA serial number that parsed as less than a day old. In fairness, it is a very effective rule. Here's a sample of the domains from the last 100 messages it deleted:

mail[.]forhealth[.]bar Oct 2020; medicarepro[.]xyz Oct 2020; diabetesfrepro[.]xyz Oct 2020; mail[.]diabetes2type[.]casa Oct 2020; carbosfix[.]xyz Oct 2020; mail[.]pocketdrone[.]work Oct 2020; goldplatedscoin[.]xyz Oct 2020; mail[.]fevertrhermal[.]casa Oct 2020; whowhpremium[.]xyz Oct 2020; mail[.]dxpgaget[.]work Oct 2020; mail[.]machinepower[.]bar Oct 2020; mail[.]trackerss[.]bid Oct 2020; royalbalance[.]cam Oct 2020; mail[.]pocketdron[.]work Oct 2020; stopsqribble[.]bid Oct 2020; shinehead[.]bid Oct 2020; mail[.]neckmassager[.]casa Oct 2020; audigrow[.]bid Nov 2020; mail[.]learnpiano[.]work Nov 2020; mail[.]heatromwave[.]casa Nov 2020; BayAreaTechSummit[.]com Nov 2020; mail[.]waxremove[.]casa Nov 2020; mail[.]gpstrack1[.]work Nov 2020; mail[.]gps1track[.]work Nov 2020; mail[.]zoom2dipro[.]casa Nov 2020; mail[.]strategys[.]bid Nov 2020; lifeprotect[.]uno Nov 2020; oxyrobot[.]bid Nov 2020; survivlife[.]uno Nov 2020; survivlife[.]uno Nov 2020; mail[.]musichall[.]casa Nov 2020; mail[.]discovery1[.]casa Nov 2020; mail[.]diabetesremedy[.]work Nov 2020; mail[.]zoomzoom[.]work Nov 2020; yourbusinesstips[.]biz Nov 2020; mail[.]goodhealth[.]casa Nov 2020; vanses[.]icu Nov 2020; mail[.]dailmulti[.]bid Nov 2020; mail[.]zoompro[.]casa Nov 2020; mail[.]heatpad[.]co Dec 2020; mail[.]foodgrow[.]bid Dec 2020; mail[.]shinehead[.]bid Dec 2020; mail[.]bagtrack[.]work Dec 2020; rewardtoshop[.]com Dec 2020; rivalo[.]com Dec 2020; rivalo[.]com Dec 2020; mail[.]techvisions[.]bid Dec 2020; mail[.]vestwoding[.]icu Dec 2020; brutualloss[.]icu Dec 2020; drawlace[.]buzz Dec 2020; 247blindco[.]co[.]uk Dec 2020; 247blindco[.]co[.]uk Dec 2020; 247blindco[.]co[.]uk Dec 2020; 247blindco[.]co[.]uk Dec 2020; mail[.]easysurveyrewards[.]best Dec 2020; zind[.]us Dec 2020; zind[.]us Dec 2020; herpesyl[.]icu Dec 2020; mail[.]digitalbrand[.]icu Dec 2020; mail[.]offertrend[.]icu Dec 2020; offerworld[.]icu Dec 2020; offerworld[.]icu Dec 2020; mail[.]offerhad[.]icu Dec 2020; healingsystem[.]icu Dec 2020; 
myshedesplan[.]icu Dec 2020; boosterday[.]best Dec 2020; brutualloss[.]icu Dec 2020; mail[.]shinehead[.]bid Dec 2020; mail[.]remedypro[.]us Dec 2020; tactnutri[.]icu Dec 2020; mail[.]feverfix[.]icu Dec 2020; mail[.]sugartonic[.]icu Dec 2020; mail[.]mechanism[.]icu Dec 2020; mail[.]ecomdeals[.]icu Dec 2020; mail[.]dealsbreeze[.]icu Jan 2021; mail[.]vestwoding[.]icu Jan 2021; mail[.]heatsawy[.]icu Jan 2021; mail[.]capitalreward[.]icu Jan 2021; mail[.]capitalreward[.]icu Jan 2021; mailserviceemailout1[.]namecheap[.]com Jan 2021; mail[.]energyy[.]bid Jan 2021; mail[.]speechrewards[.]icu Jan 2021; mail[.]usonly[.]bid Jan 2021; mail[.]certifiedstate[.]cam Jan 2021; thecsi[.]com Jan 2021; coloradocareerproject[.]com Jan 2021; mailserviceemailout1[.]namecheap[.]com Jan 2021; mail[.]healthrequired[.]work Jan 2021; mail[.]doorbellclub[.]work Jan 2021; mail[.]hollyb1ook[.]work Jan 2021; mail[.]safehealth[.]work Jan 2021; mail[.]remedypro[.]us Jan 2021; mail[.]yourtone[.]casa Jan 2021; mail[.]healthmonitor[.]casa Jan 2021; mailserviceemailout1[.]namecheap[.]com Jan 2021; mail[.]voidemodems[.]casa Jan 2021; mail[.]entend4ded[.]casa Feb 2021; mailserviceemailout1[.]namecheap[.]com Feb 2021; slotcrime[.]guru Feb 2021; mailserviceemailout1[.]namecheap[.]com Feb 2021

There are six messages, from two domains, that are inappropriately caught there. One of those was quasi-spam. Looking at the last 1000, there's only one more "ham" message caught. Seven out of a thousand is a pretty good track record, but when five of those seven are important that does sting. That's the part that's my fault.
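The SOA half of that rule boils down to a date comparison. My actual filter code isn't shown here, but for serials in the common YYYYMMDDnn style, the check might be sketched as follows (is_fresh and the sample serials are made up for illustration):

```shell
# is_fresh SERIAL YYYYMMDD: true if a YYYYMMDDnn-style serial falls on that day
is_fresh() {
    [ "$(printf '%s' "$1" | cut -c1-8)" = "$2" ]
}

# a zone created the day the spam was sent trips the rule...
is_fresh 2021031501 20210315 && echo "looks freshly created"
# ...an old serial does not
is_fresh 2009150010 20210315 || echo "established zone"
```

In real use the serial would come from something like dig +short soa "$domain" | awk '{print $3}', and "less than a day old" would compare against both today and yesterday to cover timezone slop.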

Now onto Namecheap.

Almost all of my hostnames at Namecheap are set up to autorenew, with care taken in selecting the ones that are not. At the same time, I have a credit card on file at Namecheap, and it is selected as my default payment method. I don't remember when, but I likely changed my card on file in late 2020, when I had a bunch of things to renew in the October to November period. I know I have twenty-ish pieces of email from them in that period that didn't get deleted.

So it turns out that having autorenew selected, and having a card on file that is marked as the default, is not sufficient to actually autorenew anything. You also need to find the hidden configuration page (it is not the regular "edit card" page) that marks a card as enabled for autorenew. Seriously, WTF? Why isn't that with all the other configuration for a credit card?

Anyway, I use my own site regularly, and noticed the outage quickly. It did take a while for all DNS servers to get the message: I was seeing Cloudflare's resolver, eg, still reporting the Namecheap parking page long after Google's had it restored.