Searching for Creative Domain Wordplay with only a Shell Script

Domain names have always fascinated me. Not the boring ones like google.com or gov.uk, no, I’m talking about the clever wordplay people come up with. Sometimes, a domain name even comes before the company is founded!

Good examples of these are lobste.rs, the tech-oriented discussion site; ultralig.ht, the web UI framework; and adf.ly, the link monetisation platform. Sometimes, companies use these tricks not as their primary domain, but as a short link URL such as youtu.be or goo.gl. Then there are domains like t.co, which Twitter supposedly paid $1.5 million for!

Some registrars even have tools for searching for common words that end with their TLD, such as the .ly registrar.


I wanted to do exactly that. I’m currently using a Mac, and I knew that many Linux/UNIX distributions have a list of common words somewhere. In my case, it’s at /usr/share/dict/words. Combining that with grep and my suffix gets me the first step:

grep 'rs$' /usr/share/dict/words
abovestairs
afterhours
alexanders
...

Perfect! Now I need to cut off the end (the “rs”) and replace it with a dot followed by “rs” so it’s a proper domain name. sed will help here:

grep 'rs$' /usr/share/dict/words | sed 's/..$/.rs/'
abovestai.rs
afterhou.rs
alexande.rs
...

Great, I now have a list of all the common words that end in “rs” formatted as a domain name. Now, how do I go about checking if they are registered or not?

There’s another useful tool in the Linux/UNIX world called dig, which, hilariously, is short for “domain information groper”. Bit of an odd word choice there to be honest…

Anyway, we can use xargs, which maps a stream of lines to a set of calls to a command. It’s similar to using [1, 2, 3].map((v) => {...}) in JavaScript: it runs some command for every input item.
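As a quick toy example (not part of the pipeline, and the item: label is just a made-up placeholder), xargs with -n 1 runs the command once per input line:

printf 'foo\nbar\nbaz\n' | xargs -n 1 echo item:
item: foo
item: bar
item: baz

Without -n 1, xargs instead batches the arguments into as few calls as possible, which is what the pipeline below relies on, since dig happily accepts multiple query names in one invocation.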

grep 'rs$' /usr/share/dict/words | sed 's/..$/.rs/' | xargs dig

The problem is that dig outputs a bunch of lines for each query: some query metadata, the nameserver IP, the date, the question section, the answer section, et cetera. That’s a bit too much to skim through.

dig has a set of flags for controlling the output, prefixed by +. The first one I used was +noall, which disables all the output sections; we can then selectively add individual sections back. I’ve added +question +answer so the output lists every domain queried, with the active ones followed by their respective IP addresses.

grep 'rs$' /usr/share/dict/words | sed 's/..$/.rs/' | xargs dig +noall +question +answer
;abovestai.rs.			IN	A
;afterhou.rs.			IN	A
afterhou.rs.		83822	IN	A	198.50.252.64
...

Now we have a list of all our target domains: each “question” row shows the outbound request for a domain name, and any active IPs are listed below it.

One extra useful bit: you can omit +question to drop the queried domains from the output, so it will only contain registered domains.
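For example, this variant (a rough sketch; the count can be slightly off when a CNAME chain adds extra answer rows) prints only the answer rows, then counts the distinct domain names that resolved:

grep 'rs$' /usr/share/dict/words | sed 's/..$/.rs/' | xargs dig +noall +answer | awk '{print $1}' | sort -u | wc -l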

So, for words ending in rs, there are a total of 31 registered domains. That is, domains that actually have an IP associated with them (there may be domains that are registered but do not have an IP yet). Of these 31 domains, only 11 actually lead somewhere interesting:

  • dagge.rs – someone’s instagram
  • dissimila.rs – a documentary filmmaker’s portfolio
  • cracke.rs – a supposedly “hacked” website?
  • hoppe.rs – the finest door accessories?
  • indoo.rs – an indoor mapping system!
  • ma.rs – a mobile website builder straight out of 2006
  • mess.rs – a 3D and interactive design studio
  • plie.rs – a company that makes… pliers!
  • reve.rs – some kind of vehicle physics simulator
  • scisso.rs – a sound production and marketing company
  • some.rs – a fellow developer’s website!

Exploring the internet is fun without Google!
