domains

World’s single largest Internet domains dataset


Domains Project: Processing petabytes of data so you don’t have to



This public dataset contains a freely available, sorted list of Internet domains.

Dataset statistics

Project news

Support needed!

You can support this project by doing any combination of the following:

Milestones:

Domains

(Wasted) Internet traffic:

Random facts:

Used by

CloudSEK

Using the dataset

This repository employs Git LFS, so users need both git lfs and xz installed to retrieve the data. The cloning procedure is as follows:

git clone https://github.com/tb0hdan/domains.git
cd domains
git lfs install
./unpack.sh
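unpack.sh decompresses the xz archives in place. If you'd rather stream a list without unpacking everything first, Python's built-in lzma module can read .xz files directly. A sketch; the in-memory blob below stands in for a real file such as data/afghanistan/domain2multi-af.txt.xz:

```python
import io
import lzma

def read_domains(xz_bytes):
    """Yield one domain per non-empty line of an xz-compressed domain list."""
    with lzma.open(io.BytesIO(xz_bytes), "rt", encoding="utf-8") as fh:
        for line in fh:
            domain = line.strip()
            if domain:
                yield domain

# In-memory stand-in for one of the repository's .xz files.
blob = lzma.compress(b"1tv.af\naan.af\n")
print(list(read_domains(blob)))  # ['1tv.af', 'aan.af']
```

For a real file, pass `lzma.open(path, "rt", encoding="utf-8")` the path directly instead of wrapping bytes in `io.BytesIO`.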

Getting the unfiltered dataset

Subscribers have access to the raw data, available at https://dataset.domainsproject.org

Some other available features:

wget -m https://dataset.domainsproject.org

Data format

After unpacking, the domain lists are plain text files (~49 GB at 1.7 billion domains) with one domain per line. A sample from data/afghanistan/domain2multi-af.txt:

1tv.af
1tvnews.af
3rdeye.af
8am.af
aan.af
acaa.gov.af
acb.af
acbr.gov.af
acci.org.af
ach.af
acku.edu.af
acsf.af
adras.af
aeiti.af
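Because each file is just one domain per line, simple processing needs nothing beyond the standard library. As an illustration (not part of the dataset tooling), counting domains per top-level domain might look like:

```python
from collections import Counter

def tld_counts(lines):
    """Count domains per top-level domain (the label after the last dot)."""
    return Counter(
        line.strip().rsplit(".", 1)[-1]
        for line in lines
        if line.strip()
    )

# Works the same whether `lines` is a list or an open file object,
# e.g. open("data/afghanistan/domain2multi-af.txt").
sample = ["1tv.af", "acaa.gov.af", "acku.edu.af", "aan.af"]
print(tld_counts(sample))  # Counter({'af': 4})
```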

Search engines and crawlers

Crawlers

Domains Project bot

Domains Project uses a crawler and DNS checks to discover new domains.

The DNS checks client, called Freya, is in its early stages and is used by a select few; I’m working on making it stable and good enough for the general public.

The HTTP crawler, called Idun, is being rewritten as well.
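Freya's implementation isn't public, but the core idea of a DNS check is simply asking whether a name still resolves. A minimal illustration of that idea (not Freya's actual code), using only the standard library:

```python
import socket

def resolves(domain):
    """Return True if the domain currently resolves to at least one address."""
    try:
        socket.getaddrinfo(domain, None)
        return True
    except socket.gaierror:
        return False

# .invalid is reserved (RFC 2606) and never resolves.
print(resolves("localhost"), resolves("name.invalid"))  # True False
```

A production checker would add timeouts, retries, and direct queries against multiple resolvers, which `getaddrinfo` does not expose.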

A typical user agent for the Domains Project bot looks like this:

Mozilla/5.0 (compatible; Domains Project/1.0.8; +https://domainsproject.org)

Some older versions point to the GitHub repo instead:

Mozilla/5.0 (compatible; Domains Project/1.0.4; +https://github.com/tb0hdan/domains)
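Both user-agent variants share the same shape, so if you want to identify the bot's hits in your access logs, a single pattern covers them. A sketch, not an official format guarantee:

```python
import re

# Captures the bot's version and contact URL from either UA variant.
BOT_UA = re.compile(r"Domains Project/([\d.]+); \+(\S+?)\)")

for ua in (
    "Mozilla/5.0 (compatible; Domains Project/1.0.8; +https://domainsproject.org)",
    "Mozilla/5.0 (compatible; Domains Project/1.0.4; +https://github.com/tb0hdan/domains)",
):
    m = BOT_UA.search(ua)
    print(m.group(1), m.group(2))
# 1.0.8 https://domainsproject.org
# 1.0.4 https://github.com/tb0hdan/domains
```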

All data in this dataset is gathered using the Scrapy and Colly frameworks.

Starting with version 1.0.7, the crawler has partial robots.txt support and rate limiting. Please open an issue if you experience any problems, and don’t forget to include your domain.

Disabling Domains Project bot access to your website

Add this to your robots.txt:

User-agent: domainsproject.org
Disallow: /

or this:

User-agent: Domains Project
Disallow: /

The bot checks for both.
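You can sanity-check your rules with Python's standard urllib.robotparser. A sketch; note that this parser matches on the bot's product token rather than the full User-Agent header:

```python
import urllib.robotparser

# The two rule blocks from above, combined into one robots.txt.
ROBOTS_TXT = """\
User-agent: domainsproject.org
Disallow: /

User-agent: Domains Project
Disallow: /
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Either token is denied everywhere; unrelated bots are unaffected.
print(rp.can_fetch("domainsproject.org", "https://example.com/page"))   # False
print(rp.can_fetch("Domains Project/1.0", "https://example.com/page"))  # False
print(rp.can_fetch("SomeOtherBot/1.0", "https://example.com/page"))     # True
```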

Others

Yacy

Yacy is a great open-source search engine. Here’s my post on the Yacy forum: https://searchlab.eu/t/domain-list-for-easier-search-bootstrapping/231

Additional sources

Rapid7 Sonar FDNS - no longer open

List of .FR domains from AfNIC.fr

Majestic Million

Internetstiftelsen Zone Data

DNS Census 2013

bigdatanews extract from Common Crawl (circa 2012)

Common Crawl - March/April 2020

The CAIDA UCSD IPv4 Routed /24 DNS Names Dataset - January/July 2019

GSA Data

OpenPageRank 10m hosts

Switch.ch Open Data

Slovak domains - Open Data

Research

This dataset can be used for research. Several published papers cover different topics; links to them are collected here for reference.

Published works based on this dataset

Phishing Protection SPF, DKIM, DMARC

Email address analysis (Czech)

Proteus: A Self-Designing Range Filter

Large Scale String Analytics in Arkouda

Fake Phishing: Setup, detection, and take-down

Cloudy with a Chance of Cyberattacks: Dangling Resources Abuse on Cloud Platforms

Analysis

The Internet of Names: A DNS Big Dataset

Enabling Network Security Through Active DNS Datasets

Re-registration and general statistics

Analysis of the Internet Domain Names Re-registration Market

Lexical analysis of malicious domains

Detection of malicious domains through lexical analysis

Malicious Domain Names Detection Algorithm Based on Lexical Analysis and Feature Quantification

Detecting Malicious URLs Using Lexical Analysis