Internet Choke Points: Concentration of Authoritative Name Servers

Published: 2020-08-04
Last Updated: 2020-08-04 15:01:00 UTC
by Johannes Ullrich (Version: 1)
1 comment(s)

A utopian vision of the Internet often describes it as a distributed partnership of equals giving everybody the ability to publish and discover information worldwide. This open, democratic Internet is often little more than an imaginary legacy construct that may have existed at some time in the distant past, if ever. Reality: Today, the Internet is governed by a few large entities. Diverse interconnectivity and content distribution were also supposed to make the Internet more robust. But as has been shown over and over again, a simple misconfiguration at a single significant player can cause large parts of the network to disappear.

Today, I played a bit with the top-level domain zone files that I have been investigating recently. I have been looking at close to 900 different zones. Many of them are meaningless and not used, but they also include the big ones like .com, .top (yes, this is now the 2nd largest zone), .net, and .org. Any guesses on the 5th largest zone file? Either way, for this experiment, I extracted the NS records, as well as the A/AAAA records, for all these TLDs. That comes to about 477 million records and 2.7 million different name server hostnames. These hostnames resolve to 1 million IPv4 addresses (so many of these "redundant" name servers resolve to the same IP; no news here), and only 37k AAAA records (showing how much more fragile the IPv6 Internet is).

Note that we are talking about authoritative name servers here, not recursive name servers (which may have similar concentration issues with the increased popularity of services like Cloudflare, OpenDNS, and Quad9).

Now the real problem: How many name servers, out of 2.7 million, does it take to "turn off" 80% of the Internet? The good old, overused Pareto rule would suggest 20% (roughly 540,000). Wrong... It only takes 2,302 name servers, or about 0.084%! And 0.35% of name servers are responsible for 90% of all domain names.
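The measurement above boils down to a simple frequency count: tally how many NS records each name server is responsible for, then walk down the list from most to least popular until the chosen coverage fraction is reached. A minimal sketch of that calculation (the input format and function name are my own assumptions, not the actual analysis code):

```python
from collections import Counter

# Hypothetical input: (domain, nameserver) pairs, e.g. parsed from TLD
# zone file NS records. Names and layout here are assumptions.
def servers_for_coverage(pairs, fraction=0.8):
    """Return how many name servers (most popular first) are needed
    to cover `fraction` of all NS records."""
    counts = Counter(ns for _domain, ns in pairs)
    total = sum(counts.values())
    covered = 0
    for i, (_ns, n) in enumerate(counts.most_common(), start=1):
        covered += n
        if covered >= fraction * total:
            return i
    return len(counts)

# Toy example: one large provider plus many single-domain servers.
pairs = [(f"d{i}.com", "ns.bigdns.example") for i in range(80)]
pairs += [(f"e{i}.com", f"ns{i}.small.example") for i in range(20)]
print(servers_for_coverage(pairs, 0.8))  # prints 1: one server covers 80%
```

With a heavy-tailed distribution like this toy one, a single server already reaches the 80% mark, which mirrors (in miniature) the concentration seen in the real zone data.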

This ratio does not change substantially if I use IP addresses or if I try to summarize name servers owned by different organizations. But a simple misconfiguration at one major DNS provider (see Cloudflare a couple of weeks ago) or a DDoS attack against one (DYN and Mirai) will bring down large parts of the "Internet", or at least make them inaccessible to people who can't remember IP addresses (maybe making the Internet a safer place in the end).

Here are a couple of graphs to illustrate this issue.

While not necessarily the most intuitive way to look at this data, the only way to display it meaningfully is with a logarithmic x-axis. Note that 80% is around 380 million (3.8x10^8).

logarithmic number of name servers and records
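A plot of this kind is straightforward to reproduce. The sketch below uses synthetic, heavy-tailed data as a stand-in for the real zone measurements (an assumption; the actual counts are not included here) and draws the cumulative curve on a logarithmic x-axis with the 80% and 90% reference lines:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for scripted use
import matplotlib.pyplot as plt
import numpy as np

# Synthetic stand-in for the real data: counts[i] is the number of NS
# records served by the i-th most popular name server, sorted descending.
rng = np.random.default_rng(0)
counts = np.sort(rng.zipf(1.3, 10_000))[::-1]
cum = np.cumsum(counts)

plt.semilogx(np.arange(1, len(cum) + 1), cum, color="green")
plt.axhline(0.8 * cum[-1], color="red")   # 80% reference line
plt.axhline(0.9 * cum[-1], color="blue")  # 90% reference line
plt.xlabel("name servers (log scale, most popular first)")
plt.ylabel("cumulative NS records")
plt.savefig("ns_concentration.png")
```

Because the distribution is so skewed, the green curve shoots up within the first few hundred servers and then flattens, which is why the linear-scale version below looks mostly horizontal.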

Zooming in on the first 5,000 name servers will give us a bit better insight into how many domains they are responsible for. The green line (just like above) follows the cumulative number of NS records represented by the name servers. The red line indicates 80%, and the blue line 90%.

first 5000 hosts

And for effect, the entire dataset using a linear scale. Note how the green line is mostly horizontal.

So what can you learn from this? Using a cloud-based DNS service is simple and often more reliable than running your own name server. But this large concentration of name services with a few entities substantially increases the risk to the infrastructure. A couple of ways to mitigate this risk:

  • Keep secondary name servers for zones you rely on in-house (this can be tricky for cloud providers you rely on, but you can try it for your own domains and maybe some partners).
  • Use more than one DNS provider. A second provider should not be difficult to set up if you configure its name servers as secondaries to your primary name servers.
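As a rough illustration of the second point, this is what a secondary zone could look like in a BIND 9 configuration (a minimal sketch with example names and addresses; your provider's interface and your primary's IP will differ, and the primary must also permit zone transfers to the secondary):

```
// Hypothetical named.conf fragment on the secondary provider's server.
// 192.0.2.1 is a documentation/example address standing in for your primary.
zone "example.com" {
    type secondary;               // "slave" in older BIND versions
    primaries { 192.0.2.1; };     // "masters" in older BIND versions
    file "secondaries/example.com.db";
};
```

The secondary then keeps an up-to-date copy of the zone via standard zone transfers (AXFR/IXFR), so a failure at one provider does not take all of your name servers down at once.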
Provider           Number of records
GoDaddy            94,536,346
Google Domains     20,134,705
(Xiamen Diensi)    15,642,026
IONOS (ui-dns)     15,599,972
hichina            15,118,733
Cloudflare         13,759,936
                   11,159,866
                   9,170,163
                   7,334,904
                   7,321,327

Johannes B. Ullrich, Ph.D. , Dean of Research, SANS Technology Institute


Reminder: Patch Cisco ASA / FTD Devices (CVE-2020-3452). Exploitation Continues

Published: 2020-08-04
Last Updated: 2020-08-04 11:20:29 UTC
by Johannes Ullrich (Version: 1)
0 comment(s)

Just a quick reminder: We are continuing to see small numbers of exploit attempts against CVE-2020-3452. Cisco patched this directory traversal vulnerability in its Adaptive Security Appliance (ASA) and Firepower Threat Defense (FTD) software. The exploit is rather simple and currently used to find vulnerable systems by reading benign LUA source code files. 

Example attempts:

GET /+CSCOE+/translation-table?=mst&textdomain=/%2bCSCOE%2b/portal_inc.lua@default-languaqe&lang=../ HTTP/1.1
GET /+CSCOE+/translation-table?=mst&textdomain=/+CSCOE+/portal_inc.lua@default-languaqe&lang=../
GET /translation-table?=mst&textdomain=
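Requests like these are easy to flag in web server or proxy logs. A minimal detection sketch (the log format, file names, and regular expressions are my own assumptions, not an official signature) looks for the translation-table path combined with the directory-traversal `lang=../` parameter:

```python
import re

# Match both the plain and the URL-encoded (+ vs %2b) form of the path.
PROBE = re.compile(r"/(\+|%2b)CSCOE(\+|%2b)/translation-table", re.IGNORECASE)
TRAVERSAL = re.compile(r"lang=\.\./", re.IGNORECASE)

def is_probe(line: str) -> bool:
    """Return True if a log line looks like a CVE-2020-3452 probe."""
    return bool(PROBE.search(line)) and bool(TRAVERSAL.search(line))

line = ("GET /+CSCOE+/translation-table?=mst&textdomain="
        "/+CSCOE+/portal_inc.lua@default-languaqe&lang=../ HTTP/1.1")
print(is_probe(line))                        # True
print(is_probe("GET /index.html HTTP/1.1"))  # False
```

Running something like this over access logs is only a first pass; a patched device is the actual fix.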

Our honeypot isn't emulating this vulnerability well right now, so we are not seeing follow-up attacks.

Johannes B. Ullrich, Ph.D. , Dean of Research, SANS Technology Institute
