Module 2 · Lesson 8
Case Studies: Notable DNS Security Incidents
⏱ 15 min read
Theory is useful. Watching real systems fail is educational in a way no theory can match. These four incidents each reveal something different about DNS security: infrastructure fragility, protocol trust assumptions, the power of combined attack techniques, and what coordinated responsible disclosure looks like when the stakes are high.
Case Study 1: Dyn DDoS (October 21, 2016)
What Happened
On October 21, 2016, starting at approximately 07:00 UTC, Dyn, a major managed DNS provider, came under a massive DDoS attack. The attack used the Mirai botnet, which had compromised hundreds of thousands of IoT devices (cameras, DVRs, routers) by logging in with default or weak credentials.
The attack peaked at an estimated 1.2 Tbps of traffic. Dyn served DNS for Twitter, Reddit, Spotify, GitHub, Airbnb, PayPal, and hundreds of other major services, and all of them suffered outages, particularly for users on the US East Coast.
There were three distinct attack waves across the day: Dyn restored service, was hit again, restored again, and was hit a third time. Full recovery took most of the day.
How It Worked Technically
Mirai's DNS flood combined multiple techniques:
- Direct flooding of Dyn's DNS servers with massive query volumes
- DNS amplification using open resolvers (spoofed source IP = Dyn's servers)
- Randomized subdomain queries (water torture) against Dyn's authoritative servers
The randomized subdomain attack was particularly effective: Dyn had to process millions of queries for random subdomains like a8f72jq.twitter.com. Each query was a cache miss. The authoritative servers couldn't cache their way out of it.
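The cache-miss mechanics are easy to see in a short sketch. This is illustrative only (the zone name is a placeholder, and real Mirai used its own label generation): with a seven-character alphanumeric label there are roughly 78 billion possible names, so virtually every generated query name is unique and can never be served from cache.

```python
import random
import string

def random_subdomain(zone: str, length: int = 7) -> str:
    """Build one 'water torture' query name: a random label under the victim zone."""
    label = "".join(random.choices(string.ascii_lowercase + string.digits, k=length))
    return f"{label}.{zone}"

# Each generated name is almost certainly unique, so a resolver's cache never
# helps: every query becomes a miss that must be answered by the authoritative
# servers. 10,000 draws from ~36^7 possible labels yields ~10,000 distinct names.
names = {random_subdomain("victim.example") for _ in range(10_000)}
print(len(names))
```

This is why anycast capacity alone wasn't enough for Dyn: the attack traffic was indistinguishable from legitimate cold-cache queries, and each one demanded real work.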
The scale of Mirai was novel. It wasn't one sophisticated attacker — it was a botnet script with default credentials scanning the internet. The attack code was released publicly, making it trivially reproducible.
What Defenders Did
Dyn used anycast and distributed infrastructure, which helped absorb the first wave. As attack volumes exceeded their scrubbing capacity, they began BGP blackholing the most heavily targeted addresses, temporarily breaking service to stop the flood from overwhelming everything.
Oracle announced its acquisition of Dyn, reportedly for approximately $600 million, in November 2016, one month after the attack. The timing made for awkward press coverage.
Lessons
Single-provider DNS is an architectural risk. Twitter, Reddit, and GitHub had DNS solely on Dyn. When Dyn went down, they went down. No amount of internal engineering excellence protects you from your DNS provider failing. Secondary DNS with a different provider is a basic resilience requirement.
Mirai showed that IoT device security is a public infrastructure problem. The compromised devices weren't the victims — they were the weapon. Manufacturers deploying devices with default credentials at scale created public attack infrastructure.
Monitoring your DNS provider's status page doesn't count as resilience. Have a playbook for "our DNS provider is under attack" that includes switching to a secondary provider.
Case Study 2: Sea Turtle DNS Hijacking Campaign (2017-2019)
What Happened
Cisco Talos published their analysis in April 2019. The campaign, which Talos assessed as state-sponsored (later reporting linked it to Turkish interests, though attribution was never definitive), targeted government agencies, military organizations, energy companies, telecoms, ISPs, and IT companies across 13 countries, primarily in the Middle East and North Africa.
At least one country-code TLD registry was also compromised, which gave the attackers the ability to modify NS records for any domain in that ccTLD.
The campaign ran from at least January 2017 through early 2019. Some targets remained compromised for months without detection.
How It Worked Technically
The attack is clean in its logic:
- Compromise a DNS registrar or DNS service provider (not the target)
- Modify the target domain's NS records to point at an attacker-controlled nameserver
- Issue a Let's Encrypt or other DV certificate for the target domain (domain validation uses DNS, which the attacker now controls)
- Stand up a man-in-the-middle proxy at the attacker-controlled nameserver IP
- Forward all legitimate traffic to the real servers, after recording credentials
The victims' users connected to https://mail.targetgov.example and saw a valid TLS certificate (no browser warning). The attacker's server terminated TLS, read the traffic, re-encrypted it, and forwarded it. The real mail server worked normally.
Detection was hard because:
- The real servers weren't compromised — they looked fine
- Users saw valid HTTPS with no warnings
- The only tell was in Certificate Transparency logs: new certificates had been issued for domains that didn't authorize them
- NS record changes might briefly appear in monitoring, but many organizations didn't monitor NS records
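The Certificate Transparency tell above is checkable mechanically. A minimal sketch, with assumptions flagged: the feed below imitates crt.sh-style records, and the field names, domain, and issuer allowlist are all illustrative. A real monitor would poll CT logs or an aggregator for every domain you own.

```python
import json

MONITORED_DOMAIN = "mail.targetgov.example"
AUTHORIZED_ISSUERS = {"ExampleCorp Internal CA"}  # issuers we actually use (assumed)

# Simulated CT feed: one legitimate issuance, one the domain owner never requested.
ct_feed = json.loads("""[
  {"common_name": "mail.targetgov.example", "issuer_name": "ExampleCorp Internal CA"},
  {"common_name": "mail.targetgov.example", "issuer_name": "Let's Encrypt R3"}
]""")

# Any certificate for our domain from an issuer we never authorized is the
# Sea Turtle tell: someone who controls our DNS passed domain validation.
unexpected = [
    rec for rec in ct_feed
    if rec["common_name"] == MONITORED_DOMAIN
    and rec["issuer_name"] not in AUTHORIZED_ISSUERS
]
print(unexpected)  # flags the unauthorized issuance for investigation
```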
What Defenders Did or Should Have Done
Most targets didn't detect the attack themselves — external researchers and intelligence sharing led to discovery.
What would have stopped this:
- Registry Lock. Server-level EPP locks require out-of-band verification to change NS records. A compromised registrar account can't override registry lock. This is the single most effective control against this attack class.
- Certificate Transparency monitoring. Watching for unexpected certificate issuance for your domains gives early warning.
- NS record monitoring. Automated checks that alert when your domain's NS records change.
- Multi-factor authentication on registrar accounts using hardware keys (not SMS, which can be SIM-swapped).
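The NS-monitoring control reduces to a set comparison. A minimal sketch: the nameserver names here are placeholders, and the observation is simulated; in practice you would query several independent resolvers (e.g. with dnspython) and treat any deviation as an incident, not configuration drift.

```python
EXPECTED_NS = {"ns1.dnsprovider.example.", "ns2.dnsprovider.example."}

def check_ns(observed: set[str]) -> list[str]:
    """Return alert messages for any deviation from the expected NS set."""
    alerts = []
    for ns in sorted(observed - EXPECTED_NS):
        alerts.append(f"UNEXPECTED NS: {ns}")   # possible hijack (the Sea Turtle pattern)
    for ns in sorted(EXPECTED_NS - observed):
        alerts.append(f"MISSING NS: {ns}")      # delegation partially replaced
    return alerts

# Simulated observation: one legitimate nameserver replaced by an attacker's.
observed = {"ns1.dnsprovider.example.", "ns.attacker.example."}
print(check_ns(observed))
```

Run this from outside your own network on a schedule; during Sea Turtle, the NS change was visible to anyone who looked, but few organizations were looking.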
DNSSEC alone would not have fully protected targets here: an attacker who also controlled the parent zone (as happened with the ccTLD registry compromise) could replace DS records along with NS records. Registry lock protects both.
Case Study 3: MyEtherWallet BGP + DNS Hijack (April 24, 2018)
What Happened
At approximately 11:05 UTC on April 24, 2018, attackers began redirecting MyEtherWallet (MEW) users to a phishing site. MyEtherWallet is an Ethereum wallet service. The attackers stole an estimated $150,000 worth of cryptocurrency in roughly two hours before the attack was mitigated.
Small number, technically — but the attack is worth studying because of the sophistication of the combined technique.
How It Worked Technically
MEW used Amazon Route 53 for DNS. The attack didn't compromise Route 53 itself. Instead, the attackers used BGP to hijack the address space of Route 53's authoritative nameservers.
By advertising more-specific /24 prefixes (longest prefix wins in BGP) through a hosting provider in Columbus, Ohio (AS10297, eNet Inc.), the attackers attracted queries destined for Route 53's nameservers. Resolvers that accepted the bogus routes, including Google's public resolvers at 8.8.8.8 and 8.8.4.4, unknowingly sent their authoritative queries for MEW's zone to an impostor DNS server.
The impostor answered queries for myetherwallet.com with the address of a Russian-hosted phishing server. Because no public CA had issued the attackers a certificate for that name, the phishing server presented a self-signed certificate, and browsers showed a full-page TLS warning.
Users who clicked through the certificate warning (and some did) had their private keys stolen.
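The route selection at the heart of the hijack can be sketched with Python's `ipaddress` module. The prefixes below follow the incident's pattern (a hijacker's /24 more-specific against a legitimately announced covering prefix) but are illustrative, not the exact announcements.

```python
import ipaddress

# Longest-prefix match: routers forward along the most specific matching route.
routes = {
    ipaddress.ip_network("205.251.192.0/23"): "legitimate origin",
    ipaddress.ip_network("205.251.193.0/24"): "hijacker's more-specific",
}

def best_route(dst: str) -> str:
    """Pick the matching route with the longest prefix, as BGP best-path does."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    return routes[max(matches, key=lambda net: net.prefixlen)]

print(best_route("205.251.193.55"))  # covered by both; the /24 wins
print(best_route("205.251.192.10"))  # only the /23 matches; legitimate path
```

The asymmetry is the whole attack: the hijacker doesn't need to out-announce the legitimate origin everywhere, only to inject a more specific prefix that routers prefer automatically.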
The Combined Attack
This is the sophisticated part: the attacker needed both:
- BGP hijack of Amazon Route 53's nameserver address space (requires a BGP-speaking peer, which suggests ASN-level access or compromise)
- A ready phishing site with partial certificate setup
The BGP hijack alone doesn't give you anything useful unless you've also prepared the DNS manipulation and the phishing infrastructure. This was premeditated.
The bogus announcements were withdrawn after roughly two hours, restoring normal routing. MEW pushed an immediate statement advising users not to proceed past certificate warnings.
Lessons
BGP is not just a routing problem — it affects DNS. If someone can redirect traffic bound for your authoritative nameservers, or for a resolver, they can serve forged DNS answers to everyone downstream. RPKI (Resource Public Key Infrastructure) for BGP route validation is the mitigation, and deployment has increased but is still incomplete.
Certificate warnings exist for a reason. Users who ignored the browser warning about an invalid certificate for a site handling their cryptocurrency private keys paid the price. Design systems so that users cannot proceed past invalid certificate warnings for sensitive operations.
Resolver diversity matters. In this incident the authoritative side was hijacked, but the same logic applies to resolvers: relying only on 8.8.8.8 means a BGP hijack of that /24 compromises all your DNS. Sending queries to multiple resolvers (8.8.8.8 and 1.1.1.1) makes coordinated BGP hijacking harder, because the attacker would need to hijack both simultaneously.
Case Study 4: The Kaminsky Disclosure (2008)
What Happened
In early 2008, Dan Kaminsky discovered the cache poisoning vulnerability described in Lesson 03. He understood immediately that it was serious enough that coordinated disclosure was necessary: if he published, attackers would exploit it before patches shipped. If he told vendors one-by-one, the secret would leak before everyone had patched.
Kaminsky contacted Paul Vixie (BIND lead) and arranged a coordinated response. Over the next six months, working under a strict embargo, approximately 16 DNS software vendors, hardware vendors, and operating-system maintainers coordinated to ship simultaneous patches.
On July 8, 2008, all vendors released patches at once. It was the largest synchronized security patch in internet history at that point.
The secret lasted until July 21, 2008. After public speculation came close to the truth, a researcher at Matasano Security accidentally published the full technical details in a blog post. They retracted it quickly, but the internet had already copied it.
How the Disclosure Worked
Kaminsky's approach is studied in security circles now as a model for how to handle a critical infrastructure vulnerability. Key elements:
- He went to the infrastructure maintainers first (not to the press, not for bug bounties, not for credit)
- He organized the vendors and established a coordination channel
- He maintained the embargo for six months while patches were developed and tested
- He gave DNS operators enough time to deploy patches before the technical details became public
At Black Hat 2008 (August 6, a month after the patch), Kaminsky gave the full presentation. The room was packed.
Why It Mattered
The exploit worked against virtually every DNS resolver in existence. In 2008, source port randomization was not standard practice. Transaction ID randomization alone was trivially defeated by the Kaminsky technique.
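The entropy arithmetic makes the fix concrete. A back-of-envelope sketch: the port count and spoofs-per-race figure are illustrative assumptions (real resolvers draw source ports from an OS-dependent range), but the ratio is what matters.

```python
TXID_SPACE = 2**16          # 16-bit transaction ID
PORT_SPACE = 64_000         # approx. usable ephemeral source ports (assumed)

spoofs_per_race = 100       # forged responses an attacker lands per query race (assumed)

# Probability that one race succeeds = spoofed guesses / size of the guess space.
p_txid_only = spoofs_per_race / TXID_SPACE
p_txid_port = spoofs_per_race / (TXID_SPACE * PORT_SPACE)

# Expected races until first success (geometric distribution): 1/p.
print(f"TXID only:       1 in {1 / p_txid_only:,.0f} races")
print(f"TXID + src port: 1 in {1 / p_txid_port:,.0f} races")
```

With only the transaction ID to guess, success is expected in a few hundred races, and Kaminsky's random-subdomain trick supplies unlimited races in minutes. Adding source port randomization multiplies the guess space by tens of thousands, pushing the expected effort into tens of millions of races.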
If an attacker had independently discovered this and exploited it silently, they could have poisoned major resolver caches for arbitrary domains — banking, email, government infrastructure. The impact would have been enormous.
Lessons
Responsible disclosure has a cost. Kaminsky spent six months unable to publish what is arguably the most important DNS security finding in the protocol's history. The security community owes him and the vendors who coordinated a significant debt.
Infrastructure-level vulnerabilities require infrastructure-level coordination. You can't patch DNS the way you patch a web application. Millions of resolver deployments had to be updated. That takes time, planning, and vendor cooperation.
The protocol's simple design creates fundamental limitations. 16-bit transaction IDs were a design decision made when the threat model was academic. When the threat model changed, the protocol had to adapt. DNSSEC is the real fix — but it took years to deploy, and is still not universal.
Synthesis
These four incidents share a common thread: DNS is trusted in ways its design doesn't fully support. Amplification attacks trust that UDP source IPs are legitimate. Cache poisoning exploits the trust given to any correctly-formatted response. Domain hijacking exploits the trust given to registrar control. The Kaminsky attack exploited the entropy assumptions of transaction IDs.
Each successful attack found a gap between how DNS was designed to work and how it actually worked in a hostile environment. The mitigations — BCP38, port randomization, DNSSEC, registry locks, BGP route validation — all narrow those gaps. None closes them completely.
The job of a DNS security engineer is to understand where the remaining gaps are and decide which ones are worth closing for your threat model.
Key Takeaways
- Dyn 2016: single-provider DNS is an architectural risk. NS diversity is non-negotiable for critical services.
- Sea Turtle: domain hijacking at the registrar level bypasses the DNS protocol entirely. Registry lock and CT monitoring are the right controls.
- MyEtherWallet: BGP hijacking can redirect your resolver traffic without touching your DNS zone. RPKI and resolver diversity are mitigations.
- Kaminsky: coordinated disclosure at internet scale is possible, and sometimes necessary. Source port randomization is the bandage; DNSSEC is the fix.
Further Reading
- Kaminsky Black Hat 2008 slides (archived): search "Kaminsky DNS Black Hat 2008"
- Cisco Talos Sea Turtle report: https://blog.talosintelligence.com/sea-turtle/
- MyEtherWallet incident timeline: https://www.reddit.com/r/ethereum/comments/8eedst/
- BGP Stream (historical BGP hijack data): https://bgpstream.caida.org
- RPKI status: https://rpki.cloudflare.com
- Dyn post-mortem: https://dyn.com/blog/dyn-analysis-summary-of-friday-october-21-attack/
That's Module 2. You've now seen the threat map, the mechanics of every major DNS attack class, and four real incidents. Module 3 covers DNS performance and reliability: TTL optimization, caching strategy, and building resilient DNS infrastructure.