The greatest and most valuable general purpose communications and information sharing network mankind has known is surprisingly easy to break.
Maybe the internet can survive nuclear strikes, but it sure as hell can't route around the damage caused by network operators fat-fingering device configurations.
Without getting too geeky here, the internet is actually a collection of networks, around 70,000 currently, that agree to transmit data to each other depending on location, cost and assumed capacity.
This happens via the Border Gateway Protocol (BGP), which end-users usually know nothing about. BGP lets networks work out a route for the data by passing announcements that say "we are the best path to Google/Amazon/Netflix" and so on.
Unfortunately, BGP doesn't have any smarts as such. It's easy to make configuration mistakes that send out bogus announcements, or "rumours" as Geoff Huston, chief scientist at the Asia Pacific Network Information Centre (APNIC), calls them.
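To see why a bogus announcement is so dangerous, it helps to know that routers pick the most specific matching prefix they've heard about, with no check on whether the announcer is telling the truth. Here's a minimal sketch of that longest-prefix-match behaviour; the prefixes and network names are entirely made up for illustration and bear no resemblance to real BGP configuration:

```python
import ipaddress

# Toy routing table: prefix -> the network that announced it.
# All prefixes and names here are hypothetical.
routes = {
    ipaddress.ip_network("203.0.113.0/24"): "legitimate-network",
}

def best_route(dest, table):
    """Longest-prefix match: the most specific announcement wins."""
    matches = [net for net in table if dest in net]
    return table[max(matches, key=lambda net: net.prefixlen)] if matches else None

dest = ipaddress.ip_address("203.0.113.42")
print(best_route(dest, routes))  # legitimate-network

# A bogus, more-specific announcement leaks in and is accepted blindly:
routes[ipaddress.ip_network("203.0.113.0/25")] = "misconfigured-network"
print(best_route(dest, routes))  # misconfigured-network - traffic diverted
```

The point is that the "rumour" doesn't need to be plausible, only more specific than the truth, and every router that hears it will happily send traffic the wrong way.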
You and I know that diverting all the traffic on the Southern Motorway down, say, Penrose Rd would be madness and cause instant gridlock.
BGP knows no such thing, which means someone could take New Zealand offline with a few keystrokes as data is sent to networks that don't have the capacity for it.
This happens often, like in June this year when content network Cloudflare lost 15 per cent of its internet traffic. The culprit was a small network belonging to a United States company that had tried to optimise its internet routes by splitting up traffic among multiple circuits.
Its upstream network, US giant Verizon, passed on those dodgy announcements to the rest of the world and that meant goodbye Facebook and Amazon for lots of people.
Earlier in June, China Telecom picked up a bunch of internal routes from a Swiss provider that it should have ignored, but didn't. All of a sudden, a fat wodge of mobile data traffic in Europe went through China Telecom's network, which, from a security point of view, is not what Western telcos would like to happen.
When those configuration flubs happen and end-users rage that their internet is gone because Google is no longer reachable, there are no commands to issue to sort things out.
Instead, the magic buttons to press are on the nearest phone as panicked admins try to get hold of their counterparts at the network causing the problems, asking them to please stop routing big chunks of the internet the wrong way.
From a business perspective, these kinds of unpredictable catastrophic failures are unacceptable, of course.
The good news is that clever engineers are working to make the internet more reliable, by making it harder for human errors, witting or unwitting, to kill connectivity for millions of people.
What's not so good is that the fix requires network operators to meet and cryptographically sign digital objects to show that they are authorised to make route announcements. That's about as complex as it sounds, and a new thing to learn for overworked network admin staffers.
Even though many more networks are signing route objects and dropping "rumours" that can't be authenticated, it remains a work in progress.
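Conceptually, the check those networks perform is simple: does a signed record say this network is allowed to announce this prefix? Here's a minimal sketch of that route origin validation logic, using a plain dictionary of made-up authorisations in place of the real cryptographically signed objects:

```python
import ipaddress

# Toy Route Origin Authorisations: prefix -> (authorised origin, max prefix length).
# Entirely hypothetical data; real validation uses signed RPKI objects.
roas = {
    ipaddress.ip_network("203.0.113.0/24"): ("AS64500", 24),
}

def validate(prefix, origin):
    """Classify an announcement as 'valid', 'invalid' or 'unknown'."""
    prefix = ipaddress.ip_network(prefix)
    covering = [net for net in roas if prefix.subnet_of(net)]
    if not covering:
        return "unknown"  # no authorisation covers it; can't be authenticated
    for net in covering:
        asn, maxlen = roas[net]
        if origin == asn and prefix.prefixlen <= maxlen:
            return "valid"
    return "invalid"      # covered, but wrong origin or too specific

print(validate("203.0.113.0/24", "AS64500"))   # valid
print(validate("203.0.113.0/25", "AS64500"))   # invalid - too specific
print(validate("198.51.100.0/24", "AS64500"))  # unknown - no authorisation
```

Networks that drop "invalid" routes stop the rumour at their border; the catch, as the column notes, is that everyone first has to publish and sign those authorisations.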
Huston pointed out that BGP is ancient technology from the sixties, and he doesn't think it's fixable.
Another measure involves networks connecting directly to each other as much as possible.
Called peering, it allows, for example, Trade Me traffic to stay local instead of tromboning outside New Zealand and potentially hitting misconfigured networks. It's common sense, more secure, and since data takes a shorter path, peering usually boosts performance too.
Telcos hate peering, however. Instead of plugging a cable into a network switch at a central location in Auckland and hooking up with the rest of the .nz internet, telcos prefer to hold their large customer bases hostage for commercial reasons.
"Hey ISPs! We've got lots of customers. You want to reach them? Sure, just buy this really expensive circuit into our network and you can!" is how telcos like it. As a result, smaller internet providers have been forced to pick up telco traffic in Sydney and Los Angeles to avoid being stung by exorbitant circuit charges.
If by now you feel uneasy that the fix for the fragile yet all-important internet presence for your business involves techies around the world getting together for arcane key signing ceremonies, and persuading greedy telcos to do the right thing, you're not alone.
As it happens, the problem might just become engineered out of existence.
Not so long ago, most internet traffic, 80-90 per cent, was fetched from servers overseas. That doesn't work too well for streaming video and content providers, or social networks, which need reliable connections with predictable performance.
Now, the internet has been re-engineered with large local caches with speedy network connections. They mean no more going to the US for Netflix or Amazon, as users connect to caches in Auckland or Sydney and directly to servers in other countries for maybe only ten per cent of their data.
Providers can connect directly to those caches and avoid routing rigmaroles and peering palavers. When the internet partly chokes on someone's mistake, users don't even notice it.
"GoogleNet" works like that, a large internal network that punts data worldwide to its internet-connected global cacheing and service nodes that are close-ish to users (google.co.nz is in Sydney).
Spending big on content and service delivery networks is one way to fix a fundamentally broken internet architecture, but few of us saw it coming.
• Juha Saarinen attended APNIC 48 as a guest of APNIC.