I didn't notice anything amiss, but then I'm not a customer of Spark.
Spark said the issue was down to a handful of customers who fell for phishing emails promising nude pictures of Jennifer Lawrence; the emails in fact installed some unnamed malware used for a denial of service attack, in which large amounts of data or requests overwhelm servers trying to keep up.
There's been speculation that it was a Domain Name System (DNS) amplification attack that created huge amounts of traffic.
These attacks abuse DNS servers, the machines that translate host names such as google-public-dns-b.google.com, which are a bit easier for humans to remember, into numeric addresses like 8.8.4.4.
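To make that concrete, here's a tiny sketch of such a lookup using Python's standard library; the exact names and addresses returned can change over time, so treat it as an illustration only.

```python
import socket

# Forward lookup: human-friendly name -> numeric address (typically 8.8.4.4)
print(socket.gethostbyname("google-public-dns-b.google.com"))

# Reverse lookup: numeric address -> name (the name on record may change over time)
print(socket.gethostbyaddr("8.8.4.4")[0])
```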
Without going into too much technical detail, amplification or reflection attacks, as the name implies, allow an attacker to send many small queries to servers, each of which demands a large response full of data.
What's more, it's easy to fake the address that the response should go to.
The long and short of this is that, with relatively simple means, an attacker on a low-speed network connection, like ADSL2+ with a maximum of 1 megabit/s upstream, can direct fifty or more times that amount of traffic at a victim's systems.
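To get a rough feel for the size imbalance, the sketch below hand-builds a small DNS query and compares its size with the response it gets back. It only measures sizes from your own address and spoofs nothing; the resolver, domain and record type are purely illustrative assumptions, the exact ratio varies, and many resolvers now deliberately trim the responses attackers found most useful.

```python
import socket
import struct

def build_query(name, qtype=16):  # 16 = TXT, which often has large multi-record answers
    # 12-byte header: ID, flags (recursion desired), 1 question, no other records
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # Question section: length-prefixed labels, then QTYPE and QCLASS (IN = 1)
    qname = b"".join(bytes([len(label)]) + label.encode() for label in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)

# A real attack forges the victim's address as the source; this just compares sizes.
query = build_query("google.com")
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
sock.sendto(query, ("8.8.8.8", 53))  # Google's public resolver, as an example
response, _ = sock.recvfrom(4096)
print(f"query: {len(query)} bytes, response: {len(response)} bytes, "
      f"ratio: {len(response) / len(query):.1f}x")
```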
Spark hasn't so far provided any technical detail on what happened, but as the rest of its network appeared to be working, it's unlikely that a DNS amplification attack was the culprit.
First, despite the amplification factor being large, you need more than just a handful of machines to swamp a modern, high-capacity network.
Second, if Spark was sending out large amounts of data overseas for three days, as a spokesperson for the company said, it would've been noticed by international monitoring services like the Digital Attack Map; nothing out of the ordinary appears to have been recorded, however.
A Spark techie on the Geekzone forum described the telco's DNS infrastructure as being "load balanced in different geographic locations; each instance is connected to the core network by two different paths and each DNS server is connected to the redundant switch (and router) infrastructure by multiple bonded GigE [Gigabit Ethernet network] interfaces."
Translation: that's a serious setup with lots of network capacity. Are we to believe that "a handful" of malware-infected users were able to overwhelm that, for three whole days?
An outage notification message sent to providers connected to Spark's network talks about "a total of five domain addresses" having been blacklisted by the telco as part of its technical fix for the problem, along with blocking of certain inbound traffic to its broadband network gateways.
Furthermore, the notification mentions that Spark's Global Gateway international network would continue to identify and block "offending source IP addresses". Both measures suggest that the fault was caused by external factors, not by customers on Spark's own network.
Whatever it was that caused the problems, three days of service disruption despite the Spark techies' valiant efforts to set things right is not a good look for the country's largest internet provider.
Spark should take a look at its processes for responding to issues like this one, and ensure that they're more flexible and faster when similar problems happen again.
For instance, during the weekend outage, the telco issued a workaround that involved changing the Internet Protocol addresses for the DNS servers on its network - this isn't too hard to do for people with technical nous, but it's not so easy for everyone else.
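For the technically minded, one way to sanity-check an alternative resolver before (or after) changing the settings is to query it directly. The sketch below uses the third-party dnspython package and Google's public resolvers purely as an example; it's not Spark's documented procedure.

```python
import dns.resolver  # third-party package: pip install dnspython (2.x)

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["8.8.8.8", "8.8.4.4"]  # Google's public DNS, as an example
resolver.lifetime = 3  # give up after three seconds

try:
    # The domain queried here is arbitrary; any name you expect to exist will do.
    answer = resolver.resolve("spark.co.nz", "A")
    print("resolver OK:", [rr.to_text() for rr in answer])
except Exception as exc:
    print("lookup failed:", exc)
```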
Having customers change their DNS entries from the original ones that are automatically allocated when the broadband connection starts up could come back to bite Spark, however.
Now that the attack has subsided, Spark has further support headaches, as it would like users to switch back to automatic allocation of DNS servers:
If you changed your DNS settings to Google's we advise you to change back and switch to auto. Details here: http://t.co/CQdQsdiyNs 2/2
Also, given that both the broadband network and Spark's 3G/4G mobile data service were affected by the issue, publishing details about the workaround on a web page that won't load with the existing DNS settings seems a suboptimal way to disseminate information.
For the sake of Spark's customers, I hope the "mini XT" event over the weekend is a wake-up call for the incumbent that there are parts of its network that aren't robust enough and need a makeover.
Blaming customers and "cyber criminals" for problems that appear to be caused by issues on Spark's own network isn't the way to go, however.
From the "truth will out" department: it wasn't a nameless Microsoft developer who wrote the text for the famous Blue Screen of Death (BSoD) in ye olde Windows 3.1.
It can now be revealed that the BSoD author was... Steve Ballmer, in 1992.
Microsoft blogger and principal software engineer Raymond Chen said Ballmer didn't like the original text and a few days later, came back with a version that went into the early version of Windows almost word for word.
Chen doesn't say what the original text was, or if Ballmer went on to write more error messages.
Steve-o is of course no longer with Microsoft, having left the chief executive job, hopped off the company board and become Mr Basket Ballmer as the new owner of the LA Clippers team.