One of the great things about attending deep-geek technology conferences is that you get early insights into what's to come, and how engineers try to second-guess what could possibly go wrong with what seemed like good ideas.

See, the internet isn't a single piece of technology, and the web on which you read this (unless you're one of the many wonderful people who subscribe to or buy the print edition of the Herald, of course) is just one of many different types of data flows that traverse the global network.

Many of our interactions with servers over the internet take place over the Transmission Control Protocol/Internet Protocol (TCP/IP) suite. You don't normally have to worry about TCP/IP, which figures out how to send and receive your bits and bytes reliably over long distances and often unreliable networks, or about its cousin, the User Datagram Protocol (UDP), which is used for streaming data when reliability isn't a top priority.
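The difference between the two shows up right at the programming level. This is a minimal Python sketch of creating one socket of each kind; no data is actually sent:

```python
import socket

# TCP gives you a stream socket: the operating system handles
# acknowledgements, retransmission and ordering, so bytes arrive
# reliably and in sequence.
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# UDP gives you a datagram socket: packets are sent individually
# with no delivery guarantee, which suits streaming, where a lost
# packet matters less than a late one.
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

print(tcp_sock.type == socket.SOCK_STREAM)  # True
print(udp_sock.type == socket.SOCK_DGRAM)   # True

tcp_sock.close()
udp_sock.close()
```

Everything else about the two sockets (addresses, ports, send and receive calls) looks much the same to the application; the reliability work happens underneath.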

Applications using TCP/IP talk to servers over specific ports. Email is usually sent over TCP port 25, for instance, and clear-text web browsing, with the Hypertext Transfer Protocol (HTTP), is done over port 80.

Secure, encrypted and authenticated web browsing, the padlocked HTTPS that you see in Chrome, Brave, Safari, Opera, Firefox and Internet Explorer, is done over port 443.

There are 65,536 ports available for both TCP and UDP, with those below 1024 reserved for well-known services. Apps can use several ports at once, too: a browser, for example, triggers domain name lookups over port 53 before fetching pages over port 80 or 443.
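As an illustration, the port arithmetic and the well-known numbers mentioned above can be written out in a few lines of Python:

```python
# Port numbers are 16-bit unsigned integers, so there are 2**16
# possible ports for TCP, and the same again for UDP.
TOTAL_PORTS = 2 ** 16  # 65,536

# Ports below 1024 form the "well-known" range reserved for
# standard services; these are the ones mentioned in this article.
WELL_KNOWN = {
    "smtp (email)": 25,
    "dns (domain name lookups)": 53,
    "http (clear-text web)": 80,
    "https (encrypted web)": 443,
}

assert TOTAL_PORTS == 65536
assert all(port < 1024 for port in WELL_KNOWN.values())
```

The authoritative list of which service gets which number is maintained by IANA; operating systems ship a local copy of the common entries.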

This is a flexible system that has worked very well in the past, but it's been undermined by understandable security paranoia that's seen operators block ports that they don't recognise.

Governments wanting to censor can also order blocks (or subversion) of traffic across specific ports — like TCP/25 for email — to stop or capture data.

Internet technologies and protocols are flexible, as I said, and route around damage.

Providers and operators are putting increasing amounts of traffic across the web protocols HTTP and HTTPS, which are unlikely to be blocked. That development will reshape the internet as we know it.
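To see why this works, consider that an HTTP request is just a wrapper around a payload. This Python sketch wraps a stand-in DNS query in a POST the way DNS-over-HTTPS does; the resolver hostname is made up for illustration, while `application/dns-message` is the media type standardised for DNS-over-HTTPS:

```python
# A stand-in for a real binary DNS query message.
payload = b"stand-in DNS query bytes"

# Wrapped in an ordinary HTTP POST, the query is indistinguishable
# from web traffic to anyone watching port 443 from outside.
request = (
    b"POST /dns-query HTTP/1.1\r\n"
    b"Host: resolver.example\r\n"  # hypothetical resolver
    b"Content-Type: application/dns-message\r\n"
    b"Content-Length: " + str(len(payload)).encode("ascii") + b"\r\n"
    b"\r\n" + payload
)

print(request.decode("ascii"))
```

Sent over HTTPS, even the headers above are encrypted, so a network operator sees only a connection to port 443, not what kind of data rides inside it.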

One advantage of doing this is that there are just one or two protocols to optimise instead of thousands, which can make everyone's internet access much quicker.

It's an all-or-nothing thing for government censors, too: don't want people to see certain things? The easy option of blocking a port won't work well, as it would cut off web access for everybody.

The old-timers at the recent APRICOT technical conference for internet overlords remained dubious, however, about how well stuffing data that used to travel over separate protocols into web traffic will work.

They point out that the current system of digital certificates for HTTPS is deeply flawed, overly complex and easy to abuse. If HTTPS connections are silently monitored while people wrongly believe the subversive opinions, personal data or finances they transmit over them are secure, this could be an information-spillage disaster in the making.

Whatever happens, and this whole thing might seem like esoterica, it represents a fundamental change in how the internet will work.

If your business depends on the internet, and it almost certainly does, pay attention to the engineers debating the pros and cons of the changes, and understand what's coming up, because it is guaranteed to affect everyone.