More and more I'm hearing alarmism about the "takeover of the internet" or the "death of the internet". Some in the survivalist and patriot movements fear that any day now, the Federal Government is going to move to take down half of the sites on the World Wide Web, and restrict access to the other half.
Poppycock.
Let's remember what the "internet" is. It's nothing more than several suites of protocols for transferring data from one computer to another, or from one network to another; and it's any set of (say it with me now) inter-networked computers running those protocols. Thanks to the boys at DARPA, the internet we know and love (based largely around the TCP/IP protocol suite) is designed to be decentralised, with realtime compensation for changes in the number and positioning of functioning routers and nodes.
If I have a computer publishing a web page using HTTP and another computer across the room reading it, that's the internet. If I maintain a file server that exterior clients can access using VPN, that's the internet. If I work out a means to shift packets using AM radio, and such packets can cross from one network to another, that's the internet.
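The first case takes only a few lines to stand up. A minimal sketch in Python (the page contents, handler name, and loopback address are placeholders for illustration):

```python
import http.server
import threading
import urllib.request

# A single page held in memory; the content is just a placeholder.
PAGE = b"<html><body>Hello from my own node.</body></html>"

class OnePageHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(PAGE)))
        self.end_headers()
        self.wfile.write(PAGE)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Bind to any free port on the loopback interface and serve in the background.
server = http.server.HTTPServer(("127.0.0.1", 0), OnePageHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "computer across the room" fetching the page over HTTP:
body = urllib.request.urlopen("http://127.0.0.1:%d/" % port).read()
server.shutdown()
```

Two machines speaking HTTP to each other: that's the internet, no blessing from any central authority required.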
In essence, the internet is like a road system. Sure, an interested party can put up roadblocks at strategic locations, and they can try to lock down or tear up any routes they don't like. But others can build new roads and new road networks as well, and it's impossible to control the whole thing at once, as long as the equipment and the people who know how to build and use it are scattered throughout the populace.
Now, possibly, the World Wide Web as we know it might be co-opted. But the WWW is not the whole internet. I recall the salad days of Usenet - a decentralized method of collecting public information. In fact, I recall the atomic bomb board. Either a legitimate means for nuclear engineering students to share information or an elaborate joke, the atomic bomb board (Usenet designations varied) held actual real-world information on nuclear weapons design. Not the kind of thing that the Department of Energy, the Secret Service, or the FBI liked to see.
However, they couldn't get rid of it. Not just because of the perfidy of computer-savvy students, but because of the nature of Usenet. Usenet consisted of directories of files stored at various nodes. Periodically, each node hosting a Usenet directory would enquire whether any of the nodes it was connected to lacked any of the files in its host directory. Any missing files (or the whole directory) would be transferred to the other nodes.
So for instance, the University of Illinois would ask the University of Wisconsin if there were any Usenet files the UofI was lacking, or the UofW was lacking, and they'd copy and transfer files. Simple enough. So if the Secret Service tried to take down all of the atomic bomb board files at the University of Illinois, the University of Wisconsin would just copy its files over to UofI, and service would be restored. If the University of Wisconsin suffered the same treatment, then the University of Illinois would fill in the gap. If, somehow, there was a co-ordinated attack at both of them simultaneously, then it was likely that the University of Minnesota would update both UofW and UofI after the fact. The atomic bomb board became almost impossible to remove.
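The exchange can be sketched in a few lines of Python, with each node's spool reduced to a mapping of article IDs to contents (the article IDs and contents here are invented for illustration, not real Usenet designations):

```python
# Each "node" is a toy stand-in for a Usenet spool: article ID -> contents.
def sync(node_a, node_b):
    """Two-way flood: each side copies over whatever the other lacks."""
    for art_id, text in list(node_a.items()):
        node_b.setdefault(art_id, text)
    for art_id, text in list(node_b.items()):
        node_a.setdefault(art_id, text)

uiuc = {"abb.001": "design notes, part 1"}
uw = {"abb.002": "design notes, part 2"}

sync(uiuc, uw)   # after one exchange, both spools hold both articles

uiuc.clear()     # the Secret Service wipes the UofI spool...
sync(uiuc, uw)   # ...and the next scheduled exchange restores it
```

Any node that survives a takedown re-seeds the others on the next pass, which is exactly why the board was so hard to kill.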
Now, interestingly, that was the model of the late 1980's and early 1990's. The hacker/cracker credo was "All information wants to be free". Using a redundant, decentralised, self-correcting model, the information was in a sense free.
But something changed. E-commerce. The World Wide Web came into general use, and with it attempts to sell things online. The need to control access to information (paid sites, credit transfers, personal financial information) led to information no longer being free. Gradually, the server-client structure became commonplace, as it is easier to control information that way. Web-mail has edged aside traditional email, chat clients have overtaken IRC, web forums have replaced newsgroups and mailing lists, and websites have replaced gopher sites.
But we remember. We remember Usenet. We remember SMTP and POP3. We remember Gopher. We remember Telnet and the BBS. In the 21st century, the information we need is canalled, dammed, and piped. It's controlled because it's easier that way, because money can be made that way, and free information doesn't drive an economy well. But all that can change. If anyone tries to "control the internet", we remember how to set that information free.
Tuesday, March 17, 2009
10 comments:
And 99% of the people using the Interweb have no idea what any of that means. That is a GOOD thing.
I'm not so sure. I certainly gain comfort in knowing we could push the [reset] button if we needed to. But we need to make sure that enough people in the succeeding generations know how to do this, much like we keep ham radio or blacksmithing alive. So we can go back to it if we need it.
If I run a dissident web site out of a cabin in the wild, the government can track my IP address to a physical address and drag me to jail.
If you keep the infrastructure but shut down the people, you have shut down everything that makes the internet special.
But, the US controls ICANN, which administers a critical component of the internet protocols: the DNS, the mapping of domain names (www.whatsit) to IP addresses.
One reason why I don't buy all the rubbish about "al-Qaeda websites". Why wouldn't the DNS service be shut down to disable the sites?
If the US shuts down the DNS root servers, and you don't know the IP address of a site, it's for all intents and purposes off the net.
Take down the root servers, take down the net.
If you and twenty friends use the existing Usenet or email mailing list infrastructure, the files aren't at a single IP address, and are self-correcting and self-propagating.
Convince an offshore node to also carry the newsgroup, and it'll be out of the jurisdiction of any federal agency short of the DoD or the CIA.
Another solution is to keep a machine offline running an automated script allowing it to log on for an hour each day, long enough to re-propagate the newsgroup.
The basic principle of application layer networking is that one need not be physically present at a server's location. ARPANET was created to allow remote access to computers; VPN does much the same thing.
While the internet is quite open source, our access to it is not. And the access is controlled by a small number of people.
This article both supports and rebukes this
http://www.theinternetpatrol.com/its-our-internet-and-you-cant-have-it-says-us
I love the irony that what was originally developed as a military concept to prevent US Government computers from all going down in a nuclear strike has now become the beast they can't control, accessible by pretty much everyone. The internet is Hydra, and the more one group tries to cut off some heads, the faster the replacements grow.
Anonymous wrote:"One reason why I don't buy all the rubbish about "al-Qaeda websites". Why wouldn't the DNS service be shut down to disable the sites?
If the US shuts down the DNS root servers, and you don't know the IP address of a site, it's for all intents and purposes off the net.
Take down the root servers, take down the net."
Sure, I'll agree with you that a concerted attack could remove the root servers, including DNS servers to a certain extent. It's also conceivable to shut down the major ISPs.
But if networks can be linked together, then packet exchange is possible. Privately owned routers can be implemented, and custom DNS machines can be brought online. It may not be fast, it may not be world wide, it may not be reliable, but such internetworks can certainly function. Information can still be disseminated.
Sevenmead makes the excellent point that access is controlled by the entities that own the copper and fiber, and the routers and switches: all capital-intensive, private property of which our use is a paid-for privilege, not a right.
To Lord Carnifex, thanks for the dialogue, and your response to my comments. The point I'm making about ICANN, and IANA for that matter, is that the assignment of IP addresses and domain names is controlled by the organizations that own and administer the machines that run these services. If your domain name and/or IP network address are blocked, you are for all intents off the net.
Yes, there is the possible use of RFC1918 Address Allocation for Private Internets, but these addresses are not broadcast, and are in actual practice denied routing at public gateways as a matter of routine. That's the whole point of 'private networking'. The intention is to save IP space by alleviating the need for assignment of public IP networks, which are finite, allowing private use of the same set of network numbers in closed routing environments. Another application of RFC1918 addressing is security. Again, since these networks are not broadcast, there's no means of public access to private RFC1918-addressed networks; there being no routes, there is no access.
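Those reserved blocks are easy to check for programmatically; a quick sketch using Python's standard ipaddress module (the sample addresses are arbitrary):

```python
import ipaddress

# The three address blocks reserved by RFC 1918 for private internets.
RFC1918_BLOCKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(addr):
    """True if addr falls in private space that won't be routed at public gateways."""
    ip = ipaddress.ip_address(addr)
    return any(ip in block for block in RFC1918_BLOCKS)

private = is_rfc1918("192.168.1.1")  # True: never routed past a public gateway
public = is_rfc1918("128.174.5.6")   # False: publicly routable space
```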
I'll grant that it's possible to circumvent the centralized control of IP and DNS assignments in a local setting, if you own and administer the equipment running DNS and IP routing services. But I don't know of a way to circumvent the monopoly that the telecoms and cable companies have on physical access. Access lists, IP and MAC address filtering, firewalls, encryption, etc.; there exists ample technological means of controlling access to privately-owned networks.
The WWW is the phenomenon it is due to its publicly-accessible character, and simply removing or blocking propagation of IP and DNS assignments creates an administrative bottleneck which facilitates centralized control of the internet. Taking 'undesirable' domains off the root servers would effectively kill the web as we know it. And such action is well within the abilities of an overreaching federal government.
Free speech is (still) a right. But ownership of a printing press and a distribution center is not, nor is access to privately-owned wires and routers and switches.
One word:
FidoNet
If I choose to run a caching name server, and I choose to cache 10GB of addressing on it, then neither I, nor those who rely on my caching servers, need the root servers.
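A cache-first resolver is simple to sketch; in Python it's little more than a lookup table consulted before the normal DNS path (the cached names and addresses below are made up):

```python
import socket

# A hypothetical local cache, standing in for 10GB of stored addressing.
CACHE = {
    "friend.node": "192.0.2.20",
    "dissident.example": "192.0.2.30",
}

def resolve(name):
    """Answer from the local cache first; fall back to ordinary DNS only if needed."""
    if name in CACHE:
        return CACHE[name]  # no root server consulted
    try:
        return socket.gethostbyname(name)  # the normal, root-dependent path
    except OSError:
        return None  # roots gone and nothing cached: the name is dead

addr = resolve("friend.node")  # served entirely from the local cache
```

As long as the cache holds the names you care about, the root servers can vanish without taking your corner of the net with them.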
It's academic, but it's a theoretical possibility.
I can choose my own netmask, but if my netmask conflicts with a public address, then anyone relying on my netmask cannot access the public address.
FidoNet relied on the copper laid down by the telecoms. In this day and age, that is no longer necessary. The "physical" infrastructure is ethereal. We're moving data wirelessly. If I have a neighbour who will act as a node and so on, and redundancy is put in place, as is the nature of the internetwork, then reliance on some large company is negated.
It's academic, but it's theoretically and practically possible.