Topics: Communications, The Internet, Network, Hardware, Technology

Middleboxes vs. the Internet's End-to-End Principle

arglebargle_xiv writes "The Internet was designed around the end-to-end principle, which says that functionality should be provided by end hosts rather than in the network itself. A new study of the effect of vast numbers of middleboxes on the Internet (PDF) indicates that this is no longer the case, since far too many devices on the Internet interfere with traffic in some way. This has serious implications for network (protocol) neutrality (as well as for future IPv6 deployment), since only the particular variations of TCP that the middleboxes know about will pass through them."
  • So no more routers & switches? I mean, I see where this is coming from, with all these proxies & DNS filters & whatnot, but until a truly viable p2p DNS alternative comes around & gets adopted by at least 10% of the connected world or by governing bodies...
    • Either way, I suppose that has absolutely nothing to do with TCP. I'll be sure to bookmark the report & read it on an off day.
    • Re:What (Score:5, Informative)

      by martin-boundary ( 547041 ) on Tuesday August 02, 2011 @03:49AM (#36957182)
      Routers and switches operate at a lower level in the stack. The end-to-end principle is about user apps, and it makes A LOT of sense.

      Basically, whenever the pipes become smart, things overall become less reliable. It's simple math. Add one component along a network path, and now every app that uses the path may have to be fixed to cope with the new behaviour on that path. For example, if your ISP adds an HTTP filter, every app using HTTP is at risk of breaking the next day.

      With end-to-end, the only correct place to put the new functionality (HTTP filtering in this example) is at an end, namely as a process on a user's computer. Now the simple math works out: the user is filtered (same outcome as above), but the pipes stay dumb, and the only thing breaking due to the filter is software on that user's computer. Everybody else's software works just as before.

      • I'm afraid I haven't read up on TCP/IP in a while, but doesn't every device modify the header of every packet while sending it downstream?
        • by Anonymous Coward

          No, because of the layered OSI model. Every (intermediate) device strips off the outermost header, processes it and then adds its own. It should not modify anything in the encapsulated packet. NAT is the usual exception to the rule, because it rewrites the layer 3 (network) and layer 4 (transport) headers from the middle of the path.
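          A minimal sketch of that exception, assuming scapy and made-up addresses: rewriting the source address at layer 3 forces a layer 4 fix as well, because the TCP checksum covers a pseudo-header that includes the IP addresses.

          ```python
          from scapy.all import IP, TCP  # assumes scapy; addresses are made up

          pkt = IP(src="192.168.1.10", dst="93.184.216.34") / TCP(sport=51515, dport=80)

          # The NAT swaps the private source address/port for its public ones...
          pkt[IP].src = "203.0.113.7"
          pkt[TCP].sport = 61001

          # ...and must then fix BOTH checksums, because the TCP checksum covers a
          # pseudo-header that includes the IP addresses. Deleting the fields makes
          # scapy recompute them when the packet is serialized.
          del pkt[IP].chksum
          del pkt[TCP].chksum
          wire = bytes(pkt)
          ```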

        • Re:What (Score:5, Informative)

          by kasperd ( 592156 ) on Tuesday August 02, 2011 @05:15AM (#36957528) Homepage Journal

          I'm afraid I haven't read up on TCP/IP in a while

          Even the term TCP/IP in itself is misleading. TCP and IP are two separate protocols. TCP is designed to be run on top of IP, and TCP does have some knowledge of the underlying protocol (a bit too much some would say). IP on the other hand has no knowledge of TCP. The IP header contains an 8 bit protocol field, so you can in principle run 256 different protocols on top of it (some values are reserved, and not all the other values are assigned yet). An implementation of IP does not have to implement all the higher level protocols, in fact only ICMP is mandatory.

          A router is supposed to work on the IP layer. A router should not know about TCP, and it shouldn't care what protocols are run over it. In practice routers do tend to have TCP and UDP implementations as well. They have TCP because that is what is typically used to configure the routers, and BGP is run over TCP as well. And in some cases you may want to do DNS lookups, and for that you want UDP.

          When you build a router you have to keep the right layering. The low level should do routing and not care about UDP and TCP. The next level can do TCP and UDP, and on high end routers the separation should be to the point where this is even implemented by a different piece of hardware from the one that does the routing. The next level up the stack can handle configuration and routing protocols. This layer can then push updated routing tables to the actual hardware doing the routing. If the different pieces stick to their area of responsibility, things will work out. All of those higher levels could in principle be implemented by a computer next to the router and leave the router to do only routing.

          Some routers have features to interpret higher level protocol headers such as UDP and TCP and mangle them. Once it starts doing that, it is no longer a correct implementation of IP according to the original spec. The network is supposed to get the higher level payload from source to destination without mangling it. The network as it looks today fails at that task.

          but doesn't every device modify the header of every packet while sending it downstream?

          Routers do. They have to decrement the TTL, and in case of IPv4 adjust the header checksum. But most of the other fields in the IP header are read-only for the routers, and the higher level protocol headers such as UDP and TCP are totally off-limits.
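          As a concrete illustration, here is a minimal sketch (assuming a raw IPv4 header with TTL > 0) of the only rewrite a plain router legitimately performs: the TTL decrement plus the RFC 1624 incremental checksum update.

          ```python
          import struct

          def forward_ipv4(header: bytes) -> bytes:
              """Decrement the TTL and incrementally patch the header checksum
              (RFC 1624). Everything else in the header is left read-only."""
              hdr = bytearray(header)
              old_word = struct.unpack_from("!H", hdr, 8)[0]   # 16-bit word: TTL | protocol
              hdr[8] -= 1                                      # TTL lives at byte 8
              new_word = struct.unpack_from("!H", hdr, 8)[0]
              checksum = struct.unpack_from("!H", hdr, 10)[0]
              # RFC 1624: HC' = ~(~HC + ~m + m'), in one's-complement arithmetic
              acc = (~checksum & 0xFFFF) + (~old_word & 0xFFFF) + new_word
              acc = (acc & 0xFFFF) + (acc >> 16)               # fold the carries back in
              acc = (acc & 0xFFFF) + (acc >> 16)
              struct.pack_into("!H", hdr, 10, ~acc & 0xFFFF)
              return bytes(hdr)
          ```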

          • by Anonymous Coward

            Yeah, when they were treated as one protocol, it was before either the OSI or the 4-layer models existed. Ever since IPv4, they've been separate.

            But if they're going to overhaul TCP, that would have to wait. If it were piggybacked on the IPv4-to-v6 migration, then that would introduce even more unknown variables and further delay the migration. I'd say this team missed the bus - TCP already underwent a minor change for IPv6 with the introduction of jumbograms and the setting of maximum payload sizes to 65535.

            • If you read the original article, they say that middleboxes actually do behave for at least 80% of the connections they tested (at least 90% when going to a random high port on the server, rather than port 80). The other 20% can be detected by the fact that they don't pass new TCP options on the initial SYN packets, so an extension can fall back to "plain vanilla" TCP if appropriate (a rough probe of this idea is sketched after the list below). Hence, the following two extensions should actually work widely (although not universally) on the current Internet:
              • Multipath TCP
              • TcpCrypt
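            A rough sketch of the probing idea mentioned above, assuming scapy and a placeholder destination; TCP option kind 254 is reserved for experiments (RFC 4727), so middleboxes have no reason to recognise it:

            ```python
            from scapy.all import IP, TCP, sr1  # assumes scapy; needs root

            # SYN carrying an experimental option (kind 254, RFC 4727) that no
            # middlebox should recognise; destination is a placeholder.
            probe = IP(dst="example.com") / TCP(dport=80, flags="S",
                                                options=[(254, b"\x12\x34")])
            reply = sr1(probe, timeout=3, verbose=0)

            if reply is None:
                print("no answer: something may have dropped the odd-looking SYN")
            elif reply.haslayer(TCP) and reply[TCP].options:
                print("SYN-ACK options survived:", reply[TCP].options)
            else:
                print("options stripped: fall back to plain-vanilla TCP")
            ```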
      • by smash ( 1351 )

        Thing is, network owners WANT to be able to control what their network is being used for. If you're a carrier and just sell pipes, sure - open access from point to point. Charge by the gig, whatever.

        If you're an endpoint, you sure as shit want to make sure that only the traffic you want reaching your devices is actually allowed through.

      • And that's really annoying for end users. I'm on an ISP that does transparent proxying of some sources, like Wikipedia. Every now and then it stuffs up, and I suddenly can't access Wikipedia, or I suddenly get strange "functionality" that other people don't.

      • by jd2112 ( 1535857 )

        But, but, but... Piracy!

        No, I mean Child Molesters and Terrorists! Think of the children!

        this message brought to you by the MPAA, RIAA, and BSA.

    • So no more routers & switches?

      Switches, sure. Routers, maybe. Proxies, however, no. I think that's the gist of it.

      Not sure I agree that these things should go away, but the observation that they alter the way things work is true, especially since that alteration is the whole point of their existence. The trick is to make them exist in a non-intrusive manner. This is where things often fail.

      At home, I have a wireless network which is mostly transparent (which doesn't mean open... it is secured). NAT is involved, of course, and there is availa

  • The Internet with middleboxes is like sex with a condom. It somehow doesn't feel right and is less satisfying, but it can protect you from some nasty stuff.
    • ... and they give me an allergy. And, as in your analogy, you lose a lot if you can't do it directly.

    • And the most lost on the most people at once analogy award goes to...
      • Re: (Score:3, Funny)

        by jamesh ( 87723 )

        And the most lost on the most people at once analogy award goes to...

        http://en.wikipedia.org/wiki/Condom [wikipedia.org] for anyone who doesn't know. If only I'd known before my 4 kids were born... (hi kids, if you're reading this!)

        • by 1s44c ( 552956 )

          I can't find a mod for '-1 You What?' so I decided to reply instead.

          You can't have seriously got to the age of 16 and not known what a condom is? Were you educated by religious nutcases who never let you have any contact with the world?

          I'm posting this Anonymously so I can mod anyone who replies with 'whoosh' as a troll. 'Whoosh' was never funny.

          • Re: (Score:2, Funny)

            by 1s44c ( 552956 )

            I'm posting this Anonymously so I can mod anyone who replies with 'whoosh' as a troll. 'Whoosh' was never funny.

            I messed that up.

            • Whoosh... ;-)

          • by Chrisq ( 894406 )

            I can't find a mod for '-1 You What?' so I decided to reply instead.

            You can't have seriously got to the age of 16 and not known what a condom is? Were you educated by religious nutcases who never let you have any contact with the world?

            I'm posting this Anonymously so I can mod anyone who replies with 'whoosh' as a troll. 'Whoosh' was never funny.

            Diagnosis: anally retentive.

            • by 1s44c ( 552956 )

              I can't find a mod for '-1 You What?' so I decided to reply instead.

              You can't have seriously got to the age of 16 and not known what a condom is? Were you educated by religious nutcases who never let you have any contact with the world?

              I'm posting this Anonymously so I can mod anyone who replies with 'whoosh' as a troll. 'Whoosh' was never funny.

              Diagnosis: anally retentive.

              That's not a meaningful reply. Are you objecting to the main content of my posting or my anti-whoosh rant?

    • by rts008 ( 812749 )

      I am only replying to correct a modding mistake.
      I tried to mod you +1 Interesting, but somehow it ended -1 Troll.
      My apologies, truly.

      The 'Wild, Wild West' days of the Internet lasted far less time than the expansion of the US frontier (the West).
      It would be interesting to see what could have come about on this digital frontier. *sigh*

      Nowadays, governments see revenue and influence as possibilities, mega-corps/industries with money/political influence see dollar signs, ad nauseam, until the commercialism and gov't

    • It somehow doesn't feel right and is less satisfying, but it can protect you from some nasty stuff.

      But "nasty stuff" is the reason most people go on the Internet.

  • by ShooterNeo ( 555040 ) on Tuesday August 02, 2011 @03:32AM (#36957120)

    I just moved into an apartment with internet provided by ethernet jacks in the walls. The actual architecture is that a major ISP has set up its own routers somewhere, putting me permanently behind a NAT. I cannot open a single port, so no incoming connection can ever reach my computers unless one of my machines sends a packet out first.

    This has SIGNIFICANT advantages: most worms cannot spread because my computers cannot receive a packet from any machine without software on my machine actively establishing a connection first. No exceptions. It also means that BitTorrent and other P2P software barely work at all. And so on.

    For the ISP, this is ideal. And the ISP offers unheard-of speeds in this restricted setup: 4 meg upload / 4 meg download is free with the apartment rent, and for $40 a month they'll give me 50 meg upload / 50 meg download. For a USA ISP, that is crazy fast... but the limitations make the high upload close to useless.

    And, the other interesting thing is that nearly everything I've ever done on the internet still works. My computer is unable to communicate with anyone without the help of a server and is a permanent client, but in today's world that's the norm.

    • Re:Too true (Score:4, Insightful)

      by adolf ( 21054 ) <flodadolf@gmail.com> on Tuesday August 02, 2011 @03:49AM (#36957186) Journal

      most worms cannot spread because my computers cannot receive a packet from any machine without software on my machine actively establishing a connection first. No exceptions.

      No exceptions, except for laptops, netbooks, and other various-and-sundry gear which travels between networks.

      Your walled garden may, indeed, have walls. But it also has unguarded gates through which anything may pass.

      • I'm not sure if the machines inside the walled garden can talk to each other, either. I don't think that they can. (unless I put several of my own computers on the same switch, I don't think they can see each other)

        Please understand...I hate this architecture and I think it stinks, but I can see why the ISP is doing it this way.

        • by adolf ( 21054 )

          I'm not sure if the machines inside the walled garden can talk to each other, either.

          There's no reason to be unsure.

          Just fire up nmap [nmap.org] or Netscan [softperfect.com] and have a peek.

          • by vlm ( 69642 )

            I'm not sure if the machines inside the walled garden can talk to each other, either.

            There's no reason to be unsure.

            Just fire up nmap [nmap.org] or Netscan [softperfect.com] and have a peek.

            Old school, but works. New school is to just fire up your bonjour / zeroconf / whatever-it's-called-now client and see if it's full of other people's printers, desktops, and airport-extreme type devices.

            Of course a dumb enough admin might filter JUST mdns and forget to filter the actual services, so it's still worthwhile to nmap.
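            For anyone without nmap handy, a crude stand-in is a plain TCP connect probe; the subnet and port list below are assumptions to adjust:

            ```python
            import socket

            SUBNET = "192.168.1."          # assumption: adjust to your garden's range
            PORTS = (22, 80, 445, 9100)    # ssh, http, smb, jetdirect printers

            for host in range(1, 255):
                for port in PORTS:
                    try:
                        with socket.create_connection((SUBNET + str(host), port), timeout=0.2):
                            print(f"{SUBNET}{host}:{port} is reachable")
                    except OSError:
                        pass  # closed, filtered, or nobody home
            ```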

        • by Anonymous Coward
          I agree with the posters that this architecture is a sign of things to come. The famous John Walker cited this architectural change as the main reason he discontinued work on Speak Freely, back in 2004. Note it is possible to 'fix' the architecture by setting up a VPN connection to a real IP address. There are various hosting companies that offer this service. This would allow you to e.g. run a network game server, receive telephony calls without a NAT traversal scheme, et cetera.
        • I'm not sure if the machines inside the walled garden can talk to each other, either. I don't think that they can. (unless I put several of my own computers on the same switch, I don't think they can see each other)

          Please understand...I hate this architecture and I think it stinks, but I can see why the ISP is doing it this way.

          As a netadmin who works for an ISP, I understand why they're doing it that way as well. I don't think that users should let us get away with it, though.

    • Do most worms actually infect home machines by direct connection? WTF is running on people's machines to allow that?

      Personally, I'll refuse any NATed connection if only on principle. Censorship evading technologies like Tor and Freenet depend on people being able to connect directly to each other. That turns the 'net into TV.

      • Do most worms actually infect home machines by direct connection? WTF is running on people's machines to allow that?

        Yes. Otherwise it would be a virus and not a worm. And what to run to allow that: e.g. XP before SP2 without a good firewall (remember "slammer"?). Or Linux with an old and unpatched Apache. You would be amazed how many old installations are out there which are not up-to-date.

        Personally, I'll refuse any NATed connection if only on principle. Censorship evading technologies like Tor and Freenet depend on people being able to connect directly to each other. That turns the 'net into TV.

        I understand the reasoning, but you still need a good firewall. Most people use NAT as a kind of firewall. And, of course, if you have a DSL connection at home, you need a NAT, unless you've found a provider that gives you enough IP addresses for all of your devices (or you have only one device).

        • And what to run to allow that: e.g. XP before SP2 without a good firewall (remember "slammer"?).

          Oh sure, but XP pre-SP2 installations should be a small number by now, no? I wouldn't expect a current Windows machine to fall just because it isn't behind NAT, so I don't see the 'great advantages' the parent talked about.

          I understand the reasoning, but you still need a good firewall. Most people use NAT as a kind of firewall. And, of course, if you have a DSL connection at home, you need a NAT, unless you've found a provider that gives you enough IP addresses for all of your devices (or you have only one device).

          Sure, but that's completely different, because I can control that firewall and NAT (including via automated mechanisms like UPnP).
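          As a sketch of that kind of control, this is roughly what asking your own NAT for a port mapping looks like, assuming the miniupnpc Python bindings and an IGD-capable router (the ports are examples):

          ```python
          import miniupnpc  # assumption: pip install miniupnpc

          u = miniupnpc.UPnP()
          u.discoverdelay = 200          # ms to wait for the gateway to answer
          u.discover()                   # find UPnP devices on the LAN
          u.selectigd()                  # pick the Internet Gateway Device

          # Ask the router to forward external TCP port 51413 to this machine.
          u.addportmapping(51413, "TCP", u.lanaddr, 51413, "example mapping", "")
          print("external address:", u.externalipaddress())
          ```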

          • by fa2k ( 881632 )
            You don't *need* a good firewall. A firewall is meant to give an extra layer of security, but in 90% of cases it is used as a quick fix for configuration errors. OSes need to close off unnecessary services anyway, because people take laptops to all kinds of scary places. The IP stack is not inherently insecure. It doesn't harm you if your machine sends a TCP RST packet or an ICMP unreachable when something tries to connect. If only people could make transport-mode IPSec easy, we could have a "loca
      • by jimicus ( 737525 )

        Windows XP pre-SP2 didn't have any sort of firewall and opened up all sorts of things by default; most ISPs shipped you a USB ADSL/cable modem and explicitly told you not to use a firewall or they wouldn't support you. Lots of things at the time therefore did spread in exactly that fashion; the only sensible option was to tell your ISP to go to hell and install a router with a firewall or a third-party software firewall.

        That said, I'd be surprised if any common malware written in the last 5 years had that

    • Wow, that sounds like hell to me. I wouldn't be able to access most of the services I run.

      Also, what worm, made since Windows XP upgraded its firewall to "barely functional", still relies on direct connections? I call bullshit. That's not a feature. Almost all of them, to get around the advent of NAT and firewalls, call out to a command server - often over encrypted traffic, too. So while you say "most worms", you actually mean "few worms", since most worms have multiple attack methods (well, the major ones do), and

      • I think the poster was referring to worms that go for open shares. I have no idea what percentage of worms do that, though.

    • by vlm ( 69642 )

      My computer is unable to communicate with anyone without the help of a server and is a permanent client, but in today's world that's the norm.

      Note that server can be your server, not someone else's server.

      You're about 15 minutes of work away from signing up for a virtual host (I am a happy Linode customer), setting up a bridging VPN to the vhost, and being live on the air with a static public address. Or NAT as you please from that public address into your internal LAN (that's what I do).

      This has the other interesting side effect of your IP address being as stable as the vhost side can make it... you might have the same address for a decade. This is mu
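      A rough sketch of that endpoint idea, with placeholder addresses: the simplest userspace version is a dumb TCP relay on the vhost's public side that forwards to your box over the VPN (a real setup would more likely use iptables DNAT or the VPN's own routing):

      ```python
      import socket
      import threading

      LISTEN = ("0.0.0.0", 8080)    # public side, on the vhost
      TARGET = ("10.8.0.2", 8080)   # your machine, reachable over the bridging VPN

      def pump(src: socket.socket, dst: socket.socket) -> None:
          """Copy bytes one way until EOF, then close the other side."""
          try:
              while (data := src.recv(4096)):
                  dst.sendall(data)
          except OSError:
              pass
          dst.close()

      srv = socket.socket()
      srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
      srv.bind(LISTEN)
      srv.listen()
      while True:
          client, _ = srv.accept()
          upstream = socket.create_connection(TARGET)
          threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
          threading.Thread(target=pump, args=(upstream, client), daemon=True).start()
      ```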

      • Ok, I was trying to google for how to do this. I can't even find the right search phrase - I either get a blizzard of results related to VPNs for a business or a blizzard of results related to people wanting slow and crummy free proxy servers to go to restricted websites.

        As I understand it, with a virtual host I am limited by the upload speed of that host, right? So if I wanted to have a full 50mbps connection to the outside world that I can receive incoming connections to, I would need a virtual host tha

      • Ok, I looked at Linode. TRANSFER LIMITS. Their $40/month plan is limited to 400 gigabytes/month. So if I'm using a program running on a Linode box to act as my proxy to the outside world, giving me a publicly accessible IP, I can't transfer very much data.

        I have an Into VPS account which for that price has a 2000 gigabyte monthly limit. Currently I run a Minecraft server, but I suppose I could set the machine up as my proxy to allow me to play multiplayer games that have port-forwarding problems. This j

    • by Cato ( 8296 )

      The ethernet jack and the high speed have nothing to do with the NAT. This sounds like a new apartment building with Fibre To The Building (FTTB), hence the high symmetric bandwidth of 50 Mbps.

      This can be done at layer 2 providing an Ethernet demarcation point as the service to the end user (you), or it can be done at Layer 3 (IP) without NAT, or Layer 3 with NAT.

      Unfortunately your ISP doesn't have enough IPv4 space left to do layer 3 without NAT, and since it's an ISP it needs to provide layer 3 somewhere. How

  • by cardpuncher ( 713057 ) on Tuesday August 02, 2011 @04:01AM (#36957230)

    Most of the "routers" (which are really a cross between transport-level and application-level gateways) supplied to domestic customers aren't even capable of the full gamut of IPv4 features: there's no real hope of extending TCP, of using transport protocols other than TCP, or of IPv6.

    TCP/IPv4 is now a living fossil and will persist in its present form as an ISP access protocol, ironically filling exactly the same function that X.25 (so much derided by Internet professionals at the time because it wasn't end-to-end) was designed to provide. Big ISPs have the same business model as the old telcos (and indeed may be the same business) and they need to control access to their network and bill for it. They can't do that without "middleboxes" of some kind. End-to-end was only ever really feasible for closed-user-group networks paid for by third parties.

    On the plus side, a more capable "middlebox" would allow you to negotiate classes of service with your ISP, which might obviate the need for the ISP to randomly traffic-shape in ways that suit no one.

    • TCP/IPv4 is now a living fossil and will persist in its present form as an ISP access protocol, ironically filling exactly the same function that X.25 (so much derided by Internet professionals at the time because it wasn't end-to-end) was designed to provide.

      The reason why most people derided X.25 wasn't because of any ideological issues, but simply because it sucked so much. I've worked with a huge range of networking technology (everything from Trailblazers over noisy phone lines in China to OC-xx's) and X.25 was by far the most painful technology I've ever dealt with. The worst part was that even if you managed to get the link up, because of all the stoopid^H^H^Hintelligence built into the network, it failed with 100% reliability. In other words all the hand

  • What (Score:5, Insightful)

    by ledow ( 319597 ) on Tuesday August 02, 2011 @04:58AM (#36957442) Homepage

    The "end-to-end" nature of the Internet ended with the first firewall. Not to mention NAT, proxies, etc. To get to the point where I have a transparent squid proxy protecting my workplace (a school) is only a teensy, tiny step.

    "End-to-end" is a pipedream and can't possibly work because of the sheer security and scale of such a network (i.e. there would be nobody on the path able to stop a DDoS against you!). It wouldn't work, and that's why other solutions exist.

    Hell, virtually every device ever sold that handles IP traffic modifies it in some way that defeats this "end-to-end" crap. They have firewalls. They may offer NAT. They might offer ping-blocking. Hell, the first thing any decent firewall does is block most of the unsolicited packets it receives, whether those are ICMP messages or packets with faked origins. Without that, you'd have chaos.

    • Re:What (Score:4, Funny)

      by smash ( 1351 ) on Tuesday August 02, 2011 @05:25AM (#36957580) Homepage Journal

      Exactly. End-to-end as a mandatory access scenario is for GNU hippies like RMS who believe in unicorns and that everybody should hold hands and sing.

      The ABILITY to do end-to-end transit when both parties agree to it is a very good thing to have, yes - but to assume that end-to-end should always work in the real world, where we have assholes out there who want to rip you off (money, CPU, bandwidth, etc.) and basically fuck you over, is never going to be realistic.

    • In addition to the security issue, doesn't end-to-end assume both endpoints have a decent connection? Would I want, for example, hundreds of thousands of requests hitting my mobile phone linked via a crappy cell tower? What about Web 2.0 servers which get millions of hits; don't they have layers of middleware routing all the traffic? And what about the Slashdotting effect; wouldn't I want some middleware to handle that traffic?

    • That's why it's called the end-to-end principle. It doesn't mean that nothing should ever, ever modify a packet on the network. But end-to-end communication with functionality pushed to the edges is an ideal for which to strive. The fact that almost any two hosts on the Internet are able to communicate (after jumping through some stupid and mostly unnecessary hoops) is evidence that some people out there still consider it a solid principle. Just because you cannot hit the platonic ideal doesn't mean you shouldn't strive for it.

  • Old devices need upgrading for IPv6, including desktops, routers and... *gasp* firewalls.
  • From the report:

    TcpCrypt was motivated by the observation that server computing power is the performance bottleneck. To make ubiquitous encryption possible, highly asymmetric public key operations are arranged so that the expensive work is performed by the client which does not need to handle high connection setup rates. This is in contrast to SSL/TLS where the server does more work.

    I think that's a really insightful observation. I'd really like a new version of HTTPS that takes away the most common objections.
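    A quick illustration of the asymmetry the report describes, using Python's cryptography package (an assumption for the demo; tcpcrypt itself lives at the TCP layer): the private-key operation costs far more than the public-key one, which is why tcpcrypt puts it on the client.

    ```python
    import time
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    t0 = time.perf_counter()
    ct = key.public_key().encrypt(b"session secret", oaep)  # cheap public-key op
    t1 = time.perf_counter()
    key.decrypt(ct, oaep)                                   # expensive private-key op
    t2 = time.perf_counter()

    print(f"public op: {1e3 * (t1 - t0):.2f} ms, private op: {1e3 * (t2 - t1):.2f} ms")
    ```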

    • Maybe in Windows....

      But in Linux/Unix...
      www.linuxsecurity.com.br/info/fw/PacketManglingwithiptables.doc

      In the end the server has to generate the keys; otherwise how do you know who the client is, or how do you stop non-clients from spoofing?

      PGP seems faster than HTTPS, though; but HTTPS is doing more, which is what causes the overhead.

  • There seems to be confusion about what 'middleboxes' are. I don't believe this term refers to firewalls and NATing devices. It would seem to mean something more like a device that augments the data as it passes it along. Like a web filter that edits HTML on the fly to add, remove, or replace ads. Or an SMTP monitor that captures emails and includes some additional data as they're being relayed. Or the Comcast DNS servers that can give you non-authoritative responses sending you to the destination of THEIR choice.

    Fire

    • by jgrahn ( 181062 )

      There seems to be confusion about what 'middleboxes' are. I don't believe this term refers to firewalls and NATing devices.

      It *does* include those. They define it on page 1 of TFA: "... an invasion of middleboxes that *do* care about what the packets contain, and perform processing at layer 4 or higher *within* the network."

      They have to define it that way, because they want the freedom to use IP (the Internet Protocol) between hosts A and B, over the internet. Not unreasonable, I think.

  • TCP encryption seems to be some of what the article is pointing at (who has time to read this theoretical white paper?), which I think is great. It's kind of implemented today as HTTPS (HTTP over SSL), but it's left up to the website owner to implement, at the cost of system performance.

    Factoring in extending TCP, I'd like to see the private/public key system implemented in TCP as a standard rather than as an overlay. There is no benevolent reason for an ISP to be monitoring its users' traffic (they aren't

  • IP supports a large number of protocols other than TCP, UDP, and ICMP. But how many ISPs still pass them? Can you still send Xerox Network Systems (XNS) packets (protocol 22)? AX.25 frames (protocol 93)? QNX messaging [qnx.com] (protocol 106)? Fibre Channel (protocol 133)? Can you change the version number on TCP (which is what the people doing the original paper should be doing when they change the protocol)?

    All of these are IP, so the Internet should pass them. I've tried QNX packets, and they at least went through.
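    For anyone who wants to repeat that experiment, a minimal sketch with a raw socket (needs root; the destination is a placeholder TEST-NET address):

    ```python
    import socket  # needs root for raw sockets

    PROTO_QNX = 106  # the IP protocol number in question

    # With SOCK_RAW and a specific protocol, the kernel builds the IPv4
    # header itself and stamps it with protocol 106; we supply only payload.
    s = socket.socket(socket.AF_INET, socket.SOCK_RAW, PROTO_QNX)
    s.sendto(b"does this survive the middleboxes?", ("192.0.2.1", 0))
    ```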

  • The original end-to-end paper argues that applications are best implemented at end points rather than in the network - the final application end points (e.g. the receiving end of a file transfer) must be aware of failure modes in the network (e.g. errors, security, etc.), therefore the network can never be completely abstracted away, nor can the application be mostly implemented in the network. It doesn't sound like anything has changed today, and the original paper even notes that partial optimisations (e.

"The vast majority of successful major crimes against property are perpetrated by individuals abusing positions of trust." -- Lawrence Dalzell

Working...