Intel Develops Hardware To Enhance TCP/IP Stacks 271
RyuuzakiTetsuya writes "The Register is reporting that Intel is developing I/OAT, or I/O Acceleration Technology, which allows the CPU, the mobo chipset and the ethernet controller to help deal with TCP/IP overhead."
Interesting (Score:5, Insightful)
Granted, I've never administered a server that was under anywhere remotely near the types of loads we are talking about for this to be useful, but I have a hard time imagining that dealing with the TCP/IP stack would be more intensive than running applications (as the article claims).
So, for all you people out there much more qualified to discuss this than I am: will having some part of the processor dedicated to handling TCP/IP really speed things up, or is this primarily a marketing technology?
Re:White elephant? (Score:1, Insightful)
Re:Ethernet controllers (Score:5, Insightful)
Re:the good, the bad, the ugly? (Score:4, Insightful)
What's new here is that Intel wants to put this in their chipsets everywhere and not just in $700+ NICs. Already this has been happening with checksum offloading, TCP fragmentation, smart interrupts, and so on in most GigE chips.
So yes, people have done this before, and have been doing so since at least 2000.
As far as DRM is concerned, look at the NIC market and look at the TCP/IP spec. TCP/IP? Standard, and anything non-standard won't work with the stuff that's already out there. Weird NICs? I've been getting Linux source-code drivers for even the cheapest of cheap NICs for years now. There's too much competition to sneak in something restrictive.
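To make one of those existing offloads concrete, here's a minimal sketch (my own illustrative code, not any vendor's) of what TCP segmentation offload moves into the NIC: the host hands down one large buffer, and the hardware cuts it into MSS-sized segments instead of the CPU doing it per-packet.

```python
def tso_segment(payload: bytes, mss: int = 1460) -> list[bytes]:
    """Cut one large send buffer into MSS-sized TCP segments.

    With segmentation offload, this loop runs in the NIC, so the host
    builds one header and submits one buffer instead of dozens.
    """
    return [payload[i:i + mss] for i in range(0, len(payload), mss)]
```

A 3000-byte buffer, for example, becomes three segments of 1460, 1460 and 80 bytes, with only the first submission touching the host CPU.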
Re:Interesting (Score:3, Insightful)
Note, this is enterprise-grade hardware hooked up to million-dollar disk arrays.
Now, is that entirely from dealing with the networking stack? No. Not quite. However, consider this. It takes time to checksum headers and data. It takes time to unwrap packets. If you have a ton of clients raining requests for data on your server, it's not hard to see that all that networking bookkeeping could impact the throughput of requests. Database servers and web servers are two things that come to mind here, in addition to file servers.
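For a feel of what "it takes time to checksum" means, this is the RFC 1071 Internet checksum the CPU has to run over every IP header (and, for TCP, over the whole payload) when there's no offload. A toy sketch, not kernel code, but the per-word loop is the point: it touches every byte of every packet.

```python
def internet_checksum(data: bytes) -> int:
    # RFC 1071: one's-complement sum of 16-bit words with end-around
    # carry, then bitwise complement. Touches every byte of the packet.
    if len(data) % 2:
        data += b"\x00"                           # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF
```

A receiver verifies by summing the data together with the transmitted checksum; for an intact packet the result complements to zero, which is exactly the check a checksum-offloading NIC performs in hardware.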
Btw, note that this is another part of the "platform" initiative/orientation. While Intel's track record has not been great in many respects, they do have a good track record of success with "platforms". E.g., Centrino was a "platform."
Re:Qlogic TOE cards (Score:1, Insightful)
Re:White elephant? (Score:3, Insightful)
Those little boxes were masters at multi-processing, and they did it right - one processor for pretty much every major peripheral task (disk, graphics, sound, something else I can't remember).
As long as these Intel coprocessors are going to be an open standard (which they almost certainly won't be), I'd welcome this addition to PC architecture.
And the CPU doesn't have other things to do? (Score:4, Insightful)
Not that this is a new idea. It's been done for donkey's years.
Re:Interesting (Score:1, Insightful)
Re:yeah great (Score:2, Insightful)
Parallelism is great. Look at the way things are going: dual-CPU motherboards, dual-core CPUs, Cell..
And Gnome.. sheesh.. back when I ran a P100 and Gnome was slow, I thought "well, one day I'll have a 500 MHz monster and Gnome will be fast". Here I am with a P4 2.6 GHz / 1 GB and Gnome is STILL a dog. *sigh*
Re:White elephant? (Score:5, Insightful)
Also, Catalyst switches are not highly parallel. They can be parallel, depending on the exact model and configuration, as well as the exact path inside the switch that the traffic takes, but it's not even remotely the same in execution as having "hundreds of linux routers side by side."
Instead, it is the exacting way in which the various components of the switch pass data, and the very specific purpose of each chip and circuit in the device, that gives modern routers their speed. Special components help, such as content-addressable memory and ternary content-addressable memory (TCAM), which stores 0s, 1s, and wildcard values instead of just 0s and 1s, allowing wire-speed match comparisons against ACLs and routing tables. It isn't merely a stack of general-purpose CPUs all running in parallel to achieve a particular task.
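As a software caricature of what that wildcard-capable CAM does (a toy model of my own; real TCAM compares the key against every entry in parallel in a single clock, which is the whole point of the hardware):

```python
def tcam_lookup(key: int, entries):
    # Each entry is (value, mask, action). A set mask bit means "care";
    # a clear bit is the wildcard. First match wins, as in an ACL.
    for value, mask, action in entries:
        if key & mask == value & mask:
            return action
    return "deny"   # implicit default, ACL-style
```

For example, an ACL entry for 192.168.1.0/24 would be `(0xC0A80100, 0xFFFFFF00, "permit-lan")`, and a catch-all would be `(0, 0, "permit-any")`; the hardware returns the first-matching action at wire speed rather than looping as this sketch does.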
Systems guys often mistake routers and switches for computers with a bunch of Ethernet jacks. They're far from it. They are highly specialized pieces of hardware designed from the bottom up to do one thing and do it well -- transport data. Computers are the opposite. They're designed from the bottom up to be able to do whatever you wish them to as fast as possible, but that flexibility comes with a price.
If you ever get the urge, you should read up on Catalyst switching architecture. You'll find it quite interesting.
Re:White elephant - flawed logic (Score:3, Insightful)
With all due respect to Mr. Tannenbaum: if he stated what you put in your post, his logic is severely flawed.
Let's compare the general CPU/networking CPU combination with a manager/secretary.
The manager has a number of tasks which need to be done, including scheduling a number of appointments. Without a secretary, he is obliged to call/contact the people involved, wait for their responses, and note the scheduled appointments in his calendar. Only once that is done can he go about his other tasks.
When that manager has a secretary, he can just tell the secretary to make the appointments and notify him when they're done. That secretary isn't going to make those appointments any faster (she still has to call the same people), but in the meantime the manager can start working on something more useful (in theory).
While the secretary may not be that much faster at scheduling appointments (she probably is, since she knows how to deal with this and whom to contact a lot more quickly and in a more structured way than the manager), the end result is that the manager gets more work done because he delegated some of it to the secretary.
Note for the Politically Correct: feel free to swap he/she where appropriate.
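The analogy maps directly onto code: the general CPU is the main thread, the networking CPU is a worker it delegates to. A minimal sketch with plain Python threads standing in for the hardware (names are mine, purely illustrative):

```python
import queue
import threading

def secretary(inbox: queue.Queue, calendar: list) -> None:
    # Works through delegated appointments; the "manager" thread is
    # free to do other work while this runs.
    while True:
        person = inbox.get()
        if person is None:          # end-of-day signal
            break
        calendar.append(f"meeting with {person}")

inbox: queue.Queue = queue.Queue()
calendar: list = []
worker = threading.Thread(target=secretary, args=(inbox, calendar))
worker.start()

for person in ("Alice", "Bob"):
    inbox.put(person)               # delegate and return immediately
# ... the manager gets on with other tasks here ...
inbox.put(None)
worker.join()
```

The `put` calls return immediately, which is the entire win: the cost of making the appointments hasn't changed, but it no longer serializes with the manager's own work.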
Re:Ethernet controllers (Score:5, Insightful)
In truth, a gigabit ethernet card can saturate a 1X PCI-E link (2 Gb/s after the 8B/10B encoding is removed) when sending small packets, basically due to per-packet overhead.
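The rough arithmetic behind that claim (the descriptor and TLP overhead constants below are my own ballpark assumptions for a gen-1 PCIe NIC, not measured figures):

```python
# Minimum-size Ethernet frames on the wire: 64 B frame + 8 B preamble
# + 12 B inter-frame gap = 84 B per packet.
LINE_RATE = 1_000_000_000                # 1 Gb/s
WIRE_BYTES = 64 + 8 + 12
pps = LINE_RATE // (WIRE_BYTES * 8)      # ~1.49 million packets/s

# Each packet costs the PCIe link more than its 64 B of payload: assume
# a descriptor fetch, its completion, the payload write, and a status
# write-back, each paying ~24 B of TLP header/framing (assumed values).
DESC, TLP = 16, 24
bus_bytes = 64 + 2 * DESC + 4 * TLP      # 192 B of bus traffic per packet
bus_load = pps * bus_bytes * 8           # ~2.3 Gb/s, over the 2 Gb/s x1 link
```

With those (plausible) overheads, minimum-size packets at line rate ask for more than the x1 link can carry, even though the nominal data rate is only 1 Gb/s.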
This old bit of snake-oil... (Score:5, Insightful)
Except:
We've seen successive waves of this concept, and none of them have had much success. Graphics processors are one partial exception, and it took almost a decade of mis-designs before they became stable enough to be usable.
You speak in jest, but... (Score:3, Insightful)
This USB keyboard I'm typing on involves at least three processors, one to scan the keys, one to do the USB on the peripheral side and the third to do the USB on the motherboard side.
Re:White elephant - flawed logic (Score:3, Insightful)
Next thing you know, the difference between SCSI and IDE is moot because "for one thread it won't make that much of a difference, since you'll end up waiting for the data to come off the platters anyway".
There just aren't many managers around nowadays who have only one task to do... Why would you think that a network processor would be slower? Precisely because it is a specialized processor, you can count on it doing TCP checksumming and all that stuff a lot faster than most (if not all) general-purpose CPUs. On top of that, you won't get interrupts/context switches for bad packets... While this all may not seem like much, it is definitely a performance improvement for the system as a whole.
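The "no interrupts for bad packets" point in a nutshell (a hypothetical toy model; `checksum` stands in for whatever verification the hardware actually performs):

```python
def nic_rx_filter(frames, checksum):
    # A checksum-offloading NIC verifies each frame itself and silently
    # drops corrupted ones, so the host CPU never takes an interrupt or
    # a context switch for a packet that was going to be discarded anyway.
    good = [data for data, csum in frames if checksum(data) == csum]
    return good, len(frames) - len(good)
```

Without the offload, every corrupted frame still crosses the bus and costs the host an interrupt before the stack throws it away; with it, only good frames ever reach the CPU.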
Re:White elephant - flawed logic (Score:2, Insightful)
The problem with Toby's argument is that he is fixated on the speed of the CPU. It doesn't matter how much slower or faster the network CPU is compared to the main CPU. What matters is that the network CPU be fast enough to handle the I/O requirements dictated by the network architecture.
With L2 cache and DMA being the norm nowadays, I don't see what the problem is. Sure, the main CPU will stall if it needs to fetch something from main memory on a cache miss, but hardware can be designed to take these possibilities into account.
Having processors dedicated to tasks frees the CPU to handle any other tasks on its agenda. I see a network ASIC receiving a payload ready for transmission and doing its thing until it interrupts the CPU to report that it is done.
Also, the CPU would not have to wait for the network transmission to complete before sending more data. The network device would keep accepting payloads until its buffer was full.
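That "keep accepting payloads until the buffer is full" behaviour is just a bounded transmit ring. A toy model (my own sketch, not any driver's actual data structures):

```python
from collections import deque

class TxRing:
    """Toy transmit ring: the CPU submits until the ring fills,
    then must wait for the device to drain completions."""

    def __init__(self, depth: int):
        self.ring: deque = deque()
        self.depth = depth

    def submit(self, payload) -> bool:
        if len(self.ring) >= self.depth:
            return False            # ring full: only now does the CPU stall
        self.ring.append(payload)
        return True                 # otherwise it continues immediately

    def complete_one(self):
        # The device "interrupts" after transmitting the oldest payload.
        return self.ring.popleft() if self.ring else None
```

As long as the device drains the ring faster than the CPU fills it, `submit` never blocks and transmission is effectively free from the CPU's point of view.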
While the graphics card is a good example, a better one would be to look at the FPU. Floating-point arithmetic is more CPU-intensive than integer arithmetic. To speed things up, the CPU submits the desired computation to the FPU, and the FPU notifies the CPU when the calculation is complete.
Then there is the other omission made by Toby: the bus does not have a 1:1 speed ratio with the CPU. With this in mind, and using Toby's own logic, the ASIC would only have to match the bus speed, not the CPU's.
Toby keeps asking why you should pay for a dedicated CPU when the expensive CPU you already have can handle the task. I think most engineers would ask why you should tie up an expensive CPU when a dedicated one can do the task.
In other words, let's free our expensive CPUs to perform general computational tasks by offloading some of the mundane labor to dedicated ASICs.
I will say Toby is correct about one thing: in a personal computer, I don't see the advantage of the network ASIC (other than the API), since the CPU is idle most of the time anyway.
However, in Intel's target market, I would like to have the CPU perform the application logic and offload the networking to dedicated processors. If the network ASICs give the CPU more headroom, I could see an increase in the maximum number of transactions per second. That increase could be just enough to keep me from investing in another blade or even another server.
Then again.. I may need more sleep.
Best Regards,
Bill
Re:White elephant? (Score:3, Insightful)