Hardware

Remote Direct Memory Access Over IP 166

doormat writes "Accessing another computer's memory over the internet? It might not be that far off. Sounds like a great tool for clustering, especially considering that the new motherboards have gigabit ethernet and a link directly to the northbridge/MCH."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Also (Score:5, Insightful)

    by madcoder47 ( 541409 ) <development@@@madcoder...net> on Sunday April 27, 2003 @01:24PM (#5820080) Homepage Journal
    Not to mention easy access to sensitive information in emails, documents, and PIMs that the user is currently running and that are resident in memory.
    • Although I agree, couldn't kernel developers implementing this map local memory as "private", as in not accessible to software using this technology?

      That way, non-remote memory won't be accessible, and your data will stay your data.

    • Re:Also (Score:3, Insightful)

      by Urkki ( 668283 )
      As it is, even programs running on the *same* computer can't access each other's memory. I don't see how they could create a network-shared memory that would unintentionally get around this.
    • Re:Also (Score:3, Informative)

      by Rufus211 ( 221883 )
      Erm, read the FAQ. As a previous person said, why would network access to DMA be any worse than local DMA? I mean, you could open it straight up and have no memory checks or anything (*cough* win98 *cough*), but why on earth would you do that? Here's what their FAQ says:

      Some Objections to RDMA:
      Security concerns about opening memory on the network
      - Hardware enforces application buffer boundaries
      - Makes it no worse than the existing security problem of a third party inserting data into the TCP data stream (Buffer ID)
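
      To make the "hardware enforces buffer boundaries" point concrete, here's a minimal software model of that check. The names, buffer-ID values, and table layout are all invented for illustration; a real NIC would do this in hardware against registered memory regions:

      /* Toy model of the bounds check the FAQ attributes to the hardware:
       * a remote write may only land inside a buffer the application has
       * registered and tagged.  Names are illustrative, not from the spec. */
      #include <stdio.h>
      #include <string.h>
      #include <stdint.h>

      #define MAX_BUFFERS 16

      struct reg_buffer {          /* one registered (exposed) buffer */
          uint32_t id;             /* "buffer ID" / steering tag       */
          uint8_t *base;
          size_t   len;
          int      remote_writable;
      };

      static struct reg_buffer table[MAX_BUFFERS];
      static int nbufs;

      static uint32_t register_buffer(uint8_t *base, size_t len, int writable)
      {
          struct reg_buffer *b = &table[nbufs++];
          b->id = 0x1000 + nbufs;          /* arbitrary tag */
          b->base = base;
          b->len = len;
          b->remote_writable = writable;
          return b->id;
      }

      /* What the NIC would do on an incoming RDMA write: reject anything
       * that is not inside a registered, writable buffer. */
      static int rdma_write(uint32_t id, size_t offset, const void *data, size_t n)
      {
          for (int i = 0; i < nbufs; i++) {
              struct reg_buffer *b = &table[i];
              if (b->id != id)
                  continue;
              if (!b->remote_writable || offset > b->len || n > b->len - offset)
                  return -1;               /* out of bounds: dropped */
              memcpy(b->base + offset, data, n);
              return 0;
          }
          return -1;                       /* unknown buffer ID: dropped */
      }

      int main(void)
      {
          uint8_t inbox[64] = {0};
          uint32_t tag = register_buffer(inbox, sizeof inbox, 1);

          printf("in-bounds write:  %d\n", rdma_write(tag, 0, "hello", 6));     /*  0 */
          printf("overflow attempt: %d\n", rdma_write(tag, 60, "AAAAAAAA", 8)); /* -1 */
          printf("bogus buffer ID:  %d\n", rdma_write(0xdead, 0, "x", 1));      /* -1 */
          printf("inbox now holds:  %s\n", (char *)inbox);
          return 0;
      }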
  • by sisukapalli1 ( 471175 ) on Sunday April 27, 2003 @01:25PM (#5820086)
    Seriously though... this is where Scott McNealy's vision of "The Network is the Computer" comes even closer to reality.

    S
  • rdma? (Score:5, Funny)

    by CausticWindow ( 632215 ) on Sunday April 27, 2003 @01:25PM (#5820087)

    The security implications are staggering.

    How do we lobby for port number 31337 for the RDMA protocol?

    • Re:rdma? (Score:5, Insightful)

      by astrashe ( 7452 ) on Sunday April 27, 2003 @01:28PM (#5820099) Journal
      You hit the nail on the head -- the security implications of this are staggering.

      And doesn't tcp/ip involve a lot of overhead for memory access?

      • Re:rdma? (Score:3, Interesting)

        by gmkeegan ( 160779 )
        That's where the TCP offload engines come in. It's in the same ballpark as the Prestoserve NFS cards that offloaded some of that overhead from the OS.

        Land of the free, void where prohibited.
      • Yes, there are security implications, but here's the point: there are security implications for a lot of applications written for your computer. These obviously have to be taken into account, but just because there are risks involved doesn't mean that attempting an implementation should be avoided entirely. I mean, there are security implications for running a web server (especially an out-of-date one with exploits all over the place); does that mean no one should run a web server?

        I think that shared memory a

        • That's a good point -- I don't dispute what you're saying.

          But it's one thing to have this feature in the machines that make up a cluster that runs a big DB, and another thing to have it in every machine. The story said that MS is talking about putting it in every version of Windows, to help spread the technology's adoption.

        • does that mean no one should run a web server?

          It sure means that MY hosting providers shouldn't be...

    • Re:rdma? (Score:3, Interesting)

      by KagatoLNX ( 141673 )
      Perhaps this is an opportunity to implement something else of use. Right now, OSes implement rough security for memory access (see SIGSEGV). Why not elevate such security to the same level as FS permissions (and simultaneously elevate both to a Kerberos-style network auth)?
  • Remote shared memory (Score:5, Informative)

    by sql*kitten ( 1359 ) on Sunday April 27, 2003 @01:29PM (#5820106)
    This feature has been available for a while now, but using a dedicated link rather than IP. Sun calls it Remote Shared Memory [sun.com] and it's mainly used for database clusters [oracle.com].
    • Doesn't MOSIX already do this? Or does MOSIX just migrate a single process from one host to another without 'sharing' memory?

      Even swapping over NFS could be considered remote memory, although it is not exactly 'shared'.
      • by sql*kitten ( 1359 ) on Sunday April 27, 2003 @01:55PM (#5820231)
        Doesn't MOSIX already do this? Or does MOSIX just migrate a single process from one host to another without 'sharing' memory?

        I'm not familiar with MOSIX, but Oracle uses RSM on the theory that the high-speed RSM link is always faster than accessing the physical disk. So if you have 2 nodes sharing a single disk array, and Oracle on one node knows that it needs a particular block (it can know this because in Oracle you can calculate the physical location of a block from rowid as an offset from the start of the datafile - that's how indexes work) then the first thing it will do is ask the other node if it has it. This is called "cache fusion". If it has, then it is retrieved. Previous versions of Oracle had to do a "block ping" - notify the other node that it wanted the block, the block would then be flushed to disk, and the first node would load it. This guaranteed consistency, but was slow. With RSM, the algorithms that manage the block buffer cache can be applied across the cluster, which is very fast and efficient.

        Speaking of process migration, there is a feature of Oracle called TAF, Transparent Application Failover. Say you are doing a big select, retrieving millions of rows, connected to one node of a cluster, and that machine fails in the middle of the query. Your connection will be redirected to a surviving node, and your statement will resume from where it left off. I'm unaware of an open-source database that can do either of these.
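
        For readers who haven't seen it, here's a rough, non-Oracle sketch of the "ask the other node before touching the disk" idea described above. The structures and the trivial replacement policy are made up purely to illustrate the difference from the old block-ping approach:

        /* Conceptual model of the parent's "cache fusion" description -- not
         * Oracle code.  Before going to the (slow) shared disk, a node first
         * asks the other node whether the block is already in its cache. */
        #include <stdio.h>
        #include <string.h>

        #define NBLOCKS 8
        #define CACHE   4

        struct block { int valid; int blkno; char data[32]; };

        struct node {
            const char  *name;
            struct block cache[CACHE];
            struct node *peer;       /* the other cluster node, reachable over the fast interconnect */
        };

        static char disk[NBLOCKS][32];   /* the shared disk array */

        static struct block *cache_lookup(struct node *n, int blkno)
        {
            for (int i = 0; i < CACHE; i++)
                if (n->cache[i].valid && n->cache[i].blkno == blkno)
                    return &n->cache[i];
            return NULL;
        }

        static struct block *cache_fill(struct node *n, int blkno, const char *data)
        {
            struct block *b = &n->cache[blkno % CACHE];  /* trivial replacement policy */
            b->valid = 1;
            b->blkno = blkno;
            strcpy(b->data, data);
            return b;
        }

        static const char *get_block(struct node *n, int blkno)
        {
            struct block *b = cache_lookup(n, blkno);
            if (b) {
                printf("%s: block %d from local cache\n", n->name, blkno);
                return b->data;
            }
            /* "cache fusion": ask the peer before touching the disk.  The old
             * "block ping" scheme instead forced the peer to flush the block
             * to disk so we could re-read it -- consistent, but much slower. */
            b = cache_lookup(n->peer, blkno);
            if (b) {
                printf("%s: block %d pulled from %s's cache over the interconnect\n",
                       n->name, blkno, n->peer->name);
                return cache_fill(n, blkno, b->data)->data;
            }
            printf("%s: block %d read from shared disk\n", n->name, blkno);
            return cache_fill(n, blkno, disk[blkno])->data;
        }

        int main(void)
        {
            struct node a = { "node-a" }, b = { "node-b" };
            a.peer = &b; b.peer = &a;
            strcpy(disk[3], "row data for block 3");

            get_block(&a, 3);   /* disk read              */
            get_block(&b, 3);   /* served from a's cache  */
            get_block(&b, 3);   /* now local              */
            return 0;
        }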
      • I haven't actually used MOSIX, but the impression that I've gotten from reading about it is that MOSIX deals with process migration among nodes. My guess would be that if a process needs to switch nodes, its memory is probably copied to the new node rather than being used across the link, which would have relatively high latency.
      • No, mosix just migrates the user context of a task to the remote machine.

        There is some primitive distributed shared memory support in OpenMOSIX; no idea how stable it is though. Normal openmosix/mosix won't migrate tasks requiring shared memory (ie: threads)

      • but it can be made to do it using a patch; see the contributions on openmosix.org
      • I wrote to the Mosix folks several years ago and asked about using Wine to run old Windows 3D apps on a cluster after getting it set up on RH. At that time, they said the problem with Wine in a cluster was that it required pooled memory resources... sound familiar?
        Back then, there were expensive commercial interconnect systems --I don't think InfiniBand was around then-- that did the job. But with the costs involved it made Mosix somewhat beside the point.
        I may be wrong, but this could be
    • NUMA (Score:5, Informative)

      by TheRealRamone ( 666950 ) on Sunday April 27, 2003 @01:55PM (#5820228)

      This article [berkeley.edu] defines NUMA [unsw.edu.au] as

      "an acronym for Non-Uniform Memory Access. As its name implies, it describes a class of multiprocessors where the memory latency to different sections of memory are visible to the programmer or operating system, and the placement of pages are controlled by software. This is in contrast to shared memory systems where the memory latency is uniform or appears to be uniform. ...may be further subdivided into subtypes. For example, local/remote and local/global/remote architectures. Local/remote machines have two types of memory: local (fast) and remote (slow). Local/global/remote machines add one more type of memory, global, which is between the local and remote memories in speed."
      which seems to cover all of this.
  • by Anonymous Coward
    I take it that error code 500 will be used when the DIMM or controller is fried?
  • Sharing memory is not necessary in distributed programming if the variables are kept mostly local and a single computer works mainly with what it has stored in its local memory. This is very applicable to renderfarms, where the acceleration scheme works well for distributed rendering: methods such as the grid subdivide the scene into cells, each of which can be stored and evaluated on a single computer with its local memory. Only a central computer is needed to control these nodes and store the output, which is of very limited size and has no great computational needs.
  • by fejrskov ( 664451 ) <martin AT fejrskov DOT dk> on Sunday April 27, 2003 @01:32PM (#5820132)
    > Microsoft ultimately is expected to support RDMA
    > over TCP/IP in all versions of Windows

    Can you see it coming? The ultimate Windows root exploit!! Hmm... I guess someone has to go tell them. Otherwise they won't notice it until it's too late...

    Seriously, how do you dare to enable this kind of access?!?
    • They've done it - a way has been found to make the Windows environment less secure. I can just see the sales pitch now.

      "With Windows CX, your computer will have the latest in Remote Memory Management. Share your system's power with another Windows PC for added performance. Trusted applications will automatically control your memory remotely, saving you the trouble of worrying about the wrong programs using your PC."

      (Which, in the usual MS doublespeak, means Bill's trusted computer can bork warezed ver
      • Heh, I can sneak in a reply to this without being too offtopic:

        A friend of mine wants to see a web server with an HTML form where you can just paste in some assembly code and the server will just execute it. The ultimate killer app! "Just give me some code and I'll just run it."
    • Security does not mean 100% exploit-proof; it means it secures your information/services given certain desired levels of protection and certain operating conditions.

      While M$ is probably not going to get this one right, it doesn't mean that someone can't. This *is* a desirable feature for some applications, and it is possible to make a secure environment (where secure is defined for the application), and make it seamless as well. That is the whole goal of network security professionals.

      If anything, the f
  • Prior art ;-) (Score:5, Interesting)

    by hankaholic ( 32239 ) on Sunday April 27, 2003 @01:32PM (#5820133)
    I tried something like this a while ago -- I wanted to mount an NFS-exported file via loopback and use it as swap.

    The file in question actually resided in a RAM drive on another machine on the LAN.

    I couldn't get it to work in the 45 minutes or so I messed around with it. I'm not sure if Linux was unhappy using an NFS-hosted file for swap, or what exactly the problem was, but I did get some funny looks from people to whom I explained the idea (ie, to determine whether the network would be faster than waiting for my disk-based swap).

    Of course, this was back when RAM wasn't cheap...
    • Some machines we used to have at work would swap over the network as soon as they ran out of local swap. I believe they were AIX machines.
    • Re:Prior art ;-) (Score:2, Interesting)

      by redhat421 ( 620779 )
      Linux will not swap over NFS without a patch which you can find Here [rwth-aachen.de]. I use this for diskless workstations, and it works well.... I'm not sure if your application will be faster or not with this.
    • by pr0ntab ( 632466 )
      1. Set up a ramdisk on a machine with lots of RAM.
      2. Set up a network block device [sourceforge.net] to export said ramdisk.
      3. Set up client using nbd-client to talk to server with network block device.
      4. swapon /dev/nd0
      5. profit!!!

      Using NFS for swap is possible but silly, since you incur the extra overhead. NBD works on a plain vanilla TCP connection and avoids touchy issues like memory vs. packet fragmentation. If you have a gigabit ethernet card with zero-copy support in the driver, then you are in business.

      Ha
      • Well, it was more of a half-hearted attempt to piss off my Windows-using roommate.

        He was complaining about slow swapping, and I was like, "hmmm... I can probably swap over the network to the fileserver machine!"

        I'm not sure if NBD was in the kernel at the time, and it definitely wasn't compiled in. This was, umm, '98 or '99, I believe.

        Of course, this was from my P2-200 with 32 MB of RAM. Our file server was a dual 233, IIRC, with like 128 MB, most of which did nothing most of the time.

        Campus network, al
  • iWarp [globecom.net] has been around for a few years and I think is getting deprecated by a newer system. Just a way of getting *that* much more speed by avoiding unnecessary context switches. Datacenter stuff mostly but is general enough that it could be dropped on a lot of current stuff (AFAIK).
  • You read my mind.
  • Virus problems (Score:1, Interesting)

    by Anonymous Coward
    The article says that Microsoft is part of this "consortium".

    What kind of problems will develop once virus & worm writers, and spammers, get access to this mechanism?

    Of course, if DRM (digital restriction management) comes along, at least it will give a back door into the system.
  • Yeah... (Score:5, Funny)

    by benntop ( 449447 ) <craigo @ g m a i l.com> on Sunday April 27, 2003 @01:41PM (#5820166) Homepage Journal
    That would be the first port I would firewall off...

    Brings up interesting ideas of ways to prank your friends & enemies though.
    • No kidding. That was my first reaction as well. Although I can see use for this in giant clusters on dedicated networks, it doesn't seem like something I would be implementing for myself, well, ever...
  • by Waffle Iron ( 339739 ) on Sunday April 27, 2003 @01:47PM (#5820187)
    0100 lea edi, dma://foo.example.com:b8000h  ; point EDI at someone else's text-mode video memory
    0103 mov al, 65                             ; 'A'
    0105 mov ecx, 2000                          ; one 80x25 screen's worth
    010a rep stosb                              ; fill their screen with 'A's
    010b jmp 100                                ; ...forever

    g=100
  • by Anonymous Coward on Sunday April 27, 2003 @01:48PM (#5820192)
    Microsoft products have had this "feature" for a while now. Esp. IIS.
  • How does this compare to Intel's VI Architecture?

    VI Architecture [intel.com]

  • Bah, old stuff (Score:5, Insightful)

    by Erich ( 151 ) on Sunday April 27, 2003 @01:49PM (#5820201) Homepage Journal
    There's lots of research about network shared memory for use in various things.

    It's very interesting that using memory over the network is very much the same problem as cache coherency amongst processors. If you have multiple processors, you don't want to have to go out to the slow memory when the data you want is in your neighbor's cache... so perhaps you grab it from the neighbor's cache.

    Similarly, if you have many computers on a network, and you are out of RAM, and your neighbor has extra RAM, you don't want to page out to your slow disk when you can use your neighbor's memory.

    NUMA machines are somewhere in between these two scenarios.

    There are lots of problems: networks aren't very reliable, there's lots of network balancing issues, etc. But it's certainly interesting research, and can be useful for the right application, I guess.

    Disk is slow, though... memory access time is measured in ns, disk access time is in ms... that's a 1,000,000x difference. So paging to someone else's RAM over the network can be more efficient.

    I don't have any good papers handy, but I'm sure you can google for some.
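
    A quick back-of-envelope illustration of why remote RAM can beat local disk for paging. The constants are rough, era-typical assumptions, not measurements:

    /* Compare paging a 4 KB page to a neighbour's RAM over gigabit Ethernet
     * vs. to a local disk.  All numbers are assumed, ballpark figures. */
    #include <stdio.h>

    int main(void)
    {
        const double page      = 4096.0;   /* bytes                             */
        const double gige_bw   = 125e6;    /* ~1 Gbit/s wire rate in bytes/s    */
        const double net_rtt   = 100e-6;   /* software + switch round trip      */
        const double disk_seek = 8e-3;     /* average seek + rotational latency */
        const double disk_bw   = 30e6;     /* sustained disk transfer, bytes/s  */

        double over_net  = net_rtt   + page / gige_bw;
        double over_disk = disk_seek + page / disk_bw;

        printf("remote RAM: %8.0f us per page\n", over_net  * 1e6);
        printf("local disk: %8.0f us per page\n", over_disk * 1e6);
        printf("disk is ~%.0fx slower under these assumptions\n",
               over_disk / over_net);
        return 0;
    }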

    • I keep seeing comments about things like bad latency and such (see below), but actually this DECREASES latency on transfers.

      One such implementation allows you to write directly to memory using a message. This bypasses several system calls, several interrupts, and is quite safe as long as bounds are checked properly by the kernel. This type of setup is used in the high-performance networking used on supercomputers, where the bottleneck is the network. (google for "Portals message passing")

      Allowing messa
  • All of this is available in the Infiniband Spec... now if someone would just build it... and then we could all buy it.
    • There are Mellanox (collaborating closely with Intel) InfiniBand (IB) adapters available right now, and Sun and IBM have announced systems with IB support.
      • Yeah, but they are just adapters. You really need stuff that embodies the entire spec, so (with increased bandwidth) you replace the PCI bus, your networking, and your storage bus all with InfiniBand. I think it's a little bit of too little, too late, though. The concepts, however, are sound.
        • Re:InfiniBand (Score:3, Interesting)

          by soldack ( 48581 )
          I work at InfiniCon Systems and we do a lot of InfiniBand related things. We have switching, adapters, and connections from IB to gigabit ethernet and IB to fibre channel. See http://www.infinicon.com/.

          The real next steps for IB are 12X (30 Gb) and on-motherboard IB. 12X is in development. Currently, IB adapters are limited by the PCI-X slot they sit in. PCI-X DDR and PCI Express should help, but just having it on the motherboard and throwing PCI out would be interesting. Small form factor clusters
    • Check Mellanox. I work for MPI-Softtech and we are releasing a product soon for MPI over Infiniband. We get killer speeds out of it. There are at least two vendors still in the Infiniband market.

      Don't lose hope... we are out there... just need to get the word out!
  • by rdorsch ( 132020 ) on Sunday April 27, 2003 @01:50PM (#5820203)
    Servers will very soon be equipped with InfiniBand (http://www.infinibandta.org/). InfiniBand has dedicated support for RDMA. This includes efficient key mechanisms, which minimize operating system involvement (which would otherwise mean context switches each time), and low latency. Bandwidth available right now is 2.5 Gbit/s, and higher bandwidth can be anticipated very soon.
    • Actually, IB is up to 10 Gigabit. I have seen performance at 800 Megabytes per second using MPI. I work over at InfiniCon Systems on InfiniBand-related software. Interesting uses of this include database clustering and high-performance computing clusters. Think 4000-node clusters. APIs include MPI for HPC stuff and DAPL for database clustering and RDMA file systems. You can use Sockets Direct Protocol to offload your TCP/IP traffic. IPoIB handles other IP traffic. There are also protocols for connect
  • by imp ( 7585 ) on Sunday April 27, 2003 @01:55PM (#5820224) Homepage
    FreeBSD already supports gdb over FireWire, using the FireWire bridge's ability to DMA to/from any location of memory. Very handy for remote kernel debugging.
  • This technology is not what the headline claims.

    First, what the headline would have you believe has been invented is making it appear as though the RAM of one machine is really the RAM of another machine. That technology has been around and in use in the clustered/distributed/parallel computing communities since at least the 1980s.

    If you look at a brief summary of the spec, http://www.rdmaconsortium.org/home/PressReleaseOct30.pdf [rdmaconsortium.org], you'll find that all that's happening is that more of the network stack's functionality has been pushed into the NIC. This prevents the CPU from hammering both memory and the bus as it copies data between buffers for various layers of the networking stack.

    I'll also note that the networking code in the Linux kernel was extensively redesigned to do minimal (and usually no) copying between layers, thereby leaving very little advantage to pushing this into hardware.

    Please, folks, don't drink and submit!
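
    As one concrete example of the in-kernel copy avoidance mentioned above, Linux's sendfile(2) already hands file data to a socket without a user-space copy loop. The sketch below assumes an already-connected TCP socket and is only an illustrative fragment, not a complete program:

    /* Send a whole file out on sockfd without a read()/write() copy loop.
     * Linux-specific; sockfd is assumed to be a connected TCP socket and
     * the path is whatever file you want to serve. */
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/stat.h>
    #include <sys/sendfile.h>

    static int send_file_zero_copy(int sockfd, const char *path)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            return -1;

        struct stat st;
        if (fstat(fd, &st) < 0) {
            close(fd);
            return -1;
        }

        off_t offset = 0;
        while (offset < st.st_size) {
            /* sendfile() copies inside the kernel; the data never visits
             * a user-space buffer in this process. */
            ssize_t sent = sendfile(sockfd, fd, &offset, st.st_size - offset);
            if (sent <= 0) {             /* error or peer went away */
                close(fd);
                return -1;
            }
        }
        close(fd);
        return 0;
    }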

    • Not exactly. Most NICs today do DMA without CPU assistance. This talks about writing to another host's memory. Much of it is about moving the TCP/IP stack onto the NIC, but it also allows a caller to specify where the data should go in remote memory.
      Offloading the TCP/IP stack will be needed for current servers to push 10 Gb over TCP/IP. It also becomes a big deal for latency reduction and for iSCSI performance. It makes a big difference. Most of today's dual-CPU Intel-based boxes have trouble going too
      • Most of today's dual-CPU Intel-based boxes have trouble going too much over 1 Gb. 10 Gb is out of the question without offloading.

        I know of very few hardware platforms, Intel-based or otherwise, that can handle 10Gb/s over a single I/O stream. PCI just doesn't go that fast (yet).

        You'll need more than what's in this spec to get to 10Gb/s.

        • by soldack ( 48581 )
          I have seen Dell 2650s hit over 800 Megabytes (6.4 Gb) per second running MPI over InfiniBand using large buffer sizes. The limit is pretty much the PCI-X 133 MHz interface we are on. I suspect that with PCI-X DDR and PCI Express, we will be able to get a lot closer to 10 Gbit.
  • It seems to me that this is all about implementing a few tweaks to the protocol to allow NICs to use DMA much more efficiently. It's not about letting apps coming in from the network use arbitrary memory blocks. It means programs like Apache will be a bit faster, because one can program the NIC to pull data directly from the buffer set aside for network access rather than having the CPU do such work. This is about UDMA for networks, not an insanely stupid backdoor.
  • by gweihir ( 88907 ) on Sunday April 27, 2003 @02:03PM (#5820278)
    1. ssh root@remote-machine
    2. read from and write to /proc/kcore in remote-machine

    So where is the use of that? And shared memory emulation over a network is also decades-old technology.
    • From the article: "It helps reduce latency in data transfers between systems by directly placing data from one system's main memory to another's without the need for extensive buffering or CPU intervention."

      The approach you describe relies on CPU intervention on both ends of the connection. The article describes an approach that is much closer to the actual hardware than simply opening a ssh connection. I hope this clears the issue up for you!

      • The approach you describe relies on CPU intervention on both ends of the connection.

        Not always. The ssh example was for show only. But about a decade ago I saw a diploma thesis advertised that was to develop a hardware implementation of shared memory that could work without special drivers. True, it was SCSI-based and therefore did not allow non-local networking. But with non-local networking the transfer dominates the latency anyway, and hardware does not help.

        All I am saying is that the idea is neither
  • More offloading of resources on PDAs and tablets... Could help reduce the cost to the point where they are like sticky pads... throwaway...

  • I was wondering when we would see more of the network becoming the system bus for the computer. Sun, IBM, HP, and others have been working toward this type of architecture, where a network serves as the interconnect for CPUs, RAM, and disks. Was a good read, but left me wanting more... Soon. Why is it so rare to see good stories make the front page on /.?
  • by Anonymous Coward
    Allowing one to access the memory of a remote computer over an IP network. Several programs have presented this useful feature including BIND DNS server, Sendmail MTA and of course MS IIS web service. The technology is called "buffer overflow" and has been used by many individuals for "fun and profit"^H^H^H^H^H^H^H^H^H^H their computing needs. The ultimate guide to using this great feature has been seen here
  • Check out http://www.systran.com for their "Shared Common RAM NETwork" products....

    This would only be a slightly different transport...
  • Comment removed based on user account deletion
  • by DrSkwid ( 118965 ) on Sunday April 27, 2003 @02:28PM (#5820380) Journal
    The proc device serves a two-level directory structure. The first level contains numbered directories corresponding to pids of live processes; each such directory contains a set of files representing the corresponding process.

    The mem file contains the current memory image of the process. A read or write at offset o, which must be a valid virtual address, accesses bytes from address o up to the end of the memory segment containing o. Kernel virtual memory, including the kernel stack for the process and saved user registers (whose addresses are machine-dependent), can be accessed through mem. Writes are permitted only while the process is in the Stopped state and only to user addresses or registers.

    The read-only proc file contains the kernel per-process structure. Its main use is to recover the kernel stack and program counter for kernel debugging.

    The files regs, fpregs, and kregs hold representations of the user-level registers, floating-point registers, and kernel registers in machine-dependent form. The kregs file is read-only.

    The read-only fd file lists the open file descriptors of the process. The first line of the file is its current directory; subsequent lines list, one per line, the open files, giving the decimal file descriptor number; whether the file is open for read (r), write (w), or both (rw); the type, device number, and qid of the file; its I/O unit (the amount of data that may be transferred on the file as a contiguous piece; see iounit(2)), its I/O offset; and its name at the time it was opened.
  • "Imagine a Beo-(clobber mangle clobber mangle)..$%@$%@$@%$!"
  • ... when we can just plant our code in your memory directly.

    (ok, ok, there should be some serious security with remote memory. I couldn't resist.)
  • by johnjaydk ( 584895 ) on Sunday April 27, 2003 @02:52PM (#5820478)
    Now shared memory might be an incredibly neat solution, in theory. In a multi-CPU box with a shared data bus the system holds water, but not in a heterogeneous, loosely coupled system.

    The amount of book-keeping required to keep this thing going makes it a non-starter. And as for scaling? Forget it.

    The sad truth is that it's common knowledge that this is the least efficient principle for distributed systems. This technique is usually the fall-back position if nothing else works.

  • Parts of the internet run over dry copper. With this system you can have the telephone company install a twisted pair at a cost of about $30 per link between any _reasonable_ pair of locations, and then you can hook up whatever you want, also _within reason_. This allows one to run, say, DSL or MVL or whatever you want.

    AFAIK there is no equivalent offering for fiber, and one really needs fiber to be able to do anything interesting.

    Now - if dry fiber did exist then it would make a great deal of sense to r
  • ...because haxoring those buffer overflow exploits is just too damn hard.

  • by calica ( 195939 ) on Sunday April 27, 2003 @03:49PM (#5820699) Journal
    First off, this is not a network shared memory scheme. RDMA could be used to implement one very efficiently, though.

    It will not allow arbitrary access to your memory space. In fact, it would prevent a great number of buffer overflow exploits.

    The best analogy is the difference between PIO and UDMA modes of your IDE devices (or any device). This is all about offloading work from your CPU. It is moving the TCP/IP stack from the kernel to the network card for a very specific protocol.

    Here's how RDMA would work layered over (under?) HTTP:
    - browser creates GET request in a buffer
    - browser tells the NIC the address of the buffer and who to send it to
    - NIC does a DMA transfer to get the buffer; OS not involved
    - NIC opens RDMA connection to webserver
    - server NIC has already been told by the webserver which buffer it should put incoming data into
    - webserver unblocks once the data is in the buffer and parses it
    - webserver creates HTML page in a second buffer
    - webserver tells server NIC to do an RDMA transfer from the buffer to the browser host
    - client NIC takes the data and puts it in the browser's buffer
    - browser unblocks, parses the HTML, and displays it

    All of this with minimal interaction with the TCP/IP stack. RDMA just allows you to move a buffer from one machine to another without a lot of memory copying in the TCP/IP stack.

    In fact, the RDMA protocol could be emulated completely in software. It would probably have a small overhead versus current techniques, but would still be useful. Just imagine real RDMA on the server and emulated RDMA on the clients (cheaper NIC). The server has less overhead and most clients have cycles to spare!
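
    A minimal sketch of what such a software emulation might look like: a tagged write carried over an ordinary stream socket, placed directly into a buffer the receiver registered in advance. The header layout, buffer ID, and function names are invented for illustration; this is not the consortium's wire protocol:

    /* Emulated one-sided "RDMA write" over a plain stream socket. */
    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <sys/socket.h>

    struct rdma_hdr {            /* invented framing, not the real spec */
        uint32_t buf_id;
        uint32_t offset;
        uint32_t length;
    };

    static char app_buffer[128];          /* the one buffer this toy receiver exposes */
    static const uint32_t APP_BUF_ID = 42;

    /* "Remote" side: describe where the data should land, then send it. */
    static int emulated_rdma_write(int fd, uint32_t buf_id, uint32_t off,
                                   const void *data, uint32_t len)
    {
        struct rdma_hdr h = { buf_id, off, len };
        if (write(fd, &h, sizeof h) != (ssize_t)sizeof h)
            return -1;
        if (write(fd, data, len) != (ssize_t)len)
            return -1;
        return 0;
    }

    /* Receiving side: one read for the header, then read straight into the
     * registered buffer -- no intermediate copy in the application. */
    static int emulated_rdma_recv(int fd)
    {
        struct rdma_hdr h;
        if (read(fd, &h, sizeof h) != sizeof h)
            return -1;
        if (h.buf_id != APP_BUF_ID || h.offset > sizeof app_buffer ||
            h.length > sizeof app_buffer - h.offset)
            return -1;                    /* reject out-of-bounds placement */
        size_t got = 0;
        while (got < h.length) {
            ssize_t n = read(fd, app_buffer + h.offset + got, h.length - got);
            if (n <= 0)
                return -1;
            got += n;
        }
        return 0;
    }

    int main(void)
    {
        int sv[2];
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
            return 1;

        const char msg[] = "GET / HTTP/1.0\r\n\r\n";
        emulated_rdma_write(sv[0], APP_BUF_ID, 0, msg, sizeof msg);

        if (emulated_rdma_recv(sv[1]) == 0)
            printf("placed into registered buffer: %s", app_buffer);
        return 0;
    }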
  • by NerveGas ( 168686 ) on Sunday April 27, 2003 @03:52PM (#5820714)
    Sounds like a great tool for clustering, especially considering that the new motherboards have gigabit ethernet and a link directly to the northbridge/MCH.

    There's just one problem with that... ethernet (even GigE) is *not* a good connection for clustering. Sure, the bandwidth is semi-decent, but the *latency* is the killer. Instead of a processor waiting a number of nanoseconds for memory (as with local memory), it'll end up waiting as much as milliseconds. That may not sound like much, but from nanoseconds to milliseconds is a jump of six orders of magnitude!

    steve
  • by Tokerat ( 150341 ) on Sunday April 27, 2003 @04:01PM (#5820765) Journal

    ..."See, we TOLD you it was a feature!" Microsoft will also sue the researchers working on this project, citing they Innovated this years ago.
  • We can also start wrapping processor instructions in XML and transmit them via SOAP, in order to create more interoperability between different machine architectures! Remember, we already have IP over XML [ietf.org] :-)
    That's what the whole thing sounds like to me...
  • If you get a Myrinet cluster and run IP over it, I think it uses the GM kernel driver, which does exactly this kind of remote DMA access. The NIC has to be smart enough to handle this, of course.

    Cplant [sandia.gov] style clusters do this as well. They also provide an API called Portals which revolves around RDMA. Portals, incidentally, is being used in the Lustre cluster filesystem and is implemented in kernel space for that project. It can use TCP/IP, I believe, but it's not real RDMA.

    *sigh* some day all NICs will be smart enoug
  • This gives a whole new meaning to remote exploit.
  • ...a Beowulf cluster of this!
  • Unless they include a simplified Kerberos implementation (with sequence numbers making replays impossible) on the NIC, they're in for trouble and lots of it. (A rough sketch of the sequence-number part follows the list below.)

    1)Get into one machine behind firewall.

    2)Sniff the database's (possibly encrypted) RDMA message that sets your account balance to zero.

    3)...

    4)Profit!!! (Replay the message setting your account balance back to zero before you get billed.)
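
    Here is a rough sketch of the sequence-number bookkeeping the parent is asking for, in the style of an IPsec anti-replay window. On its own it only stops replays; the messages would still need to be authenticated (a per-connection key and MAC), or an attacker just forges fresh sequence numbers. Everything here is illustrative:

    /* Sliding anti-replay window: accept each sequence number at most once,
     * tolerating a bounded amount of reordering. */
    #include <stdio.h>
    #include <stdint.h>

    #define WINDOW 64                   /* how far out of order we tolerate */

    struct replay_state {
        uint64_t highest;               /* highest sequence number seen     */
        uint64_t seen_bitmap;           /* bit i => (highest - i) was seen  */
    };

    /* Returns 1 if seq is fresh (and records it), 0 if it is a replay or
     * too old to judge. */
    static int accept_seq(struct replay_state *s, uint64_t seq)
    {
        if (seq > s->highest) {                       /* newest so far */
            uint64_t shift = seq - s->highest;
            s->seen_bitmap = (shift >= WINDOW) ? 0 : s->seen_bitmap << shift;
            s->seen_bitmap |= 1;                      /* bit 0 = "highest" itself */
            s->highest = seq;
            return 1;
        }
        uint64_t age = s->highest - seq;
        if (age >= WINDOW)
            return 0;                                 /* too old, refuse          */
        if (s->seen_bitmap & (1ULL << age))
            return 0;                                 /* already seen: replay     */
        s->seen_bitmap |= 1ULL << age;
        return 1;
    }

    int main(void)
    {
        struct replay_state s = { 0, 0 };
        printf("seq 5 first time : %d\n", accept_seq(&s, 5));  /* 1 */
        printf("seq 7 first time : %d\n", accept_seq(&s, 7));  /* 1 */
        printf("seq 6 (reordered): %d\n", accept_seq(&s, 6));  /* 1 */
        printf("seq 5 replayed   : %d\n", accept_seq(&s, 5));  /* 0 */
        return 0;
    }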

  • by h4x0r-3l337 ( 219532 ) on Monday April 28, 2003 @01:43AM (#5823091)
    Well, sort of...
    "Back in the day", I wrote a virtual memory handler for my Amiga's accelerator card (which had a 68030 and MMU). Meanwhile, some friends of mine had developed this networking scheme that involved wiring the serial ports of our Amiga's together in a ring, which allowed us to have a true network without network cards.
    Then came the true test: I configured my virtual memory to use a swapfile located in a friend's RAM-disk (he had way more memory than I did), fired up an image editor, opened a large image, and lo and behold: I was swapping at a whopping 9600 bytes per second! The fact that every packet had to pass through multiple other machines (because of the ring-nature of the network) didn't make it any faster either...
