Data Storage Hardware Linux

Writing Linux Kernel Functions In CUDA With KGPU

An anonymous reader writes "Until today, GPGPU computing was a userspace privilege because of NVIDIA's closed-source policy and AMD's semi-open state. KGPU is a workaround that enables Linux kernel functionality to be written in CUDA. Instead of figuring out GPU specs via reverse engineering, it simply uses a userspace helper to do CUDA-related work on behalf of kernelspace requesters. A demo in its current source repository is a modified eCryptfs, the encrypted filesystem used by Ubuntu and other distributions. With a GPU-accelerated AES cipher available to the Linux kernel, eCryptfs gets a 3x uncached read speedup and nearly a 4x write speedup on an Intel X25-M 80G SSD. However, both the GPU-cipher and CPU-cipher versions of eCryptfs were changed to use ECB cipher mode for the sake of parallelism; a CTR (counter) mode cipher, which is equally parallelizable, may be much more secure, although the real vanilla eCryptfs uses CBC mode. In any case, GPU vendors should think about opening their drivers and computing libraries, or at least providing a mechanism that makes GPU computing inside an OS kernel easy, given how widely GPUs are deployed and the potential of future heterogeneous operating systems."
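The summary's userspace-helper design is easy to picture in code. Below is a minimal sketch of that pattern, not KGPU's actual interface: the /dev/kgpu device node, the request layout, and the gpu_aes_ecb() stub are all hypothetical stand-ins for a kernel module that queues work and a helper process that services it with CUDA.

    /* Hypothetical sketch of a userspace helper servicing GPU work for the
     * kernel.  Device name, message layout, and gpu_aes_ecb() are made up
     * for illustration; the real KGPU interface may differ. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    struct gpu_request {              /* one kernel -> user work item */
        uint32_t id;                  /* request identifier */
        uint32_t nbytes;              /* payload size in bytes */
        unsigned char data[4096];     /* buffer to encrypt in place */
    };

    /* Stand-in for the real CUDA work: copy the buffer to the GPU, run an
     * AES-ECB kernel over its blocks in parallel, copy the result back.
     * Left as a no-op so the sketch compiles on its own. */
    static void gpu_aes_ecb(unsigned char *buf, uint32_t nbytes)
    {
        (void)buf;
        (void)nbytes;
    }

    int main(void)
    {
        int fd = open("/dev/kgpu", O_RDWR);    /* hypothetical char device */
        if (fd < 0) {
            perror("open");
            return 1;
        }

        struct gpu_request req;
        for (;;) {
            /* Block until the kernel module hands us a request... */
            if (read(fd, &req, sizeof(req)) != (ssize_t)sizeof(req))
                break;
            gpu_aes_ecb(req.data, req.nbytes); /* do the GPU work in userspace */
            /* ...and push the result back to the kernelspace requester. */
            if (write(fd, &req, sizeof(req)) != (ssize_t)sizeof(req))
                break;
        }
        close(fd);
        return 0;
    }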
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Friday May 06, 2011 @04:07PM (#36051298)

            Hand off encryption routines to a closed source black box. Brilliant.

  • Question: (Score:4, Interesting)

    by Jaqenn ( 996058 ) on Friday May 06, 2011 @04:11PM (#36051338)
    (I have never written kernel level code, and the statement that follows is only from listening to what other people are doing)

    I thought that a tiny bit of kernel code reflecting calls into a user-level process was old news and had become established as the preferred development model. Is there a reason it's undesirable?

    Because the summary makes it sound like we're sad to be following this model, and are only doing it because we can't pull NVidia's driver source into the Linux kernel.
  • Re:Question: (Score:5, Interesting)

    by PoochieReds ( 4973 ) <[jlayton] [at] [poochiereds.net]> on Friday May 06, 2011 @05:07PM (#36051836) Homepage

    There are also concerns other than the context-switch overhead... particularly when dealing with filesystems or data storage devices.

    For instance, suppose part of your userspace daemon gets swapped out, and you now need to upcall to userspace. The part that got paged out then has to be paged back in. If memory is tight, the kernel may have to free some memory first, and it may decide to do that by flushing dirty data out to the very filesystem or device that depends on the userspace daemon. At that point, you're effectively deadlocked.

    Most of those sorts of problems can be overcome with careful coding and making sure the important parts of the daemon are mlocked, but you do have to be careful and it's not always straightforward to do that.
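    A minimal sketch of the mitigation described above, pinning the helper daemon's memory so an upcall can never stall on the daemon's own page-ins; mlockall() is the real POSIX call, the rest is illustrative:

        /* Pin the daemon's address space before entering its service loop. */
        #include <stdio.h>
        #include <sys/mman.h>

        int main(void)
        {
            /* Lock everything mapped now and anything mapped later. */
            if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
                perror("mlockall");   /* needs CAP_IPC_LOCK or a high RLIMIT_MEMLOCK */
                return 1;
            }
            /* ... run the userspace helper loop with its pages pinned ... */
            return 0;
        }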

  • by jasonwc ( 939262 ) on Friday May 06, 2011 @05:44PM (#36052172)

    I hope this is just a proof-of-concept design because ECB mode should not be used for this purpose. Wikipedia provides a pretty obvious example of the weakness of ECB mode:

    "The disadvantage of this method is that identical plaintext blocks are encrypted into identical ciphertext blocks; thus, it does not hide data patterns well. In some senses, it doesn't provide serious message confidentiality, and it is not recommended for use in cryptographic protocols at all. A striking example of the degree to which ECB can leave plaintext data patterns in the ciphertext is shown below; a pixel-map version of the image on the left was encrypted with ECB mode to create the center image, versus a non-ECB mode for the right image."

    http://en.wikipedia.org/wiki/Block_cipher_modes_of_operation#Initialization_vector_.28IV.29 [wikipedia.org]
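    The pattern leakage is easy to demonstrate. A small sketch, assuming OpenSSL's EVP API (compile with -lcrypto): two identical 16-byte plaintext blocks encrypt to byte-for-byte identical ciphertext blocks under AES-128-ECB, which is exactly the weakness quoted above.

        /* Show that AES-128-ECB maps identical plaintext blocks to identical
         * ciphertext blocks.  Uses OpenSSL's EVP interface. */
        #include <openssl/evp.h>
        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            unsigned char key[16] = "0123456789abcde";  /* 16-byte demo key */
            unsigned char pt[32], ct[32];
            int len = 0;

            memset(pt, 'A', sizeof(pt));                /* two identical blocks */

            EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
            EVP_EncryptInit_ex(ctx, EVP_aes_128_ecb(), NULL, key, NULL);
            EVP_CIPHER_CTX_set_padding(ctx, 0);         /* whole blocks only */
            EVP_EncryptUpdate(ctx, ct, &len, pt, (int)sizeof(pt));
            EVP_CIPHER_CTX_free(ctx);

            /* Ciphertext blocks 0 and 1 come out identical. */
            printf("blocks equal: %s\n",
                   memcmp(ct, ct + 16, 16) == 0 ? "yes" : "no");
            return 0;
        }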

  • Why not OpenCL? (Score:4, Interesting)

    by gerddie ( 173963 ) on Friday May 06, 2011 @06:09PM (#36052388)
    They should go with OpenCL; then there would be a chance that at some point one could use it with free drivers (and other hardware), but I guess that's the price you pay for a graduate fellowship from NVIDIA.
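    For comparison, the host side of the OpenCL route looks like the sketch below; these are real OpenCL C API calls (compile with -lOpenCL), trimmed of most error handling, and the point is that any vendor's driver can answer the query:

        /* Enumerate the first OpenCL platform and GPU device and print its name. */
        #include <stdio.h>
        #include <CL/cl.h>

        int main(void)
        {
            cl_platform_id platform;
            cl_device_id device;
            char name[128];

            if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS ||
                clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) != CL_SUCCESS) {
                fprintf(stderr, "no OpenCL GPU found\n");
                return 1;
            }
            clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
            printf("GPU device: %s\n", name);
            return 0;
        }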
