
SRI/Cambridge Opens CHERI Secure Processor Design

An anonymous reader writes with some exciting news from the world of processor design: Robert Watson at Cambridge (author of Capsicum) has written a blog post on SRI/Cambridge's recent open sourcing of the hardware and software for the DARPA-sponsored CHERI processor, including laser cutting directions for an FPGA-based tablet! Described in their paper The CHERI Capability Model: Revisiting RISC in an Age of Risk, CHERI is a 64-bit RISC processor able to boot and run FreeBSD and open-source applications, while adding a Clang/LLVM-managed, fine-grained, capability-based memory-protection model within each UNIX process. Drawing on ideas from Capsicum, it also supports fine-grained in-process sandboxing using capabilities. The conference talk was presented on a CHERI tablet running CheriBSD, with a video of the talk by student Jonathan Woodruff (slides).

Although CHERI is based on the 64-bit MIPS ISA, the authors suggest that the approach would also be usable with other RISC ISAs such as RISC-V and ARMv8. The paper compares the approach with several other research efforts and with Intel's forthcoming Memory Protection eXtensions (MPX), reporting favorable performance and stronger protection properties.
The processor "source code" (written in Bluespec SystemVerilog) is available under a variant of the Apache license (modified for application to hardware). Update: 07/16 20:53 GMT by U L : If you have any questions about the project, regular Slashdot contributor TheRaven64 is one of the authors of the paper, and is answering questions.

  • by Anonymous Coward on Wednesday July 16, 2014 @09:06AM (#47465847)

    Capabilities are simpler than "simple" paged memory management. Every pointer carries permissions, enforced in hardware, instead of requiring several layers of software to perform their own ownership management.

  • by TheRaven64 ( 641858 ) on Wednesday July 16, 2014 @11:02AM (#47466733) Journal
To be fair to the submitter, Theo did some amazing work with the laser cutter to produce the tablets. Mine doesn't have the SRI and Cambridge logos etched into the front (or a battery, actually - it's one of the first models) but it's still very nice. Not really competitive with the iPad, but definitely something we can plausibly use as a prototype.
  • Any questions? (Score:5, Informative)

    by TheRaven64 ( 641858 ) on Wednesday July 16, 2014 @01:21PM (#47468085) Journal
    I'm one of the authors of the paper (I did the LLVM support, some of the ISA design and the hardware implementation, and a token amount of OS work), so I'm happy to answer questions about it.
  • by TheRaven64 ( 641858 ) on Wednesday July 16, 2014 @02:42PM (#47468895) Journal
The processor is implemented as a softcore in Bluespec SystemVerilog, which is a high-level hardware description language (HDL). The source code can be compiled to C for simulation, so you can run it on a general-purpose CPU; we get around 30K instructions per second doing this. It can also be compiled to Verilog and then synthesized into gate layouts for an FPGA. We can run at 100MHz (probably 125MHz, but we don't push it) in an Altera Stratix IV FPGA, with around 1 instruction per clock (a bit less), so around 3000 times faster than simulation.

In theory, you could also take the Verilog and generate a custom chip. In practice, you wouldn't want to without some tweaking. For example, our TLB design is based on the assumption that TCAMs are very expensive but RAM is very cheap. This is true in an FPGA, but is completely untrue if you were fabbing a custom chip.

    Although we use the term 'source code', it's perhaps better to think of it as the code for a program that produces a design of a processor, rather than the source code for a processor.

In terms of software patents, there's some annoying precedent that a software implementation of an architectural patent can be infringing. The MIPS architecture that we implement has LWL and LWR instructions (with SWL and SWR counterparts for stores) that accelerate unaligned loads and stores. These were patented (the patents have now expired), and the owners of the patent won against someone who created a MIPS implementation where these instructions caused illegal-instruction traps and were emulated in software. The software implementations were found to infringe the hardware patent.

  • by TheRaven64 ( 641858 ) on Wednesday July 16, 2014 @03:21PM (#47469335) Journal

There's a little bit of comparison in the paper to the Burroughs architecture, which was one of the forerunners of the Unisys architecture. I'm not overly familiar with the later Unisys MCP, so this may be wrong:

Our approach was explicitly intended to work with languages that are not memory safe (i.e. C and friends). If you have a memory-safe language, then there is some cost associated with enforcing the memory safety in software, which CHERI can assist with, but you don't see much of a win.

    As soon as you start mixing languages, you get the worst of all worlds. A typical Android application is written in C and Java (and some other things) and so gets all of the memory safety of C, plus all of the performance of Java. A single pointer error in the C code can corrupt the Java heap. One of my students last year implemented a modified JNI on CHERI that allows this sort of mixing but without the disadvantages. Java references are passed to C code as sealed capabilities, so the C code can pass them back to the JVM, but can't dereference them. The C code itself runs in a sandbox (C0 - the capability register that identifies the region of memory that normal MIPS loads and stores can use - is restricted to a smallish subset of the total [virtual] address space) so any pointer errors inside the C are constrained to only touch the C heap (of that particular sandbox - there can be a lot). He demonstrated running buggy and known-exploitable code in the JNI, without it compromising the Java code, with a very small overhead for calling in and out of the sandbox. Most interestingly, he was also able to enforce the Java security model for native code: the sandboxed code couldn't make system calls directly and had to call back into the currently active Java security monitor to determine whether it was allowed to.

    Another of my students implemented an accurate (copying, generational) garbage collector for capabilities. This can be used with C code, as long as the allocator is outside of the normal C0-defined address space (otherwise pointers can leak as integers into other variables and be reconstructed). In particular, you can use this to track references to high-level language objects as they flow around C code and either invalidate them or treat them as GC roots, so you don't get dangling pointer errors. Or you can just use his allocator in C and have fully GC'd C code...

    My understanding of MCP is that it uses high-level languages in the kernel, but does nothing to protect (for example) typesafe ALGOL code from buggy C code within the same userspace process.

  • Re:Any questions? (Score:5, Informative)

    by TheRaven64 ( 641858 ) on Wednesday July 16, 2014 @03:25PM (#47469365) Journal
We hope so too. One of the things on my short-term to-do list is to define a set of requirements for our extensions to C so that we can have source compatibility across different CPUs implementing similar ISA extensions. Of course, this is (optimistically) assuming that other CPUs are going to provide similar extensions...
  • by TheRaven64 ( 641858 ) on Wednesday July 16, 2014 @03:45PM (#47469591) Journal

    Currently, Ohloh.net is aware of around 11 billion lines of C/C++ code. I'm including C++ because, while it's possible to write memory-safe C++, all of the evil pointer tricks that are permitted in C are also allowed in C++. Rewriting 11 billion lines of code, even in a language that is 100 times more expressive than C, would be a daunting task. Note that Ohloh only tracks open source code in public repositories: there's a lot more that's in private codebases.

    For a fully memory-safe language, you need things like bounds checks on array accesses. Some research from IBM about 10 years ago showed that, with JVMs of the time, the cost of bounds checks accounted for the vast majority of the difference in performance between C++ and Java implementations of the same algorithms. They did some nice work on abstractions that allowed most bounds checks to be statically elided, but their approach never made it into the language. The overhead is somewhat hidden by superscalar architectures (the bounds check is done in parallel and the branch is predicted not-taken), but not on low-power in-order chips like the Cortex A7 that you'll find in a lot of modern mobile phones.

  • by TheRaven64 ( 641858 ) on Wednesday July 16, 2014 @04:16PM (#47469899) Journal

    I appreciate your comments, but would like to add that it appears to me that correctness+security has by now become more important for applied computer science than processing throughput.

    Yes, that's one of the motivations for our work. Now seems to be the time when it's possible to persuade CPU vendors that dedicating a little bit more hardware to security is valuable. It isn't quite the dichotomy that you describe though, because you can either implement security features in software or hardware and doing so in hardware can mean that you see a performance improvement for the sorts of use cases that people increasingly care about.
