Open Source Hardware

Rediscovering RISC-V: Apple M1 Sparks Renewed Interest in Non-x86 Architecture (zdnet.com)

"With the runaway success of the new ARM-based M1 Macs, non-x86 architectures are getting their closeup," explains a new article at ZDNet.

"RISC-V is getting the most attention from system designers looking to horn-in on Apple's recipe for high performance. Here's why..." RISC-V is, like x86 and ARM, an instruction set architecture (ISA). Unlike x86 and ARM, it is a free and open standard that anyone can use without getting locked into someone else's processor designs or paying costly license fees...

Reaching the end of Moore's Law, we can't just cram more transistors on a chip. Instead, as Apple's A and M series processors show, adding specialized co-processors — for codecs, encryption, AI — to fast general-purpose RISC CPUs can offer stunning application performance and power efficiency. But a proprietary ISA, like ARM, is expensive. Worse, they typically only allow you to use that ISA's hardware designs, unless, of course, you're one of the large companies — like Apple — that can afford a top-tier license and a design team to exploit it. A canned design means architects can't specify tweaks that cut costs and improve performance. An open and free ISA, like RISC-V, eliminates a lot of this cost, giving small companies the ability to optimize their hardware for their applications. As we move intelligence into ever more cost-sensitive applications, using processors that cost a dollar or less, the need for application and cost-optimized processors is greater than ever...

While open operating systems, like Linux, get a lot of attention, ISAs are an even longer-lived foundational technology. The x86 ISA dates back 50 years and today exists as a layer that gets translated to a simpler — and faster — underlying hardware architecture. (I suspect this fact is key to the success of the macOS Rosetta 2 translation from x86 code to Apple's M1 code.)

Of course, an open ISA is only part of the solution. Free standard hardware designs — with tools to design more — and smart compilers to generate optimized code are vital. That larger project is what Berkeley's Adept Lab is working on. As computing continues to permeate civilization, the cost of sub-optimal infrastructure will continue to rise.

Optimizing for efficiency, long-life, and broad application is vital for humanity's progress in a cyber-enabled world.

One RISC-V feature highlighted by the article: 128-bit addressing (in addition to 32 and 64 bit).
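
For a rough sense of scale, here is a quick arithmetic sketch in Python comparing the three address widths (the byte counts are exact powers of two; the human-readable labels in the comments are approximate):

    # Size of a flat, byte-addressable space for each address width.
    for bits in (32, 64, 128):
        total_bytes = 2 ** bits
        print(f"{bits:>3}-bit: 2^{bits} = {float(total_bytes):.3e} bytes")

    # 32-bit:  ~4.3e9 bytes   (about 4 GiB)
    # 64-bit:  ~1.8e19 bytes  (about 16 EiB)
    # 128-bit: ~3.4e38 bytes  (far beyond anything physically buildable)
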
  • by BAReFO0t ( 6240524 ) on Sunday January 10, 2021 @03:44PM (#60921796)

    is TSMC-level fabbing capabilities for every makerspace and home.

    It would also mostly solve the security concerns with current hardware manufacturer oligarchies.

    I just don't know how to get that earlier than a pony and my own spaceship ...

    • Why can't we have a multi core cpu that has it all? Or just a motherboard that supports multiple cpu platforms? In this day and age, it seems silly these old discussions are coming back. It's like the fashion industry. Everything old is new again!
      • by Entrope ( 68843 ) on Sunday January 10, 2021 @04:12PM (#60921940) Homepage

        Define "all" first. How many cores do you need? How much cache? Do you need accelerators for X, Y and Z -- presumably including 3D rendering and neural networks? How much memory bandwidth? How many PCIe lanes? Ethernet or wireless interfaces? How many integrated graphics interfaces, and do they need to support 4K and/or 8K?

        All of these drive package size and power dissipation, and several drive pinout as well. Different people have different answers for those questions, which is why we cannot have a single platform or CPU "that has it all". Some might have reasonable answers for most desktop platforms -- for example, putting network, audio and graphics interfaces on the far side of one or more PCIe interfaces -- but embedded and very low-power systems prefer most of those functions to be integrated with the main system-on-chip.

        • by AmiMoJo ( 196126 ) on Sunday January 10, 2021 @05:18PM (#60922204) Homepage Journal

          AMD and Intel have both bought FPGA manufacturers. I'm guessing soon we will see CPUs with large reconfigurable parts, switchable depending on the application.

          • by Entrope ( 68843 )

            For what kind of applications? I don't see any "killer app" for FPGA-like blocks in a mobile or desktop environment: it takes time and power to reconfigure the fabric, and it needs to be a certain size to offload much from the CPU, but not so large that it displaces gates that are more generally useful. In a data center environment, HPC and security providers can spend the effort to target that kind of reconfigurable fabric, but that still does not make a majority of server type CPUs. High-frequency trad

            • by ShanghaiBill ( 739463 ) on Sunday January 10, 2021 @05:42PM (#60922338)

              For what kind of applications? I don't see any "killer app" for FPGA-like blocks in a mobile or desktop environment

              1. Crypto
              2. DSP
              3. Real-time signal handling
              4. Audio codecs
              5. Pseudo-random number generation
              6. Compression/decompression
              7. Computer vision
              8. AI

              Build it, and they will come.

              • by Entrope ( 68843 ) on Sunday January 10, 2021 @05:53PM (#60922388) Homepage

                The cost/benefit tradeoff typically favors CPUs over embedded FPGA logic for 1, some of 2, 4, 5, 6, except in a few crypto cases that I mentioned above. GPUs are usually better than FPGAs for the rest of 2, 7 and 8. 3 falls into the glue logic case I mentioned above: Unless you have the kind of embedded system with GPIOs going almost straight to the SoC, the CPU doesn't see the signals you want to process with such low latency. Usually, something like an ARM Cortex R-series core gets added instead.

                When very heavy crypto, DSP, or machine learning (vision or AI) is in order, so is a dedicated accelerator card.

                • The cost/benefit tradeoff typically favors CPUs over embedded FPGA logic for 1, some of 2, 4, 5, 6

                  Err what? You realise that for most of those, CPU vendors are baking in dedicated hardware modules right now that are limited in scope? Ever wonder why plugins like h264ify exist? Because having the algorithms you listed running on the CPU is very intensive, and having them baked into silicon means they can't meet changing demands such as new codecs or new crypto algorithms.

                  You may not see an application, but I'll trust both AMD and Intel that they had an application in mind before they parted with $35billio

              • For what kind of applications? I don't see any "killer app" for FPGA-like blocks in a mobile or desktop environment

                1. Crypto
                2. DSP
                3. Real-time signal handling
                4. Audio codecs
                5. Pseudo-random number generation
                6. Compression/decompression

                Your current CPU already has all of that built in and it's faster than an FPGA would do it.
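
                On Linux you can see which of those fixed-function blocks your CPU actually advertises by reading its feature flags; a minimal sketch (assumes /proc/cpuinfo exists, the flag names are the common x86 ones, and the selection is just for illustration):

                    # Look for a few fixed-function features in /proc/cpuinfo.
                    # x86 kernels list them under "flags"; ARM kernels use "Features".
                    interesting = {
                        "aes":    "AES-NI (crypto)",
                        "sha_ni": "SHA extensions (hashing)",
                        "rdrand": "hardware random numbers",
                        "avx2":   "wide SIMD (DSP/vision-style workloads)",
                    }

                    flags = set()
                    with open("/proc/cpuinfo") as f:
                        for line in f:
                            if line.startswith(("flags", "Features")):
                                flags = set(line.split(":", 1)[1].split())
                                break

                    for flag, what in interesting.items():
                        print(f"{flag:8s} {'yes' if flag in flags else 'no':4s} {what}")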

          • by tigersha ( 151319 ) on Monday January 11, 2021 @06:33AM (#60924498) Homepage

            I spent the Christmas holiday lockdown buying a few entry-level FPGAs and playing with them. A Spartan-6, two Artix-7s, an Altera something, and a few small Lattice iCE devices. All very interesting, but frankly I see very few applications for me in this except out of pure back-to-the-basics interest. I did manage to make my own CPU, which is really cool, but also much easier than I thought.

            I learnt a few lessons:
            a) I was interested in the whole chain of trust. We trust our apps, which trust the compiler and the OS and the network, which trusts the hardware, which trusts the fab, and so on and so on. At the basic level it turned out that we trust an extremely complex tool to do chip layout: Vivado/ISE/Quartus. These things weigh in at 20GB and take minutes to do simple gates, never mind CPUs. On my 10-core Xeon. So the bottom layer of trust is rather more complicated than the top, which I find interesting.

            b) Modern hardware languages (Verilog/VHDL/Lucid, etc.) are truly cool things. I was expecting something else, and learned a lot.

            c) The lower decks of the whole stack are much harder than the top, and logic design is hard. But when you abstract the controlling part of a logic design into a CPU it suddenly becomes manageable. So next project I will be back to my trusty nRF52s and Cortex-M4 and looking at RISC-V. FPGAs (and GPUs) are good at making the 5% that is truly difficult much faster, but for the other 95% a simple CPU is fine.

            d) For some reason FPGAs are frikking expensive; you can get way more bang for your buck with a microcontroller. Except for that difficult 5%.

            e) I now think people who design Floating Point chips are gods.

            f) The whole idea of designing your own SoC and downloading and speccing a CPU of your choice and burning everything onto a chip directly is very cute. You can actually download several different RISC-V, MicroBlaze, or Cortex-M0 cores and just use them as pieces of a design. Or make your own CPU.

            g) It is really cool to be able to actually design your own VGA card and get results on a screen.
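
            To give a flavour of (g): the classic 640x480@60Hz VGA mode is mostly timing arithmetic. A quick Python sketch using the commonly published sync and porch values:

                # 640x480@60Hz VGA: visible pixels plus front porch, sync pulse
                # and back porch, horizontally and vertically.
                h_visible, h_front, h_sync, h_back = 640, 16, 96, 48
                v_visible, v_front, v_sync, v_back = 480, 10, 2, 33

                h_total = h_visible + h_front + h_sync + h_back   # 800 clocks per line
                v_total = v_visible + v_front + v_sync + v_back   # 525 lines per frame

                pixel_clock = h_total * v_total * 60              # ~25.2 MHz
                print(f"{h_total} x {v_total}, pixel clock ~{pixel_clock / 1e6:.2f} MHz")
                # The canonical VGA clock is 25.175 MHz, which works out to ~59.94 Hz.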

            For an intro to the field, look at a Digilent Basys-7, or wait for SparkFun to bring out the new Au+. The original Alchitry Au has a great UI, but is limited if you do not have the IO boards, and the peripherals on it are very limited.

            There is also Spartan-based stuff (Mojo, Elbert/Mimas), but those boards are rather limited and the tools are old. Make sure your entry-level FPGA has a lot of IO ports or Digilent PMOD sockets. Another failure on the Alchitry side: no PMOD and limited IO. I hope SparkFun does a somewhat better job with the Au+.

            As for software, the OSS options are limited. There are free versions of the proprietary tools, but they take 50GB (seriously) to install. The OSS stuff sort of works, but your best bet is the Alchitry Au/Cu, which has a simple UI that uses Vivado's command line to compile, and they have a good intro text.

            If you must hobble yourself with an OSS toolchain, look at the Alhambra II, which has a Lattice chip on board but is rather limited. There are lots of OSS cores out there, but the tools to compile them are not.

            As for books, I can definitely recommend Designing Video Game Hardware in Verilog by Steven Hugg. It has a website called 8bitworkshop attached which is really, really sweet for getting your feet wet. I know there are others out there too; this is a really deep rabbit hole.

            Next step will be to look at ARM/FPGA combo, Zynq in particular.

            I can definitely recommend the experience to any hacker.

      • Why can't we have a multi core cpu that has it all?

        Sounds like what you want is an FPGA.

        You might want to check out the MiSTer project [github.com].

      • by Stormwatch ( 703920 ) <(rodrigogirao) (at) (hotmail.com)> on Sunday January 10, 2021 @05:31PM (#60922290) Homepage

        Or just a motherboard that supports multiple cpu platforms?

        Back in the 90s, Panda Project's Archistrat [google.com] let you use x86, PowerPC, or Alpha.

    • is TSMC-level fabbing capabilities for every makerspace and home.

      A modern fab costs about $12 billion. That is out of reach for most families.

      Maybe in a few decades.

      In the meantime, families can sit around the dinner table and design FPGA bitstreams.

  • by Narcocide ( 102829 ) on Sunday January 10, 2021 @03:46PM (#60921802) Homepage

    ... the post-Intel computing world wakes up and rediscovers the architectural secrets they buried with Commodore's murdered corpse 30 years ago, touting them like revolutionary breakthroughs.

    • Which "architectural secrets" are you talking about? Surely can't be the CISC-based 68K Amiga line?
      • Which "architectural secrets" are you talking about? Surely can't be the CISC-based 68K Amiga line?

        Case in point. Those who thought a computer was just a CPU and RAM never got the Amiga. The Amiga had a chipset for hardware acceleration of all sorts of functions, such that it took ghoulishly stagnant management holding the engineering team back for years before PCs caught up.

        The Apple M1 architecture is the closest thing to the Amiga hardware philosophy to appear in recent years. I would love to see a RISC-V counterpart!

    • I think that reports of Intel x86's death are greatly exaggerated.
    • Re:Interesting... (Score:4, Insightful)

      by Luthair ( 847766 ) on Sunday January 10, 2021 @05:57PM (#60922406)

      These things are already in every PC you own. Your CPU has hardware encryption support, your GPU co-processor already supports many codecs, etc. etc.

      The actual article here is a clueless tech reporter regurgitating an Apple press release; turns out they invented everything.

      • Re:Interesting... (Score:4, Informative)

        by serviscope_minor ( 664417 ) on Sunday January 10, 2021 @06:50PM (#60922620) Journal

        The actual article here is a clueless tech reporter regurgitating an Apple press release; turns out they invented everything.

        JFC, right? I mean ARM has been a massive player in the embedded space, in everything above the ultra low end right up to the high end, slowly pushing out most other architectures, and completely and utterly dominant in the mobile space. It's also branched out into replacing things like SPARC64, and currently runs the #1 supercomputer.

        But oh, now that Apple has released the M1, suddenly people are looking at ARM. WTF.

  • by puddingebola ( 2036796 ) on Sunday January 10, 2021 @03:46PM (#60921806) Journal
    Microsoft's attempts to get an ARM version of Windows adopted have gone nowhere. Linux, running on every architecture available and unavailable, continues to grow. Could we one day see a PC industry dominated by an open hardware architecture coupled with an open software architecture? Could it be true?
    • Windows on ARM is just starting out. With Intel on the defensive, Microsoft planning to add (probably pretty crappy) x86 emulation, and Windows on ARM being the only Windows that runs on Macs (in a VM), it can still have a future.
      • Windows has been running on ARM since Windows 8. Remember Windows RT? x86 emulation on the Apple ARM side is pretty decent as they baked accelerated instructions into their ARM implementation. Microsoft's is all-software. The only ace they might have up their sleeve might be a WINE style emulation layer, with optimized x86/ARM translation libraries, but that only goes so far.

    • by dfghjk ( 711126 )

      By every measure that is already true, and the very mediocrity that results creates the opportunity for one company to "do it all" and advertise that they are better in every way, whether they are or not.

    • Microsoft's attempts to get an ARM version of Windows adopted have gone nowhere.

      I wonder why the ARM-based Surface running Windows couldn't do what the M1 MacBook Air has done? I thought Microsoft had Rosetta technologies as well.

      Apple's relatively seamless shift to the new architecture, as far as the applications you already have are concerned, is a big reason it was so well accepted, even beyond the performance boost...

      But the Windows ARM Surface also didn't seem to have the performance boost the Apple M1 chip is delivering.

      • Microsoft is a relative newcomer to ARM. Apple has history with ARM dating back to the Newton MessagePad line, and has been designing ARM based chips for over a decade between iPhone, iPad, and AppleTV. Microsoft co-developed a chip with Qualcomm and has little other history in the space.
        • Microsoft is a relative newcomer to ARM.

          How so? I remember Windows CE For Pocket PC, Windows Mobile 6, Windows Phone 7, Windows Phone 8, Windows RT, Windows Mobile 10... Granted, there were a lot of missteps, such as WP7 apps being .NET Compact Framework-only, and Win32 being inaccessible from even recompiled Windows RT applications.

    • Comment removed (Score:5, Insightful)

      by account_deleted ( 4530225 ) on Sunday January 10, 2021 @05:46PM (#60922356)
      Comment removed based on user account deletion
      • by caseih ( 160668 )

        Mod up. Exactly right. I get flak whenever I post about how ARM is a mess of incompatible standards and how frustrating it is. Sure, Ubuntu supports the Pi officially. But Ubuntu for the Pi cannot boot on a Pine64, for example.

        I really want to love laptops like the PineBook Pro. But I find the experience of getting my distro of choice up and running, as well as supporting even basic GPU acceleration to be very frustrating.

      • Microsoft DID ship a PowerPC (and Alpha and MIPS) version of Windows NT back in the day, over 20 years ago. No demand.

    • by caseih ( 160668 )

      Problem is MS doesn't really want to support ARM. You can only get a Windows-on-ARM license for particular devices. Well, that and you have the same issues that hold back Linux on ARM: ARM devices are all different, with different boot loaders and incompatible configurations. If there were a standard ARM configuration, much like how we have UEFI defining a standard boot platform, and if MS supported it with Windows, then I think ARM would start to make real inroads. And Linux support on ARM would be

      • UEFI is defined for Arm's Server Base Boot Requirements (SBBR), but obviously not for 'embedded' devices such as phones, which seem to have their own proprietary bootloaders to bootstrap Android.

        However, Qualcomm does boot UEFI internally, to support Windows. https://worthdoingbadly.com/qc... [worthdoingbadly.com]

        AFAIK most SBCs have u-boot handing off to Linux to load a complete device tree.

  • A question: What value does a particular ISA provide?

    Is it mostly compatibility (i.e. the ISA is nothing special, but it's important to be compatible with everyone else)?

    Or is there real value in the design of the ARM or x86 ISA that makes them valuable outside of simple compatibility?

    In other words, if you are designing a chip from scratch, outside of compatibility, is there any reason to use the ARM or x86 ISA?

    • by sphealey ( 2855 ) on Sunday January 10, 2021 @03:53PM (#60921848)

      The 8088 was a hack on top of a hack (the 8086 was intended as a stopgap to the iAPX architecture). Everyone in the industry at the time was looking at Motorola's move from the 6800 to the 68000, and National Semiconductor's tease of the forthcoming 32032. Then along comes IBM with the 8088-based IBM PC (reputedly because the 68000 was $5 too expensive) and the human race has been stuck with every single weakness and deficiency of that chip since. I have heard it called the single most expensive mistake in human history and I have a hard time disagreeing.

      So yes, something not tied to a 37-year-old mistake would be good.

      • by HanzoSpam ( 713251 ) on Sunday January 10, 2021 @03:59PM (#60921874)

        Actually IBM went with Intel because Motorola couldn't produce the 68000 at the required volumes soon enough, and IBM didn't want to wait to introduce the PC.

        • by dfghjk ( 711126 )

          Correct answer, and the choice to wait may have been very costly. The IBM PC ultimately democratized computing.

      • The generalized use of 32-bit-only IPv4 addresses has probably been more expensive than x86. After all, x86 is nowadays a small part in the processor frontend that converts ISA instructions to micro-operations (as also occurs in other architectures).

      • by AmiMoJo ( 196126 ) on Sunday January 10, 2021 @05:16PM (#60922198) Homepage Journal

        As much as I despise x86 (I was all about 68k), modern x86 CPUs have little relation to the 8086.

        AMD64 was a big departure and clean-up. The older crud that is still supported is mostly handled via exception-handling ROM code. Very slow, but it's only old apps designed for slow CPUs that use it anyway.

        What's left is decently high-density instructions that were designed in an age when out-of-order execution and all the other tricks were well established.

        If you look at the M1, one thing that stands out is the massive amount of cache. ARM code isn't very dense. That creates pressure on memory, so you need a massive cache and a long pipeline to get good performance.

        It remains to be seen if this is going to keep performance behind AMD64 longer term, but my guess would be it will. Power efficiency is less clear.

        • by Bert64 ( 520050 )

          The code density difference between arm64 and amd64 is actually very small:
          http://web.eece.maine.edu/~vwe... [maine.edu]

          On the other hand, support for 32-bit ARM code is optional (Apple doesn't support 32-bit ARM code on their newer chips), and the fixed-length instructions provide benefits in terms of scheduling and cache efficiency (i.e. you know exactly how many instructions are going to fit into the cache, etc.).

      • by dfghjk ( 711126 ) on Sunday January 10, 2021 @05:31PM (#60922286)

        "I have heard it called the single most expensive mistake in human history and I have a hard time disagreeing."

        Funny, I don't have a hard time disagreeing at all. There is absolutely no argument to support this claim.

        In what possible alternative universe would there be a cost structure that could demonstrate an "expensive mistake" or even that x86 was ultimately a mistake at all? IBM didn't choose x86 because of $5, it chose x86 because Motorola wouldn't commit to the volume that IBM projected and ultimately required. Quite the opposite, the choice of 68K, had it been made, might have been the "most expensive mistake in human history".

        Who did you "hear" this ignorant comment from, your brother?

      • Comment removed (Score:5, Interesting)

        by account_deleted ( 4530225 ) on Sunday January 10, 2021 @06:12PM (#60922480)
        Comment removed based on user account deletion
      • So yes, something not tied to a 37-year-old mistake would be good.

        Nothing is tied to a 37-year-old mistake. It was a mistake in its day, but the modern CPU is nothing like the one you are comparing it to. The only thing it shares in common is *some* of the instructions. Underneath they aren't remotely similar anymore; hell, technically a modern x86 chip has more in common with the RISC-based CPUs of yesteryear.

      • by gweihir ( 88907 )

        That was indeed a great tragedy. I was pretty good with 68k assembler, which has a very nice and clean design. (x86 is a horrible mess...)
        To be fair to IBM, they expected to sell 50k units or so, not to define the next industry standard.

    • by Entrope ( 68843 ) on Sunday January 10, 2021 @04:25PM (#60922000) Homepage

      The x86 ISA mostly has longevity and compatibility as draws. The ARM ISA has compatibility, particularly breadth of use, and RISC nature. The RISC-V ISA has RISC nature (with slightly more regular decoding rules than ARM), free licensing, and a broad development community.

      If you are designing a chip from scratch, stop. Go find five experienced chip designers to explain why that is a terrible idea. If you still think you need to, RISC-V is probably as good an ISA as any, unless you are AMD, Apple or Intel.

      In previous analyses, some of the M1's strong performance -- especially performance per watt -- seems to come from being able to simplify the instruction decoders relative to x86, allowing more resources to be used for other purposes and increasing IPC. RISC-V should share that. But a lot of the M1's strong performance also comes from deep familiarity with software behavior and long practice tuning CPU details, and Apple probably has more of that than the typical RISC-V designer. For example, cache sizes and ways have a huge impact on performance, as do the number and types of execution units, branch predictor logic, retire buffer size, and more.

    • In other words, if you are designing a chip from scratch, outside of compatibility, is there any reason to use the ARM or x86 ISA?

      The ARM ISA was designed from the ground up to be high performance at low power. Specifically, it was designed to be cheap: they wanted a fast chip with a small die area and low heat dissipation. The latter was especially important since it allowed them to use cheap plastic packages rather than expensive ceramic ones, which was a big deal cost-wise.

      So you can implement the arm ISA with not much

    • by Bert64 ( 520050 )

      It's purely about compatibility... Retaining compatibility with the ISA is a significant burden when trying to create improved designs.

      For most other hardware, you have an abstraction layer (ie drivers) so that the hardware itself can be totally different while exposing a standard interface to the software.
      This is in contrast to the problems that tripped Commodore up with the Amiga, where there was often no abstraction layer with code written to directly target the custom chips. Having to retain compatibili

  • What is there to rediscover? What does TFA's author think is in most cellphones? A Pentium?

    There is no renewed interest. RISC is known, recognized for what it's worth, and widely used. Just not on desktop or laptop machines.

    • by dfghjk ( 711126 )

      Right, and the very comment demonstrates a complete ignorance of the vast majority of all processor applications.

  • by raymorris ( 2726007 ) on Sunday January 10, 2021 @03:52PM (#60921844) Journal

    > One RISC-V feature highlighted by the article: 128-bit addressing

    For when the 8 million terabytes of memory you address with 64 bits isn't enough.

    • Back in the day I wrote a monitor server in C++, covering our DOS, UNIX and Netware systems. The Netware admin was impressed and asked for a copy. I gave him a floppy and he looked at the 50KB executable. He was astonished at the size and seemed to wonder where the rest of it was.

      Moral: 640KB is enough memory for anyone.

    • You jest, but the clowns who developed IPv6 thought 128-bit addressing was a great idea even though it's 10^20 times the number of grains of sand on Earth. There's future-proofing and then there's just dumb.

      • by Entrope ( 68843 ) on Sunday January 10, 2021 @05:41PM (#60922332) Homepage

        IPv6 does not have 128-bit addresses because anyone thought we need that many endpoint addresses. It is meant to make routing lookups faster and the tables often smaller. With IPv4, there are not enough addresses, so they have to be allocated efficiently. In practice, that means somewhat sequential assignment as blocks are needed, scattering address prefixes across regions. With IPv6, there is no need to pack millions of unrelated network prefixes into a small space, so addresses can better follow network topology.
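
        The aggregation effect is easy to demonstrate with Python's ipaddress module; a small sketch (the prefixes below are documentation/example ranges, picked purely for illustration):

            import ipaddress

            # IPv4-style reality: a provider accumulates scattered, unrelated blocks,
            # and each one needs its own entry in everyone's routing tables.
            scattered = [ipaddress.ip_network(p)
                         for p in ("203.0.113.0/24", "198.51.100.0/24", "192.0.2.0/24")]
            print(len(list(ipaddress.collapse_addresses(scattered))), "routes")  # -> 3

            # IPv6-style allocation: customer prefixes are carved out of one big block,
            # so they collapse back into a single aggregate and the rest of the world
            # only needs one route for the whole provider.
            aggregate = ipaddress.ip_network("2001:db8::/32")
            pieces = list(aggregate.subnets(new_prefix=34))      # four /34 chunks
            print(len(list(ipaddress.collapse_addresses(pieces))), "route")      # -> 1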

      • by raymorris ( 2726007 ) on Sunday January 10, 2021 @05:42PM (#60922334) Journal

        I'm one of the "clowns" who pushed for 128-bit rather than 64-bit for IPv6 in 1998. Sometimes I wish I had lost that argument.

        To understand the reasoning: at the time we were having real problems because of 32-bit limitations - not just in IP, but disk sizes, memory, all over the place. People had thought that going from 16-bit numbers everywhere to 32-bit would get rid of all the problems, that 32-bit would be plenty for everything. Two GIGAbytes had been a ridiculously large number at the time 32-bit was chosen.

        At the same time, the 512-byte BIOS boot was a problem. We had problems all over the place because people before us had thought that X bits would be plenty, and it turned out not to be.

        Sure, we thought 64 bits would be enough. Just like the older members of the committee had thought 32 bits would be enough. We had enough humility to realize we couldn't predict the future, though, and figured we had better make damn sure there are enough bits forever.

        Also, the routing of 32-bit IPv4 is stupidly inefficient. By allocating more bits, we could have efficient allocation and routing, analogous to this:

        2001::1/18 Africa
        2001::2/18 Europe
        2001::4/18 North America
                            2004::1/24 Charter
                            2004::2/24 Verizon

        So you could route to the appropriate continent by looking at the first few bits, then the appropriate ISP in the next few bits, etc rather than having routing tables with millions of entries in every router.

        Note also that all the IPs issued are 2001:: IPs, so that reduces the size you actually have to pay attention to, while preserving the ability to use others 20 or 30 years from now if needed.

        That all would be good. What surprises me is that common software like Microsoft SQL Server *still* doesn't have a data type suitable for IP addresses, a 128-bit datatype. Some open source database systems support IPv6 IPs, of course, but lots of commonly used software doesn't. For THAT reason I kinda wish we had done 64-bit IPs, just because operating systems and applications natively handle 64-bit numbers now.
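
        Where the database has no native 128-bit type, the usual workaround is a 16-byte binary column (or two 64-bit halves, with signedness caveats); a minimal Python sketch of the round trip:

            import ipaddress

            addr = ipaddress.ip_address("2001:db8::dead:beef")

            # 16-byte form for a BINARY(16)-style column, or two 64-bit halves
            # for a pair of integer columns.
            packed = addr.packed                           # bytes, length 16
            hi, lo = int(addr) >> 64, int(addr) & (2**64 - 1)

            # Round-trip both representations back to the original address.
            assert ipaddress.ip_address(packed) == addr
            assert ipaddress.ip_address((hi << 64) | lo) == addr
            print(packed.hex(), hi, lo)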

        Of course, in 1998 I couldn't possibly know that 23 years later the standard would be 64-bit CPUs. We could all be using 256-bit CPUs in 2021 from what I knew then.

        • by Bert64 ( 520050 )

          Note also that all the IPs issued are 2001:: IPs, so that reduces the size you actually have to pay attention to, while preserving the ability to use others 20 or 30 years from now if needed.

          No they are not, 2001:: was just the first range to be allocated. There are now several additional ranges allocated, but you are right about them being regional - for instance 24xx:: is allocated to APNIC etc.

          That all would be good. What surprises me is that common software like Microsoft SQL Server *still* doesn't have a data type suitable for IP addresses, a 128-bit datatype. Some open source database systems support IPv6 IPs, of course, but lots of commonly used software doesn't. For THAT reason I kinda wish we had done 64-bit IPs, just because operating systems and applications natively handle 64-bit numbers now.

          Yeah, postgres does. I was tasked with adding ipv6 support to a postgres based application a few years ago, and the only thing stopping it was a piece of code on the frontend that checked for a valid ipv4 address before passing it to postgres.

          Another way to store ipv6 addresses is a combination of pref

          • > In 1998, 64bit processors had already been available in highend systems for several years (alpha, mips, hppa, power etc), and highend servers were already available with more ram than a 32bit process can support. It was inevitable even in 98 that 64bit was the future.

            As you pointed out, in 1998 64-bit was becoming the PRESENT.
            We were playing games in our Nintendo 64.

            Twenty-two years before that, 8-bit was all the rage.
            In 22 years, we'd gone from 8 bit to 32 and 64.
            It was hardly a foregone conclusion th

            • by Viol8 ( 599362 )

              After 64 bits it's a law of rapidly diminishing returns, which is why we're still using them. 64-bit ints are enough for 99.99% of maths work, and 64-bit addressing will be sufficient for any medium future timescale given the current progress in storage technology. All 128+ bits would achieve is huge binary sizes.

    • You could do it the AS/400 way where disks and memory are all just one big address space.

    • If I recall, the cache miss penalties for 128-bit swamp the performance gains, leading to a net decrease in total performance. This probably explains why manufacturers went with multi-core designs.

    • by Plammox ( 717738 )
      8 million terabytes of memory ought to be enough for everyone (cough).
    • For when the 8 million terabytes of memory you address with 64 bits isn't enough.

      If you have 128-bit addressing, then you don't need memory protection to protect processes from one another.

      Memory protection is done by simply picking a random offset in the 128-bit space. It will take you a universe lifetime squared to find another process's address space with reasonable probability.
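
      Some rough arithmetic behind that claim, assuming (purely for illustration) a 1 GiB target region and an attacker who can probe one candidate base address per nanosecond:

          # Brute-forcing a random placement in a 128-bit address space.
          space_bits   = 128
          target_bytes = 2 ** 30                           # assume a 1 GiB target
          placements   = 2 ** space_bits // target_bytes   # ~2^98 candidates
          rate_per_s   = 1e9                               # one probe per nanosecond

          seconds = placements / rate_per_s
          years   = seconds / (3600 * 24 * 365)
          print(f"worst case ~{years:.1e} years, expected ~{years / 2:.1e}")
          # -> on the order of 1e13 years, versus ~1.4e10 years since the Big Bang.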

  • by backslashdot ( 95548 ) on Sunday January 10, 2021 @03:58PM (#60921872)

    RISC-V processors allow proprietary extensions galore. What will happen if RISC-V takes off is that there will be fragmentation and architectures that are incompatible until one emerges as a standard. And of course you'll have to pay if you want to implement those extensions. Then we're back to where we stand with ARM.
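
    Those extensions show up directly in the ISA string a core advertises (device trees use the same format in the riscv,isa property); a small Python sketch of pulling one apart, with the string itself made up for illustration:

        # Split a RISC-V ISA string into base width, single-letter standard
        # extensions, and underscore-separated multi-letter extensions
        # (z* = standard sub-extensions, s* = supervisor, x* = vendor-specific).
        isa = "rv64imafdc_zicsr_zifencei_xvendorsecret"   # made-up example

        base, *multi = isa.lower().split("_")
        width = "".join(ch for ch in base[2:] if ch.isdigit())   # "64"
        letters = list(base[2 + len(width):])                    # i, m, a, f, d, c

        vendor = [ext for ext in multi if ext.startswith("x")]
        print(f"RV{width}, standard letters: {letters}")
        print("other extensions:", multi, "| vendor-specific:", vendor)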

    • I agree and what's more these cores will be licensed in a similar manner to ARM's licensing model. They will certainly not be free, maybe somewhat cheaper than ARM and I doubt they'll even be "open."
    • by evanh ( 627108 )

      Agreed, not adopting the GPL is a mistake.

      • by dfghjk ( 711126 )

        Perhaps they did not do so because they wanted designers to participate?

        Maybe you should start your own processor design and release it under the GPL, finally fix that mistake.

    • I agree. Having read through the whitepaper, it's my firm belief that RISC-V was designed to be a generic, cheap embedded processor, and there never will be a standard baseline.

      Also, RISC-V isn't really that good or interesting. It's very generic by nature, probably to avoid possible patent infringement. I find it unlikely it will be used in truly high-performance applications outside the embedded and "black box" market.

  • China, Iran, and North Korea are behind a big push to convince the world to move away from Intel and other western controlled monopolies. In most cases, monopolies are bad; however, CPU tech is very important to the security and safety of the free world. We need to do everything we can to make sure we do not fall into Xitler's trap.

    • by nagora ( 177841 )

      China, Iran, and North Korea are behind a big push to convince the world to move away from Intel and other western controlled monopolies. In most cases, monopolies are bad; however, CPU tech is very important to the security and safety of the free world.

      Which is why we shouldn't be using Intel.

    • Your post needed sarcasm tags.

      Superior tech is what to use and Intel are free to produce it.

    • All I heard is "something something tinfoil hat something".

      • Microprocessors are dual-use technology and have a big impact on the balance of military power in the world. Who do you want ruling the world, free societies of the western world, or an evil dictator that thinks concentration camps are acceptable tools to control political opposition?

  • by OrangeTide ( 124937 ) on Sunday January 10, 2021 @04:23PM (#60921990) Homepage Journal

    Or are desktop computers now less relevant?

    I think everyone interprets Apple's recent architectural switch according to their own bias.

  • What other arches need is something standardized like BIOS.

    SRSLY

  • "The x86 ISA dates back 50 years and today exists as a layer that gets translated to a simpler — and faster — underlying hardware architecture. (I suspect this fact is key to the success of the macOS Rosetta 2 translation from x86 code to Apple's M1 code.) "

    Sure, x86 dates back and modern ones have a "decode stage" that could be viewed as a "translator", but the decode stage does NOT translate instructions into a "simpler and faster underlying hardware architecture". Once again, we have a fanbo

    • Also, in what measure is the M1 a "runaway success"?

      It's a runaway success; Apple didn't even pay the author of the article to shill for them. They actually gave him nothing at all.

      The M1 isn't a runaway success, and the bullshit benchmarks during the review embargo are still bullshit.

      It's not competitive. You've got an extra thousand dollars for a GPU to compete with the M1's integrated crap, integrated crap that is still markedly and demonstrably worse than AMD's integrated crap unless you cherry-pick, and that better-performing AMD integrated crap only costs

  • Microsoft, IBM, Android, Intel, AMD: they all design products that can be used with everything else. Standardized.

    Apple designs software and they design hardware, and each is intimately designed for the other.

    Who else has that luxury? How powerful is this? Look at the results.

  • TFA is a bunch of hype terms interleaved with baloney. It fails to explain RISC-V, its use in the M1, and why that is so. And what does it have to do with ISAs, x86 and "rediscovery"?

    RISC-V is used in the M1 as a cheap means to implement a controller around task-specific hardware. It is efficient here as it is free (cheap in license cost) and small (cheap in silicon area), and hence also efficient (cheap in power).

  • Chuck Moore took this one step further by implementing the low level commands of a standardised high level language in silicon, on which the rest of the extendable instruction set is based.
