Hardware Science Technology

Scientist Uses Nanodots To Create 4Tb Storage Chip 207

arcticstoat writes "Solid state disks could soon catch up with mechanical hard drives in terms of cost and capacity, thanks to a new data-packed chip developed by a scientist at the University of North Carolina. Using a uniform array of 10nm nanodots, each of which represents a single bit, Dr. Jay Narayan created a data density of 1 terabit per square centimeter. The end result was a 4cm² chip that holds 4Tb of data (512GB), but the university says that the nanodots could have a diameter of just 6nm, enabling an even greater data density. The university explains that the nanodots are 'made of single, defect-free crystals, creating magnetic sensors that are integrated directly into a silicon electronic chip.' Dr. Narayan says he expects the technology to overtake traditional solid state disk technology within the next five years."
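The summary's capacity arithmetic checks out if the unit prefixes are read as binary (2^40 bits per terabit, 2^30 bytes per gigabyte), which is the reading under which 4Tb equals 512GB. A quick sketch:

```python
# 1 Tb per cm² over a 4 cm² chip, with "Tb" and "GB" read as binary
# prefixes (2**40 bits, 2**30 bytes) -- the interpretation under which
# the summary's 4Tb works out to 512GB.
bits_per_cm2 = 2**40
chip_area_cm2 = 4
total_bits = bits_per_cm2 * chip_area_cm2   # 4 Tb
total_gb = total_bits / 8 / 2**30           # bits -> bytes -> GB
print(total_gb)  # 512.0
```

With decimal prefixes instead, the same chip would come out to 500 GB, so the summary is being consistent in its own (binary) units.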
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • How long... (Score:5, Interesting)

    by Pojut ( 1027544 ) on Monday May 03, 2010 @09:34AM (#32072266) Homepage

    ...until I can get a decent (120GB+) sized SSD that doesn't cost as much as a new video card?

    • 3 ... 2 ... 1 ... (Score:5, Informative)

      by NotBornYesterday ( 1093817 ) on Monday May 03, 2010 @09:46AM (#32072452) Journal
      I suppose that depends on which video cards [newegg.com] and SSDs [newegg.com] you use.
      • by tenco ( 773732 )
That's no video card, that's electric heating. Would be interesting to know if this GTX 470 gives more computing power than a (Beowulf ;) cluster of dual-core Atoms for the same TDP. Transistor count is close enough.
      • by Hurricane78 ( 562437 ) <deleted@slash[ ].org ['dot' in gap]> on Monday May 03, 2010 @11:50AM (#32074054)

        The problem with SSDs is:
        1. cheap
        2. big
        3. reliable
        Choose two!
But even then, you can only be sure of number 3 after some years have passed, for the obvious reason that there isn't any test data for years of use until years of use have passed. ^^

        • by Kjella ( 173770 )

          1. cheap
          2. big
          3. reliable
          Choose two!

          Please tell me where I can get cheap big SSDs.

          The advantages I've seen:
          1. Performance
          2. Silent
          3. Light
          4. Higher shock resistance

          Undecided/even:
          1. More reliable
          2. Smaller
          3. Lower power

          Disadvantages:
          1. Cost
          2. Size

Unfortunately, one of the things often touted about SSDs doesn't quite seem true: they don't really scale down. Big SSDs = more parallel channels = better read/write performance. Which is a shame, because if they could really scale down to 10-20 GB I'd see a huge market for dual storage laptops, SSD fo

        • But even then, you can only be sure of number 3, after some years have passed. For obvious reasons of there not being any test data for years of use, until years of use have passed. ^^

          BEGIN PEDANTIC

In fact, if you assume a Poisson distribution of failure (where two identical working chips are equally likely to fail, no matter that one of them is brand new and the other has had years of use), you could just put 10,000 to test, and, if after a year 100 of them failed, rightfully claim that they have an MTBF of 100 years.

          Of course this distribution does not fit other components (mainly those with mechanical parts -CDs, HDs-) because there is progressive wear so an older unit is more likel
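The pedantic point above can be demonstrated numerically: under a memoryless (exponential) failure model, MTBF can be estimated from a short test window as total unit-time divided by the number of failures observed. A minimal simulation sketch (function and parameter names are illustrative):

```python
import random

def estimated_mtbf(units=10_000, window_years=1.0, true_mtbf=100.0, seed=1):
    """Simulate memoryless (exponential) lifetimes and estimate MTBF
    from how many units fail inside a fixed test window."""
    rng = random.Random(seed)
    # Each unit's lifetime is exponential with mean true_mtbf; count
    # the ones that die before the test window closes.
    failures = sum(rng.expovariate(1.0 / true_mtbf) < window_years
                   for _ in range(units))
    # For a memoryless process: MTBF ~= total unit-years / failures.
    return units * window_years / failures

print(round(estimated_mtbf()))  # close to the true value of 100
```

With 10,000 units and a true MTBF of 100 years, roughly 100 failures show up in the first year, so the one-year estimate lands near the true value without waiting 100 years, exactly as the comment argues.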

  • Wow (Score:3, Insightful)

    by MyLongNickName ( 822545 ) on Monday May 03, 2010 @09:35AM (#32072276) Journal

My first PC had 4k of RAM. I should be used to this type of growth by now... but it still makes my heart race a bit when I see ever increasing memory capacity in an ever decreasing form factor.

    I'll tell my grandkids about my first PC and they will roll their eyes as they leave my retirement home...

    • Re:Wow (Score:4, Insightful)

      by Lumpy ( 12016 ) on Monday May 03, 2010 @09:52AM (#32072552) Homepage

      Problem is most software developers and OS makers also race to consume that memory. Honestly all the software today is a bloated blob that is horribly unoptimized for speed and efficiency.

It's disgusting how bloated most stuff is because we have 4gig of ram and 2 2.5ghz processors... why make it lean and mean? it compiles, ship it.

      • Re:Wow (Score:5, Insightful)

        by Thanshin ( 1188877 ) on Monday May 03, 2010 @10:10AM (#32072780)

why make it lean and mean? it compiles, ship it.

        And what's the answer to your question?

If it works, why optimize it? To save on storage space? How much would I be saving? $10 in storage space for every hour of optimization?

        It's not art, it's a business. You could as well ask why we don't replace steel by titanium in cars.

        • Indeed it is business, but it's also marketing. Sure, at home, I like digital photos, watching movies on my computer, etc, but the sad reality is that in your traditional every-day book-keeping style business, we're doing a lot of the same stuff that we were doing on computer 20+ years ago (evidenced by the fact that in a lot of cases the same COBOL programs running the servers back then are STILL running the servers now). It's just that marketing and increasing software bloat have convinced everyone that

          • Re: (Score:3, Insightful)

            by Thanshin ( 1188877 )

            We should get benefits from newer, faster hardware. Instead we get increasingly lazy programmers and zero net benefit in speed, but with all the negative costs of new equipment purchases.

            We do get benefits from newer, faster hardware. The possibility of hiring cheaper, less prepared, programmers.

          • by tenco ( 773732 )
I pretty much doubt that your P1 could play HD videos, render code with highlighting at a useful speed or display unicode with only 16MB of RAM. Hell, I'm not even sure you could encrypt a harddrive with AES 256bit without greatly reducing its speed.
        • by Amouth ( 879122 )

          You could as well ask why we don't replace steel by titanium in cars.

that is simple - they don't do that or something like that because then your car might last you a long time - and that would cost them money because they wouldn't be able to keep you as a recurring revenue stream.

          Sorry but the accelerated use of plastics and cheap alloys isn't an accident or an improvement in cars..

          • Re:Wow (Score:4, Insightful)

            by Grishnakh ( 216268 ) on Monday May 03, 2010 @12:34PM (#32074546)

that is simple - they don't do that or something like that because then your car might last you a long time - and that would cost them money because they wouldn't be able to keep you as a recurring revenue stream.

            What does using titanium instead of steel have to do with cars lasting a long time? As long as you don't let salt corrode them away, steel-bodied cars will last pretty much forever. Here in the southwest, we don't have any problems with corrosion.

            Besides, automakers wouldn't bother to apply undercoating if they wanted their customers' cars to rust away.

            Sorry but the accelerated use of plastics and cheap alloys isn't an accident or an improvement in cars..

            Now this is just plain stupid. Aluminum alloys improve performance in cars greatly by reducing weight, and also by making engines that perform far better. Most plastics are also a giant improvement; again, weight savings.

            • by Amouth ( 879122 )

the titanium/steel was the other person's comment - i was going off of the idea

as for the alloys and plastics - sorry, sure they save weight.. but the plastics ALL break down.. they all age poorly.. compared to actual metal parts - and a lot of the newer alloys i've run into working on cars do not last as long..

sure saving weight is important - but you know .. we can save weight in a lot of other places than under the hood - when you start adding up the weight increase from the cosmetic parts of cars it is am

              • as for the alloys and plastics - sorry sure they save weight.. but the plastics ALL break down.. they all age poorly.. compared to actual metal parts

                I haven't noticed any breaking down at all in my 15-year-old car.

                Yes, most plastics will break down over time with UV exposure, but there are ways to mitigate this: certain additives, keeping them out of the sun, etc. Plastic parts in junkyards, for instance, break down and generally look like crap pretty quickly, because they're out in the sun all day (usual

          • Sorry but the accelerated use of plastics and cheap alloys isn't an accident or an improvement in cars.

            There is the benefit that a largely plastic car that deforms on impact absorbs a lot of the energy that would otherwise be transferred to the occupants during a collision. I know I'd much rather be in a squishy modern car than a solid steel behemoth if I'm going to crash into something.

        • This attitude angers me. It's a similar evil to pollution or littering; each developer looks at their own software and says it doesn't matter individually if it is wasteful and inefficient, but the totality of all the bloated inefficient software on my computer causes it to take 2 minutes to boot when we're talking about a machine that is technically capable of performing that task in under 10 seconds.
          • This makes me wonder: why are you loading all of your software on boot? Also, you could probably make use of hibernation to speed things up.

            My machine does in fact boot up to the login screen in 10 seconds, Ubuntu 10.10 from an SSD :) Another 10 or so seconds (including loading up some extra apps I've installed) to a usable desktop, and that's with only a 1.6Ghz Atom..

        • by LilGuy ( 150110 )
          If everything were optimized the technology business cycle would slow to a crawl and possibly die. If programs were crafted the way they were in the 80s when memory and speed were expensive and scarce we would have an endless supply of space and speed for everything right now and no need to continuously upgrade. I would like to see a little bit of a slowdown in the cycle myself because I've never been able to play the latest and greatest games, but I'm patient enough that I can wait 3 or 4 years to play t
      • I kind of agree, for some things anyway. Microsoft's Office is one of those. Word, Excel, Powerpoint -- they haven't significantly changed since Office 97. I mean, they are what they are. More wizards now, different toolbar, prettier graphs. But Office 97 was enough for 99% of the users. Email is the same way -- why does Thunderbird take 113 MB of memory to run, when it doesn't do much more than the 500K of Pegasus mail back in 1994. Web sites are definitely more complex, but Firefox is running at 35

      • I know, it physically disgusts me that developers don't limit themselves to writing Space Invaders and Pacman on quad core 4GB machines, but instead chose to actually use all that memory and processing power. Pass me a bucket someone, I'm going to hurl.
      • Re:Wow (Score:4, Informative)

        by divisionbyzero ( 300681 ) on Monday May 03, 2010 @10:34AM (#32073042)

        Problem is most software developers and OS makers also race to consume that memory. Honestly all the software today is a bloated blob that is horribly unoptimized for speed and efficiency.

        It's disgusting how bloated most stuff is because we have 4gig of ram and 2 2.5ghz processors... why make it leand and mean? it compiles, ship it.

        Sounds like a reasonable outcome of a cost/benefit analysis. Since when is efficiency an end in itself?

        • Re: (Score:3, Funny)

          by aminorex ( 141494 )

          Market forces dictate software bloat, not some centralized cabal of scheming plotters designing an optimal return on investment. As long as people buy more and more bloated crap-ahem-itunes-ahem-ware, as long as managers get rewarded for their bloat factors, as long as developers get specs incorporating bloat, the trend will continue.

      • by tuomoks ( 246421 )

It's called betterment: "sometimes, you have to make sacrifices for the betterment of the entire group" - really? Yeah, nobody has been able to show what has been gained since we had, for example, 512KB of memory for 2000+ online users. Processing is much faster today, yet response times are 10+ times slower?? Same processing - the business hasn't changed? Nice pictures(?) - actually using nice graphical (and expensive) terminals - about the same! Yes, the price of hardware has gone down, and a lot, but do we really have to w

      • by Surt ( 22457 )

        That's really an urban legend. Most software today is well optimized. It just does much, much more than software did in the past. It uses more memory because many algorithms are trading memory for cpu because memory is cheaper, or memory for disk access because disk accesses didn't keep up with the pace of advancement in cpu and memory (by a couple of orders of magnitude over the last couple of decades).
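The memory-for-CPU trade described above is easy to illustrate. Memoization, for instance, spends RAM on cached results to eliminate repeated computation (a generic sketch, not tied to any particular product):

```python
from functools import lru_cache

# Naive version: cheap on memory, heavy on CPU -- it recomputes the
# same subproblems exponentially many times.
def fib_naive(n: int) -> int:
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

# Cached version: spends memory on memoized results to avoid the
# recomputation -- the memory-for-CPU trade the comment describes.
@lru_cache(maxsize=None)
def fib_cached(n: int) -> int:
    return n if n < 2 else fib_cached(n - 1) + fib_cached(n - 2)

print(fib_cached(90))  # instant; the naive version at n=90 would take ages
```

The same pattern, scaled up (page caches, in-memory indexes, precomputed lookup tables), is a large part of why modern software uses more RAM while doing more work per CPU cycle.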

      • Re: (Score:3, Funny)

        by ScentCone ( 795499 )
        You used the word "bloated" twice.
      • With your complaints. So let's start a list of UNbloated software:

        I'll start:
        MicroXP.

      • Bad software managers are rewarded for producing a lot of software. The more software, the more reward. As a result, you get increasingly useless or downright harmful crap rammed down your throat whenever you buy a commercial software product or a piece of hardware with bundled software. The latter is the worst, because in the case of commercial software there is at least a reality check which comes from the need to prevent the product from becoming so odious that no one will buy it.

    • Re:Wow (Score:5, Insightful)

      by Idiomatick ( 976696 ) on Monday May 03, 2010 @09:58AM (#32072636)
It may be peaking soon though. 6nm is getting close to the physical maximum for most techniques due to the Casimir effect. Techniques that push chips from 2D into 3D will be the next useful improvement. But after that point we have run out of easy options.

Increasing speed of chips and RAM could help relieve that pressure, mind you, as programmers can trade off more processing for less drive usage, or count on faster RAM and compress everything. This will give us a bit more time. Beyond that we will simply have to get more inventive about how we use computers.

Very very fast internet could become important, if users feel they need access to 10 million TB of data personally. That may not be physically feasible on a personal computer, so 'cloud' type services would be important. Having a few duplicates rather than a million duplicates of any given song is clearly a big improvement. This of course feeds into the idea that when we made the internet we stopped making machines; we just started making components for the one ultimate computer. And when you think of it from that perspective there is tons of room for improvement even if some of the parts are nearing their useful maximums.
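The "few duplicates instead of a million" point is essentially content-addressed deduplication: store each blob once under its hash and just count references. A toy sketch (the class and names are invented for illustration):

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: identical blobs are kept once,
    no matter how many users 'upload' them."""
    def __init__(self):
        self.blobs = {}   # digest -> data (stored once)
        self.refs = {}    # digest -> reference count

    def put(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.blobs:
            self.blobs[digest] = data      # first copy pays the storage
        self.refs[digest] = self.refs.get(digest, 0) + 1
        return digest

store = DedupStore()
song = b"the same few MB of audio, uploaded by a million users"
keys = {store.put(song) for _ in range(1_000_000)}
print(len(store.blobs), len(keys))  # 1 1 -- one stored copy, one key
```

A million uploads of the same song cost one copy of the data plus a counter, which is the storage win cloud services get from deduplication.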
Would you need the 10 million TB on you at all moments?

        Maybe having a fridge sized data storage at home will become standard.

No need for such incredibly high speed communications if it's just for the volume that gets sent from your home computer to your "personal" computer (the one you carry).

      • Lame Research? (Score:5, Informative)

        by GameGod0 ( 680382 ) on Monday May 03, 2010 @11:20AM (#32073658)

It may be peaking soon though. 6nm is getting close to the physical maximum for most techniques due to the Casimir effect.

Not quite sure what the Casimir Effect has to do with magnetic dots, but I should mention that 6 nm is below the superparamagnetic limit (which is typically tens of nanometers). That means your magnetic nanodot probably isn't magnetic.

        ... Which brings me to my second point: This article says nothing about what this researcher actually did. It sounds like he just fabricated an array of nanodots, which is nothing particularly groundbreaking.

        Does anyone have a link to the original abstract for the conference presentation? The dots must have been multilayer "stacks", otherwise there's a good chance they won't be ferromagnetic (there's a "superparamagnetic limit" that stops ferromagnetic particles from being ferromagnetic when they get around this size.)

        Lastly, the article says they'll look at housing and using "laser technology" to read back from these nanodots. They mention that as a sidenote, but it's really the most important problem if you want to make something useful. The problem with most nanomagnetic memory techniques is that reading/writing is either impractical or not yet possible.

        • ... Which brings me to my second point: This article says nothing about what this researcher actually did. It sounds like he just fabricated an array of nanodots, which is nothing particularly groundbreaking.

          Exactly. And I had to wade through about 10 pages of silly off-topic comments to find confirmation of that. Thank you Game, thank you lame moderators.
      • Another big improvement may be optical chip interconnects. This could make the connection from the processor to the RAM and other devices much faster, while also saving space on motherboards and RAM chips to put... More RAM. Not to mention possible power savings, and the fact that it should be rather easy to have more RAM channels with this technology... Imagine your processor having an individual, parallel connection to each RAM chip.

        It's true that eventually, we will reach a plateau, and in a sense, I
      • Re: (Score:3, Funny)

        by StikyPad ( 445176 )

        Techniques that push chips from 2d into 3d will be the next useful improvement. But after that point we have run out of easy options.

        Just keep adding more dimensions... Duh.

  • by Anonymous Coward on Monday May 03, 2010 @09:39AM (#32072342)

They have nanodots at a 1Tb-per-square-centimeter storage density; they don't have any controller that can read and write it.

    This has been "accomplished" numerous times with holographic storage media before. They just never made the read-writers...

    • by clone53421 ( 1310749 ) on Monday May 03, 2010 @09:43AM (#32072406) Journal

      Correct.

      “The next step is to develop magnetic packaging that will enable users to take advantage of the chips,” says the university, “using something, such as laser technology, that can effectively interact with the nanodots.”

      They have a storage medium with nothing to read or write it... yet.

      Although they seem confident that this will come with time, it’s a bit early to be celebrating. Interesting technology, but time will tell whether it’ll ever be usable.

      • by 0123456 ( 636235 ) on Monday May 03, 2010 @10:05AM (#32072722)

        They have a storage medium with nothing to read or write it...

        The perfect DRM! They'll make billions!

      • > They have a storage medium with nothing to read or write it... yet.

        Put the dots on a "disk" in rings. Call them "tracks". Spin the "disk" and access the dots by scanning a laser radially so that it can read and write the dots in each "track" sequentially. There just might be some existing technology that could be adapted for this...

        • That would suck. Spindle drives are already too slow. Let's use something a tad faster...please?

          • Re: (Score:3, Interesting)

            by John Hasler ( 414242 )

> Let's use something a tad faster...please?

They'll put a transistor over each dot and couple it to the dot in some way so that it can be read and written. Then they'll add a matrix of metallization and logic to multiplex access to the transistors. Add decoding logic and drivers and you've got nonvolatile RAM. And your bit density has gone down by an order of magnitude or so. Still very useful, though, if it's fast enough. Nonvolatile RAM with densities and speeds similar to those of DRAM woul

            • Right now, anything with a decent $/GB that is on par with the better SSDs currently out will change things up pretty well.

              I have 2 Dell D6400 laptops here, both with Win7Pro, with identical CPU/RAM/GPU. One has an SSD Drive, the other, a spindle drive (7200 RPM).

              Without a doubt, the SSD Drive boots faster, opens apps faster, and reboots faster. "Faster" is actually a poor word to describe it. My jaw hit the floor.

              With a few *minor* tweaks, we got that baby to boot, complete to an interactive desktop, in

        • That's idiotic. A pair of micromirrors will be able to point a laser at any point on the chip with far smaller seek times than moving the entire chip. Besides, CDs and DVDs are recorded in a spiral, not rings.
          • by xaxa ( 988988 )

Magnetic discs (hard and floppy) are recorded in rings (aka tracks; a stack of tracks is a cylinder, IIRC).

          • > That's idiotic.

            No it isn't. It's simple, robust, leverages existing technology, and is capable of transfer rates of 1000 Gb/sec.

            > A pair of micromirrors will be able to point a laser at any point on the
            > chip with far smaller seek times than moving the entire chip.

            I guess that's why CDs, DVDs, and BluRays aren't spun.

            Yours is an interesting approach, but there may be a reason why it has not been implemented for any of the existing optical technologies. The latency would be better than that of s

      • Actually despite the fact that the summary and article talk about this as though it is an SSD technology, I think it is more likely to be implemented in a conventional spinning-disk hard drive first.

        As I recently commented [slashdot.org], the hard-drive industry is having a hard time shrinking the magnetic domains on conventional hard drive platters, which use a magnetic thin film. (You can make domains smaller, but they start interacting with one another and not maintaining their magnetization properly.) One proposed
This sounds really cool, but the article that it links to is really short on details.
Things like speed? Storage life? How many read/write cycles before it wears out? Addressing? Is it byte level or page level?
I mean, is this only a replacement for flash, or is it a replacement for RAM?
Cool, but it just ticks me off. It is just a tease.
Yes, they may not have those answers, but it would be nice to know what they don't have answers for yet!

    • Re:I hate this... (Score:5, Insightful)

      by osgeek ( 239988 ) on Monday May 03, 2010 @10:06AM (#32072734) Homepage Journal

      They don't have any of that information because they don't know any of it. They only have a way IN THE LAB to put a shitload of nanodots onto a medium. They mentioned that they have no packaging (way to read or even really write data into the dots) for an actual product.

      It's like Ben Franklin saying, "Okay, I've discovered electricity. Computers should be along in about five years."

      Okay, it's not that bad, but I hate that five year timeline that is rarely questioned but is thrown out to lure in investors and grant money.

Slashdot should have an automatic filter that looks for the five year estimate and flags it with some "fat chance" special color.

      • Re: (Score:3, Informative)

        by LWATCDR ( 28044 )

        That is why I hate this.
It reminds me of those Popular Science stories.
        Or even better the one that sticks in my mind. The THOR drive from Radio Shack.
        http://en.wikipedia.org/wiki/Thor-CD [wikipedia.org]

I was so hyped by this in 1988. It sounded so cool, and it was only a few years away...
        It never came.
        On the bright side we did eventually get CD-Rs and even CD-RWs but not for a good long time after the THOR drive was announced.

        • Re: (Score:3, Informative)

          by osgeek ( 239988 )

Yeah, that Thor drive was some great vapor. My painful promise memory was hologram storage. Back in 1992, I remember holding on to a hologram storage article from MacWeek that described what was supposed to be a consumer product in a year or so.

          The media was the size of a credit card and they promised it would hold 100x as much as the current best hard drives of the day. It's a real shame because you just know that there was some fairly fraudulent monkey business going on to motivate guys like that to h

Some promising capacity technologies could not match standard magnetic technologies in these aspects. On the other hand, early CD-ROMs and flash were promoted as "write once" technology. They would be so large that you did not need to reuse storage.
  • ... means I'll have to buy 'The White Album' again...

  • NCSU != UNC (Score:2, Informative)

    BEGIN RANT Seriously, North Carolina State University (NCSU in Raleigh) is not the University of North Carolina (typically in reference to Chapel Hill). One is a school (that I happened to have attended twice) that focuses primarily on Engineering and Agriculture and the other is a liberal arts school down the road. Seriously, fact-check much? http://www.mse.ncsu.edu/CAMSS/bio1.html [ncsu.edu] END RANT
  • Disks? (Score:3, Interesting)

    by pitchpipe ( 708843 ) on Monday May 03, 2010 @09:45AM (#32072446)

Solid state disks could soon catch up with mechanical hard drives [...] Dr. Narayan says he expects the technology overtaking traditional solid state disk technology within the next five years.

    Is shape important in Solid State? It almost seems as if the article is confusing Hard Disk Drives with Solid State Drives.

    • by Ant P. ( 974313 )

      The third letter in "HDD" is only there because it has a motor. SSDs should really be called Solid State Storage, or SSS for short.

  • by alvinrod ( 889928 ) on Monday May 03, 2010 @09:51AM (#32072538)
The technology sounds impressive, but then they just give it the kiss of death by announcing that it's five years away. Five years from now it will still be five years away, probably because while it's possible to do, no one has been able to do it in a cost-effective manner. Also if Intel can keep up with their current roadmap, they'll probably be using something close to a 10 nm process. I know that both GlobalFoundries and TSMC are working on their 28 nm processes (although they are behind schedule), so it's not inconceivable that the rest of the industry will already be at that point anyhow.
    • by osgeek ( 239988 )

      What's depressing is the way that the press and /. alike eat up stories like this.

      Sure, writing this density of nanodots is an impressive feat. But as you point out, it could be completely nonviable for creating an actual consumer product.

Why can't Slashdot's front page be the kind of place where bullshit is called on researchers putting out this kind of nonsense? These guys should be shamed into putting out factual press releases. Whoring it up to get coverage from the general media while seeking in

I remember, like 15 years ago, a Byte Magazine special issue, "The Future of Computing!", said that 16 TB holographic CDs were only "5 years out" ;) We were also supposed to have algae-based RAM that was so large and fast, plus non-volatile, that we wouldn't even need both an HD AND RAM, just one thing. Plus we'd also have quantum CPUs made out of doped diamond, which don't need a heatsink because diamond semiconductors don't get less efficient with heat, and can go a lot hotter without damage. AND HOVERC
  • by kenp2002 ( 545495 ) on Monday May 03, 2010 @10:02AM (#32072682) Homepage Journal

Ok, my knee-jerk Six Sigma reflex has just kicked in. On the manufacturing of those defect-free crystals... and about the cost effectiveness and scaling for "overtaking ... in 5 years..."

    Ok, here is a tip:

Anytime a politician or scientist talks about 5-, 8-, and 12-year targets there is a reason:

Two 4-year terms = 8 years; when the project falls out they can blame the candidate currently in office.

5 years = a single term, but just a touch beyond, to provide an incentive for re-election, because if you don't re-elect them they might cancel the project.

12 years = two terms for candidate A and a term for his/her heir... "Don't let the evil Democrats/Republicans kill the project!"

Now, last I checked, more than a few grants come in at 3-, 5-, 8- and 12-year durations... I never hear of things coming to fruition in 7 years, or 6 years, or 9 years, or 11 years, or 18 years, 6 months, and 3 days.

There is just something about 5, 8, and 12 they love. The frequency with which they cite those values implies there is some weird cosmic alignment that causes innovations to pop at those figures... or I smell "4 out of 5 dentists approve" BS.

    Another one is the 20 years from now number. What is the maturity on that investment I made...

I would honestly have a lot more respect for senior scientists if they didn't spend every waking hour working on getting grant money, leaving the actual work to low-paid interns and students, then claiming the work as their own, offering nothing more than a second-hand "my team and I" comment...

    • ... aka six months. For those just tuning in at home, this unit of time was popularized by one Thomas Friedman, a columnist who was thoroughly mocked on the Internets for his unfortunate habit of claiming that we needed another six months in Iraq to know if it was a success, and then when the six months had gone by, proclaim that another six months was needed. Over and over and over again - to the point where various critics would make a note on their calendars, and then after six months ask "well, it's bee
I wonder how they think all that data can be made accessible with fast access speed and good throughput.
The article fails to mention this slightly important topic. Also, tapes can store a lot of data (well, not really that much) with ridiculously poor performance.
  • If nanodots are anything like Dippin' Dots, they sound delicious.
