
Intel and Micron Unveil 128Gb NAND Chip

ScuttleMonkey writes "A joint venture between Intel and Micron has given rise to a new 128 gigabit die. While production won't start until next year, this little beauty sets new bars for capacity, speed, and endurance. 'Die shrinks also tend to reduce endurance, with old 65nm MLC flash being rated at 5,000-10,000 erase cycles, but that number dropping to 3,000-5,000 for 25nm MLC flash. However, IMFT is claiming that the shrink to 20nm has not caused any corresponding reduction in endurance. Its 20nm flash uses a Hi-K/metal gate design which allows it to make transistors that are smaller but no less robust. IMFT is claiming that this use of Hi-K/metal gate is a first for NAND flash production.'"

  • by malakai ( 136531 ) on Wednesday December 07, 2011 @01:28PM (#38292662) Journal

    I love SSDs, especially for development work. Nothing like having a dev VM per client, each on its own little SSD, isolated from your non-work-related default operating system. But SSDs are dangerous...

    SSDs are like crack to bad applications. They magically make them feel better while masking the underlying problem. I'm worried about what the future is going to hold when the average desktop comes with an SSD. I've already seen some development companies demo financial software on striped SSDs as if that's what everyone runs these days. I guess it's no different than an abundance of RAM and an abundance of CPU power. <Insert "in my day" rant here>

  • by ksd1337 ( 1029386 ) on Wednesday December 07, 2011 @01:34PM (#38292742)
    Well, that's the problem, isn't it? Lazy programmers aren't writing efficient code, they're just relying on Moore's Law to push them through. Of course, I don't think the average consumer understands much about efficiency, seeing as eye candy is so popular, even as a selling point.
  • by timeOday ( 582209 ) on Wednesday December 07, 2011 @02:17PM (#38293218)
    This moralistic spin ("lazy" programmers) is absurd. The tradeoff between development cost and hardware requirements is obviously affected by cheaper yet higher-spec hardware. If you want to run WordPerfect for DOS at insane speeds on modern hardware, go right ahead. That piece of software cost $495 in 1983 (cite [answers.com]) and was written in assembly language for speed. (I hope the connection there is not lost on anybody.)
  • by blair1q ( 305137 ) on Wednesday December 07, 2011 @02:36PM (#38293450) Journal

    They aren't lazy, they're productive, and taking advantage of the resources available.

    When they're tired of putting the first-to-market markup and the bleeding-edge markup in their bank accounts, then they'll address reports of sluggishness or resource starvation in less-profitable market segments.

    Right now, though, the fruit that are hanging low are fat and ripe and still fit in their basket.

  • by ArcherB ( 796902 ) on Wednesday December 07, 2011 @02:38PM (#38293482) Journal

    Well, that's the problem, isn't it? Lazy programmers aren't writing efficient code, they're just relying on Moore's Law to push them through. Of course, I don't think the average consumer understands much about efficiency, seeing as eye candy is so popular, even as a selling point.

    Most of the programmers I know don't care about timelines, eye candy, popularity, or selling points. These guys are computer nerds. Just as car nerds want their hot rods to purr at idle and roar when pushed, most programmers want their code to run fast, efficient, and clean. The problem is that programmers are under the thumb of timelines and feature bloat imposed on them by management and sales.

    This is not necessarily a bad thing: if it were not for deadlines, no program would ever be finished. Yes, code is more inefficient, but that is only because the hardware has allowed it to be. It does not hurt the bottom line whether a customer has to wait 1.5 seconds for a program to launch or 3. The bottom line is what management cares about, and to be fair, it is what drives business and keeps Red Bull stocked in the break-room fridge.

  • by Nethemas the Great ( 909900 ) on Wednesday December 07, 2011 @02:46PM (#38293632)

    I understand what you're saying, but at the same time, software that wastes compute resources is also wasting dollars: dollars needlessly spent on employee hours (waiting for operations to complete), on new or upgraded hardware to cope, and, one that many people might not realize, on extra software development, maintenance, and support costs. Inefficient software quite often reflects a poor implementation under the hood and frequently behind the wheel. One thing nearly every engineering discipline recognizes is that the fewer moving parts a system has, the more inherently reliable and maintainable that system becomes. This is no different with software. Software bloat is the bane of those trying to implement features to support new requirements and a nightmare for those trying to ensure quality control. Software bloat often shows up in the user interface as well, in poorly implemented workflows that further slow down productivity.

    Contrary to popular opinion, fancy GUIs replete with eye candy generally aren't the problem--normally they're built on top of highly abstracted, well-optimized, and well-tested frameworks--the problem is evolution. One of the more common sources of inefficiency is software bloat. Bloat can even plague software that was initially well constructed. Over time, after several iterations of evolution, the feature requests, the various modifications, and the resulting baggage train required to support them can grow substantially and weigh down a system. It isn't that the bloat is a requirement of a given feature set per se, but rather that it reflects a set of compromises made necessary by an initial architecture that wasn't designed to support them. Management, and sometimes even the engineers, have a hard time accepting that a significant or even complete tear-down and reconstruction on a new architecture is the best and most appropriate choice. One of the easiest and most notable places this problem can be recognized is in web browsers. Take a trip down memory lane and compare the features, bloat, and usability of the various web browsers over time.

  • by billcopc ( 196330 ) <vrillco@yahoo.com> on Wednesday December 07, 2011 @03:17PM (#38293988) Homepage

    There are some of us who are quite proficient with assembly language. We also had some very sloppy compilers back then, so the two went hand-in-hand.

    Back then, I would build a first prototype in straight C (or whatever), then identify the bottlenecks and rewrite those functions in assembly. Heck, in school I wrote a few QBasic games/apps that linked in some assembly calls. Sometimes I'd get cocky and copy the assembled code directly into a QBasic variable, then execute it. For common stuff like blits and mouse calls, I could type those opcodes from memory. You wouldn't think a QB game could handle 3D graphics at 320x200 on a 386, with sound effects and digital (MOD) music, but with a modest application of hand-tuned code, you can write the script-like glue in whatever language you want with only a minimal impact on final performance.
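
    A minimal modern-day sketch of that "copy the code into a variable and call it" trick, assuming x86-64 Linux and with most error checks trimmed (the bytes and names here are illustrative, not anyone's actual game code):

      #include <stdio.h>
      #include <string.h>
      #include <sys/mman.h>

      int main(void)
      {
          /* Hand-assembled x86-64 machine code for: mov eax, 42; ret */
          unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

          /* Copy the bytes into an anonymous mapping, then flip it to executable. */
          void *buf = mmap(NULL, sizeof code, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          if (buf == MAP_FAILED)
              return 1;
          memcpy(buf, code, sizeof code);
          mprotect(buf, sizeof code, PROT_READ | PROT_EXEC);

          /* Call the buffer as if it were a compiled function. */
          int (*fn)(void) = (int (*)(void))buf;
          printf("%d\n", fn());    /* prints 42 */

          munmap(buf, sizeof code);
          return 0;
      }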

    I'm not saying we need to write all apps in raw assembly; that's absurd. We rarely did that back in the day, except for extreme situations and bragging rights. Today's compilers seem to do a good-enough job, but the faster they get, the more our so-called developers push into truly wasteful practices like nested script interpreters - most PHP and Ruby frameworks fall into that category. Do we really need 16-core machines with 48 GB of RAM to push a few pages of text? Not if we were writing actual computer code, and not this navel-gazing techno-poetry that's more for humans than machines.

  • by GreatBunzinni ( 642500 ) on Wednesday December 07, 2011 @03:27PM (#38294134)

    Well, that's the problem, isn't it? Lazy programmers aren't writing efficient code, they're just relying on Moore's Law to push them through. Of course, I don't think the average consumer understands much about efficiency, seeing as eye candy is so popular, even as a selling point.

    Your comment is either naive or disingenuous. There are plenty of reasons why a specific piece of software does a good job under one scenario but poorly under another, completely different one, and all without incompetence being a factor. Let me explain.

    Consider one of the most basic subjects, taught right at the start of any programming 101 course: writing data to a file. For that task, a programmer relies on standard interfaces, either de facto standards such as platform-specific interfaces or those defined in international standards such as POSIX. This means a programmer tends not to be aware of any specifics of the file system or even the underlying hardware when developing a routine that dumps data to a file. Basically, what tends to be taught is to open a file, write to it, and then close it. This is acceptable in most scenarios, but it is a dangerous thing to do. After all, just because some data has been written to a file doesn't mean the data has actually reached the disk. The underlying platform may rely on I/O buffers to run things a bit more efficiently. This means that even though your call to write() succeeds, and even though your program can successfully read your data back, that data may not in fact be stored in your file system yet. So if your program is killed or crashes, or if your computer dies, you risk losing your data and corrupting the file. If this happens, does it mean that the programmer is incompetent?
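
    As a minimal sketch of that "open, write, close" pattern (C on POSIX assumed, error handling trimmed, save_naive is a made-up name): both calls can succeed while the bytes still sit only in the kernel's page cache.

      #include <fcntl.h>
      #include <string.h>
      #include <unistd.h>

      int save_naive(const char *path, const char *data)
      {
          int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
          if (fd < 0)
              return -1;

          ssize_t n = write(fd, data, strlen(data));  /* may land only in the kernel's cache */
          close(fd);                                  /* success here is still no durability guarantee */

          return (n == (ssize_t)strlen(data)) ? 0 : -1;
      }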

    This problem can be mitigated by flushing the data. Yet calling flush() doesn't guarantee that every single bit of your data will be successfully stored in your file system; it only guarantees that, by the time flush() returns, the data written so far has been flushed. If the system dies while your program is still writing away your data, then you quite possibly lose all of it, and no call to flush() can save you from that. If this happens, does it mean that the programmer is incompetent?
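
    The same hypothetical sketch with an explicit flush to stable storage: on POSIX, fsync() pushes whatever has already been handed to the kernel out to the device before returning, but it cannot protect data the program has not finished writing when the crash hits.

      #include <fcntl.h>
      #include <string.h>
      #include <unistd.h>

      int save_flushed(const char *path, const char *data)
      {
          int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
          if (fd < 0)
              return -1;

          if (write(fd, data, strlen(data)) < 0 ||   /* hand the bytes to the kernel */
              fsync(fd) < 0) {                       /* force them (and metadata) to the device */
              close(fd);
              return -1;
          }
          return close(fd);
      }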

    Some clever people took a bit of time to think about this and came up with techniques that avoid the risk of corrupting your data. One of them is to dump the data to a temporary file and then, after the save succeeds, delete or rename the old file to a backup name and rename the newly created temporary file back to the original name. With this technique, even if the system dies, the only file that might have been corrupted is the newly created temporary file, while the original file is kept in its original state. With this approach, the programmer guarantees that the user's data is preserved. Yet it also has the nasty consequence of storing what's essentially the same file in entirely different inodes, which screws with a lot of stuff. For example, it renders hard links useless and interferes with the way versioning file systems work. If this happens, does it mean that the programmer is incompetent?
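
    A sketch of that write-then-rename technique (POSIX assumed; the ".tmp" suffix and function name are made up): rename() replaces the target atomically, so a crash leaves either the intact old file or the intact new one, at the cost of the data ending up in a brand-new inode.

      #include <fcntl.h>
      #include <stdio.h>
      #include <string.h>
      #include <unistd.h>

      int save_atomic(const char *path, const char *data)
      {
          char tmp[4096];
          snprintf(tmp, sizeof tmp, "%s.tmp", path);  /* hypothetical temporary name */

          int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
          if (fd < 0)
              return -1;

          if (write(fd, data, strlen(data)) < 0 || fsync(fd) < 0) {
              close(fd);
              unlink(tmp);                            /* don't leave a corrupt temp file behind */
              return -1;
          }
          close(fd);

          return rename(tmp, path);                   /* atomic swap into place */
      }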

    So, no. This has nothing to do with what you arrogantly referred to as "lazy programmers", or even with incompetence. Times change, technical requirements change, hardware requirements change, systems change... and you expect that the software someone designed a couple of years ago will run flawlessly and avoid each and every issue that is only being discovered today or might only be discovered tomorrow. How can programmers avoid these issues if they don't even have a working crystal ball? This isn't realistic, and you can only make such claims by being completely clueless and out of touch with reality. So please tone down your arrogance and spend a moment thinking about this issue.
