Israeli Startup Claims SSD Breakthrough
Lucas123 writes "Anobit Technologies announced it has come to market with its first solid state drive using a proprietary processor intended to boost reliability in a big way. In addition to the usual hardware-based ECC already present on most non-volatile memory products, the new drive's processor will add an additional layer of error correction, boosting the reliability of consumer-class (multi-level cell) NAND to that of expensive, data center-class (single-level cell) NAND. 'Anobit is the first company to commercialize its signal-processing technology, which uses software in the controller to increase the signal-to-noise ratio, making it possible to continue reading data even as electrical interference increases.' The company claims its processor, which is already being used by other SSD manufacturers, can sustain up to 4TB worth of writes per day for five years, or more than 50,000 program/erase cycles — as contrasted with the 3,000 cycles typically achieved by MLC drives. The company is not revealing pricing yet."
Cost? (Score:5, Informative)
If we have to ask how much it costs, we definitely cannot afford it.
Re:Cost? (Score:5, Insightful)
Re:Cost? (Score:5, Interesting)
You have an interesting point there.
Several years ago, maybe back in 2005, Anobit visited us and showed off what they were working on. They were little guys in the flash/solid state business and had come out with this nifty algorithm that would allow flash with really low read/write endurance to perform like today's current SSDs.
They were the first (that I know of) to come up with a way to spread the writes across unused portions of memory so that on average, every bit of memory would have the same amount of wear on it. It wasn't until several years later that I saw on Slashdot that Intel had come up with this "new" idea in their SSDs.
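In code, the idea is something like this (a toy sketch in Python; real controllers do this in firmware, and this is not Anobit's or Intel's actual algorithm):

    # Toy wear-leveling allocator: always write to the free block with
    # the fewest erases, so wear spreads evenly instead of hammering
    # one hot spot. Real controllers also relocate static data.
    class WearLeveler:
        def __init__(self, num_blocks):
            self.erase_counts = [0] * num_blocks
            self.free_blocks = set(range(num_blocks))

        def allocate(self):
            block = min(self.free_blocks, key=lambda b: self.erase_counts[b])
            self.free_blocks.remove(block)
            return block

        def erase(self, block):
            self.erase_counts[block] += 1
            self.free_blocks.add(block)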
Back at the time, the Anobit technology was really cool. But unfortunately, they were prohibitively expensive and we could not use them in our rugged systems.
Seems that they have still been hard at work over there. Very cool. They deserve the success.
Re: (Score:2, Insightful)
So, let's assume what you say is true: is this really a nice business that deserves success? Hard to say.
Obviously if they can do all that is claimed then they "deserve success", though of course that depends on your definition of success. If success means being the richest company in the world showered with personal sex slaves then, no, they really didn't deserve that. If you mean deserve to pay their employees a slightly above average salary for their area and have a slightly above average return for their...
they didn't invent wear leveling (Score:5, Informative)
Wear leveling was normal for NAND long before that.
What kind of n00b are you?
http://www.google.com/patents?vid=6850443 [google.com]
Better ECC (Score:2, Interesting)
It was just a matter of time before someone used a stronger ECC. Currently, each 512-byte sector has an extra 16 bytes for an ECC checksum, which is enough to recover one bit. Given enough space for the checksum, it's possible to recover as much data as needed. There are a lot of implementations in hardware. Every wireless tech designed in the last 20 years uses one; typically the amount of extra data is in the range 1/6 to 1/2. Hard drives certainly implement better ECC too.
Now the problem is where to place the extra checksums in...
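To make the bits-for-reliability trade concrete, here is a toy Hamming(7,4) code in Python: 3 parity bits protect 4 data bits and correct any single flipped bit. Real drives use far stronger codes (BCH, LDPC, Reed-Solomon), but the principle is the same.

    def hamming74_encode(d):                  # d = [d1, d2, d3, d4]
        p1 = d[0] ^ d[1] ^ d[3]
        p2 = d[0] ^ d[2] ^ d[3]
        p4 = d[1] ^ d[2] ^ d[3]
        return [p1, p2, d[0], p4, d[1], d[2], d[3]]   # positions 1..7

    def hamming74_decode(c):
        s = 0                                 # syndrome = position of the bad bit
        for weight, positions in ((1, (1, 3, 5, 7)),
                                  (2, (2, 3, 6, 7)),
                                  (4, (4, 5, 6, 7))):
            if sum(c[p - 1] for p in positions) % 2:
                s += weight
        if s:                                 # nonzero syndrome: flip that bit
            c[s - 1] ^= 1
        return [c[2], c[4], c[5], c[6]]       # recovered data bits

    word = hamming74_encode([1, 0, 1, 1])
    word[5] ^= 1                              # simulate a flash bit error
    assert hamming74_decode(word) == [1, 0, 1, 1]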
Re: (Score:2)
Extra ECC data and fancy controller trickery can't get around the fact that the write limit is a limit of the underlying flash, not the controller...
engineers lacking vision (Score:3, Insightful)
Extra ECC data and fancy controller trickery can't get around the fact that the write limit is a limit of the underlying flash, not the controller...
Extra ECC data and fancy controller trickery can't get around the fact that the magnetic media density limit is a limit of the underlying magnetic domains, not the controller...
No wait! Then they invented PRML. Turns out the underlying limit was actually due to engineers lacking vision. All they needed was a new analytic frame of reference. The same deal has happened over and over again with RF spectrum. One man's noise is another man's signal. I just don't know the RF world well enough to cite examples...
Re: (Score:3, Interesting)
I don't know the details of Anobit's technology, but it sure sounds like they are, essentially, adding Forward Error Correction [wikipedia.org] to the written data. Thus, even if the data you get back is a little garbled, you can detect how garbled it is and recover the original signal if it's not TOO garbled. You lose some percentage of your capacity, but, like RAID, you can use more, cheaper parts to provide the same effective capacity at a lower cost.
It sounds like a clever and retroactively obvious thing to do -- I wonder if the...
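For what it's worth, the capacity-for-reliability trade reads like RAID-4 parity in miniature (a sketch of the general idea only; there's no evidence this is what Anobit actually does):

    # XOR parity across N flash chips: lose any one chip's sector and
    # XOR the survivors with the parity to rebuild it, at the cost of
    # 1/N of the raw capacity.
    def make_parity(stripes):
        parity = bytes(len(stripes[0]))
        for s in stripes:
            parity = bytes(a ^ b for a, b in zip(parity, s))
        return parity

    def rebuild(stripes, parity, lost):
        survivors = [s for i, s in enumerate(stripes) if i != lost]
        return make_parity(survivors + [parity])

    chips = [b'AAAA', b'BBBB', b'CCCC']
    p = make_parity(chips)
    assert rebuild(chips, p, 1) == b'BBBB'    # chip 1 recovered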
Re: (Score:3, Insightful)
If this is really astroturfing, they just shot themselves in the foot. I mean, the message I just got is that this new technology of theirs will be "prohibitively expensive"...
Re: (Score:1, Informative)
I can't give exact figures, but for a little while I was involved with a 1M-hour MTBF test on STEC SSDs from a major OEM. The consensus from the OEM's engineers was that so far it had been impressive. That was a while back, though.
Re: (Score:3, Informative)
Not really: their technology is used to make MLC as robust as SLC, so if it costs more than SLC, it's useless.
Re: (Score:3, Interesting)
Really, they should be developing this tech for use with SLC drives. If it can make an MLC perform like an SLC, imagine what it would do for the already-faster-and-longer-lasting SLC drives.
Re: (Score:2)
There are open source processors?
Re: (Score:2)
Yes.
Re: (Score:1, Funny)
What jew talkin' 'bout, Willis?!
Big Deal. (Score:2)
Call me when it's 75% cheaper than other "solutions".
Re: (Score:2, Interesting)
Actually, I'd love something with any of the following:
1: Noticeably better price, but without sacrificing reliability. An average HDD in the enterprise has 1 million hours MTBF with constant reads/writes. An SSD should be similar, or perhaps a lot more, because there are no moving parts.
2: An archival-grade SSD that can hold data for hundreds, if not thousands of years before so many electrons escape the cells as to make a 1 or a zero impossible to tell apart. I don't know any media that can last for more than 10 years reliably. Yes, maybe a CD-R or two may last that long, but it is more of a matter of luck than anything else.
Re: (Score:2)
3: SSDs using a different port than SATA. Perhaps have it interface as a direct PCI-E device with a custom bus to add more SSD capacity in a similar form factor to RAM DIMMs.
Seriously...? [newegg.com]
Re: (Score:3, Informative)
It's a tradeoff. Reliability needs redundancy, and redundancy costs money. So either take the financial hit, or wait until the reliable devices get cheap enough.
Re: (Score:2)
I just wish tape were more available in a home-use price range for archiving the increasing amount of family data.
Re: (Score:2)
3: SSDs using a different port than SATA. Perhaps have it interface as a direct PCI-E device with a custom bus to add more SSD capacity in a similar form factor to RAM DIMMs.
Yes, I want SSDs that can replace CD readers in my older laptops (just slide out the whole thing), and/or SSDs that I can plug into the usually unused miniPCI port of my older laptop. None existed last time I looked.
A standardized full disk encryption format. This way, I insert a flash disk into my camera or phone...
Yes, with an easy way to enter the password on keyboard-less devices, so I won't be afraid to pass through customs with an mp3 player.
Re: (Score:2)
2: An archival-grade SSD that can hold data for hundreds, if not thousands of years before so many electrons escape the cells as to make a 1 or a zero impossible to tell apart. I don't know any media that can last for more than 10 years reliably. Yes, maybe a CD-R or two may last that long, but it is more of a matter of luck than anything else.
Meh. Copy it off and back on every five years.
The main problem with long-term data on SSD is charge leakage. That does not cause mechanical wear (unlike lots of writes). If you archive data to an SSD and then periodically re-write it, it's perfectly fresh again. Doing so will give you decades of safe storage without ever getting near the write limits. And doing so will not take much time, due to the inherent speed of the media, and will get both faster and cheaper "for free" as the systems improve over time...
Re: (Score:2)
Sounds like something that snake-oil Gibson was pushing some years ago: a program to strengthen the magnetic pattern on the HDD.
Re: (Score:2)
I don't know any media that can last for more than 10 years reliably.
Acid-free paper does. I have a book at home that was printed in 1886.
Re: (Score:2)
I should have stated computer media, because a quality book in a decent environment can last centuries, perhaps more as archival and preservation technologies improve.
Digital media doesn't fare as well. Paper tape swells and gets misaligned. Punch cards can get put out of order and don't have the density to handle modern storage. Magnetic domains on tape drives get scrambled. CDs and DVDs suffer from oxidation of the dye layer. Photos fade [1]. Hard disks get mechanical issues such as bearing failure...
Re: (Score:2)
I think the way to keep an archive for life would be to continuously back it up before the original media degrades. I have a lot of CDs that are copies of CDs that are no longer readable. That's the beauty of digital media; copies are identical.
I'm not too sure about photographic storage, as film is easily scratched and can degrade in other ways as well. Better to back up early and often.
Re: (Score:2)
I bet you this will be standard on MacBook Pros within the next five years.
Re: (Score:2)
Other than the fact of upgradability/expandability, I wouldn't mind that. If the Flash drive were on a mPCIe card, or perhaps even a superfast MicroSD card, that would be a nice compromise between space and ability to get a larger disk.
Re: (Score:2)
Haven't you been paying attention to Apple's mobile products? Size and battery life trump everything, up to and including expandability and serviceability. If they can shrink the MacBook by ten millimeters with a motherboard-integrated Flash drive, they're going to do it. Hell, if you open up a MacBook, the CD drive takes up the most space, followed by the hard drive; until Apple pulls an Apple (circa 1999) and removes the CD like they...
Re: (Score:2)
Apple has already done that with the MBAir. I do think that the rest of the MB line will go exclusively flash once there are motherboard based SSDs that have 250GB or more.
RAID (Score:2)
From the description (and a lot of guesswork), it sounds a bit like they might have put in a basic RAID system, but using separate memory chips instead of drives. In terms of price vs performance/capacity, RAID has been a good solution, so this might well make sense, IF they don't try to make it out to be some black box filled with magical gold dust, rather than a simple application of existing tech in a new area.
Re: (Score:2)
Depends on what you consider cheap.... Are you talking about dollars flying out of your wallet? It seems to me that if you go with the other solutions, fewer dollars will fly much more often.
Let's just say:
Standard SSD sustains 3,000 cycles and costs $100
Anobit SSD sustains 50,000 cycles and costs $500
With the same usage, you will have gone through 16+ standard SSD drives before your Anobit SSD fails. So for 5 times the cost, you get 16 times the usage.
If we break that down to cost per write cycle, the value...
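Running the same made-up numbers in Python:

    standard = 100 / 3000      # ~$0.033 per rated write cycle
    anobit = 500 / 50000       # $0.010 per rated write cycle
    print(standard / anobit)   # ~3.3x cheaper per cycle, despite the 5x sticker price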
Old adage (slightly screwed up) (Score:5, Funny)
Re: (Score:3, Insightful)
It also often costs more and is less upgradable, though. These days Linux's software RAID, for example, beats out hardware RAID in a lot of ways (except on the high end).
Re: (Score:2)
Because dedicated circuitry is more stable and requires less computing overhead?
What about RISC?
Price is the biggest issue (Score:5, Informative)
With enterprise SSDs (SLC) still in the $100/GB range, we're far away from general acceptance in the datacenter. MLC also has the problem of being slow to write to vs. SLC, which is one of the important metrics when considering SSDs to accelerate your classic spindles. SLCs are reliable enough to last for at least 3 years even fully loaded at 3 or 6 Gbps.
I used some Intel X25-M and X25-E drives in my environment, as they are affordable and generally get the highest scores in IOPS and throughput as read and write caches respectively, and the performance is way under my expectations. The Intel X25-Es don't work well under heavy loads on LSI controllers (they throw errors and SCSI bus resets) while the X25-Ms do work fine. Every other month there is fresh firmware to fix some problem or another, and firmware updating is manual labor with a boot CD, not something you can simply schedule at night or do while the system is online, so they are what I would call beta quality. Especially once fully filled, the IOPS performance drops like a brick from ~3000 IOPS to ~1000 IOPS, which a small set of hard drives can fulfill, so the only good thing it's left for is latency.
We'll see what the Vertex 2 EX brings (SandForce 1500 controller), which has an advertised 50k IOPS, although that might be more marketing than anything. I'm still waiting on a decently priced SAS SSD that can actually sustain 5,000-10,000 IOPS by itself even when fully loaded.
Re: (Score:3, Informative)
Isn't it more like $10/GB?
Re:Price is the biggest issue (Score:5, Informative)
Pliant Technologies LS300S: $10,631/300GB = $35/GB (that's one of the cheaper ones). STEC Zeus (high IOPS): $16,911/18GB = $939/GB.
Re: (Score:3, Interesting)
Every other month there is fresh firmware to fix some problem or another, and firmware updating is manual labor with a boot CD, not something you can simply schedule at night or do while the system is online, so they are what I would call beta quality.
Why can't firmware be upgraded on SSD drives thusly:
Reserve x MB that are always labeled as bad blocks. The firmware updater writes the new image there (this can be scripted, since writes to these bad blocks are just a dd to a specific place), the controller checks a signature, and if it passes, it halts all reads and writes while it upgrades the firmware.
Then when it completes, all reads and writes resume. ;) Yes, I know that can be disastrous, but it seems like a good way to live-update.
Re:Price is the biggest issue (Score:5, Interesting)
Reserve x MB that are always labeled as bad blocks. The firmware updater writes the new image there (this can be scripted, since writes to these bad blocks are just a dd to a specific place), the controller checks a signature, and if it passes, it halts all reads and writes while it upgrades the firmware.
Then when it completes, all reads and writes resume. ;) Yes, I know that can be disastrous, but it seems like a good way to live-update.
Several years ago, I wrote an ATA drive firmware flash driver and utility to allow my company's customers to upgrade firmware in the field. Let me explain how drive firmware flashing works.
Most/all modern drives (or at least Enterprise versions) support the ATA DOWNLOAD_MICROCODE command. The flash chips on the electronics board (or reserved sectors on the platters, depending on the implementation) have sufficient capacity to hold the running firmware, and to hold the new version. The new version is buffered in the drive, validated, then written to the chips/spindle, validated again, then activated and the drive reset.
Modulo some minor drive-specific quirks, the DOWNLOAD_MICROCODE command works as specified. Other than adding model strings to the utility's whitelist, the Intel X25-Es worked without issue. While we've always recommended performing the flash from single-user mode and immediately rebooting, I've done it during normal operations plenty of times. The main things are to remember to quiesce the channel before doing the flash, and to properly reinitialize it afterwards.
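In rough pseudocode, the sequence looks like this (Python-flavored; the helper names are hypothetical stand-ins, not a real driver API):

    def flash_drive_firmware(channel, drive, image):
        # hypothetical validation: refuse an image for the wrong model
        if not image_matches(image, drive.model):
            raise ValueError("firmware image does not match drive model")
        channel.quiesce()                # stop issuing new I/O
        try:
            # ATA DOWNLOAD_MICROCODE: the drive buffers the image,
            # validates it, commits it, then activates and resets.
            drive.download_microcode(image)
        finally:
            channel.reinitialize()       # bring the channel back up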
Posting anonymously because I'm revealing details about my job.
Re: (Score:2)
It gets interesting if the drive is behind a RAID controller. We just did that and it took a while to get it right and work around the bugs in pass through mode.
Re: (Score:2)
Does your environment support TRIM natively? Just curious.
My environment does not, and after a week or two I start to notice performance going south and remember to run the "optimization" utility Intel offers. This is on an X25-M G2.
As an aside, I've noticed that your average Dell workstation cannot support two X25s.
Re: (Score:2)
TRIM doesn't work when your drive is actually filled 100%. I use it as cache, not as a data carrier. Even so, in the datacenter, drives are frequently filled to such a capacity that even TRIM won't do much, and TRIM only works when you know which blocks are supposed to be empty -- something a lot of data carriers in the datacenter (e.g. RAID controllers, iSCSI targets, ...) don't know.
Re: (Score:2)
Especially once fully filled, the IOPS performance drops like a brick from ~3000 IOPS to ~1000 IOPS, which a small set of hard drives can fulfill, so the only good thing it's left for is latency.
What about noise, heat, and energy usage?
Re: (Score:2)
One thing you need to be careful about with the Intel SSDs is that they have some serious firmware bugs with their SMART implementation. Issuing a SMART command while the controller is busy with other non-SMART commands can brick the SSD and require a full reset or power cycle to fix.
If you are getting bus errors on your controllers and not issuing SMART commands, then it probably isn't the SSD's fault.
In any case, SSDs have plenty enough going for them to warrant the significantly increased cost per GB of storage...
Re: (Score:2)
Why don't you grab a PCIe SSD? An ioDrive or something? Those can score 150k IOPS in real-world tests, for only a couple thousand dollars. If IOPS matter more than capacity, they deliver.
I just glanced at the specs, but Sandforce? (Score:4, Insightful)
How is this different/better than the sandforce controllers we already have?
Re: (Score:2)
They invented Algebra.
And 0.
So you use one of their products every second of your life.
Re: (Score:3, Funny)
I'm sure Algebra and 0 were invented sometime before 1948.
Re: (Score:2)
Israel is a bit older than that.
I mean, they talk about it in the Old Testament, a 3000+ year-old set of books.
If anything (Score:5, Interesting)
I suspect this will eventually bring down the manufacturing costs of enterprise-class drives, rather than making consumer drives "more reliable". I think reliability concerns with current consumer-oriented MLC designs are overstated.
Anecdotally, my Intel 160GB G2 drive is going on 7 months of usage as the primary drive in a daily-used Win7-64 box, and has averaged about 6GB per day of writes over that period (according to Intel's SSD Toolbox utility). Given that rate of use over a sustained period (which theoretically means it could last decades, assuming some as-yet-undiscovered manufacturing defect doesn't cut it short), combined with the fact that even when SSDs fail, they do so gracefully on the next write operation, I just don't see the need for consumer-oriented drives to sport such fancy reliability tricks.
Re: (Score:2)
This is called write amplification and it depends on many factors: linearity of writes by the computer, how often the computer tells the SSD to flush dirty data to media, the size of the SSD's RAM cache, the ability of the SSD to write-combine or scatter/gather sectors, the wear-leveling algorithm used by the SSD, and a few other factors.
MLC flash uses 128K erase blocks. If a database or log is flushing every 1K, you wind up with a 128:1 write amplification effect, for example. With some tuning (for example, flushing...
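The arithmetic behind that 128:1 figure, as a quick sketch:

    erase_block = 128 * 1024          # 128K MLC erase block
    flush = 1 * 1024                  # 1K database/log flush
    print(erase_block / flush)        # 128.0 -- each 1K flush rewrites a 128K block
    print(erase_block / (64 * 1024))  # 2.0 -- coalescing to 64K flushes cuts it to 2:1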
Re: (Score:3, Interesting)
I don't get it. Is that 2TB/day per 64GB of storage? (Approx 40 total rewrites of your entire storage capacity per day?) Or 2TB/day spread across a much larger storage capacity? I would guess the latter, in which case the writes would be spread across a large number of drives and less intensive on each drive.
Re: (Score:3, Interesting)
IANADBA, but something like the redo log volumes doesn't exactly tax a mechanical disk, being mostly sequential reads and writes, and so would be a reasonable candidate to leave on HDD. Even a cheap-as-chips 5400rpm laptop drive could sustain 23MB/s (2TB/day) sequentially without breaking a sweat.
However, using the SAME (Stripe And Mirror Everything) principle, spreading all load across multiple mirrored SSDs should provide both the speed and endurance capacity you would need, with the great random performance...
Re: (Score:2)
If you are on the extreme end, then platter failures are quite common.
Re: (Score:1)
Makes sense... it will make it less expensive to manufacture reliable enterprise drives.
New enterprise SSDs can be MLCs using this technology; they may be higher capacity, or provide more profit to the SSD part manufacturers, but they will be just as expensive. Enterprises pay for reliability that meets the requirements of their market.
The consumer market has a lower level of reliability... consumers aren't willing to pay as much for reliability, so reliability will be lower.
You can't provide greater reliability...
Signal to noise ratio in FLASH MEMORY? (Score:2, Interesting)
How can a solid state drive have a "signal to noise ratio"?
It's all digital. Either the voltages are within their valid thresholds or they are not.
Wouldn't you need the world's fastest DSP to "clean up" noisy digital signals and still maintain the type of transfer rates they claim?
There is nothing about this breakthrough that makes any sense. Snake oil?
Re:Signal to noise ratio in FLASH MEMORY? (Score:5, Informative)
Say you're talking about a 4-level MLC cell, and say it runs at 3.3V. If the voltage is in [0V, 0.825V), that's 00b; [0.825V, 1.65V) is 01b; [1.65V, 2.475V) is 10b; and [2.475V, 3.3V] is 11b. But those are analog voltages - the controller has to read the voltage, do an analog-to-digital conversion, and figure out which level it corresponds to. The ranges listed above assume perfect discrimination; in most cases it's difficult to differentiate small differences, so they don't use the full range. With better A-to-D conversion and signal processing, they can resolve the differences better, which in turn lets them get more write cycles.
Those numbers are pulled out of the air for illustrative purposes; I have no idea what the real values are.
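The same illustrative read, as code (using the made-up voltages above):

    VDD = 3.3
    LEVELS = 4

    def read_cell(voltage):
        step = VDD / LEVELS                  # 0.825 V per level
        level = min(int(voltage / step), LEVELS - 1)
        return format(level, '02b')          # '00', '01', '10' or '11'

    assert read_cell(0.5) == '00'
    assert read_cell(2.0) == '10'
    assert read_cell(3.1) == '11'

As cells age and leak charge, the stored voltage drifts toward a threshold; better discrimination around those boundaries is exactly where the extra write cycles would come from.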
Re: (Score:2)
I'm pretty sure flash chips use analog voltage comparators internally, not A/Ds. Though, theoretically, it would be possible to mess with the thresholds for the comparators, so if a block had excessive bit errors the thresholds could be manipulated and the block re-read to determine which bits are the most likely culprits. With that information in hand, further error correction could be done.
That is, normally ECC is calculated without any knowledge of which of the N bits of data might be erroneous. If...
Re: (Score:2)
I've never seen a Flash chip with an analog interface. Citation needed.
Re: (Score:2)
the controller has to read the voltage, do an analog-to-digital conversion
If there are only 4 levels then it makes much more sense to use comparators. The number of transistors required would be greatly reduced and the latency almost eliminated. Should one require the flexibility of adjusting the reference voltage, one could use a D-to-A as the reference. D-to-A circuitry is much simpler and faster than A-to-D circuitry.
Re: (Score:2)
If there are only 4 levels then it makes much more sense to use comparators.
A comparator is a 1-bit A-to-D converter; three comparators make a 2-bit flash A-to-D converter (one per threshold between the four levels).
Internally, something like this is already done. During writes, a reference cell or cells are written, which are used during reads to adjust or generate the reference...
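The flash-ADC view of the same read, using the grandparent's illustrative thresholds:

    THRESHOLDS = (0.825, 1.65, 2.475)    # three comparators for four levels

    def read_cell_comparators(voltage):
        thermometer = [voltage >= t for t in THRESHOLDS]   # e.g. [True, True, False]
        return format(sum(thermometer), '02b')

    assert read_cell_comparators(2.0) == '10'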
Re: (Score:3, Insightful)
It's all digital.
Actually, once you get far down enough, nothing is :)
Re: (Score:1, Insightful)
Once you get far down enough, everything is. Consider Planck time: it's the smallest quantum of time for which there can be "a difference that makes a difference".
Old technology (Score:2)
We already did something very similar to this on the BAIL backup subsystem of the Cassini spacecraft many years ago, and it didn't require a "special" processor.
New trend (Score:3, Funny)
The SSD will have a more powerful CPU than the computer... All it will need is a graphics and audio chip, more RAM and... oh... never mind...
Oh sure (Score:1)
It's expensive (Score:1)
The company is not revealing pricing yet."
They are competing on reliability, so it makes sense the price would be higher.
The fact that they are not advertising the price strongly suggests they do not intend to compete on price, and that the price will be high.
Marketing rule #1 is shove all the positive aspects of your product in the customer's face.
Don't talk about the negatives or the disadvantages, if you can avoid it.
In this case the product's not out yet, so they can avoid talking about the high price...
Great tech, but MLC still remains bad news. (Score:3, Insightful)
So we can have 50,000 instead of 3,000 rewrite cycles. That's great. However, I still like the 100,000 to 1,000,000 rewrite cycles of SLC. Actually, SLC is only 50% more expensive to manufacture (per bit) than 2-bit-per-cell MLC; I really don't understand why manufacturers are so enamoured with MLC.
Re: (Score:2)
Because price is the most important factor here? Reliability has gotten good enough; what needs to happen now is a sharp reduction in price.
Re: (Score:2)
As I said, SLC is only 50% more expensive, per bit, than 4-level (2 bit/cell) MLC. That hardly amounts to a "sharp" decrease in price.
Re: (Score:1, Informative)
If I'm buying 100,000 parts, SLC costs 5x more (per bit) than MLC at present. I'm pretty certain the reason is supply: there are factories churning out an ungodly amount of MLC for use in memory cards, thumbdrives, MP3 players, etc., but SLC really only finds use in the embedded space (where I've used it) and in enterprise SSDs.
MLC isn't *that* bad - the reliability issues you'll find with it are bit errors, not entire lost blocks of data. Add an extra level of error protection and plenty of spare area to handle...
This article is IMPOSSIBLE to decode (Score:5, Insightful)
This sounds absolutely no different from how all wear-leveled, error-correcting flash controllers work. They all use multiple levels of ECC to decrease the error rate. The "signal processing" they're doing doesn't sound like anything new.
If there is something new going on here, it's absolutely impossible to decode from the layman's language used in the article. All I hear is "Other vendors use X bits for ECC. We use Y bits and we do it in software instead of hardware.", which is basically just another way of saying "Other vendors have 4 blades, we have 5 blades."
Re: (Score:2)
You just don't read marketese. Are you an engineer or what? If so, this is not for you.
Re: (Score:2)
If there is something new going on here, it's absolutely impossible to decode from the layman's language used in the article. All I hear is "Other vendors use X bits for ECC. We use Y bits and we do it in software instead of hardware.", which is basically just another way of saying "Other vendors have 4 blades, we have 5 blades."
Well, as you can see, their dials go to eleven!
SSD? (Score:1)
That's a shame... I thought they had developed a Super Star Destroyer. Nothing to see here... move along.
How many write cycles SLC/MLC? (Score:2)
The article says this new technology boosts the number of write cycles from 3,000 to 50,000. Sounds good, but then again, SLC flash in 1991 supported 1 million writes and MLC 100,000 writes. Later consumer-grade MLC flash claimed to handle 10,000 writes, Micron is selling MLC flash that supports 30,000 writes, and I recall AMD having MLCs with 100,000 writes. Maybe the 3,000-write MLC is the high-density, as-cheap-as-possible kind of flash, and this new Israeli technology works on that. But unless it is cheaper...
"In a big way" (Score:2)
Do the geek proud and make a bit of an effort when writing. After all, the typical geek reads more than Joe Average (well, "he" claims so, and I personally do anyway) and hence trains his brain to appreciate well-formed sentences.
Besides, there are so many alternatives to "in a big way".
Re: (Score:2)
They should have used "up to X% better" or "all new" to make it clear that this is marketing BS.
Re: (Score:2)
After all, the typical geek reads more than Joe Average
Then why do so many Slashdotters spell "lose" with two Os, even though both can be verbs and have completely different meanings? Or don't know when and when not to use an apostrophe, or can't tell their from there, etc.? For some of them English is a second language, but most of them write as if they've never read a book before. Even someone who never reads anything but pulp fiction would do better than that.
Be careful when travelling to Israel. (Score:1, Offtopic)
special flash chips required (Score:1)
Variations due to process (the cell is smaller or larger than intended) only need to be calibrated once. Variations due to environment...