Top Solid State Disks and TB Drives Reviewed
Lucas123 writes "Computerworld has reviewed six of the latest drives, including 32GB and 64GB solid state disks, a low-energy-consumption 'green' drive and several terabyte-size drives. With the exception of capacity, the solid state disks appear to beat spinning disks in every category, from CPU utilization and energy consumption to read/write speeds. The Samsung SSD was the most impressive, with a read speed of 100MB/sec and a write speed of 80MB/sec, compared to an average 59MB/sec read and 60MB/sec write speed for a traditional hard drive."
Longevity of NAND flash (Score:3, Insightful)
Re:Longevity of NAND flash (Score:4, Insightful)
Re:Longevity of NAND flash (Score:4, Insightful)
Do traditional drives fail if the same sector is written to over and over again as well?
Then don't fill the drive (Score:5, Insightful)
Re: (Score:3, Funny)
Ya, because THAT is realistic in the real world...
Re:Longevity of NAND flash (Score:4, Informative)
No, but they'll fail over time regardless of whether you are writing or just reading, simply because the drive is moving. Even if you cool your standard drive, it could eventually fail just from being left on for 10 years (since an active drive is constantly spinning).
Now it's not guaranteed to fail, but the chances of failure for a standard HDD that you only read from and never write to are far greater than for an SSD that you load files onto once and never write to again.
I think SSDs shine for archival uses: data you don't plan on trashing and rewriting that often, such as image collections, movies, and MP3s. That said, swap space, scratch disks, and cache directories would logically still have better performance on your spinning-platter drives, and if that drive goes belly up you haven't lost much.
Re: (Score:2)
Archival storage doesn't need high speed or good seek times.
Normal hard drives have plenty of speed for archiving; they'd be spun down most of the time anyway (no wear and tear), and they provide what SSDs cannot: capacity.
Having 64 gigs of data archived is great and all, but at home I have 900 gigs of archive data. Small problem, don't you think?
Re:Longevity of NAND flash (Score:4, Insightful)
It is a relevant question, but this wouldn't kill your hard drive; it would simply reduce the amount of free disk space. And it's not difficult to imagine a file system smart enough to move files around when this happens. When a sector has been written too many times, it can simply find a really old, rarely changed file and move it onto that sector, freeing up one of the rarely used (low-wear) sectors of the drive. With the increased performance of SSDs, you probably wouldn't even notice it.
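Here's a toy sketch (in Python) of that relocation idea; the class name, block counts and the migrate_gap threshold are all invented for illustration, not anything from the article:

# Toy model: when a free block is far more worn than the least-worn occupied
# block, park that cold data on the worn block so the low-wear block it
# occupied rejoins the free pool for future heavy writing.
class WearLeveler:
    def __init__(self, num_blocks, migrate_gap=1000):
        self.erase_count = [0] * num_blocks   # writes/erases each block has seen
        self.data = [None] * num_blocks       # None means the block is free
        self.migrate_gap = migrate_gap        # max tolerated wear imbalance

    def _free(self):
        return [b for b, d in enumerate(self.data) if d is None]

    def write(self, payload):
        # Assumes at least one free block; a real FS would garbage-collect first.
        target = min(self._free(), key=lambda b: self.erase_count[b])
        self.erase_count[target] += 1
        self.data[target] = payload

        used = [b for b, d in enumerate(self.data) if d is not None]
        free = self._free()
        if not free:
            return
        hot = max(free, key=lambda b: self.erase_count[b])
        cold = min(used, key=lambda b: self.erase_count[b])
        if self.erase_count[hot] - self.erase_count[cold] > self.migrate_gap:
            # Move the cold data onto the worn block; free its low-wear block.
            self.erase_count[hot] += 1
            self.data[hot], self.data[cold] = self.data[cold], None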
Aside from the re-write issue, flash memory drives should be WAY more reliable than a mechanical HD. It should never just completely die or start getting bad sectors so fast you don't have time to retrieve your data. It should also be a lot easier to replace when it starts to degrade. It shouldn't be as susceptible to damage when you drop it from a height of 3-5 feet, or due to heat, cold, vibration, dust, humidity, etc. I'm not sure whether a magnetic field could erase it like a hard drive, but if not, that's another plus for SSDs. I imagine SSDs are more susceptible to static electricity, but so is almost everything else plugged into your motherboard, so I'm not sure if that could be considered a minus.
I'm sure if you ever tried an SSD on a laptop, you'd never want to go back to an old HD. The improved performance and battery life would make going back to an old laptop HD seem like going from broadband back to an old 56K modem.
Re:Longevity of NAND flash (Score:5, Informative)
How many heavily used spinning drives do you know that last even 10+ years?
Re:Longevity of NAND flash (Score:5, Informative)
Re: (Score:2, Informative)
Re:Longevity of NAND flash (Score:5, Interesting)
Some friends of mine at another company were using them in an I/O-laden system; they wanted to replace the laptop drives to make the machines lower power and more reliable, and they can blow out a flash drive in about 4 weeks.
Kirby
Re: (Score:2)
Re:Longevity of NAND flash (Score:5, Interesting)
Yes I have. However, I've never had one magically get smaller on me in such a way that fsck decides you're done fixing the filesystem. With an SSD, YES, I've had exactly that happen to me.
In my life, I've had a total of about 42KB become completely unrecoverable from spinning media (yes, I mean that number literally). I use RAID extensively; I was the DBA/SA/developer at a place that had ~10TB of disk online for 5 years. In all that time, 42KB is all I lost. Oh, and that was in the off-line, tertiary backup of the production database (it was one of 5 copies that could be used as a starting point for recovery; we also had the redo logs for 5 days, and each DB was a snapshot from one of the previous 5 days). It was stored on bleeding-edge IDE drives in a RAID 5 array. We used it as a cheap staging area before pushing the data over FireWire/USB to a removable drive that an officer of the company took home as part of the disaster recovery system (it held only the most recent DB and redo logs). The guy didn't RMA the hot spare, and we had two drives fail in about 3 days while the hot spare was waiting for the RMA paperwork to be filled out. In that one particular case, using ddrescue, I recovered all of the data off of the RAID 5 array but 42KB (even though it was an ext3 filesystem on LVM, on a RAID 5 array, which made the recovery even more complex).

Every other bit and byte of data in my life from spinning media that I cared about, I've recovered (I've had a number of drives die with data I didn't care about, but that I could have recovered if need be). Trust me, I know about reliability, backups, and how to manage media to ensure that failure doesn't happen. I know about the failure modes of drives. I've hot-swapped my fair share of drives and done the RMA paperwork. I've been in charge of drives where losing any one of the ~200 would have cost 10 times as much as I made in a year if I couldn't reproduce the data on it within hours.
If it had been worth $10K, I'd have sent off the drive to get that 42KB of data recovered. But it wasn't. The failure modes of spinning media are well understood. People know exactly how to do things like erase drives securely. People know who to call that has a clean room and can remove the magnetic media and put it under a microscope to get the data recovered. SSDs aren't nearly as mature in that sense.
All of that is really to say: yes, I know something about disks and drives. My point is that SSDs aren't magic pixie dust in terms of reliability. I've had exactly what he's saying I shouldn't worry about happen to me on a regular basis. Enough that our engineering department has developed specific procedures to deal with them in the field. We've changed our release procedures to account for them. If you're going to use an SSD or flash drive, go kick the crap out of it. Don't believe on faith anything you read on Slashdot (including this post, which is anecdotal). We order flash disks in lots of 5,000, and you can bet that at least 100 of them show serious flaws shortly after being fielded. The ones the developers and testers use regularly develop problems in terms of months, not years. The manufacturer essentially tells us it's not worth it to find those, so deal with it.
The whole point of replacing the laptop drive was to make the silly thing more reliable. But making it uber-reliable for 4 weeks until the write leveling crapped out wasn't the idea.
Kirby
Re: (Score:2, Insightful)
Re: (Score:3, Interesting)
I believe what Kirby was saying, in addition to SSDs crapping out in weeks instead of years, is that he can get the data back from rotating media virtually every time if it's important enough to be worth spending the $$$s on. Unimportant stuff he d
Re:Longevity of NAND flash (Score:5, Interesting)
When I was young and stupid about drives and media, I lost a 1.2GB WD drive and lost everything on it. I couldn't spell "mkfs" or "fsck" and had no idea how to recover the drive at the time (I also didn't have the money for a second drive to recover to, and no credit card, so I couldn't hold onto the first while having the second during the RMA). I was just young and ignorant. I lost a 1-2GB laptop drive that I literally just rode into the ground; I could have copied everything off and moved along. I knew the drive was going bad, but it was just a knock-around system that I didn't care about. In the end, had I been thinking, I'd have saved the e-mail on it. I lost the first ~5-6 years of e-mail I had, but who wants e-mail from when they were 18-24? That was probably a couple of hundred MB that I might regret, but of nothing more than sentimental value. I'd never read it, and would only be amused that I could prove I'm getting the same chain letters 15 years later.
I believe I had 4-5 drives I lost due to a virus or pilot error, but not a mechanical/media problem.
I've RMA'ed probably 100-200 drives due to some type of failure. I've had lots of drives fail in a RAID array where the mirror saved me. I've had lots of stand-alone drives fail with a section of bad sectors. From all of those I recovered every byte of data. Normally you can still recover from a drive that is going bad, at least for a limited time, and you usually have plenty of lead time, especially with SMART monitoring telling you that your drive is going south. As long as you pay attention, spinning media isn't that hard to keep in good shape.
As a professional IT person, 42KB is it. On machines where production work is done for money at a company, 42KB is it, and in that case I was bound and determined to recover absolutely everything, and I invested a week into that project. I gave up on the 42KB once I proved that it was in a backup of the database that was at that point 15 days old (and thus of no use). Had it been necessary or cost-effective, I'd have spent the $1-3K to get that drive image recovered by a professional data recovery shop. I think I've lost a drive or two on my personal machines at work, but the drive was fine; the laptop's controller was overheating. Using fsck, I recovered the entire FS once the controller was replaced. I think I had to re-rip some music from CD, because I failed to back it up prior to sending the laptop in for repair. I re-imaged the drive just to be safe in case the controller had corrupted something important on the OS drive, which was the only reason I actually lost the music.
Again, the issue is that the flash drives we have decided, at the interface level, that they were smaller. Running fsck just scragged the system pretty much start to finish. I don't have a clue where the missing blocks went. I have no idea what happened; upon reboot it decided that the block device was smaller. Filesystem recovery tools haven't had a chance to mature to understand those types of failures. Flash makers haven't yet decided that access to diagnostics and re-mapping logs might be of value to data recovery tools (at least none that I'm aware of), nor access to the raw data (in case they are holding blocks in reserve). All of these things are reasons to be concerned about write leveling.
Kirby
Re:Longevity of NAND flash (Score:4, Insightful)
For the above example, a flash drive works very well. If you need the benefits of flash storage media (vs. spinning media), you should be prepared to engineer around the situation. Run temporary data out of RAM with battery backup, and only commit the data to flash to carry it across reboots and power outages.
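A minimal sketch of that "run from RAM, commit to flash only at checkpoints" approach; the flash_path and the checkpoint policy are placeholders, not anything from the article:

# Working data lives in (battery-backed) RAM; flash only sees rare checkpoints.
import json, os, tempfile

class RamBufferedStore:
    def __init__(self, flash_path="/mnt/flash/state.json"):
        self.flash_path = flash_path
        self.state = {}                      # working copy lives entirely in RAM
        if os.path.exists(flash_path):
            with open(flash_path) as f:
                self.state = json.load(f)

    def put(self, key, value):
        self.state[key] = value              # normal operation: no flash writes, no wear

    def checkpoint(self):
        # Called rarely: clean shutdown, or when the battery/UPS signals trouble.
        # Write-then-rename so a crash mid-checkpoint can't corrupt the old copy.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(self.flash_path))
        with os.fdopen(fd, "w") as f:
            json.dump(self.state, f)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, self.flash_path)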
Re:Longevity of NAND flash (Score:4, Interesting)
I think that makes perfect sense, but then I'd think that all the money they'd spent making the thing perform faster than, say, a 1MB/sec read rate is totally wasted. I'd assume folks are trying to push these as replacements for enterprise server machines, which I'd be extremely reluctant to do.
Folks talk about these things in the theoretical (the original poster linked to a story that crunched numbers to show it should be safe). My question is: does anyone have solid experience they can point to that shows it has actually been safe for, say, 6-18 months under some well-known duty cycle (a database, a file server, an e-mail server)?
I have actual experience, with crappy flash made by a low-end manufacturer, that shows me it's not terribly reliable. It is my understanding that we've had better luck with other makers, but their parts were too expensive (but software development is free *sigh*).
There are other threads in here that make me want to cram a CF-IDE converter into my machines and try putting my journal onto a flash drive. Sounds like the performance boost and power consumption are a big win, but the fact that every byte of data gets pushed to the journal might be an issue. On a home machine, it might be worth playing with for giggles for performance testing.
Other folks I know who have tried to do things with flash have also been disappointed over the past 12-24 months, despite assurances from various experts that "it should work"... I'm looking for "I've done it, here it is, go play with it." Now, obviously MP3 players have been doing it for a while. I'm more interested in general-purpose usage of a flash drive. Those are the types of things I'm currently working on: cramming a flash drive into a machine that runs ext2/ext3/xfs/reiserfs/jfs or some other read/write-heavy FS on it.
Kirby
Re:Longevity of NAND flash (Score:4, Interesting)
Re: (Score:2)
How about when you sell or trade in the car?
You'd wipe the device once, leaving the drive empty for the next person (as well as hundreds of thousands of writes available). I said treat the device as being write once, not actually making it write once. You don't even have to wipe the drive if you're one of those types of non-copyright-obeying folks.
How about when your teenager plugs in their 50GB mp3 player and it gets auto-loaded to the car-audio system and your half full system is now full?
Remember on floppies the little plastic tab that you could slide to write-protect the disk? Think about that, but in software. If you hook up an iPod, the in-dash system should automatically play music from i
Re: (Score:2)
How many heavily used spinning drives do you know that last even 10+ years?
I have at least 15 of them doing that right now. My last employer changed out the SCSI arrays in a couple of PowerVaults in 2005; I picked the drives out of the trash and have been using them in a PowerVault I got off eBay for $25.00. The drives have been spinning for over 10 years now.
I have had only 1 drive fail out of the "untrustworthy" ones I got out of the trash.
SCSI U160 drives are incredibly robust, not like the crap they have
Re: (Score:3, Interesting)
on the other hand i have 4
Swap partition/file (Score:2)
Re: (Score:2)
Re: (Score:3, Insightful)
Re: (Score:2)
Do a web search for "flash wear leveling."
-aRe: (Score:2)
And then do a web search to see how well that works when your SSD is mostly full and the swap space is getting hit hard. Leveling doesn't tend to move static files often, meaning when the SSD is mostly full, only a small part of it is getting continually whacked. And when that part goes out of service, you have an even smaller pool of free space to handle all the activity.
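A rough simulation of that effect (block count and endurance are made-up toy numbers): if leveling never relocates static data, every write lands on the shrinking free area, so a fuller drive hits its first worn-out block much sooner.

import random

def writes_until_first_worn_block(total_blocks=1000, static_blocks=0,
                                  endurance=1000, seed=0):
    rng = random.Random(seed)
    writable = list(range(static_blocks, total_blocks))  # static blocks never move
    wear = [0] * total_blocks
    writes = 0
    while True:
        b = rng.choice(writable)   # naive leveling over the non-static blocks only
        wear[b] += 1
        writes += 1
        if wear[b] >= endurance:
            return writes

for pct_static in (0, 50, 90, 99):
    n_static = pct_static * 10     # 1000 blocks, so 10 blocks per percent
    total = writes_until_first_worn_block(static_blocks=n_static)
    print(f"{pct_static:2d}% static data -> first worn block after {total:,} writes")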
Re:Longevity of NAND flash (Score:5, Interesting)
Re: (Score:3, Informative)
For that matter, noatime is a sensible default for any desktop OS. When was the last time you actually searched for files you hadn't accessed in six months?
With low enough cost (Score:3)
Disk performance is the main roadblock to getting onto the server first, which gives a huge advantage over slower-loading players.
Yes, I am a LPB. Sue* me.
* By "sue" I mean attempt to frag.
Re: (Score:2)
Re: (Score:2)
Troll, Troll, Troll, and Troll
Attorneys at Law
Reliability (Score:5, Insightful)
The no-moving-parts characteristic is, in part, what protects your data longer, since accidentally bumping your laptop won't scramble your stored files. Samsung says the drive can withstand an operating shock of 1,500Gs at 0.5 milliseconds (versus 300Gs at 2 milliseconds for a traditional hard drive). The drive is hardier in one other important way: mean time between failures is rated at over 2 million hours, versus under 500,000 hours for the company's other drives.
Re: (Score:2)
I was just thinking the other day that 300G just wasn't cutting it anymore. I can't count how many times I've thrown my laptop out of the space shuttle and the drive was barely readable after it landed in a concrete parking lot.
Re: (Score:3, Funny)
I've been climbing mountains for some 30 years. I've never thought to bring a hard drive with me. I've dragged around quite a passel of other odd and heavy things, but I appear to be missing something again ...
Hmm (Score:5, Interesting)
Re: (Score:2)
"Samsung says the drive can withstand an operating shock of 1,500Gs at
It gets better (Score:4, Interesting)
Re: (Score:3, Interesting)
I used to hook up the IDE->CF [microcenter.com]. But the next time I do this, I will use this instead (cheaper and does not take up a slot) [microcenter.com]. In addition, absolutely do not use the cheap CF garbage. There is a lot coming out of China and the quality is horrible. If you do use one of the cheap ones and it goes bad, you will at least understand why quality costs. I used SanDisk [microcenter.com]. I bought it at Micro Center since it was close, but I would go with Newegg if ordering off the web (lots cheaper).
As to the
Re: (Score:2)
http://www.earth.org.uk/low-power-laptop.html [earth.org.uk]
Rgds
Damon
Re: (Score:2)
Not necessarily. It really depends on the statistical distribution of the number-of-writes-until-failure of the various blocks (or whatever the unit of failure is) in an SSD. If they're normally distributed, then you'd probably see several blocks fail here or there long before the majority of them had failed.
OTOH, if you or your operating system are
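A quick Monte Carlo check of that distribution argument; the mean, spread and block count below are illustrative numbers, not vendor data.

import random

random.seed(1)
blocks = 100_000
mean, stddev = 100_000, 10_000            # writes-until-failure per block
endurance = sorted(random.gauss(mean, stddev) for _ in range(blocks))

# The earliest failures show up well below the mean endurance.
print(f"weakest block fails after ~{endurance[0]:,.0f} writes")
print(f"0.1% of blocks have failed by ~{endurance[blocks // 1000]:,.0f} writes")
print(f"half the blocks have failed by ~{endurance[blocks // 2]:,.0f} writes")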
Re: (Score:3, Insightful)
Re: (Score:2)
Is it just me? (Score:4, Informative)
Re: (Score:2)
Re: (Score:2)
Interestingly, the Seagate has so much space that "[t]he odds are excellent that Windows will never again tell you that you're running low on hard disk space with this 1TB drive, and that alone might be worth the price of admission", while the equally-sized Hitachi "doesn't boast efficiency, but its slightly lower platter
Number of writes? (Score:3, Interesting)
Why is the ultimate number of writes never taken into account in these comparison reviews? Why are solid state drives tested so that their weaknesses are not probed?
Re:Number of writes? (Score:5, Informative)
If you want, buy an HDD and a flash drive of the same cost, hook them up to a program that runs each at equal data-transfer rates, and see how much data you can read and write to each before they fail. Report back to us in the six months it'll take you.
Oh, and you need to do the trial over a wide sample, so get, oh, at least ten of each.
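If someone actually wants to run that experiment, the write-hammering half might look roughly like this. DESTRUCTIVE sketch only: /dev/sdX is a placeholder, it needs root, and the 60MB/sec target rate is just an assumed figure.

import os, time

def hammer(path="/dev/sdX", block=1 << 20, rate_mb_s=60):
    buf = os.urandom(block)
    fd = os.open(path, os.O_WRONLY)
    size = os.lseek(fd, 0, os.SEEK_END)       # device size in bytes
    os.lseek(fd, 0, os.SEEK_SET)
    written = 0
    try:
        while True:
            t0 = time.time()
            try:
                if os.lseek(fd, 0, os.SEEK_CUR) + block > size:
                    os.lseek(fd, 0, os.SEEK_SET)   # wrap around and overwrite again
                os.write(fd, buf)
                os.fsync(fd)
            except OSError as e:
                print(f"failed after {written / 1e9:.1f} GB written: {e}")
                return written
            written += block
            # Throttle so both the HDD and the SSD see the same workload.
            delay = block / (rate_mb_s * 1e6) - (time.time() - t0)
            if delay > 0:
                time.sleep(delay)
    finally:
        os.close(fd)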
Bayesian or Monte Carlo? (Score:2)
Because it's a measure best reflected by Bayesian data, and they don't have enough time to test them.
What's Bayesian Data? [And yes, I am too lazy to Google it.]
Did you mean Monte Carlo?
Or maybe Latin Squares?
MTBF/Write Cycles (Score:5, Interesting)
Write Cycles: Even at the lowest estimate, 100,000 write cycles to failure
Meaning on a 32GB drive, before you start seeing failures, you would have to (thanks to wear leveling) write 32*100,000 GB, or 3.2 petabytes.
At the 60MB/sec write speed of the Samsung drives, you would need to write (and never, ever read) for 3,200,000,000/60, or ~53 million seconds straight.
53 million divided by 86,400 means you would need to be writing (and never, ever reading) for ~617 days straight (that's roughly 20 months of just writing, no reading, no downtime, etc.).
So... the sky is not falling, these drives are slated to last longer than I've ever gotten a traditional drive to last in my laptop(s)
Almost forgot to mention: standard NAND of late has been more in the 500K-1M write-cycles-to-failure range. 100K was earlier technology, so multiply the numbers accordingly.
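Redoing that back-of-the-envelope in a few lines (decimal GB/MB, same assumptions as above):

capacity_gb = 32
write_cycles = 100_000           # lowest endurance estimate per cell
write_speed_mb_s = 60            # sustained write rate assumed above

total_gb = capacity_gb * write_cycles           # 3,200,000 GB = 3.2 PB
seconds = total_gb * 1000 / write_speed_mb_s    # ~53.3 million seconds
days = seconds / 86_400                         # ~617 days of nonstop writing
print(f"{total_gb:,} GB total, {seconds / 1e6:.1f} million seconds, {days:.0f} days")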
Re: (Score:2)
That leaves you about 10 GB of space to use for writes for swap, temp files, etc.
Re:MTBF/Write Cycles (Score:4, Interesting)
Re: (Score:2)
Re: (Score:3, Insightful)
Re: (Score:2)
If the block on disk has ever been written, the flash device has to keep it. It has no idea that no file inodes point to it anymore. When a write is done, it picks a block from the pool, writes it there, and juggles its own mapping. But I am curious about a flash device that will, on its own, just juggle things around. That could avoid the data stagnation problem, where any data that doesn't get written is just keeping the zones of writing all that much smaller. But it can also increase the number
Re: (Score:2)
Re: (Score:2, Insightful)
NOT true, unless the drive is completely empty! If you have 31 gigs of data on that drive which you were using as long-term storage, then you'd only have to write (32-31)*100,000 GB of data before failure. You obviously wouldn't be overwriting any data already stored on the drive
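Under that pessimistic assumption (leveling never touches the 31 gigs of static data), the same arithmetic from above shrinks dramatically:

free_gb = 32 - 31
seconds = free_gb * 100_000 * 1000 / 60        # GB -> MB at the assumed 60MB/sec
print(f"{seconds / 86_400:.0f} days of nonstop writing")   # ~19 days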
Re: (Score:2)
Re: (Score:2)
On a magnetic hard disk, once you get a failure you can expect the thing to die completely soon, because failures tend to be mechanical. Once there's scraped magnetic material bouncing around on the inside, it's only going to get worse, possibly very fast.
On an SSD, what should happen is that sectors die in a predictable fashion, and they die due to writes, so you can still read and recover your data.
Re:MTBF/Write Cycles (Score:5, Informative)
No, but the wear-leveling routines in the drive will happily move around your existing data so that rarely written sectors are available for heavy writing operations.
Seriously, this "issue" comes up in every discussion about SSDs, and it seems like people are just unwilling or unable to accept that what was once a huge problem with the technology is now not even remotely an issue. Any SSD you buy today should outlive a spinning disk, regardless of the operating conditions or use pattern. It is no longer 1989, engineers have solved these problems.
Re: (Score:3, Interesting)
No, but the wear-leveling routines in the drive will happily move around your existing data so that rarely written sectors are available for heavy writing operations.
Seriously, this "issue" comes up in every discussion about SSDs, and it seems like people are just unwilling or unable to accept that what was once a huge problem with the technology is now not even remotely an issue. Any SSD you buy today should outlive a spinning disk, regardless of the operating conditions or use pattern. It is no longer 1989, engineers have solved these problems.
Actually, I think the issue is there are differences in the drives that don't come up in the articles themselves, so that detail gets left out every time.
So it's inevitable that someone who doesn't know this particular detail, but is already familiar with how platter-based magnetic media work, will come up with that issue in pretty much every discussion.
The problem is it's new. That's all. (Or, perhaps that techno-journalists write about stuff they don't know enough about.)
Re: (Score:3, Interesting)
Re: (Score:2)
I agree that on the whole, flash is a lot more durable now than it used to be, but I'm not quite convinced that these will be suitable as a general-purpose replacement for magnetic disks. Aside from the NAND longevity issue, I'd be concerned about the a
Re: (Score:2)
So performance isn't that far from a nearly empty drive.
Although I do agree, I'd be concerned about recovering from controller failure more than with a magnetic drive.
Re: (Score:2)
With DDR2 prices so cheap, I don't see why anyone (with a modern enough system to use DDR2) is swapping data to disk regularly. Certainly not anyone who can afford a SSD.
Re: (Score:2)
Hey, I never get this question answered: the bad block map has to be stored somewhere, so is it also limited to 100,000 writes? You can't remap the map, can you? If not, then, are you limited to 100,000 total errors?
To those worried about longevity... (Score:2)
What about real performance (Score:4, Interesting)
Of course SSD will beat an IDE disk hands-down, but that is not why you buy IDE drives.
I have always used SCSI for my OS/system and IDE for my storage; this combination (in addition to SMP rigs when available) has allowed me to outlive 3 generations of processors, thereby saving me money on upgrades.
SSD seems best marketed to 'gamers' so why is it always connected to a very limited IO bus?
Re: (Score:2)
Re:What about real performance (Score:5, Insightful)
Flash is great, if your disk is basically read-only.
Re: (Score:2)
Re:What about real performance (Score:4, Interesting)
On the other hand, random-write issues are "fixable". My company just published tests for various RAID-5 flash SSD setups. For a 4-drive test with 10 threads on Linux 2.6.22 using our MFT "driver", we get:
4K random reads 39,689 IOPS 155 MB/sec
4K random writes 29,618 IOPS 115 MB/sec
These are real numbers and the application does see the performance improvement.
For full details on drive performance see:
http://managedflash.com/news/papers/index.htm [managedflash.com]
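For anyone checking the quoted figures, they line up assuming 4KiB transfers and binary megabytes:

for label, iops in (("4K random reads", 39_689), ("4K random writes", 29_618)):
    print(f"{label}: {iops:,} IOPS -> {iops * 4096 / 2**20:.1f} MB/sec")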
Not the jump I was hoping for (Score:2, Interesting)
Looking at it, the biggest benefit I can see is that the solid state drives should be better at withstanding shock and vibration - which normal hard drives hate. If they cannot improve the performance (which will still be useful for gamers, serv
Re: (Score:2, Informative)
Re: (Score:2)
Re: (Score:2)
They should be able to parallel several flash chips to increase the speed. Or maybe the old drives already did this?
Re: (Score:2)
Re: (Score:2)
Well, that and size. Give me the power of a laptop in something the size of a cell phone, with a projected screen in midair, with a way of registering me typing commands (without a keyboard) and I will be happy with my computer - for a year or two.
Where can I buy one? (Score:2)
Re: (Score:3, Informative)
Re: (Score:3, Informative)
Re: (Score:2)
It's just a shame most manufacturers seem to be concentrating on mobile users, not those who need serious IO.
Waiting for low-end drives (Score:4, Insightful)
Re:Waiting for low-end drives (Score:4, Funny)
The REAL article link (Score:3, Informative)
Speed for a mech. HD is burst, not track-to-track? (Score:4, Interesting)
"Samsung rates the drive with a read speed of 100MB/sec and write speed of 80 MB/sec, compared to 59MB/sec and 60MB/sec (respectively) for a traditional 2.5" hard drive."
The speed quoted for a mechanical hard drive is a burst speed, accurate for reading only one track, and doesn't include the time it takes for a conventional rotating hard drive to change tracks. Isn't that correct?
Re: (Score:2)
Depends. My IDE drives seem to sustain 60-ish MB/second on a large contiguous file even across multiple tracks... but suck if the file is heavily fragmented.
With the Exception of What??? (Score:2)
Well excuse me, BUT, capacity is the single largest factor in my disc drive purchase decisions. I'll give away speed, power consumption, size, heat, noise, and even cost - everything but reliability - in favor of capacity. Even "slow" hard drives are quite fast historically speaking, and none of those other factors make up for running out of drive space.
And don't the SSDs cost a lot more too?
Re: (Score:2)
There are also valuable business applications for the same technology. If 64GB f
Is NV that important? (Score:2)
We're going to see how effective this is over the coming months: NAS and SAN products are clearly going to start sprouting SSDs either instead of the primary cache or as a mid tier between the RAM and the disk. I'm not expecting miracles: RA
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:3, Insightful)
WANT (Score:2)
Now has anyone found any place to GET ONE? I've been looking and I can't even find a model/part number. WTF? Why can't I be the first one on my block to have a 0 spindle laptop? It
We're only at 1TB? (Score:3, Interesting)
- 3600 or 4200 RPM rotational speed
- low noise
- low heat
- low power consumption
The reduced speed (wear and tear on parts) and heat should also lead to greater reliability. If a 3.5" drive can be 1 TB today, a 5.25" drive should be 1.5-2TB. A drive like this would be perfect for a home media server or HTPC, where performance is not critical (SD DVD is only 4 GB/hour; even BluRay is only 25 GB/hour--and I'm pretty happy with ripped DVDs at ~1500 kbps--less than 1 GB per hour) but low heat, low noise, and low power consumption are all desirable traits. (There's more rotating mass, but at lower speed there should be much less energy/momentum/inertia/whatever overall.) And as long as CDs and DVDs are still ~5"--and that seems to be the case (DVD, HD-DVD, BluRay, SACD)--we'll already be using properly-sized cases.
* granted, that old thing was slow as hell. Swapping out the stock 8 GB Quantum Bigfoot for a 30 GB Maxtor dropped boot times from 3 minutes to 40 seconds.
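For reference, the streaming rates in the comment above work out like this (decimal units; any modern drive handles them easily):

for name, gb_per_hour in (("SD DVD", 4), ("Blu-ray", 25)):
    print(f"{name}: {gb_per_hour} GB/hour = {gb_per_hour * 1000 / 3600:.2f} MB/sec")
print(f"~1500 kbps DVD rip: {1500 / 8 * 3600 / 1e6:.2f} GB/hour")   # under 1 GB/hour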
Re: (Score:2)
Re: (Score:2)
"Up to" (Score:2)