Cellphones Hardware

Flash Memory, Not Networks, Hampers Smartphones Most

Lucas123 writes "New research shows that far more than wireless networks or CPUs, the NAND flash memory in cell phones, and in particular smartphones, affects the device's performance when it comes to loading apps, surfing the web, and loading and reading documents. In tests with top-selling 16GB smartphones, NAND flash memory slowed mobile app performance by two to three times, with one exception: Kingston's embedded memory card, which slowed app performance 20X. At the root of the bottleneck is the fact that while network and CPU speeds have kept pace with mobile app development, flash throughput hasn't. The researchers from Georgia Tech and NEC Corp. are working on methods to improve flash performance (PDF), including using a PRAM buffer to stage writes or to serve as the final location for the SQLite databases."

  • by Anonymous Coward
    writing smaller applications? Maybe, you know, stick to one thing and master it instead of spewing forth so many OSes, languages and nonsense that there's no hope of reining in the software chaos? But don't worry, the hardware folks will pull a rabbit out of their hats (again), so that software geeks never, ever have to learn or change.
    • While I agree, I don't see it happening... It's almost like inventing another layer of abstraction in programming is a marketing ploy now. We had C for the longest time, then Java & .NET came out, and since then it has seemed like abstraction was just "the" answer. The sad part is that Java is kind of cool with its pure object orientation & cross-platform abilities, but the idea just blew up into a new paradigm, so to speak...

      That being said, I think Objective-C is kinda neat.

    • by icebike ( 68054 ) * on Saturday February 18, 2012 @09:00PM (#39089817)

      writing smaller applications? Maybe, you know, stick to one thing and master it instead of spewing forth so many .

      An interesting comment, but not the focus of the linked article.

      It's not about the size of applications. Loading a big application into memory is not the problem, as that happens with sequential reads, which are very fast on the media type they were looking at.

      It's the small RANDOM reads and writes that cause the problem, in particular the small SQLite updates to databases stored on the MicroSD card.
      The recommendation to place often-used SQLite databases on a RAMdisk yielded a tremendous performance increase because it eliminates tons of little random read and write operations that tend to be scattered all over the MicroSD card. This is buried on page 10 of the actual research document [usenix.org], but pretty much glossed over in the linked article. It would require a bit of OS re-engineering on the part of smartphone OS designers, such as offering APIs to handle much of the routine data storage that these apps all end up doing. That storage would live in RAM, backed up to the MicroSD in a more efficient manner.

      Programmers tend to heavily use the general-purpose “all-synchronous” SQLite interface for its ease of use but end up suffering from performance shortcomings. We posit that a data-oriented I/O interface would be one that enables the programmer to specify the I/O requirements in terms of its reliability, consistency, and the property of the data, i.e., temporary, permanent, or cache data, without worrying about how it is stored underneath.

      Yes, a RAMdisk can go away on an unplanned phone reboot, but if the RAMdisk were used as a cache and occasionally written back to flash, performance would be much better, because you find these little SQLite databases used all over Android and iOS.
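
      A minimal sketch of that idea, assuming the xerial sqlite-jdbc driver is available (on Android you would use android.database.sqlite instead); class and table names here are made up. The hot table lives in an in-memory SQLite database and gets flushed to the on-flash copy in one transaction every so often:

      import java.sql.*;

      public class RamBackedStore {
          private final Connection ram;    // in-memory working copy (no flash I/O)
          private final Connection flash;  // durable copy on the eMMC / SD card

          public RamBackedStore(String flashPath) throws SQLException {
              ram = DriverManager.getConnection("jdbc:sqlite::memory:");
              flash = DriverManager.getConnection("jdbc:sqlite:" + flashPath);
              for (Connection c : new Connection[] { ram, flash }) {
                  c.createStatement().execute(
                      "CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)");
              }
          }

          public void put(String k, String v) throws SQLException {
              PreparedStatement ps = ram.prepareStatement(
                  "INSERT OR REPLACE INTO kv VALUES (?, ?)");
              ps.setString(1, k);
              ps.setString(2, v);
              ps.executeUpdate();          // cheap: touches RAM only
          }

          // Called occasionally (timer, app pause, low-memory signal, ...).
          public void flush() throws SQLException {
              flash.setAutoCommit(false);  // one transaction -> one burst of writes
              PreparedStatement out = flash.prepareStatement(
                  "INSERT OR REPLACE INTO kv VALUES (?, ?)");
              ResultSet rs = ram.createStatement().executeQuery("SELECT k, v FROM kv");
              while (rs.next()) {
                  out.setString(1, rs.getString(1));
                  out.setString(2, rs.getString(2));
                  out.executeUpdate();
              }
              flash.commit();
              flash.setAutoCommit(true);
          }
      }

      Anything written since the last flush() is lost on an unplanned reboot, which is exactly the trade-off being discussed here.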

      • by arth1 ( 260657 ) on Saturday February 18, 2012 @10:37PM (#39090335) Homepage Journal

        It's the small RANDOM reads and writes that cause the problem, in particular the small SQLite updates to databases stored on the MicroSD card.
        The recommendation to place often-used SQLite databases on a RAMdisk yielded a tremendous performance increase because it eliminates tons of little random read and write operations that tend to be scattered all over the MicroSD card.

        How about the recommendation not to use SQLite whenever a flat file works better? Do one write instead of database operations that cause multiple blocks to change even if you only change one thing.
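
        For a small bit of state, that "one write" approach can be as simple as the sketch below (paths and names are illustrative): serialize everything and atomically replace the old file, so a save is one mostly sequential write rather than many scattered page updates.

        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.*;

        public class FlatFileStore {
            public static void save(Path target, String contents) throws IOException {
                Path tmp = target.resolveSibling(target.getFileName() + ".tmp");
                Files.write(tmp, contents.getBytes(StandardCharsets.UTF_8));
                // On POSIX file systems this rename atomically replaces the old file,
                // so a crash leaves either the old or the new version, never a mix.
                Files.move(tmp, target, StandardCopyOption.ATOMIC_MOVE);
            }
        }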

        And if you have to use SQL for whatever reason, don't use indices unless absolutely necessary. It seldom is, despite what school has taught you. The index has to get updated too, which causes additional non-sequential writes. The minor speed boost you may get from selects is easily eaten up by the major speed bumps you cause on inserts and updates.

        An embedded system, which this should be treated as, isn't a miniature version of your desktop computer. And an SD card isn't a miniature version of a hard drive, or even an SSD. You have to think about minimizing writes and especially random writes. Not just minimize the number of high-level abstracted changes, but the actual low-level writes caused by them. Because there isn't a 1:1 correspondence.

        Also, memory is at a premium. Dropping pages really hurts speed. Don't rely on garbage collection as a substitute for using less memory or freeing up memory manually.
        Don't hang on to resources in case you may need them. Be cooperative, and close the sockets and file handles when you don't use them, because other parts of the system may benefit from that memory. CPU is rarely the issue, so don't worry about CPU outside identified bottlenecks. Worry about being a resource hog, because that causes the entire system to slow down, and drag your app with it.
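
        A small illustration of that "release it when you're done" advice in plain Java (names are illustrative): try-with-resources scopes the file handle to the work, instead of letting it live as long as the app does.

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        public class ReadOnce {
            static String firstLine(String path) throws IOException {
                try (BufferedReader in = Files.newBufferedReader(
                        Paths.get(path), StandardCharsets.UTF_8)) {
                    return in.readLine();  // the handle is closed as soon as this block exits
                }
            }
        }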

        • by Tablizer ( 95088 )

          In the future, the distinction between desktop and portable will likely get blurrier, and development techniques and related economics will become similar.

          • by arth1 ( 260657 )

            In the future, the distinction between desktop and portable will likely get blurrier, and development techniques and related economics will become similar.

            In the future, we will have flying cars too.

            Don't be a cargo cult follower and believe that everything will be solved by Moore's law, convergence, or whatever silver bullet seems most promising. Program for reality.

            • by jedidiah ( 1196 )

              Better to be a "cargo cult follower" than to ignore the evidence that is already quite obvious if you just bother to pay any attention to the evolution of these devices.

              Given what's already in these devices, I find the suggestion that you should avoid something like SQLite of all things due to its "high overhead" simply hilarious.

              Trying to pretend you're running an Atari 400 and can code in nothing but assembler has its own problems.

            • by Tablizer ( 95088 )

              It might not even be about convergence, but rather portable devices of tomorrow being as powerful as desktops of today. High-level tools like RDBMS become more economical in terms of development versus hardware concerns as power increases.

              It's similar to how C is used instead of assembler for most embedded apps, such as mid-sized toys.

              • by arth1 ( 260657 )

                It might not even be about convergence, but rather portable devices of tomorrow being as powerful as desktops of today.

                Oh, in some if not all respects, they will be. But you shouldn't count on it and disregard today's devices and throw frugal programming aside based on what may be out in the future.
                If you conserve resources so it runs well on current systems, it will run great on future systems, and give you an edge and enough leeway to expand the app without causing problems. It's a win for everyone.
                Disregarding resource use hoping that the future will take care of it isn't a win for anyone.

                • by Tablizer ( 95088 )

                  In my observation, being first to market and low purchase price seems to trump resource conservation as far as actual sales priority. You can call consumers "stupid" all you want, but that won't change their sales choices.

        • by msobkow ( 48369 )

          And if you have to use SQL for whatever reason, don't use indices unless absolutely necessary. It seldom is, despite what school has taught you. The index has to get updated too, which causes additional non-sequential writes. The minor speed boost you may get from selects is easily eaten up by the major speed bumps you cause on inserts and updates.

          100% true!

          The issue is so common that a virtual index that is not maintained by the database is often referred to as a "business index". With small datasets,

        • How about a recommendation to do DB writes in a separate thread? I know I've seen that recommendation but don't recall if it was in the info published by Google or in third party tutorials. It has always been the case that if you are writing interactive programs, you need to think about spinning anything that can block response to the UI into a separate thread.

          I'm sure that you can program something faster than a database access using flat files. That too has always been the case. However you trade off prog
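
          A minimal sketch of that suggestion, as an assumption-laden illustration rather than any official Android recipe: hand blocking writes to one background worker so the UI thread never waits on the flash. A single worker also keeps the writes serialized, which matters on media that can't service concurrent requests anyway.

          import java.util.concurrent.ExecutorService;
          import java.util.concurrent.Executors;

          public class WriteOffloader {
              private final ExecutorService ioWorker = Executors.newSingleThreadExecutor();

              public void saveInBackground(Runnable blockingWrite) {
                  ioWorker.submit(blockingWrite);  // returns immediately to the UI thread
              }

              public void shutdown() {
                  ioWorker.shutdown();             // lets any queued writes finish first
              }
          }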

          • by arth1 ( 260657 )

            How about a recommendation to do DB writes in a separate thread?

            In one word: no.
            The SD card is for practical purposes a single-threaded bottleneck, which doesn't do concurrent writes and reads like your average SSD.
            Multiple threads for the sake of multiple threads are going to hurt performance through the overhead and extra memory usage. Use multiple threads when you need them, but not to camouflage the underlying problem, which you just make worse by doing so.

            • In one word: no.
              The SD card is for practical purposes a single-threaded bottleneck, which doesn't do concurrent writes and reads like your average SSD.

              In other words, while a DB thread blocks on a write, the UI thread will not service the interface? That's a disappointment. I wonder if the same is true of the built-in flash (which is used by default for application databases).

              • by arth1 ( 260657 )

                In other words, while a DB thread blocks on a write, the UI thread will not service the interface?

                It may, if there are no reads or writes for the GUI. Not even reads. Else, you'll likely make a bad problem worse. Parallelism is only good as long as you use resources that are available for it, and keep in mind the overhead of setting it up.

                The blocking on the card can be mitigated by having plenty of RAM - in which case OS caching might cause your parallel read to succeed even though the SD card is busy. But you shouldn't count on that.

        • by 21mhz ( 443080 )

          And if you have to use SQL for whatever reason, don't use indices unless absolutely necessary. It seldom is, despite what school has taught you. The index has to get updated too, which causes additional non-sequential writes. The minor speed boost you may get from selects is easily eaten up by the major speed bumps you cause on inserts and updates.

          In many applications, reads (SQL SELECT) happen much more often than writes. In this case, indexing still brings an overall benefit. In fact, the hardware architecture of flash memory is tilted toward fast reads versus very slow writes, so if you have to do any writes at all, you already pay the price. The non-sequentiality issue is also mooted by controller circuitry in bulk flash devices like SD cards, which reallocates logical buffers for wear leveling, hiding bad blocks, and write optimization.

          The real issue

          • by arth1 ( 260657 )

            The non-sequentiality issue is also mooted by controller circuitry in bulk flash devices like SD cards, which reallocates logical buffers for wear leveling, hiding bad blocks, and write optimization.

            This is a common myth, and why I wrote that SDs are not like SSDs. For a CF card, it's somewhat true, but an SD card doesn't have more than a very rudimentary controller for basic wear levelling and bad block mapping, and no RAM buffers in which to prepare write optimizations, nor multiple channels in which to service other operations while one expensive operation occurs.
            Don't treat the SD card in a smartphone as if it were an SSD. Both are MLC (or in rare cases SLC), but the similarity stops there. An SD c

      • by sco08y ( 615665 )

        Yes, a RAMdisk can go away on an unplanned phone reboot

        So it's a complete no-go. The point of using a DBMS is that you have some kind of guarantees as to your state.

        By all means, you can push intermediate work to a RAM disk or use it as a cache, but any time you're going to tell the user their changes have been committed, there has to be a call to fsync.
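
        In plain Java, the "fsync before you claim it's committed" rule looks roughly like the sketch below (file name and data are placeholders); FileChannel.force(true) is the fsync equivalent and also flushes file metadata.

        import java.io.IOException;
        import java.nio.ByteBuffer;
        import java.nio.channels.FileChannel;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Paths;
        import java.nio.file.StandardOpenOption;

        public class DurableWrite {
            static void commit(String path, String data) throws IOException {
                try (FileChannel ch = FileChannel.open(Paths.get(path),
                        StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                        StandardOpenOption.TRUNCATE_EXISTING)) {
                    ch.write(ByteBuffer.wrap(data.getBytes(StandardCharsets.UTF_8)));
                    ch.force(true);  // only after this returns is it safe to tell the user
                }
            }
        }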

    • by __aaltlg1547 ( 2541114 ) on Saturday February 18, 2012 @09:01PM (#39089829)

      A few points:

      1. Smart phone makers have no control over what crap software you're going to stick on their phone -- only over the crap software that comes preinstalled.

      2. Being able to install your own software is precisely why consumers buy smart phones in the first place.

      3. The smart phone makers have a big incentive to find a way to make the hardware faster because running your apps faster is a selling point.

    • by Urkki ( 668283 )

      writing smaller applications? Maybe, you know, stick to one thing and master it instead of spewing forth so many OSes, languages and nonsense that there's no hope of reining in the software chaos? But don't worry, the hardware folks will pull a rabbit out of their hats (again), so that software geeks never, ever have to learn or change.

      Or, you know, maybe mobile phone makers should stop making phones with ever-increasing resolutions, CPU power and the ability to do multitouch pan&zoom. It was a neat trick from Apple: first make people expect smooth pan&zoom at a paltry 320x480 resolution. Then the resolution creeps up to sizes like today's 800x1280 in the latest tablet-phone hybrids, where people expect apps to take advantage of the resolution but still have equally smooth scrolling. That's like a 6x bandwidth increase through the g

      • by beelsebob ( 529313 ) on Sunday February 19, 2012 @04:18AM (#39091241)

        Except that Apple is still doing this just fine at 1024x768, and is likely in a few months to be doing it just fine at 2048x1536... The reason the iPhone can do this and Android can't has nothing to do with the resolution – it's that iOS has always used the graphics hardware to render the UI.

        • by Urkki ( 668283 )

          That's not really relevant to flash memory performance; hardware acceleration is just what puts the bottleneck squarely at flash throughput. If you have N x the pixels on screen, you'll need N x the flash throughput, or the app will either load slower or not take advantage of the resolution and have to resort to real-time scaling (which is OK, but a far cry from proper "photoshop" scaling quality). Another thing is the increase in camera megapixels, and also the increase in acceptable thumbnail size

          • You're assuming that the majority of what is on screen is from raster sources and not vectors. Both Android and iOS make heavy use of vectors throughout the UI and, while these do place a heavy load on the CPU and GPU for scaling, they place very little load on the flash. It's also worth noting that the flash write times are the main problem - read is usually fast enough.
          • The point being that whether the UI stutters or not is nothing to do with flash load times – it's purely and simply to do with whether the UI code is written in a sane way (i.e. with multiple threads for image loading, and with hardware acceleration used to get it onto the screen).
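
            An Android-flavoured sketch of that "sane way" (class and method names are illustrative; a real project would add caching and error handling): decode the bitmap on a worker thread, then hand the result back to the main looper, so the flash read never stalls a frame.

            import android.graphics.Bitmap;
            import android.graphics.BitmapFactory;
            import android.os.Handler;
            import android.os.Looper;
            import android.widget.ImageView;
            import java.util.concurrent.ExecutorService;
            import java.util.concurrent.Executors;

            public class AsyncImageLoader {
                private final ExecutorService decoder = Executors.newSingleThreadExecutor();
                private final Handler mainThread = new Handler(Looper.getMainLooper());

                public void load(final String path, final ImageView target) {
                    decoder.execute(new Runnable() {
                        @Override public void run() {
                            final Bitmap bmp = BitmapFactory.decodeFile(path); // slow: flash + decode
                            mainThread.post(new Runnable() {
                                @Override public void run() {
                                    target.setImageBitmap(bmp);                // fast: UI thread only draws
                                }
                            });
                        }
                    });
                }
            }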

            • by Urkki ( 668283 )

              UI stutter isn't a flash problem, but a long app start time, or an app UI having nothing to display because its data hasn't loaded yet, certainly can be a flash issue. Relatively slow flash might actually reduce cases of UI stutter, since the CPU isn't so busy processing data... ;)

  • Users don't have the option of trading network performance for faster local storage. The two are so unrelated that it's not clear why we're comparing them (I'd use the term "apples and oranges" but I'm sure that would piss off some anti-Apple trolls.)

    And sure, SSDs could be faster, believe me nobody would complain if they were. But after using spinning magnetic storage for decades, SSDs seem blisteringly fast to me.

    • by Microlith ( 54737 ) on Saturday February 18, 2012 @07:58PM (#39089413)

      SSDs and the eMMC NAND storage inside smartphones are wildly different, even if they are fundamentally the same concept.

      SSDs get all of their speed from massive parallelization, writing to and reading from multiple dies at once. These dies tend to consume a bit more power, allowing for faster overall access. In smartphones, you usually have eMMC as the primary NAND device. These are basically high-capacity, soldered-down SD cards packed with several high-density MLC NAND dies. You have drastically fewer I/O channels, no DMA capability (it's all PIO via the CPU), and slower NAND due to reduced power requirements.

      This is why watching your memory consumption is essential on mobile devices, and why optimization for each device has to be done. The moment the CPU has to access the NAND, you're going to take a performance hit no matter how many cores you have.

      • Re: (Score:3, Informative)

        by dmitrygr ( 736758 )
        Not even close to correct. All current, last-generation, and generation-before-that SoCs for phones/tablets/PDAs support DMA to the SD/MMC module. Tegra 1/2/3? Yes. OMAP 1/2/3/4/5? Yes. QC? Yes. Marvell 3x, PXA2xx? Yes.
        • by Anonymous Coward

          So what you're saying is that he's correct on all but one point? That sounds pretty "close to correct" to me....

        • Not even close to correct.

          snarkiness like that makes slashdot a sore to read, and the author probably a sore in real life as well. Can't we all be more polite?

      • by mrmeval ( 662166 )

        This is self-refreshing DRAM, so as long as it has power it's good.
        http://www.cellularram.com/faq/index.html [cellularram.com]

        It would not be that expensive to have a 128MB part for long-term storage. Micron has one that's very low power, though there may be others out there.

        I'm not a fan of flash.

    • by Nemyst ( 1383049 ) on Saturday February 18, 2012 @08:35PM (#39089649) Homepage

      The correct term is "Apples and Androids", obviously.

    • Yep. Pointing out where performance issues come from is soooooooooo useless.

    • by Anonymous Coward

      Well, that'll change once the new high definition DRM [nextgenera...memory.com] is entrenched into every single wafer of flash. Only then will it be safe to let citizens have local storage. Maybe.

    • Re: (Score:2, Informative)

      by Anonymous Coward

      They are being compared in this manner because in the US, a network provider made an advert where they "race" two smart phones to prove that their network is faster. That is, they have two phones and click "install" on the market at the same time on both of them, and then we see that one of them installs in 2 seconds flat while the other one takes like 30 seconds to install the app. The advert is obviously bullshit because even on a wifi connection and broadband, apps never install that fast. This artic

  • by Anonymous Coward

    Why not have the phone offload all the info to a RAM drive and occasionally back up the info to the SSD?
    Phones could be blisteringly fast if they used RAM this way.

    • by Kenja ( 541830 )
      For much the same reason they don't have RAID-0 in the phone. It's too expensive, and it's a PHONE!
    • So you want a phone to have a 4-32GB RAM disk + say 256-512MB of system RAM

      • No. You want a separate RAM cache, with its own battery, that can persist between crashes and phone power cycling. The idea would be that changes would be stored there and later flushed to the flash if they had not been superseded after a while. You want something that has the same persistence guarantees as a flushed write to flash, but at a lower cost: a small amount of RAM (32MB would probably be enough) with a separate - small - battery or capacitor. When the phone loses power, it would write
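
        The battery-backed part is hardware, but the software side of that write-behind idea might look like this hypothetical sketch: coalesce updates in RAM and flush only the survivors on a timer, so a value rewritten ten times costs one flash write instead of ten.

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        public class WriteBehindCache {
            /** Stand-in for whatever actually persists a record to flash. */
            public interface FlashWriter { void write(String key, String value); }

            private final ConcurrentMap<String, String> dirty = new ConcurrentHashMap<String, String>();
            private final ScheduledExecutorService flusher =
                    Executors.newSingleThreadScheduledExecutor();

            public WriteBehindCache(final FlashWriter flash, long intervalSeconds) {
                flusher.scheduleWithFixedDelay(new Runnable() {
                    @Override public void run() {
                        for (Map.Entry<String, String> e : dirty.entrySet()) {
                            flash.write(e.getKey(), e.getValue());   // the only flash I/O
                            dirty.remove(e.getKey(), e.getValue());  // keep any newer update
                        }
                    }
                }, intervalSeconds, intervalSeconds, TimeUnit.SECONDS);
            }

            public void put(String key, String value) {
                dirty.put(key, value);  // superseded values never reach the flash
            }
        }
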
    • by Microlith ( 54737 ) on Saturday February 18, 2012 @08:01PM (#39089439)

      RAM consumes a lot of power, far more than a NAND disk that can be powered down in between accesses.

  • It would be nice if it were faster. I backed up and downloaded the pictures off my iPhone to my computer today. How long did it take? 3 hours.

    • by robogun ( 466062 )

      The flash memory isn't what's holding it back. Sounds like whatever software you're using to bridge your locked-down device to the computer is slowing down the process.

      Many devices hook up directly via USB 2.0 and/or have removable flash cards which download at 2GB/min; these may save you a lot of time if this is really a problem.

      • Re: (Score:2, Troll)

        by Lumpy ( 12016 )

        That "software bridge" is called the craptastic USB stack that Microsoft uses on Windows 7.

        Even if you open the iPhone from the file manager and grab them by hand, it takes forever. USB 2.0 is utter crap for any large amount of data transfer. Small single files? Good; sustained data transfer is garbage. Apple was stupid for removing the FireWire interface from the iPod and iPhone.

        12 gigs of photos over USB 2.0 is slow as hell simply because the USB2.0 speeds are slow as hell.

        • by arth1 ( 260657 )

          Reading 12 gigs of photos over high-speed USB 2.0 isn't that bad when you do it from a reasonably fast card and dedicated card reader. You can typically get 360 Mbps (of the 480 Mbps advertised), which equals 45 MB/s, or about 5 minutes, if your HD can keep up with writing that fast.

          If you use your phone as a card reader, don't expect anywhere near that speed. The phones aren't optimized for that, unlike a card reader.
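
          For what it's worth, the arithmetic behind that estimate, taking the 360 Mbps figure at face value (the replies below dispute it):

          public class UsbMath {
              public static void main(String[] args) {
                  double mbPerSecond = 360.0 / 8.0;              // 360 Mbps ~= 45 MB/s
                  double seconds = 12.0 * 1024.0 / mbPerSecond;  // 12 GB of photos
                  System.out.printf("%.0f MB/s -> %.1f minutes%n",
                          mbPerSecond, seconds / 60.0);          // about 4.6 minutes
              }
          }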

          • I work for a company that sells USB flash memory devices. We've sold some rather high-end ones ... I've NEVER seen 45MB/s over USB in my life. I'm fairly certain that, due to overhead in the USB protocol, it's technically impossible to do; and even if it is technically possible, the reality is that no actual USB product of any kind gets anywhere near the full 480Mb/s that USB claims when you look at the spec sheets.

            My single WD 5400 RPM green laptop drive can write a sustained 80MB/s; not sure where you've found

          • I can't get more than 30MB/s out of USB 2.0, ever. Hell, my NAS over gigabit is faster than USB 2.0
        • My 7 year old 2.5" USB hard drive gets sustained 30MB/sec transfer rates over USB2. The hard drive gets similar speed plugged into an IDE bus. That's on Windows 7. 12GB would take 7 minutes. The bottleneck in this picture is the hard drive platter speed and density. Not USB2. Not Windows 7.
  • Cut back a little (Score:5, Insightful)

    by Waccoon ( 1186667 ) on Saturday February 18, 2012 @07:57PM (#39089411)

    While I sympathize with developers who have ambitious ideas, the bottom line is that you have to develop within the limitations of the hardware. If your software is too slow or otherwise suffers in performance, then your software is simply too slow.

    Cue stories about how RAM chips were too slow to keep up with cutting-edge video controllers in the 90's.

    • by tuxicle ( 996538 )
      What about stories of how RAM chips today are too slow to keep up with cutting edge CPUs?
      • RAM speeds have not been a significant issue for some time now. Go and read some of the performance tests done recently on RAM of various speeds; even jumping from 1333 to something like 2133 DDR3 RAM has such an insignificant performance impact on the rig that it is clear memory is not the bottleneck nowadays. The bus, HDD, etc. are the bottlenecks that hurt more than anything else.
        • DRAM latency hasn't changed much in 10 years. Your 2133 RAM will have the same latency as 1066 RAM.
          Now think about why CPUs with large SRAM caches perform much better than those without. It's not like the extra few MB makes the difference. It's the 1-3 cycle latency
        • Re:Cut back a little (Score:5, Interesting)

          by tlhIngan ( 30335 ) <slashdot@worf.ERDOSnet minus math_god> on Sunday February 19, 2012 @01:45AM (#39091089)

          RAM speeds have not been a significant issue for some time now. Go and read some of the performance tests done recently on RAM of various speeds; even jumping from 1333 to something like 2133 DDR3 RAM has such an insignificant performance impact on the rig that it is clear memory is not the bottleneck nowadays. The bus, HDD, etc. are the bottlenecks that hurt more than anything else.

          False.

          RAM speed does have a significant impact. It's just that cache hides a LOT of it. If you ever disabled the cache on your CPU, your computer would feel like it was 15 years old - it's that slow.

          Now, one thing about RAM - RAM hasn't actually gotten significantly faster. The clock speeds may have gone up, but the latencies have gone up as well. The faster RAM may have true access times (measured in nanoseconds) close to those of the slower RAM, especially the cheaper parts.

          But a modern CPU is very cache-dependent. Hell, even the little ARM in your smartphone suffers tremendously when the cache is off. (I should know - I've had to do actual timing tests of the ARM caches.)

    • by izomiac ( 815208 )
      IMHO, it's related to how powerful x86 hardware has become. There's probably a generation of programmers that haven't really grasped the concept of various operations having a cost.

      On a desktop you can write back and forth to disk rather quickly, and only power users will notice the delay, so many programs do so annoyingly frequently (e.g. it takes one million writes to boot Windows). Now that mobile devices are gaining popularity, the philosophy of "why optimize, it already runs fast enough on my ha
      • There's probably a generation of programmers that haven't really grasped the concept of various operations having a cost.

        The problem here is that they are taught high-level languages. In C, it's pretty easy to look at some code and figure out - roughly - how much it will cost. In Java, it's a lot harder. In JavaScript, not only is it even harder, the cost can vary dramatically between implementations. If you know an assembly language or C, then you can easily think 'how is this actually implemented?' when you write something in a high-level language. If you don't, then it's very hard to perform this translation. And gu

  • Comment removed based on user account deletion
  • Choose two... (Score:4, Insightful)

    by Anonymous Coward on Saturday February 18, 2012 @08:11PM (#39089511)

    Reliable, fast, energy efficient. Choose two.

    • Sometimes we don't even get to be able to choose one. These days, you should also add "open" to the list in the case of computer hardware.

    • Apple chooses all 3, at the expense of some other ones you didn't list, like "openness" or "freeness" or something

  • by Anonymous Coward on Saturday February 18, 2012 @08:17PM (#39089547)

    Flash memory may be responsible for slowing things down, but you can't slow it down if you don't have it.

    I would kill for a happy medium between $35 for unlimited data that barely works (Virgin) and $80 and a first born child for 2gb/month for decent coverage (Verizon, AT&T, T-Mobile...). A little legislation wouldn't be a bad thing, though I understand I might think differently if I had campaign donations to worry about.

    • by anagama ( 611277 )

      Well, T-Mobile's most expensive unlimited-everything plan (apparently not throttled) is $70. There is also an unlimited-but-throttled plan (5GB at 4G, the rest at 3G) with unlimited text and 100 minutes of talk: that's only $30.

      I live in a medium-sized town (about 80,000 people; the next largest town is 25 miles away and smaller), and I have decent 4G coverage. No problem streaming videos on Netflix.

      Plan chart:
      http://prepaid-phones.t-mobile.com/monthly-4g-plans [t-mobile.com]

    • At some point in the future your legislation would be a good thing, but for now the only reason AT&T has dumped ~$10B into upgrading their network in the last 6 months to a year is that there are people willing to pay the $80/month.
      Once we hit LTE-Advanced (1Gbit/s from tower to stationary devices... shared between all devices, but 1Gbit/s split between 250 devices is still 4Mbit/s, which is plenty fast even for HD Netflix) we'll see the return of unlimited data; until then these $80-100 plans are financing th

  • by aaronb1138 ( 2035478 ) on Saturday February 18, 2012 @08:50PM (#39089733)

    It is nice to see that there is an actual effort to make an empirical test, but I think most techies had this figured out long ago. The simple test is boot time on the devices. A relatively small OS which typically takes 2+ minutes to cold boot, yeah sounds like a storage issue.

    Fixing the issue with some form of data striping would be attractive, but chews battery for each additional chip. Some kind of balloon-able RAM buffer configuration would work nicely, where the buffer RAM was turned off when the device was not in active use or where individual modules could be brought online as needed.

    Frankly, Microsoft pointed at flash as a speed culprit early on with its WP7 requirements for add-on, non-removable microSD storage expansion cards. Sure, there were a lot of people all gruff and bemoaning the double price premium for Microsoft-certified microSD cards, but it was mostly just a lack of understanding of what the device needs for performance. If I recall correctly, Microsoft had somewhere around 10-12 MiB/s read and write plus IOPS requirements, which put most cheap commodity flash modules out of the running. I would also guess that WP7 stripes data between the on-board storage and the add-on SD card, or otherwise uses some kind of secondary caching algorithm, since the microSD cards get married to the device.

    In the Android world, plenty of RAM cache hacks have been implemented, most notably some in Cyanogen and similar. Consider the technical implications of this post at XDA forums regarding I/O schedulers: http://forum.xda-developers.com/showpost.php?p=22134559&postcount=4 [xda-developers.com]

    As an anecdote, the most frequent crashy app on my Android device is the Gallery. It tends to have all kinds of issues with the scheduler as it is reading images and creating thumbnails, likely due to flash access speeds.

    • Re: (Score:2, Informative)

      by Anonymous Coward

      Your app does not crash because of flash. Your app crashes due to shit coding.

    • What devices are you cold booting that take 2+ minutes?
    • While I agree with you, I would like to point something out.

      The simple test is boot time on the devices. A relatively small OS which typically takes 2+ minutes to cold boot, yeah sounds like a storage issue.

      If it takes 2+ minutes to boot, it's not a small OS, even on flash. I think we underestimate just how much crap we're asking these devices to do.

      • by Threni ( 635302 )

        I sometimes wonder what's happening when a machine takes 2 mins to boot, copy a file etc. 2 mins nowadays is billions of operations, and/or gigs (or hundreds of megs) of data transfer. It's impossible to justify either. There must be some screwed up crap going on somewhere where loading the OS is seen as some sort of edge case unworthy of optimising.

    • by mlts ( 1038732 ) *

      For Android devices, it isn't speed that is the issue, but capacity. Right now, the largest MicroSD card that I can find for an Android phone is 32 gigs. A 64GB card was announced last May, but is still vapor.

      I'd like to see an Android phone that can step up to the plate and keep competitive with the iPhone 4S on this front. At the minimum, add another SD card, so apps that can point to a storage place other than /sdcard or /mnt/sdcard can use it.

  • by Osgeld ( 1900440 ) on Saturday February 18, 2012 @10:39PM (#39090347)

    Isn't this true of any computer system since the dawn of computers?

    From plugging in jumper wires to transfer a program from paper to memory, to tape streamers, to magnetic disks, to flash, the slowest thing in a computer is its mass storage. It always has been, and will continue to be for a very long time.

  • Things would have been a lot less tense if he had clarified.

  • Can anyone explain why it is that my desktop PC with 6 GB of RAM and a 1 TB hard drive can actually CLOSE a program when I exit it, but my smartphone with a 1GHz processor and 1 GB of RAM for memory and storage total keeps every damn program running forever?

    Whose idiot idea was this anyway?

    • by Osgeld ( 1900440 )

      Jobs

      In the Mac world, when you close an app it doesn't exit. It's actually really handy when you're running on a slow computer, but it was at best cute in 1999 on the OS X betas and later, back when it took a semi-significant amount of time to launch a program and its idle resource usage was low.

      Now your smartphone rivals an early-2000s desktop, so it's really useless.

  • Using SQLite for every operation is certainly convenient, but is it efficient? I have seen weird performance issues when, in principle, small data structures maintained using SQLite exceeded certain bounds.

    It is the task of the programmer to distinguish between data which should be kept in RAM and data which can be written to flash.

    • by 21mhz ( 443080 )

      I've seen how this happens:
      1. Gee, I can just fire INSERTs in SQLite like I did in SQL Server, isn't it convenient?
      2. (a few months into development) Crap, these queries thrash the flash medium and cause a lot of waiting on I/O; we need to batch them and use transactions more.
      3. (with deadlines looming) Attempts to tweak the database access flags or even relax the durability requirements, to get out of the corner we have painted ourselves into.

      It would be instrumental for better performing code if database acce
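
      Step 2 in that story usually amounts to something like the sketch below, assuming the xerial sqlite-jdbc driver (the Android SQLite API has the same pattern with beginTransaction/setTransactionSuccessful); the table name is made up. One transaction means SQLite journals and syncs once per batch instead of once per row.

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.PreparedStatement;
      import java.sql.SQLException;
      import java.util.List;

      public class BatchedInsert {
          public static void insertAll(String dbPath, List<String[]> rows) throws SQLException {
              Connection db = DriverManager.getConnection("jdbc:sqlite:" + dbPath);
              try {
                  db.setAutoCommit(false);                 // start one transaction
                  PreparedStatement ps = db.prepareStatement(
                          "INSERT INTO log (tag, message) VALUES (?, ?)");
                  for (String[] row : rows) {
                      ps.setString(1, row[0]);
                      ps.setString(2, row[1]);
                      ps.addBatch();
                  }
                  ps.executeBatch();
                  db.commit();                             // one journal write + one sync
              } finally {
                  db.close();
              }
          }
      }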

      • by drolli ( 522659 )

        Yes, that's how I see it. Accessing a database in a synchronous way always adds something very opaque to your program. One moment it runs fast; the next, when the moon hits the right position, some resource you don't even think you're using is busy......

  • I remember that with the old versions of Windows CE it was basically like Knoppix, with the file system fused to RAM. If the device's battery died or the OS crashed, all of your data went with it. They had a separate backup battery, but obviously this didn't stop your data from disappearing on a regular basis. One thing I loved about the platform: between XIP and your data in RAM, all file operations were at least instantaneous - no delay, and no worries about burnt-out flash.

    Given the capabilities of modern hardware it

    • That brings me back. Yes Windows CE PDAs were way more responsive than current era smart phone OSes. In fact, after my first smart phone purchase of a used G1, followed by a Galaxy S (Vibrant) I could never understand why they were less responsive than my father's ~2002 Toshiba e740. I just kept thinking to myself, the resolution is doubled, but the only additional processes are for Bluetooth and cellular radios which have their own processor. But yeah, when CE took a dump, you lost it all. Also, there

  • Well, I just tested my class 10 microSD and it has 45 IOPS!! Lucky me, I bought a class 4 microSD which has 1800 IOPS and boy, can I feel the difference. PS: My disk has 200 and my SSD has 14000 IOPS.
