
Replacing Traditional Storage, Databases With In-Memory Analytics 124

storagedude writes "Traditional databases and storage networks, even those sporting high-speed solid state drives, don't offer enough performance for the real-time analytics craze sweeping corporations, giving rise to in-memory analytics, or data mining performed in memory without the limitations of the traditional data path. The end result could be that storage and databases get pushed to the periphery of data centers and in-memory analytics becomes the new critical IT infrastructure. From the article: 'With big vendors like Microsoft and SAP buying into in-memory analytics to solve Big Data challenges, the big question for IT is what this trend will mean for the traditional data center infrastructure. Will storage, even flash drives, be needed in the future, given the requirement for real-time data analysis and current trends in design for real-time data analytics? Or will storage move from the heart of data centers and become merely a means of backup and recovery for critical real-time apps?'"
  • Goodbye Orwell (Score:2, Interesting)

    by schmidt349 ( 690948 )

    The marginalization of long-term data storage can only be a good thing -- the big advertising and other firms get the analytical data that actually matters to their bottom line, and to the extent that the average joe's privacy is being invaded, at the very least the fruits of that invasion will become increasingly accessible.

    • Re:Goodbye Orwell (Score:5, Informative)

      by quanticle ( 843097 ) on Saturday January 01, 2011 @03:18PM (#34731578) Homepage

      You're misinterpreting the post. No one said anything about long term data storage being marginalized or eliminated. Instead, the author is talking about the difference between persistent and non-persistent storage. He's saying that existing database technologies that rely on persistent storage are being marginalized as the speed difference between spinning disks and RAM widens, and the low cost of RAM makes it practical to hold large data sets entirely in memory. According to the author, data processing and analysis will increasingly move towards in-memory systems, while traditional databases will be relegated to a "backup and restore" role for these in-memory systems.
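
      The "RAM first, database relegated to backup and restore" pattern described above can be sketched roughly as follows; this is a toy illustration, not any vendor's actual design, and the class name and flush interval are invented:

      ```python
      import sqlite3

      class WriteBehindStore:
          """Hold the working set in RAM; the database exists only for durability."""
          def __init__(self):
              self.hot = {}                      # all reads and writes hit this dict
              self.db = sqlite3.connect(":memory:")
              self.db.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")

          def put(self, k, v):
              self.hot[k] = v                    # analytics work entirely off RAM

          def get(self, k):
              return self.hot[k]                 # never touches the disk path

          def checkpoint(self):
              # periodic backup; the DB is only read again after a crash
              self.db.executemany(
                  "INSERT OR REPLACE INTO kv VALUES (?, ?)", self.hot.items())
              self.db.commit()

      store = WriteBehindStore()
      store.put("ticker", "42.5")
      store.checkpoint()
      restored = store.db.execute("SELECT v FROM kv WHERE k='ticker'").fetchone()[0]
      ```

      The point is the inversion of roles: the database no longer sits on the query path at all; it is a checkpoint target, exactly the "backup and restore" role the parent describes.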

      • Mod parent up.

        The post asks an 'or' question which is plainly stupid and demonstrates a lack of knowledge on the part of the poster. Analytics are but one part of organizational asset deployments. In and of themselves, analystics initiatives don't really change storage. There are occasions where outputs are transient, but audit/compliance necessitate storing enough that whatever needs to be constructed can, and what can be legally/ethically discarded will be.

        So data center storage needs don't really change-

      • Comment removed based on user account deletion
      • by dintech ( 998802 )

        I'm a KDB developer at a large financial institution. Most banks using KDB store today's stock market data and an on disk store of everything before today. The theory goes that there is the most to be gained by manipulating the most important data in memory, namely today's data. You need the history but the speed of the on-disk partition is always going to be slower.
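
        The hot/cold split described above (today's data in memory, history on disk) might look like this in outline. This is plain Python standing in for kdb+/q, and the table layout is invented; an in-memory sqlite connection stands in for the slower on-disk partition:

        ```python
        import datetime
        import sqlite3

        today = datetime.date.today().isoformat()
        hot = {}  # today's ticks, kept entirely in memory

        cold = sqlite3.connect(":memory:")  # stand-in for the on-disk partition
        cold.execute("CREATE TABLE ticks (day TEXT, sym TEXT, px REAL)")

        def store_tick(day, sym, px):
            if day == today:
                hot.setdefault(sym, []).append(px)   # fast path: in-memory
            else:
                cold.execute("INSERT INTO ticks VALUES (?,?,?)", (day, sym, px))

        def query(day, sym):
            if day == today:
                return hot.get(sym, [])              # served from RAM
            rows = cold.execute(
                "SELECT px FROM ticks WHERE day=? AND sym=?", (day, sym))
            return [r[0] for r in rows]              # slower historical path

        store_tick(today, "IBM", 145.2)
        store_tick("2010-12-31", "IBM", 144.8)
        ```

        The router is trivial; the engineering is in deciding where "today" ends and history begins.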

      • I work for a large (global) web hosting company, and I'd just like to counter the 'low cost of RAM' idea... Yes, most RAM is cheap, but when you start looking at 'large data sets', cheap is a relative term.

        For example, the HP DL580 G7 can hold a Terabyte of RAM, but to do so it uses 16GB DIMMs, at $1000 each. http://h30094.www3.hp.com/product/sku/5100299/mfg_partno/500666-B21 [hp.com]

        When you add that up, it's $64,000 just for RAM in ONE server. And we don't sell it to you, (in fact we only lease it from HP oursel
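
        The arithmetic checks out, using the prices quoted in the comment:

        ```python
        dimm_gb, dimm_price = 16, 1000           # 16GB DIMMs at $1000 each, per the comment
        server_ram_gb = 1024                     # 1TB in the DL580 G7
        dimms_needed = server_ram_gb // dimm_gb  # 64 DIMMs
        ram_cost = dimms_needed * dimm_price
        print(ram_cost)                          # → 64000
        ```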

        • But low cost is always relative to:

          a: the competition and
          b: the opportunity cost sacrificed by using slower systems

          Even if the absolute cost is high, if it's lower than the cost of either going with a competing product or going with a slower solution, it's still cheap.
  • Totally inane (Score:5, Insightful)

    by MrAnnoyanceToYou ( 654053 ) <dylan@dyRABBITla ... minus herbivore> on Saturday January 01, 2011 @02:14PM (#34731142) Homepage Journal

    Discarding data is something that, as a programmer, I don't often do. Too often I will need it later. Real-time analytics are not going to change this. As long as hard drive storage continues to get cheaper, there's going to be more data stored, partially because the easier it is to store large blocks, the more likely I am to store bigger packets. I'd LOVE to store entire large XML blocks in databases sometimes, and we decide not to because of space issues. So, yeah, no. Datacenters aren't going anywhere. Things just get more complicated on the hosting side.

    Note that the article writer is a strong stakeholder in his earthshattering predictions coming true.

      Indeed. Some information is useful in the short term, but most information is quite useful for long periods of time. I'm personally in the middle of archiving my audio CDs to disk, scanning my photos and sorting my digital images. On top of that I've got emails to hang onto.

      The bigger issue isn't storage space, it's finding a way of keeping track of it all. Deleting the things that you don't need or aren't allowed to store beyond a certain point and keeping track of the other files you do want or need t
      • There must be some way to solve a problem like that, where you have a series of pointers to files, if not the files themselves as well, with the ability to add markers of some kind to each of those pointers. (maybe we can call them, "Records!!!" like CD's used to be called) And then! Then! We can disguise how the management of these 'records' are organized from the user, so they don't have to think about it. And give them a simple, logical way to get data about those 'records' out of the big, organized

        • My point here isn't that you should use a database to store your data about your files, (unfortunately, a unified markup system for files doesn't exist yet; it would be nice, but all that stuff is in the OS right now) my point is that the author of the article is missing that even if in-memory data systems do become extremely large, the underlying theory of the technology will not change much.

          I realize that, but it's a related issue. Back in the 80s, it didn't do you a damned bit of good to know that the file was saved if you had to spend 10 hours sorting through disks to find it. In the modern era that's a much smaller concern for most people as a 1tb disk is quite affordable and there's a number of products to search it efficiently.

          It's something which has been talked about before. The discussion I best remember was in terms of back up systems. (Backup & Recovery [oreilly.com] if you're curious)

          Th

        • Nepomuk [semanticdesktop.org] (there are versions for the three main OSes) and other semantic desktop technologies are working on that. All you need is a tracker to index them and an RDF database.

        • Actually the approach I'm in the middle of implementing will use NexentaStor in a virtual machine with the heaviest duty search engine [probably dtSearch and/or Windows Search with all the plugins] I can lay my hands on to keep things nicely indexed. The community edition of NexentaStor is good to 18 TB which should do for a while as I'm a text junkie not a music or video junkie. It also uses ZFS so it is single-instance storage right out of the box plus additional goodness. I've been planning this for a
    • Re:Totally inane (Score:4, Insightful)

      by fuzzyfuzzyfungus ( 1223518 ) on Saturday January 01, 2011 @02:21PM (#34731212) Journal
      Also, it isn't really all that earthshattering. The fact that RAM is faster and offers lower latency than just about anything else in the system has been true more or less forever. Essentially all OSes of remotely recent vintage already opportunistically use RAM caching to make the apparent speed of disk access suck less (nicer RAID controllers will often have another block of RAM for the same purpose). Programs, at the individual discretion of their creators, already hold on to the stuff that they will need to chew over most often in RAM, and only dump to disk as often as prudence requires.

      The idea that, as advances in semiconductor fabrication make gargantuan amounts of RAM cheaper, high-end users will do more of their work in RAM just doesn't seem like a very bold prediction...
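
      The opportunistic caching mentioned above is easy to sketch: keep recently read blocks in RAM so repeated reads never hit the slow device. The "disk" here is simulated, and the cache size is an arbitrary pick:

      ```python
      from functools import lru_cache

      disk_reads = 0

      def raw_read(block):
          """Stand-in for a slow disk access."""
          global disk_reads
          disk_reads += 1
          return b"data-%d" % block

      @lru_cache(maxsize=1024)        # the RAM cache: repeat reads run at RAM speed
      def cached_read(block):
          return raw_read(block)

      for _ in range(1000):
          cached_read(7)              # 1000 logical reads...
      print(disk_reads)               # → 1 (only the first touched the "disk")
      ```

      This is the whole trick behind the page cache: only the first access pays the device latency.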
      • by Kilrah_il ( 1692978 ) on Saturday January 01, 2011 @02:42PM (#34731334)

        As advances in semiconductor fabrication make gargantuan amounts of RAM cheaper, high-end users will do more of their work in RAM.

        Now you have a bold prediction.
        Sincerely,
        me

        • Re:Totally inane (Score:4, Insightful)

          by tomhudson ( 43916 ) <barbara.hudson@b ... m ['son' in gap]> on Saturday January 01, 2011 @03:09PM (#34731488) Journal
          Good one - except that in this case, a lot of the so-called "work" is BS, consumers are pushing against being data-mined, regulators are getting into the act, and if your business model is so dependent on being a rude invasive pr*ck, perhaps you deserve to die ...

          And the same thing will happen when revenue-strapped governments slap a transfer tax and/or minimum hold periods on stocks - something that should have been done a long time ago.

          • and a lot of it is fraud detection (say, at Visa) and large internet sites deciding what sorts of products to show you when you log in based on your purchase history/similar users' history.
              1. Fraud detection doesn't need microsecond timing. Fraud detection is based on good data, not "fast data"
              2. Behavioral tracking is illegal in several countries. Expect to see more governments giving advertisers a choice - stop, have all behavioral tracking stripped at the borders, be sued into bankruptcy, or just be blocked.
              • by Firehed ( 942385 )

                Fraud detection doesn't need microsecond timing. Fraud detection is based on good data, not "fast data"

                Sorry, but that's just wrong. Fraud analysis on credit transactions needs to be performed extremely quickly (and payment sites that process ACH need to do that quickly as well) in order for the networks to be usable. So while it requires good data, it also needs fast data - and a lot of it. At a minimum, it often looks at the user's complete payment history, the history on that credit card (did the user suddenly change? if so, the card number was probably stolen) not specific to the user, the activity at th

                • The actual indicators of fraud don't need micro-second timing.

                  To the contrary, accepting some delay makes fraud harder.

                  Example - rather than writing balances to temporary storage, then reconciling them with persistent storage, accepting a second or two while the canonical database is doing its thing means that you can't "re-play" a credit card transaction.

                  If you think you need faster than that, you're looking at the problem wrong.
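
                  The replay point can be illustrated: if every authorization is reconciled against the canonical record before it is accepted, an identical resubmission is rejected, even at the cost of a moment of latency. The transaction-ID scheme below is invented for the example:

                  ```python
                  settled = set()   # stand-in for the canonical, persistent transaction log

                  def authorize(txn_id, amount):
                      """Accept a charge only after checking the canonical record."""
                      if txn_id in settled:
                          return "declined: replay"    # same transaction seen before
                      settled.add(txn_id)              # the slow-but-safe write
                      return "approved"

                  first  = authorize("card123-000001", 99.95)
                  replay = authorize("card123-000001", 99.95)   # re-played wire data
                  ```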

        • by Tablizer ( 95088 )

          Because I crave pizza, I have an italics prediction...

      • by epine ( 68316 )

        The fact that RAM is faster and offers lower latency than just about anything else in the system has been true more or less forever.

        This is the problem when the article is so poor to begin with: if you're not careful, you're pulled down to the same inane level. Since my brain isn't working well after reading that tripe, let me add that GaAs has been faster than silicon more or less forever. OK, I'm better now.

        Let's not go too far down that road, or we'll run into the truism that the quickest man for the job is the man with the smallest dataset (and the fattest wallet).

        The more I think about that article, the further I drift away from

    • Re:Totally inane (Score:4, Informative)

      by quanticle ( 843097 ) on Saturday January 01, 2011 @03:22PM (#34731616) Homepage

      I didn't really see the author mention anything about discarding data. Rather, it seems like he's saying that existing databases (which attempt to commit data to persistent storage as soon as possible) will be marginalized as the speed gap between persistent storage and RAM widens. Instead, business applications are going to hold data in RAM, and rely on redundancy to prevent data loss when a system fails before its data has been backed up to the database.

    • by chgros ( 690878 )

      I'd LOVE to store entire large XML blocks in databases sometimes, and we decide not to because of space issues.
      Wait, do you mean that XML takes *less* space than a database? What kind of data do you have in there? I find that a binary format gzipped in a DB is way more efficient (time and space-wise) than XML.

  • by Animats ( 122034 ) on Saturday January 01, 2011 @02:32PM (#34731266) Homepage

    For the cutting edge in this area, see what the "high frequency traders" are doing. Computers aren't fast enough for that any more. The trend is toward writing trading algorithms in VHDL and compiling them into FPGAs [stoneridgetechnology.com], so the actual trading decisions are made in special-purpose hardware. Transaction latency (from trade data in on the wire to action out) is dropping below 10 microseconds. In the high-frequency trading world, if you're doing less than 1000 trades per second, you're not considered serious.

    More generally, we have a fundamental problem in the I/O area: UNIX. UNIX I/O has a very simple model, which is now used by Linux, DOS, and Windows. Everything is a byte stream, and byte streams are accessed by making read and write calls to the operating system. That was OK when I/O was slower. But it's a terrible way to do inter-machine communication in clusters today. The OS overhead swamps the data transfer. Then there's the interaction with CPU dispatching. Each I/O operation usually ends by unblocking some thread, so there's a pass through the scheduler at the receive end. This works on "vanilla hardware" (most existing computers), which is why it dominates.

    Bypassing the read/write model is sometimes done by giving one machine remote direct memory access ("RDMA") into another. This is usually too brutal, and tends to be done in ways that bypass the MMU and process security. So it's not very general. Still, that's how most Ethernet packets are delivered, and how graphics units talk to CPUs.

    The supercomputer interconnect people have been struggling with this for years, but nothing general has emerged. RDMA via Infiniband is about where that group has ended up. That's not something a typical large hosting cluster could use safely.

    Most inter-machine operations are of two types - a subroutine call to another machine, or a queue operation. Those give you the basic synchronous and asynchronous operations. A reasonable design goal is to design hardware which can perform those two operations with little or no operating system intervention once the connection has been set up, with MMU-level safety at both ends. When CPU designers have put in elaborate hardware of comparable complexity, though, nobody uses it. 386 and later machines have hardware for rings of protection, call gates, segmented memory, hardware context switching, and other stuff nobody uses because it doesn't map to vanilla C programming. That has discouraged innovation in this area. A few hardware innovations, like MMX, caught on, but still are used only in a few inner loops.

    It's not that this can't be done. It's that unless it's supported by both Intel and Microsoft, it will only be a niche technology.
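
    The two primitives named above (remote subroutine call and queue operation) reduce to shapes like the following. Here both run in one process with threads, purely to show the synchronous and asynchronous forms; in the hardware being proposed, these would bypass the OS entirely:

    ```python
    import queue
    import threading

    requests = queue.Queue()   # the "wire" to the other machine

    def server():
        """Pretend remote machine: serves both primitives off one queue."""
        while True:
            payload, reply_box = requests.get()
            result = payload * 2                 # the remote work
            if reply_box is not None:
                reply_box.put(result)            # a synchronous caller is waiting

    threading.Thread(target=server, daemon=True).start()

    # Synchronous: subroutine call to another machine (block for the answer)
    def remote_call(x):
        reply_box = queue.Queue(maxsize=1)
        requests.put((x, reply_box))
        return reply_box.get()                   # blocks until the reply arrives

    # Asynchronous: queue operation (enqueue and move on)
    def remote_cast(x):
        requests.put((x, None))

    answer = remote_call(21)
    remote_cast(100)   # returns immediately; no reply expected
    ```

    Note that every `get` here implies a pass through the scheduler, which is exactly the overhead the comment is complaining about.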

    • by Simon80 ( 874052 )
      If Intel tried to market its tools to mainstream and OSS developers (yes, open source the tools), then maybe the stuff would catch on better. They are quite capable of making stuff user-friendly for the average developer, but they only seem to market to the HPC market, because that's where the high margin CPUs sell. I think if they spent more time increasing general awareness of anything, it would be easier to get people to use them in their target markets, which would help them sell high end CPUs anyway.
    • by Gorobei ( 127755 ) on Saturday January 01, 2011 @02:48PM (#34731376)

      Yep, the article is 10-20 years out of date.

      HFT has been using statistical synchronization of dbs for years.

      Big financial shops switched to in-memory dbs decades ago. With co-lo on the compute farms.

      I don't know why he's even talking about 32G boxes as servers. That's a desktop, real db hosts are an order of magnitude bigger.

      His "push the disks to the edge of the network?" Um, that's already happened - it's called tier 2. Tier 1 is the terabytes of solid-state storage we keep just in case.

      This is a blast from the 1990s.

    • by Rich0 ( 548339 ) on Saturday January 01, 2011 @03:11PM (#34731504) Homepage

      There is another simple solution to optimizing HFT - just aggregate and execute all trades once per minute, with the division between each minute taking place in UTC plus/minus a random offset (a few seconds on average - with 98% of divisions being within 5 seconds either way).

      Boom, now there is no need to spend huge amounts of money coming up with lightning-fast implementations that don't actually create real value for ordinary people.

      Business ought to be about improving the lives of ordinary people. Sure, sometimes the link isn't direct, and I'm fine with that. However, we're putting far too much emphasis on optimizing what amounts to numbers games that do nothing to produce real things of value for anybody...
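
      The scheme proposed above, batching all trades once per minute at a randomly jittered boundary, can be sketched as follows. The jitter distribution is my guess at the "98% within 5 seconds" spec, and the single-price clearing rule is deliberately naive:

      ```python
      import random

      def next_batch_boundary(minute_mark, rng=random.Random(0)):
          """Minute boundary in UTC seconds, plus a random offset so bots
          can't aim at the exact cutover instant."""
          jitter = rng.gauss(0, 2.15)        # ~98% of offsets within +/- 5s
          return minute_mark + jitter

      def run_auction(orders):
          """Aggregate everything received in the window; clear at one price."""
          buys  = sorted((o[1] for o in orders if o[0] == "buy"), reverse=True)
          sells = sorted(o[1] for o in orders if o[0] == "sell")
          # naive single clearing price: midpoint of best buy and best sell
          return (buys[0] + sells[0]) / 2

      orders = [("buy", 101.0), ("buy", 100.5), ("sell", 99.5), ("sell", 100.0)]
      price = run_auction(orders)            # everyone in the window gets this price
      ```

      With a randomized boundary and one clearing price per window, being microseconds faster than the next participant buys you nothing.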

      • While I like your future better, I'm guessing that the real one will look more like "A solid ball of hyper-computronium wrapped around the NYSE, tended by robots and powered by a Dyson sphere capturing the entire output of the sun"...

        Sure, the only surviving life forms will be extremophilic bacteria in the wastelands and investment bankers in the Suburbidomes(tm); but think of how high the GDP per capita will be!
      • You really do not understand the domain in question. The whole idea behind hft is to analyze real time data and make a near instantaneous stock trade that capitalizes on that data analysis *before* anyone else does. Waiting a second is too long in this case. The value they add to their customers: Cold hard cash. The value to the stock market: liquidity (fair argument if its too much liquidity).

        • by Rich0 ( 548339 )

          Uh, I understand exactly what it is, and who benefits, which would not be the economy at large.

          The point in aggregating trades is to entirely negate the advantage of HFT, thus eliminating it from the market. It isn't like there wouldn't still be liquidity - you'll just have to wait 1-2 minutes to have an order filled. The average person making a trade usually has a lag of hours between an event happening and getting to make a trade anyway.

          • Oh, so your solution to the technical problem is to get rid of the industry which experiences it?

            Ok, I guess. I'm really more here on slashdot to discover some sweet techniques for solving immensely difficult technical problems.

            I didn't get that from your first post. Maybe because you started out with the technical part? Don't know exactly. I'm not knowledgeable enough in the field of trading to make an intelligent comment about the result of banning HFT. The market does need liquidity, that much I do know.

            • by Rich0 ( 548339 )

              The market does need liquidity, that much I do know.

              The market had plenty of liquidity before the invention of HFT. I'm just suggesting limiting liquidity to a few minutes, rather than a few nanoseconds. Will it really hurt the economy if it takes a stock 10 minutes to plunge 50% rather than a few seconds, with only a few big well-connected institutions getting out in time?

              I'm all for technology that solves real-world problems. However, HFT is a case of where technology and a lack of regulation has actually created real-world problems. Improving HFT actu

          • The liquidity HFT provides should be at arbitrage margins, not the insane profits the players are making. If it makes sense at 0.001%, then go for it. At 0.1%, they are raping the system for the 'value' they provide.

      • Are you sure this won't simply create a different game?

      • by Gorobei ( 127755 )

        Right. We can have the banks just trade once a minute or once a day.

        End users can go back to using Travellers Cheques: sure you spend a few hours of your foreign vacation either getting ripped off or waiting in line at a bank, but hey, at least global trading is now leisurely.

        Stocks are just as good: you paid 3% to trade, but hey, it's a long term investment!

        Commodities? You need a supply of tin? Just buy a tin mine.

        People proposing slowing down trading speeds are like people proposing slowing down com

        • by Rich0 ( 548339 )

          Actually, I'd prefer once per day at midnight, with a blackout on company announcements after 5PM. That would go even further towards leveling the playing field.

          What value does a bot generate when all it does is capitalize on the tiniest fluctuations in stock price? It isn't like it makes the stock any more efficient - the price would certainly adjust itself. The only difference is that some investment bank can't make a fortune solely based on its ping time.

          • by Gorobei ( 127755 )

            So you would be happy if Google could only adjust its search algorithm once a day? It would be a more level playing field, and then search companies couldn't make a fortune based solely on their ping times.

            • by Rich0 ( 548339 )

              Yes, but Google's search algorithms help ordinary people find information they need, and they help real business that produce real things to do so more efficiently, which makes the cost of everything you consume a little cheaper.

              A better HFT algorithm just ensures that some big banker makes a few hundred million more dollars at the expense of any ordinary person who has a retirement account.

              I have nothing against progress. However, most of the financial industry just shuffles numbers around manufacturing m

              • by Gorobei ( 127755 )

                99% of the fuel market is not about day traders scalping a dollar or two on a few thousand barrels of oil. It's more like:

                1. Geeks building code to track every tanker, tender, barge, pipe, and hub in the world to estimate oil availability.
                2. Traders yelling "lease me a tanker" and having people on call to figure the time and cost to get it moving oil from A to B.
                3. Full time meteorologists predicting short-term weather.
                4. Geeks building models based on the above.
                5. Geeks pricing out the cost of refine

                • by Rich0 ( 548339 )

                  Then, why did the price of gasoline drop $1.50 in a few weeks from record highs when the hedge funds dried up?

                  I am sure that lots of effort goes into the logistics of oil distribution, etc. That is all effort well-spent.

                  The part I don't like is when people buy oil futures speculating that prices will rise without any intention to take delivery of the oil. That just results in people bidding up the price.

                  I'm certainly not the only one suggesting that needless speculation drives up the cost of commodities.

                  • by h4rm0ny ( 722443 )

                    Then, why did the price of gasoline drop $1.50 in a few weeks from record highs when the hedge funds dried up?

                    That question is clearly one designed as a counterpoint, but only to those who know what your supposed answer would be. For those of us not well up on the markets, can you explain what significance this has / you believe it has. (Not sarcastic - genuinely ignorant person here).

                    • by Rich0 ( 548339 )

                      Simple - the kinds of people who were up to their eyeballs in hedging the price of oil futures, were also up to their eyeballs in hedging the prices of real-estate, mortgage-backed securities, and credit-default swaps. They lost their shirts, and for a little while they couldn't afford to keep buying oil futures. Suddenly the price of oil plummeted tremendously, and now ordinary people who buy oil for the purpose of actually burning it and not trading it can afford to do so.

                      Derivatives can serve a legitim

                    • by h4rm0ny ( 722443 )

                      Suddenly the price of oil plummeted tremendously, and now ordinary people who buy oil for the purpose of actually burning it and not trading it can afford to do so.

                      Which is a good thing! I get your point now, thank you.

    • Re: (Score:2, Informative)

      by BitZtream ( 692029 )

      So I'm guessing you've never actually done any development?

      The 'byte stream' model is not from UNIX; it's just the way the hardware is laid out physically.

      IPC happens in an entirely different way unless you're using something simplistic like pipes.

      RDMA is pretty much a staple of high speed cluster computing; however, it's DMA that allows pretty much everything in your PC to work without slowing the processor down. Even your keyboard controller uses DMA to get the characters into somewhere useful.

      As far as what

      • The old PS2 keyboards used interrupts, not DMA. USB I'm not sure about.
      • by Animats ( 122034 )

        I usually don't reply to people this stupid. But it's a slow night.

        The 'byte stream' model is not from UNIX, its just the way the hardware is laid out physically.

        No, the hardware isn't laid out that way. The byte stream model is a software-implemented convenience to hide things like disk blocks and packet sizes. There's overhead associated with that, in several senses. You usually have to impose some protocol on top of the stream just to define the boundaries between items. There have been non-UNI
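
        The "protocol on top of the stream" point is concrete: a UNIX byte stream has no message boundaries, so something like length-prefix framing has to be layered on by software, which is exactly the per-message overhead being described. A minimal sketch:

        ```python
        import struct

        def frame(payload: bytes) -> bytes:
            """Prefix each message with a 4-byte length; the stream itself
            carries no boundaries."""
            return struct.pack("!I", len(payload)) + payload

        def unframe(stream: bytes):
            """Walk the byte stream and recover the message boundaries."""
            msgs, i = [], 0
            while i < len(stream):
                (n,) = struct.unpack_from("!I", stream, i)
                msgs.append(stream[i + 4 : i + 4 + n])
                i += 4 + n
            return msgs

        wire = frame(b"hello") + frame(b"world")   # two writes arrive as one blob
        messages = unframe(wire)
        ```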

    • [...]

      More generally, we have a fundamental problem in the I/O area: UNIX. UNIX I/O has a very simple model, which is now used by Linux, DOS, and Windows. Everything is a byte stream, and byte streams are accessed by making read and write calls to the operating system. That was OK when I/O was slower. But it's a terrible way to do inter-machine communication in clusters today. The OS overhead swamps the data transfer. Then there's the interaction with CPU dispatching. Each I/O operation usually ends by unblocking some thread, so there's a pass through the scheduler at the receive end. This works on "vanilla hardware" (most existing computers), which is why it dominates.

      This is true. Though you're underestimating "modern" OSes. Think of it as defensive planning: who knew ~20+ years ago that we would have solid state disks? Who knew we would have 10Gb NICs? SATA?
      But the fundamental design of I/O streams works and is easily adapted to new devices. Add to that the simplicity of /dev and the whole concept of input and output in UNIX. Think about it.

      [...]

      The supercomputer interconnect people have been struggling with this for years, but nothing general has emerged.
      RDMA via Infiniband is about where that group has ended up. That's not something a typical large hosting cluster could use safely.

      Add to that Fibre Channel. And NUMA is an old and tried technology.

      Most inter-machine operations are of two types - a subroutine call to another machine, or a queue operation. Those give you the basic synchronous and asynchronous operations. A reasonable design goal is to design hardware which can perform those two operations with little or no operating system intervention once the connection has been set up, with MMU-level safety at both ends. When CPU designers have put in elaborate hardware of comparable complexity, though, nobody uses it. 386 and later machines have hardware for rings of protection, call gates, segmented memory, hardware context switching, and other stuff nobody uses because it doesn't map to vanilla C programming. That has discouraged innovation in this area. A few hardware innovations, like MMX, caught on, but still are used only in a few inner loops.

      At the cost of my mod points or whatever, now I call

  • Even a single consumer hard drive is a terabyte of storage.... how many servers at any cost have a terabyte of RAM?
    • by Simon80 ( 874052 )
      I think you're missing the point. If the data is analyzed in a single pass as it is received, 1TB of RAM is not necessary.
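
      A single-pass analysis never needs the full data set resident: a running aggregate keeps constant state no matter how much data flows through. An incremental running mean, shown here as one example of the idea:

      ```python
      def running_mean(stream):
          """One pass, O(1) memory: no need to hold the data set in RAM or on disk."""
          count, mean = 0, 0.0
          for x in stream:
              count += 1
              mean += (x - mean) / count   # incremental update; old samples discarded
          return mean

      # A generator stands in for terabytes arriving off the wire.
      avg = running_mean(x * 0.5 for x in range(1, 1_000_001))
      ```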
      • by Anonymous Coward

        I think you are missing the point here. If the data to analyze is so small, then why the fuss? If the data fits in memory, leave it in memory; if not, store it and retrieve it later. Guess what, the place to store your data is probably a database with storage attached. Unless, of course, you are one of those young kids (disclaimer: I'm 28) that reinvent the wheel all the time and write that part themselves, because databases are out.

        So, let's say to analyze your incoming data of size 1MB, you also need to re

        • Re: (Score:2, Interesting)

          by Anonymous Coward

          I think, perhaps, that you're missing the point, at least of the article. It has nothing to do with whether to store information in memory or in the database and everything to do with the current trend of using dedicated analytics products (i.e. OLAP) to do data analysis. Whereas we used to use the same relational databases to store, retrieve and analyze all data with SQL as the Swiss Army knife that enabled it all, we're moving towards a model where the relational database is responsible for storage and re

    • 1TB is still in the realm of rather specialized; but 512GB systems(while not inexpensive) are actually pretty available. A quick glance at Dell shows that(even without the benefits of a rep, volume pricing, or any sort of negotiation), a 2U R815 with 512GB of RAM can be yours for a hair under $40,000. Kitted out with the specs you actually want, of course, it might run you another $20k above that. If AMD isn't your flavor, the intel-based but otherwise similar R810 will run five to ten thousand more than th
      • by Rich0 ( 548339 )

        the cost of enormous amounts of RAM has dropped pretty significantly

        Uh, your example was 512GB, and you're comparing $40k for RAM to about $40 for a hard drive. That's around 1000:1!

        Sure, RAM is only getting cheaper, but so are hard drives. A few years ago I got 2GB of RAM for about the same price as 320GB of hard drive. So, if anything the relative cost of RAM has gone UP, and not down...

        • Oh, RAM isn't even close to HDDs, nor is there any reason to expect that it will ever be, if you care about storage space. Only if latency and IOPS are at issue does RAM become a relevant competitor. When it comes to I/O operations, particularly highly random ones scattered across the storage area, RAM will (unsurprisingly, given what its name stands for) absolutely wipe the floor with anything with moving parts. To even touch the I/O performance, you would probably be talking multiple racks jammed full of to
      • by Tablizer ( 95088 )

        At those prices, I'd venture to say that Flash still has a reasonably bright future

        Unlike your puns ;-)
           

        • That one wasn't even intentional, unfortunately. My love of puns has, apparently, seeped directly into whatever part of my brain is responsible for day-to-day verbal and written work...
    • We bought a machine for FEM a few weeks ago (there was budget left for 2010).

      4×12-core Opteron, 256 GB RAM. $12k.
      Which is peanuts, pretty much.

      So I have little doubt that 1TB of RAM is quite affordable nowadays if you have big-iron-level money available.

  • It's funny that only today I chatted with some folks on the PostgreSQL IRC support channel about this, asking whether it is at all possible to have 2 postmasters running at the same time, one to do in-memory SQL against an all-in-memory database, and the other to write to the database (and no, they think that it is not possible to have 2 postmasters talking to the same database this way, they believe it will corrupt the data). The suggestion was just to increase shared_buffers and the file system block buffer cache.
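As a rough local analogue of the all-in-memory SQL idea (this uses SQLite, not PostgreSQL, and the table and data are invented for illustration), a database can live entirely in RAM:

```python
import sqlite3

# An entirely in-memory SQL database: nothing ever touches disk,
# so the data vanishes when the connection closes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [(1, 10.0), (1, 5.5), (2, 7.25)],
)

# The analytics query runs against RAM-resident pages only.
total = conn.execute(
    "SELECT user_id, SUM(amount) FROM events "
    "GROUP BY user_id ORDER BY user_id"
).fetchall()
print(total)  # [(1, 15.5), (2, 7.25)]
conn.close()
```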

  • This is all well-and-good until someone accidentally knocks out the power. Then all of that stuff needs to be recomputed if it's not stored to disk.
  • by mwvdlee ( 775178 ) on Saturday January 01, 2011 @03:06PM (#34731466) Homepage

    I'm getting sick and tired of hearing about yet another hype in IT-land where everything has to be done in yet another new way.

    All developers understand that different problems require different solutions. Will the managers who shove this crap up our asses please stop doing so? It's not productive; you're not going to get a better solution by forcing it to be implemented in whatever buzzword falls off the latest bandwagon of an ever-growing parade of buzzwords.

    "In-memory analytics" is what we started out with before databases, and guess what; it's never gone away. We've never stopped using it. Now just tell us what problem you have and let us developers decide how to solve it.
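To illustrate the point, "in-memory analytics" at its most basic is just aggregating over ordinary data structures; the sample records below are invented:

```python
from collections import defaultdict

# Plain in-memory aggregation -- no database, no buzzwords.
records = [
    {"region": "east", "sales": 120},
    {"region": "west", "sales": 80},
    {"region": "east", "sales": 45},
]

# Group-by-and-sum over a dict, the way it has always been done.
totals = defaultdict(int)
for rec in records:
    totals[rec["region"]] += rec["sales"]

print(dict(totals))  # {'east': 165, 'west': 80}
```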

    • Exactly, "in-memory analytics" sounds like more marketing BS, just another way to sell some unneeded software or service.
    • by Desert Raven ( 52125 ) on Saturday January 01, 2011 @03:24PM (#34731634)

      Agreed, someone comes up with something new to solve a very specific issue, and all of a sudden someone's predicting how it will completely replace everything else in the next month.

      Grow up.

      Physical storage and relational databases aren't going anywhere anytime soon. In-memory this and non-relational that are all well and good for the specific problems they were designed for, but physically stored, relational data fits the needs of 90% of data storage and retrieval. I sure as HECK don't want my bank storing my financial data purely in memory.

      So keep yelling to yourselves about how the sky is falling on traditional techniques. Meanwhile the rest of us have real work to do.

      • by AllenNg ( 954165 )
        I think you're missing a few evolutionary pieces. Most data analytics systems that I'm aware of are not currently relational. Long ago, the data lived in memory, but memory was expensive, so everything was moved to disk. The relational model added the formalisms of normalization (to cut down on space, among other reasons), but the types of multi-dimensional queries used by the analytics apps required too many joins for this to work. So the data was de-normalized (e.g. OLAP) to improve performance.
    • It also ticks me off how they redesign these existing practices, to the point where they stop making sense, and you have to relearn the new and better (read: rephrased) technology. Almost like they want you to rewrite all those tests...

      CAPTCHA: KISSUBAI - keep it simple stupid, unless buzzwords are involved

  • Download a free (as in beer) app http://www.qlikview.com/us/explore/experience/free-download [qlikview.com] and see for yourself what current commercial software can do. I load as much as a hundred GB into RAM for analytics with this application. Just keep in mind that a star schema is best for this software. Get your tables from an existing database as flat files, load them "as is" and start analysis immediately.

  • But putting data in system RAM = harder reboots, as you need to dump it all to disk first. Also, what about UPSes? You need one with enough capacity to last for the time it takes to do that dump.

    • And god forbid your system halts and you lose any data you haven't already committed to persistent storage.

    • by Anonymous Coward

      You should be using stable operating systems and diesel backups. You should also be using clusters with the same data so a loss of one system isn't catastrophic.

      • What help is diesel when the main power room with the transfer switch is on fire and the UPSes don't have the capacity to run the systems for long? They're typically sized just to bridge the time it takes for the diesel to start up.

    • by hazem ( 472289 )

      Decentralization is the way.

      If you're a consultant and find a client working in a centralized way, you sell decentralization as the way to solve all their woes. If you find them working in a decentralized way, you sell them on centralizing to solve all their woes.

      There are only two constants here: 1) every business has woes, regardless of structure; 2) consultants extract lots of value by shifting those woes around

  • You will always want that data so you can manipulate it in some other manner that wasn't taken into account by the in-memory analysis, or even the scope of your project. These marketing blokes sure like to seize the day, don't they?
  • Hard drives aren't really as slow as people think. The problem is that mechanical hard drives are slow at seeking, but if seeking can be eliminated, you can quite easily saturate your CPU with even a moderately complex calculation.

    Case in point: http://www.youtube.com/watch?v=WQw7c-PliB4 [youtube.com]
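The effect described above can be sketched by reading the same file once sequentially and once as many small seek-and-read operations; note this toy test mostly hits the OS page cache, so the gap on a real spinning disk would be far larger:

```python
import os
import random
import tempfile
import time

# Write a 1 MiB scratch file to read back two ways.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(os.urandom(1 << 20))

# Sequential: one streaming read.
t0 = time.perf_counter()
with open(path, "rb") as f:
    seq_data = f.read()
seq_time = time.perf_counter() - t0

# Random: many small seek+read operations in shuffled order.
t0 = time.perf_counter()
chunks = bytearray(len(seq_data))
offsets = list(range(0, len(seq_data), 4096))
random.shuffle(offsets)
with open(path, "rb") as f:
    for off in offsets:
        f.seek(off)
        chunks[off:off + 4096] = f.read(4096)
rand_time = time.perf_counter() - t0

os.remove(path)
print(f"sequential: {seq_time:.4f}s, random: {rand_time:.4f}s")
# Either way, the same bytes come back -- only the access pattern differs.
```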

  • In-memory data storage is fine as long as it isn't primary data storage. Yes it's faster but there are a lot of downsides as well. The most important is that it isn't easy to share between servers (a close second is that it's hard to replicate to a remote site for disaster recovery purposes) so each server needs to have its own copy of the data and there needs to be some way of keeping all that data in sync.

    The alternative is to have good old "traditional" storage sitting where it always sits.

  • I believe VoltDB (http://voltdb.org) uses in-memory storage and MPP, if anyone is interested in giving it a test-spin. It's from Michael Stonebraker, of Ingres, Vertica, etc. fame.

    They've been doing a number of presentations on the topic you can probably find on the site.

  • by drdrgivemethenews ( 1525877 ) on Saturday January 01, 2011 @05:31PM (#34732580)
    Although TFA doesn't say so explicitly, I think it's talking about the race to get the best targeted advertising analytics in place for global applications like eBay, FB etc. These applications don't have the same database requirements as traditional business apps. It makes sense to talk about new ways of doing things for them, but TFA's author and a lot of other people make the mistake of thinking or implying that these new techniques will apply directly to traditional business apps as well. Sorry, not.

    ----------

    Happy New Year, may it suck less for ya than the last one.
  • Most OSes and programming languages will let you map your in-memory data structure to a contiguous disk file so your disk I/O is performed at paging speeds. The file system is only touched when the file is mapped (opened). Your system can then be configured to choose to what degree your data is in memory vs. on disk.
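A minimal sketch of the memory-mapping described above, using Python's `mmap` (the file contents are invented for illustration):

```python
import mmap
import os
import tempfile

# Create a small backing file on disk.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello, disk-backed world")
os.close(fd)

# Map it: the OS pages data in and out on demand, so once pages are
# resident, reads and writes to `mm` happen at memory speed.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:
        assert mm[:5] == b"hello"
        mm[0:5] = b"HELLO"   # modify the mapped bytes in place
        mm.flush()           # push dirty pages back to the file

# The file system only saw an open, a map, and a flush.
with open(path, "rb") as f:
    contents = f.read()
print(contents)  # b'HELLO, disk-backed world'
os.remove(path)
```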
  • Remember when the first 64-bit machines became commercially available?
    "zOMG, now we can keep whole databases in RAM with the 4GB limit gone!"
    This is just CS101. Memory hierarchy - you keep your data in the fastest memory it'll fit in (that you can afford.)
    Now we can afford more RAM so we can do more per unit time because we don't have to wait for IO. Duh.

  • WTF, is SoulSkill still drunk?

    This is SO nothing new, nor is it even interesting.

    In-memory DBs are nothing new; they are simply prone to failure, and this is why hardware storage, be it spinning drives or flash, will always be around.

    All it takes is one hiccup by the memory logic, an interrupt controller, or a DMA channel, and all your in-memory data is toast, forcing a reload from the last checkpoint, which can take quite a while when you are talking about, say, a terabyte of information.
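Checkpointing of the sort described is typically just periodic serialization of the in-memory state; a minimal sketch (the state dict and file name are invented):

```python
import os
import pickle
import tempfile

# The in-memory working set.
state = {"sessions": 1042, "revenue": 31337.5}

# Checkpoint: persist a snapshot so a crash only loses work done
# since the last snapshot, not everything.
fd, ckpt = tempfile.mkstemp(suffix=".ckpt")
os.close(fd)
with open(ckpt, "wb") as f:
    pickle.dump(state, f)

# ...power hiccup, process restart...

# Recovery: reload from the last checkpoint. For a terabyte of
# state, this reload is exactly the slow part the parent describes.
with open(ckpt, "rb") as f:
    restored = pickle.load(f)
os.remove(ckpt)

print(restored == state)  # True
```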

    Clifford Hersh and Jeffery Spirn

  • No, we will simply keep everything powered on and start anew if we lose power.

    Also, we will all be mega-corps, even at home. No one will start with datasets under a few petabytes. Not even for photos and text.

  • "The number of flash drives or PCIe flash devices needed to achieve the performance of main memory is not cost-effective, given the number of PCIe buses needed, the cost of the devices and the complexity of using them compared to just using memory."

    ... "Even if flash device latency improves, it still has to go through the OS and PCIe bus compared to a direct hardware translation to address memory."

    ... "Knowledge of how to do I/O efficiently is limited because I/O programming is not taught in schools."
