Replacing Traditional Storage, Databases With In-Memory Analytics
storagedude writes "Traditional databases and storage networks, even those sporting high-speed solid state drives, don't offer enough performance for the real-time analytics craze sweeping corporations, giving rise to in-memory analytics, or data mining performed in memory without the limitations of the traditional data path. The end result could be that storage and databases get pushed to the periphery of data centers and in-memory analytics becomes the new critical IT infrastructure. From the article: 'With big vendors like Microsoft and SAP buying into in-memory analytics to solve Big Data challenges, the big question for IT is what this trend will mean for the traditional data center infrastructure. Will storage, even flash drives, be needed in the future, given the requirement for real-time data analysis and current trends in design for real-time data analytics? Or will storage move from the heart of data centers and become merely a means of backup and recovery for critical real-time apps?'"
Goodbye Orwell (Score:2, Interesting)
The marginalization of long-term data storage can only be a good thing -- the big advertising and other firms get the analytical data that actually matters to their bottom line, and to the extent that the average joe's privacy is being invaded at the very least the fruits of that invasion will become increasingly accessible.
Re:Goodbye Orwell (Score:5, Informative)
You're misinterpreting the post. No one said anything about long term data storage being marginalized or eliminated. Instead, the author is talking about the difference between persistent and non-persistent storage. He's saying that existing database technologies that rely on persistent storage are being marginalized as the speed difference between spinning disks and RAM widens, and the low cost of RAM makes it practical to hold large data sets entirely in memory. According to the author, data processing and analysis will increasingly move towards in-memory systems, while traditional databases will be relegated to a "backup and restore" role for these in-memory systems.
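The split being described (all reads and writes served from RAM, with the database demoted to a backup-and-restore role) can be sketched in a few lines. This is an illustrative toy, not any vendor's actual product; the class and method names are invented:

```python
import json

class InMemoryStore:
    """Toy sketch: every read and write hits RAM; persistent storage
    appears only for backup (snapshot) and recovery (restore).
    Class and method names are invented for illustration."""

    def __init__(self):
        self.data = {}

    def put(self, key, value):
        self.data[key] = value        # lands in RAM only

    def get(self, key):
        return self.data.get(key)     # never touches disk

    def snapshot(self):
        """Backup: the one place persistence enters the picture."""
        return json.dumps(self.data)

    @classmethod
    def restore(cls, blob):
        """Recovery after a failure: reload the last snapshot."""
        store = cls()
        store.data = json.loads(blob)
        return store
```

Anything written after the last snapshot is lost on a crash, which is exactly why real systems of this shape lean on redundancy between nodes.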
Re: (Score:2)
Mod parent up.
The post asks an 'or' question which is plainly stupid and demonstrates a lack of knowledge on the part of the poster. Analytics are but one part of organizational asset deployments. In and of themselves, analytics initiatives don't really change storage. There are occasions where outputs are transient, but audit/compliance requirements necessitate storing enough that whatever needs to be reconstructed can be, and what can be legally/ethically discarded will be.
So data center storage needs don't really change-
Re: (Score:2)
RAM since DDR has gotten so ridiculously fast that NO SSD has a snowball's chance of catching up anytime soon, if at all,
Apparently you haven't heard of PRAM.
http://en.wikipedia.org/wiki/Phase-change_memory [wikipedia.org]
If PRAM doesn't pan out, there are other nonvolatile as-fast-or-faster-than-DRAM technologies in the works as well.
Re: (Score:2)
I'm a KDB developer at a large financial institution. Most banks using KDB store today's stock market data and an on disk store of everything before today. The theory goes that there is the most to be gained by manipulating the most important data in memory, namely today's data. You need the history but the speed of the on-disk partition is always going to be slower.
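The hot/cold split described above can be sketched roughly like this. This is illustrative Python, not actual kdb+/q; the names are invented and the "on disk" tier is simulated with a dict:

```python
class TieredTickStore:
    """Toy sketch of the split described above: today's ticks stay in
    an in-memory list (hot tier), while older days live in an 'on
    disk' partition keyed by date (simulated here with a dict)."""

    def __init__(self, today):
        self.today = today
        self.realtime = []      # hot tier: today's data, held in RAM
        self.historical = {}    # cold tier: date -> ticks, "on disk"

    def insert(self, date, tick):
        if date == self.today:
            self.realtime.append(tick)                         # fast path
        else:
            self.historical.setdefault(date, []).append(tick)  # slow path

    def query(self, date):
        if date == self.today:
            return self.realtime                 # memory-speed
        return self.historical.get(date, [])     # disk-speed in real life
```

The design choice is exactly the one the parent states: spend the RAM on the data most likely to be queried, and accept that historical partitions are slower.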
Re: (Score:3)
I work for a large (global) web hosting company, and I'd just like to counter the 'low cost of RAM' idea... Yes, most RAM is cheap, but when you start looking at 'large data sets', cheap is a relative term.
For example, the HP DL580 G7 can hold a Terabyte of RAM, but to do so it uses 16GB DIMMs, at $1000 each. http://h30094.www3.hp.com/product/sku/5100299/mfg_partno/500666-B21 [hp.com]
When you add that up, it's $64,000 just for RAM in ONE server. And we don't sell it to you, (in fact we only lease it from HP oursel
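For what it's worth, the arithmetic behind that figure checks out, using the DIMM size and price quoted above:

```python
# Back-of-envelope check of the figure quoted above: filling a 1TB
# server with 16GB DIMMs at roughly $1000 apiece.
dimm_size_gb = 16
dimm_price_usd = 1000
total_gb = 1024                                # 1 TB

dimms_needed = total_gb // dimm_size_gb        # 64 DIMMs
ram_cost_usd = dimms_needed * dimm_price_usd   # $64,000 for RAM alone
print(dimms_needed, ram_cost_usd)
```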
Re: (Score:2)
a: the competition and
b: the opportunity cost sacrificed by using slower systems
Even if the absolute cost is high, if it's lower than the cost of either going with a competing product or going with a slower solution, it's still cheap.
Totally inane (Score:5, Insightful)
Discarding data is something that, as a programmer, I don't often do. Too often I will need it later. Real-time analytics are not going to change this. As long as hard drive storage continues to get cheaper, there's going to be more data stored, partly because the easier it is to store large blocks, the more likely I am to store bigger packets. I'd LOVE to store entire large XML blocks in databases sometimes, and we decide not to because of space issues. So, yeah, no. Datacenters aren't going anywhere. Things just get more complicated on the hosting side.
Note that the article's writer is a strong stakeholder in his earth-shattering predictions coming true.
Re: (Score:2)
The bigger issue isn't storage space, it's finding a way of keeping track of it all. Deleting the things that you don't need or aren't allowed to store beyond a certain point and keeping track of the other files you do want or need t
Re: (Score:2)
There must be some way to solve a problem like that, where you have a series of pointers to files, if not the files themselves as well, with the ability to add markers of some kind to each of those pointers. (maybe we can call them, "Records!!!" like CD's used to be called) And then! Then! We can disguise how the management of these 'records' are organized from the user, so they don't have to think about it. And give them a simple, logical way to get data about those 'records' out of the big, organized
Re: (Score:2)
My point here isn't that you should use a database to store your data about your files, (unfortunately, a unified markup system for files doesn't exist yet; it would be nice, but all that stuff is in the OS right now) my point is that the author of the article is missing that even if in-memory data systems do become extremely large, the underlying theory of the technology will not change much.
I realize that, but it's a related issue. Back in the 80s, it didn't do you a damned bit of good to know that the file was saved if you had to spend 10 hours sorting through disks to find it. In the modern era that's a much smaller concern for most people as a 1tb disk is quite affordable and there's a number of products to search it efficiently.
It's something which has been talked about before. The discussion I best remember was in terms of back up systems. (Backup & Recovery [oreilly.com] if you're curious)
Th
Re: (Score:2)
Nepomuk [semanticdesktop.org] (there are versions for the three main OSes) and other semantic desktop technologies are working on that. All you need is a tracker to index them and a RDF database.
Re:Totally inane (Score:4, Insightful)
The idea that, as advances in semiconductor fabrication make gargantuan amounts of RAM cheaper, high-end users will do more of their work in RAM just doesn't seem like a very bold prediction...
Re:Totally inane (Score:4, Funny)
As advances in semiconductor fabrication make gargantuan amounts of RAM cheaper, high-end users will do more of their work in RAM.
Now you have a bold prediction.
Sincerely,
me
Re:Totally inane (Score:4, Insightful)
And the same thing will happen when revenue-strapped governments slap a transfer tax and/or minimum hold periods on stocks - something that should have been done a long time ago.
Re: (Score:3)
Fraud detection doesn't need microsecond timing. Fraud detection is based on good data, not "fast data"
Sorry, but that's just wrong. Fraud analysis on credit transactions needs to be performed extremely quickly (and payment sites that process ACH need to do that quickly as well) in order for the networks to be usable. So while it requires good data, it also needs fast data - and a lot of it. At a minimum, it often looks at the user's complete payment history, the history on that credit card (did the user suddenly change? if so, the card number was probably stolen) not specific to the user, the activity at th
Re: (Score:2)
To the contrary, accepting some delay makes fraud harder.
Example - rather than writing balances to temporary storage, then reconciling them with persistent storage, accepting a second or two while the canonical database is doing its thing means that you can't "re-play" a credit card transaction.
If you think you need faster than that, you're looking at the problem wrong.
Re: (Score:2)
Governments? Give me a break. Show me one government that has the spine to stand up to ad agencies, either the snarfers at the front line like Phorm, or the data-miners. Ain't gonna happen. Even the EU is running scared and has backed down, showing that they pretty much have zero interest in privacy, even though the lessons in privacy were taught very brutally during WWII.
Jennifer Stoddard, Canada's Privacy Commissioner. She's the one who forced Facebook to change their procedures the last time, and she's got them in her sights again.
And at $11,000 per incident (page view), it would quickly send Facebook into Chapter 11.
Especially since the last time, the Europeans quickly joined in.
Oh, revenue transfer tax... also not going to happen. Especially with the Tea Party here in the US having a stranglehold on the government this year. Expect to see government just give a rubber stamp to any business practices, no matter how unethical.
Several states and many local governments won't be able to roll over their bonds. Likely candidates include California, Nevada, New York, Michigan, etc. At that point, Uncle Sam has 4 choice
Re: (Score:2)
And at $11,000 per incident (page view), it would quickly send Facebook into Chapter 11.
Sure it would. Facebook would simply exit Canada. Users would complain, but who gives a shit about them, right? But advertisers would also complain that they don't have access to that market anymore. And advertisers are just another word for business. Stoddard may really be anti-business, but I wonder if her bosses are, or if her new bosses would be.
Don't kid yourself. Facebook isn't going anywhere, not until th
Re: (Score:2)
And at $11,000 per incident (page view), it would quickly send Facebook into Chapter 11. Sure it would. Facebook would simply exit Canada. Users would complain, but who gives a shit about them, right? But advertisers would also complain that they don't have access to that market anymore. And advertisers are just another word for business. Stoddard may really be anti-business, but I wonder if her bosses are, or if her new bosses would be.
Don't kid yourself. Facebook isn't going anywhere, not until the users stop using it.
There are always other companies ready to fill in the gap. That's the nature of the beast, and Facebook knows it - just like they know that their user statistics are totally cooked.
You can buy facebook followers at the rate of 5 for a penny. The only ones who would be impacted are the "social media directors" who would be shown to be totally superfluous.
-- Barbie
Re: (Score:1)
Because I crave pizza, I have an italics prediction...
Re: (Score:1)
It isn't? What's wrong with you people?
Re: (Score:1)
Hey, I'm happy with my Commodore 64, but I am considering getting an Amiga.
Re: (Score:2)
Personally, I'm thinking about getting a C64 myself.
Re: (Score:2)
You are one sick puppy! I mean, you had an argument about a fucking HOSTS file and you didn't agree. What do you do? Do you go back to your private real-world life and ignore the other person's comment? No, you find out when he posts regarding a completely unrelated topic and flame him there.
Get a life, man.
Oh, and you still didn't find the time to register a username on /. (or you really are a coward). Sweet.
Re: (Score:2)
The fact that RAM is faster and offers lower latency than just about anything else in the system has been true more or less forever.
This is the problem when the article is so poor to begin with: if you're not careful, you're pulled down to the same inane level. Since my brain isn't working well after reading that tripe, let me add that GaAs has been faster than silicon more or less forever. OK, I'm better now.
Let's not go too far down that road, or we'll run into the truism that the quickest man for the job is the man with the smallest dataset (and the fattest wallet).
The more I think about that article, the further I drift away from
Re:Totally inane (Score:4, Informative)
I didn't really see the author mention anything about discarding data. Rather, it seems like he's saying that existing databases (which attempt to commit data to persistent storage as soon as possible) will be marginalized as the speed gap between persistent storage and RAM widens. Instead, business applications are going to hold data in RAM, and rely on redundancy to prevent data loss when a system fails before its data has been backed up to the database.
Re: (Score:2)
I'd LOVE to store entire large XML blocks in databases sometimes, and we decide not to because of space issues.
Wait, do you mean that XML takes *less* space than a database? What kind of data do you have in there? I find that a binary format gzipped in a DB is way more efficient (time and space-wise) than XML.
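A quick illustration of that point, comparing the same toy records serialized as verbose XML versus a packed binary format. Sizes will vary with real data; this is only a sketch:

```python
import gzip
import struct

# The same 1000 toy records serialized two ways: verbose XML vs. a
# packed binary format (one int32 + one float64 = 12 bytes per record).
records = [(i, i * 0.5) for i in range(1000)]

xml = "<rows>" + "".join(
    "<row><id>%d</id><val>%s</val></row>" % (i, v) for i, v in records
) + "</rows>"
binary = b"".join(struct.pack("<id", i, v) for i, v in records)

print(len(xml.encode()), len(binary))    # raw: XML is several times larger
print(len(gzip.compress(xml.encode())), len(gzip.compress(binary)))
```

Gzip narrows the gap because XML's tag repetition compresses well, but the packed form still starts from a much smaller baseline.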
The cutting edge is in high frequency trading (Score:5, Informative)
For the cutting edge in this area, see what the "high frequency traders" are doing. Computers aren't fast enough for that any more. The trend is toward writing trading algorithms in VHDL and compiling them into FPGAs [stoneridgetechnology.com], so the actual trading decisions are made in special-purpose hardware. Transaction latency (from trade data in on the wire to action out) is dropping below 10 microseconds. In the high-frequency trading world, if you're doing less than 1000 trades per second, you're not considered serious.
More generally, we have a fundamental problem in the I/O area: UNIX. UNIX I/O has a very simple model, which is now used by Linux, DOS, and Windows. Everything is a byte stream, and byte streams are accessed by making read and write calls to the operating system. That was OK when I/O was slower. But it's a terrible way to do inter-machine communication in clusters today. The OS overhead swamps the data transfer. Then there's the interaction with CPU dispatching. Each I/O operation usually ends by unblocking some thread, so there's a pass through the scheduler at the receive end. This works on "vanilla hardware" (most existing computers), which is why it dominates.
Bypassing the read/write model is sometimes done by giving one machine remote direct memory access ("RDMA") into another. This is usually too brutal, and tends to be done in ways that bypass the MMU and process security. So it's not very general. Still, that's how most Ethernet packets are delivered, and how graphics units talk to CPUs.
The supercomputer interconnect people have been struggling with this for years, but nothing general has emerged. RDMA via Infiniband is about where that group has ended up. That's not something a typical large hosting cluster could use safely.
Most inter-machine operations are of two types - a subroutine call to another machine, or a queue operation. Those give you the basic synchronous and asynchronous operations. A reasonable design goal is to design hardware which can perform those two operations with little or no operating system intervention once the connection has been set up, with MMU-level safety at both ends. When CPU designers have put in elaborate hardware of comparable complexity, though, nobody uses it. 386 and later machines have hardware for rings of protection, call gates, segmented memory, hardware context switching, and other stuff nobody uses because it doesn't map to vanilla C programming. That has discouraged innovation in this area. A few hardware innovations, like MMX, caught on, but still are used only in a few inner loops.
It's not that this can't be done. It's that unless it's supported by both Intel and Microsoft, it will only be a niche technology.
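The two primitives the grandparent identifies (a synchronous subroutine call to another machine, and an asynchronous queue operation) can be sketched with threads standing in for separate hosts. This is purely illustrative; a real implementation is exactly the hardware-assisted, OS-bypassing version being argued for:

```python
import queue
import threading

# A toy "server" draining one request queue; replies come back on a
# per-call reply queue. Threads stand in for separate machines here.
requests = queue.Queue()

def server():
    while True:
        payload, reply_q = requests.get()
        if payload is None:          # shutdown sentinel
            break
        reply_q.put(payload * 2)     # the "remote" work

threading.Thread(target=server, daemon=True).start()

def remote_call(x):
    """Synchronous primitive: block until the other side answers."""
    reply_q = queue.Queue()
    requests.put((x, reply_q))
    return reply_q.get()

def enqueue(x):
    """Asynchronous primitive: fire and forget."""
    requests.put((x, queue.Queue()))
```

Here `remote_call(21)` blocks and returns 42, while `enqueue` returns immediately. Every `get` and `put` above goes through the scheduler, which is precisely the overhead the comment says purpose-built hardware should eliminate.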
Re:The cutting edge is in high frequency trading (Score:4, Interesting)
Yep, the article is 10-20 years out of date.
HFT has been using statistical synchronization of dbs for years.
Big financial shops switched to in-memory dbs decades ago. With co-lo on the compute farms.
I don't know why he's even talking about 32G boxes as servers. That's a desktop, real db hosts are an order of magnitude bigger.
His "push the disks to the edge of the network?" Um, that's already happened - it's called tier 2. Tier 1 is the terabytes of solid-state storage we keep just in case.
This is a blast from the 1990s.
Re:The cutting edge is in high frequency trading (Score:4, Insightful)
There is another simple solution to optimizing HFT - just aggregate and execute all trades once per minute, with the division between each minute taking place in UTC plus/minus a random offset (a few seconds on average - with 98% of divisions being within 5 seconds either way).
Boom, now there is no need to spend huge amounts of money coming up with lightning-fast implementations that don't actually create real value for ordinary people.
Business ought to be about improving the lives of ordinary people. Sure, sometimes the link isn't direct, and I'm fine with that. However, we're putting far too much emphasis on optimizing what amounts to numbers games that do nothing to produce real things of value for anybody...
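The batch-auction idea above (once-per-minute clearing with a randomized cutoff) can be sketched like so. Function and parameter names are invented for illustration; with a jitter sigma of ~2 seconds, about 98% of cutoffs land within 5 seconds of the minute, matching the numbers given:

```python
import random

def batch_boundaries(n_minutes, jitter_sigma=2.0, seed=None):
    """Nominal once-per-minute auction cutoffs (in seconds since the
    open), each shifted by a small Gaussian offset so nobody can aim
    orders at the exact boundary. Names invented for illustration."""
    rng = random.Random(seed)
    return [60.0 * m + rng.gauss(0, jitter_sigma)
            for m in range(1, n_minutes + 1)]

def assign_batch(order_time, boundaries):
    """Every order arriving before a cutoff clears together in one
    auction, so sub-second speed advantages stop mattering."""
    for batch, cutoff in enumerate(boundaries):
        if order_time < cutoff:
            return batch
    return len(boundaries)
```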
Re: (Score:2)
Sure, the only surviving life forms will be extremophilic bacteria in the wastelands and investment bankers in the Suburbidomes(tm); but think of how high the GDP per capita will be!
Re: (Score:3)
You really do not understand the domain in question. The whole idea behind HFT is to analyze real-time data and make a near-instantaneous stock trade that capitalizes on that data analysis *before* anyone else does. Waiting a second is too long in this case. The value they add to their customers: cold hard cash. The value to the stock market: liquidity (fair argument if it's too much liquidity).
Re: (Score:2)
Uh, I understand exactly what it is, and who benefits, which would not be the economy at large.
The point in aggregating trades is to entirely negate the advantage of HFT, thus eliminating it from the market. It isn't like there wouldn't still be liquidity - you'll just have to wait 1-2 minutes to have an order filled. The average person making a trade usually has a lag of hours between an event happening and getting to make a trade anyway.
Re: (Score:2)
Oh, so your solution to the technical problem is to get rid of the industry which experiences it?
Ok, I guess. I'm really more here on slashdot to discover some sweet techniques for solving immensely difficult technical problems.
I didn't get that from your first post. Maybe because you started out with the technical part? Don't know exactly. I'm not knowledgeable enough in the field of trading to make an intelligent comment about the result of banning HFT. The market does need liquidity, that much I do know.
Re: (Score:2)
The market does need liquidity, that much I do know.
The market had plenty of liquidity before the invention of HFT. I'm just suggesting limiting liquidity to a few minutes, rather than a few nanoseconds. Will it really hurt the economy if it takes a stock 10 minutes to plunge 50% rather than a few seconds, with only a few big well-connected institutions getting out in time?
I'm all for technology that solves real-world problems. However, HFT is a case of where technology and a lack of regulation has actually created real-world problems. Improving HFT actu
Re: (Score:2)
The liquidity HFT provides should be at arbitrage margins, not the insane profits the players are making. If it makes sense at 0.001%, then go for it. At 0.1%, they are raping the system for the 'value' they provide.
Re: (Score:2)
Well, my proposal to randomize the exact time that trades are executed was intended to accomplish the purpose of preventing last-nanosecond orders from coming in.
Insider trading will always be a problem that has no technical solution. However, even lower-frequency trades, like once a day, might help equalize access to the markets.
Re: (Score:1)
Are you sure this won't simply create a different game?
Re: (Score:2)
Right. We can have the banks just trade once a minute or once a day.
End users can go back to using Travellers Cheques: sure you spend a few hours of your foreign vacation either getting ripped off or waiting in line at a bank, but hey, at least global trading is now leisurely.
Stocks are just as good: you paid 3% to trade, but hey, it's a long term investment!
Commodities? You need a supply of tin? Just buy a tin mine.
People proposing slowing down trading speeds are like people proposing slowing down com
Re: (Score:2)
Actually, I'd prefer once per day at midnight, with a blackout on company announcements after 5PM. That would go even further towards leveling the playing field.
What value does a bot generate when all it does is capitalize on the tiniest fluctuations in stock price. It isn't like it makes the stock any more efficient - the price would certainly adjust itself. The only difference is that some investment bank can't make a fortune solely based on its ping time.
Re: (Score:2)
So you would be happy if Google could only adjust its search algorithm once a day? It would be a more level playing field, and then search companies couldn't make a fortune based solely on their ping times.
Re: (Score:2)
Yes, but Google's search algorithms help ordinary people find information they need, and they help real business that produce real things to do so more efficiently, which makes the cost of everything you consume a little cheaper.
A better HFT algorithm just ensures that some big banker makes a few hundred million more dollars at the expense of any ordinary person who has a retirement account.
I have nothing against progress. However, most of the financial industry just shuffles numbers around manufacturing m
Re: (Score:2)
99% of the fuel market is not about day traders scalping a dollar or two on a few thousand barrels of oil. It's more like:
1. Geeks building code to track every tanker, tender, barge, pipe, and hub in the world to estimate oil availability.
2. Traders yelling "lease me a tanker" and having people on call to figure the time and cost to get it moving oil from A to B.
3. Full time meteorologists predicting short-term weather.
4. Geeks building models based on the above.
5. Geeks pricing out the cost of refine
Re: (Score:3)
Then, why did the price of gasoline drop $1.50 in a few weeks from record highs when the hedge funds dried up?
I am sure that lots of effort goes into the logistics of oil distribution, etc. That is all effort well-spent.
The part I don't like is when people buy oil futures speculating that prices will rise without any intention to take delivery of the oil. That just results in people bidding up the price.
I'm certainly not the only one suggesting that needless speculation drives up the cost of commodities.
Re: (Score:2)
Then, why did the price of gasoline drop $1.50 in a few weeks from record highs when the hedge funds dried up?
That question is clearly one designed as a counterpoint, but only to those who know what your supposed answer would be. For those of us not well up on the markets, can you explain what significance this has / you believe it has. (Not sarcastic - genuinely ignorant person here).
Re: (Score:3)
Simple - the kinds of people who were up to their eyeballs in hedging the price of oil futures, were also up to their eyeballs in hedging the prices of real-estate, mortgage-backed securities, and credit-default swaps. They lost their shirts, and for a little while they couldn't afford to keep buying oil futures. Suddenly the price of oil plummeted tremendously, and now ordinary people who buy oil for the purpose of actually burning it and not trading it can afford to do so.
Derivatives can serve a legitim
Re: (Score:2)
Suddenly the price of oil plummeted tremendously, and now ordinary people who buy oil for the purpose of actually burning it and not trading it can afford to do so.
Which is a good thing! I get your point now, thank you.
Re: (Score:2)
I thought I gave some examples - FX, equity, commodity prices get better as frequency increases.
Less cost and fuss for consumers and importers/exporters, etc. A few people spend their lives making prices tighter, and millions of people get better prices on vacations, on their mortgages, etc. Why begrudge them for pocketing a few percent off the top?
International trade on high-tech products becomes possible: you can get a firm offer on 20 inputs you need in 1 hour. In the old days, that level of co-ordi
Re: (Score:2)
Yeah, I know exactly what it is. My proposal basically is to get rid of it by making it useless. It provides no real benefit to the economy, so nobody will be hurt if it goes away...
Re: (Score:2, Informative)
So I'm guessing you've never actually done any development?
The 'byte stream' model is not from UNIX, it's just the way the hardware is laid out physically.
IPC happens in an entirely different way unless you're using something simplistic like pipes.
RDMA is pretty much a staple of high-speed cluster computing; however, it's DMA that allows pretty much everything in your PC to work without slowing the processor down. Even your keyboard controller uses DMA to get the characters into somewhere useful.
As far as what
Re: (Score:2)
I usually don't reply to people this stupid. But it's a slow night.
The 'byte stream' model is not from UNIX, it's just the way the hardware is laid out physically.
No, the hardware isn't laid out that way. The byte stream model is a software-implemented convenience to hide things like disk blocks and packet sizes. There's overhead associated with that, in several senses. You usually have to impose some protocol on top of the stream just to define the boundaries between items. There have been non-UNI
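To illustrate the framing point: since a byte stream carries no message boundaries, you end up layering a protocol such as a length prefix on top. A minimal sketch:

```python
import struct

def frame(msg: bytes) -> bytes:
    """Prefix each message with a 4-byte little-endian length: the
    protocol you must impose on a raw byte stream to mark boundaries."""
    return struct.pack("<I", len(msg)) + msg

def unframe(stream: bytes):
    """Recover the individual messages from a concatenated stream."""
    msgs, i = [], 0
    while i < len(stream):
        (n,) = struct.unpack_from("<I", stream, i)
        msgs.append(stream[i + 4:i + 4 + n])
        i += 4 + n
    return msgs
```

Every header parsed here is pure overhead relative to a model where the transport itself preserved message boundaries.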
Re: (Score:2)
[...]
More generally, we have a fundamental problem in the I/O area: UNIX. UNIX I/O has a very simple model, which is now used by Linux, DOS, and Windows. Everything is a byte stream, and byte streams are accessed by making read and write calls to the operating system. That was OK when I/O was slower. But it's a terrible way to do inter-machine communication in clusters today. The OS overhead swamps the data transfer. Then there's the interaction with CPU dispatching. Each I/O operation usually ends by unblocking some thread, so there's a pass through the scheduler at the receive end. This works on "vanilla hardware" (most existing computers), which is why it dominates.
This is true, though you're underestimating "modern" OSes. Think of it as defensive planning: who knew ~20+ years ago that we would have solid state disks? Who knew we would have 10Gb NICs? SATA? Yet /dev and the whole UNIX concept of input and output absorbed them all. Think about it.
But the fundamental design of I/O streams works and is easily adapted to new devices. Add on that the simplicity of
[...]
The supercomputer interconnect people have been struggling with this for years, but nothing general has emerged.
RDMA via Infiniband is about where that group has ended up. That's not something a typical large hosting cluster could use safely.
Add to that fibrechannel. And NUMA is an old and tried technology.
Most inter-machine operations are of two types - a subroutine call to another machine, or a queue operation. Those give you the basic synchronous and asynchronous operations. A reasonable design goal is to design hardware which can perform those two operations with little or no operating system intervention once the connection has been set up, with MMU-level safety at both ends. When CPU designers have put in elaborate hardware of comparable complexity, though, nobody uses it. 386 and later machines have hardware for rings of protection, call gates, segmented memory, hardware context switching, and other stuff nobody uses because it doesn't map to vanilla C programming. That has discouraged innovation in this area. A few hardware innovations, like MMX, caught on, but still are used only in a few inner loops.
At the cost of my mod points or whatever, now I call
Terabyte RAM? (Score:2)
Re: (Score:1)
I think you are missing the point here. If the data to analyze is so small, then why the fuss? If the data fits in memory, leave it in memory, if not, store it and retrieve it later. Guess what, the place to store your data is probably a database with storage attached. Unless of course, you are one of those young kids (disclaimer, I'm 28), that reinvent the wheel all the time and write that part themselves, because databases are out.
So, let's say to analyze your incoming data of size 1MB, you also need to re
Re: (Score:2, Interesting)
I think, perhaps, that you're missing the point, at least of the article. It has nothing to do with whether to store information in memory or in the database and everything to do with the current trend of using dedicated analytics products (i.e. OLAP) to do data analysis. Whereas we used to use the same relational databases to store, retrieve and analyze all data with SQL as the Swiss Army knife that enabled it all, we're moving towards a model where the relational database is responsible for storage and re
Re: (Score:2)
the cost of enormous amounts of RAM has dropped pretty significantly
Uh, your example was 512GB, and you're comparing $40k for RAM to about $40 for a hard drive. That's around 1000:1!
Sure, RAM is only getting cheaper, but so are hard drives. A few years ago I got 2GB of RAM for about the same price as 320GB of hard drive. So, if anything the relative cost of RAM has gone UP, and not down...
Re: (Score:1)
Unlike your puns ;-)
Re: (Score:2)
We bought a machine for FEM a few weeks ago (there was budget left for 2010).
4x12-core Opteron, 256GB RAM. $12k.
Which is peanuts, pretty much.
So I have little doubt that 1TB of RAM is quite affordable nowadays if you have big-iron-level money available.
Funny (Score:2)
It's funny that only today I chatted with some folks on the PostgreSQL IRC support channel about this, asking whether it is at all possible to have 2 postmasters running at the same time, one to do in memory SQL against an all-in-memory database, and the other to write to the database (and no, they think that it is not possible to have 2 postmasters talking to the same database this way, they believe it will corrupt the data). The suggestion was just to increase shared_buffers and file system block buffer
Re: (Score:2)
But but but, you are missing the point. Can 2 postmasters access the same disk, one to read from it only and the other one to do writes?
If that was possible, then 2 postmasters could be on one machine, each on its own processor/memory or on 2 machines with the data directory mapped to both. The answer from the PostgreSQL guys in the IRC channel was that it's not possible, because all postmasters end up writing SOMETHING to the data directory, maybe those are just XLOGs, but they will write something and wi
All well and good until... (Score:1)
Can we please stop already? (Score:5, Insightful)
I'm getting sick and tired of hearing about yet another hype in IT-land where everything has to be done in yet another new way.
All developers understand that different problems require different solutions. Will the managers who shove this crap up our asses please stop doing so? It's not productive; you're not going to get a better solution by forcing it to be implemented in whatever buzzword falls off the latest bandwagon in an ever-growing parade of buzzwords.
"In-memory analytics" is what we started out with before databases, and guess what: it's never gone away. We've never stopped using it. Now just tell us what problem you have and let us developers decide how to solve it.
Re: (Score:2)
Re:Can we please stop already? (Score:4)
Agreed, someone comes up with something new to solve a very specific issue, and all of a sudden someone's predicting how it will completely replace everything else in the next month.
Grow up.
Physical storage and relational databases aren't going anywhere anytime soon. In-memory this and non-relational that are all well and good for the specific problems they were designed for, but physically stored, relational data fits the needs of 90% of data storage and retrieval. I sure as HECK don't want my bank storing my financial data purely in memory.
So keep yelling to yourselves about how the sky is falling on traditional techniques. Meanwhile the rest of us have real work to do.
Re: (Score:2)
Re: (Score:2)
It also ticks me off how they redesign these existing practices, to the point where they stop making sense, and you have to relearn the new and better (read: rephrased) technology. Almost like they want you to rewrite all those tests...
CAPTCHA: KISSUBAI - keep it simple stupid, unless buzzwords are involved
Free "in memory" analytics app Qlikview (Score:1)
Download a free (as in beer) app http://www.qlikview.com/us/explore/experience/free-download [qlikview.com] and see for yourself what current commercial software can do. I load as much as a hundred GB into RAM for analytics with this application. Just keep in mind that a star schema works best with this software. Export your tables from an existing database as flat files, load them as-is, and start analyzing immediately.
But putting data in system RAM = harder reboots (Score:2, Interesting)
But putting data in system RAM = harder reboots, as you need to dump it all to disk. Also, what about UPSes? You need one with enough power to last for the time it takes to do that.
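Rough numbers on how long that dump takes, which is also how long the UPS must hold (the 500 MB/s sustained write rate is an assumed figure):

```python
# How long a UPS must carry the box while 1 TB of RAM is flushed to disk.
# 500 MB/s sustained sequential write is an assumption, not a measurement.
ram_mb = 1024 * 1024                  # 1 TB in MB
write_mb_per_s = 500
dump_minutes = ram_mb / write_mb_per_s / 60
print(round(dump_minutes, 1))         # 35.0 (about 35 minutes)
```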
Re: (Score:2)
And god forbid your system halts and you lose any data you haven't already committed to persistent storage.
Re: (Score:1)
You should be using stable operating systems and diesel backups. You should also be using clusters with the same data so a loss of one system isn't catastrophic.
Re: (Score:3)
What help is diesel when the main power room with the transfer switch is on fire and the UPSes don't have the power to run the systems for long, since they are set up only to cover the time it takes for the diesel to start up?
I know I'm being your stereotypical anarchist but.. (Score:2)
Re: (Score:3)
Decentralization is the way.
If you're a consultant and find a client working in a centralized way, you sell decentralization as the way to solve all their woes. If you find them working in a decentralized way, you sell them on centralizing to solve all their woes.
There are only two constants here: 1) every business has woes, regardless of structure; 2) consultants extract lots of value by shifting those woes around
Hell no (Score:2)
It's a matter of use and optimisation. (Score:2)
Hard drives aren't really as slow as people think. The problem is that mechanical hard drives are slow at seeking, but if seeking can be eliminated, you can quite easily saturate your CPU on even a moderately complex calculation.
Case in point: http://www.youtube.com/watch?v=WQw7c-PliB4 [youtube.com]
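A minimal sketch of that seek-vs-streaming gap (block and file sizes are arbitrary; on a warm OS cache the two times converge, so treat absolute numbers with suspicion):

```python
import os
import random
import tempfile
import time

BLOCK = 4096
N_BLOCKS = 2048  # 8 MB test file, small enough to run quickly

# Create a throwaway file filled with random bytes.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(BLOCK * N_BLOCKS))
    path = f.name

def read_sequential(p):
    """Stream the whole file front to back."""
    t0 = time.perf_counter()
    with open(p, "rb") as fh:
        while fh.read(BLOCK):
            pass
    return time.perf_counter() - t0

def read_random(p):
    """Read the same blocks in shuffled order, forcing seeks."""
    order = list(range(N_BLOCKS))
    random.shuffle(order)
    t0 = time.perf_counter()
    with open(p, "rb") as fh:
        for i in order:
            fh.seek(i * BLOCK)
            fh.read(BLOCK)
    return time.perf_counter() - t0

seq_t = read_sequential(path)
rand_t = read_random(path)
print(seq_t, rand_t)
os.unlink(path)
```

On a cold spinning disk the random pass is dramatically slower; on SSDs or a cached file the gap mostly disappears, which is exactly the point about seeks being the bottleneck.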
Re: (Score:2)
An addition not a replacement (Score:1)
In-memory data storage is fine as long as it isn't primary data storage. Yes it's faster but there are a lot of downsides as well. The most important is that it isn't easy to share between servers (a close second is that it's hard to replicate to a remote site for disaster recovery purposes) so each server needs to have its own copy of the data and there needs to be some way of keeping all that data in sync.
The alternative is to have good old "traditional" storage sitting where it always sits and when the
Open-Source VoltDB (Score:1)
I believe VoltDB (http://voltdb.org) uses in-memory storage and MPP, if anyone is interested in giving it a test spin. It's from Michael Stonebraker of various-database fame (Ingres, Vertica, etc.).
They've been doing a number of presentations on the topic that you can probably find on the site.
Global-scale analytics != standard IT load (Score:3)
----------
Happy New Year, may it suck less for ya than the last one.
map your data (Score:1)
Heard it all before (Score:2)
Remember when the first 64-bit machines became commercially available?
"zOMG, now we can keep whole databases in RAM with the 4GB limit gone!"
This is just CS101. Memory hierarchy: you keep your data in the fastest memory it'll fit in (that you can afford).
Now we can afford more RAM so we can do more per unit time because we don't have to wait for IO. Duh.
And along with the rest of the "Duh's" (Score:2)
WTF is SoulSkill still drunk?
This is SO nothing new, nor is it even interesting.
In-memory DBs are nothing new; they are simply prone to failure, and this is why hardware storage, be it spinning drives or flash, will always be around.
All it takes is one hiccup in the memory logic, an interrupt controller, or a DMA channel, and all your in-memory data is toast, forcing a reload from the last checkpoint, which can take quite a while when you're talking, say, a terabyte of information.
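For scale, reloading a 1 TB checkpoint at a few plausible sequential read rates (round-number assumptions, not measurements):

```python
# Time to reload a 1 TB checkpoint into RAM at assumed read rates.
tb_in_mb = 1024 * 1024
for mb_per_s in (200, 500, 1000):
    minutes = round(tb_in_mb / mb_per_s / 60)
    print(mb_per_s, "MB/s ->", minutes, "min")
# 200 MB/s -> 87 min, 500 MB/s -> 35 min, 1000 MB/s -> 17 min
```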
Clifford Hersh and Jeffery Spirn
Will storage [...] be needed in the future? (Score:2)
No, we will simply keep everything powered on and start anew if we lose power.
Also, we will all be mega-corps, even at home. No one will start with datasets under a few petabytes. Not even for photos and text.
If you don't like the game... (Score:2)
... "Even if flash device latency improves, it still has to go through the OS and PCIe bus compared to a direct hardware translation to address memory."
... "Knowledge of how to do I/O efficiently is limited because I/O programming is not taught in schools."
Re: (Score:2)
Re: (Score:1)
Are you really sure you want them to come up with something new?
Re: (Score:2)
Re: (Score:2)
The remainder of the bottle will, depending on whether you work for that somebody or not, either enable a heartwarming humanitarian gesture, or be your only friend during the days of hair-raising stress and thankless toil that could strike at any second...