
Is Big Data Leaving Hadoop Behind?

knightsirius writes: Big Data was seen as one of the next big drivers of the computing economy, and Hadoop was seen as a key component of those plans. However, Hadoop has had a less than stellar six months, beginning with the lackluster Hortonworks IPO last December and the security concerns raised by some analysts. Another survey records only a quarter of big data decision makers actively considering Hadoop. With rival Apache Spark on the rise, is Hadoop being bypassed in big data solutions?
This discussion has been archived. No new comments can be posted.

  • Nope. Not happening. (Score:5, Informative)

    by Art Popp ( 29075 ) * on Wednesday May 13, 2015 @05:33PM (#49685707)

    FTA: ...biggest problem is that people allegedly still can’t use Hadoop... Hadoop is still too expensive for firms...

    Hadoop is an ecosystem with lots of moving parts. Those are real problems above, but Spark is not a standalone replacement for an ecosystem the size of Hadoop. Moreover, it has no problem integrating with YARN on Hadoop, where you can run HBase, Cassandra, MongoDB, RainStor, Flume, Storm, R, Mahout and plenty of other YARN-compatible goodies.
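
    As a rough sketch of what "Spark integrating with YARN" looks like from the application side (illustrative only, not from the comment above; it assumes a 2015-era PySpark install with HADOOP_CONF_DIR pointing at the cluster config, and the app name and HDFS path are made up):

        # spark_on_yarn.py -- run with: spark-submit spark_on_yarn.py
        from pyspark import SparkConf, SparkContext

        # "yarn-client" was the Spark 1.x master setting for running against an
        # existing Hadoop/YARN cluster, so the job shares the cluster (and HDFS)
        # with HBase, Hive tables, Flume output, and the rest of the ecosystem.
        conf = SparkConf().setAppName("illustrative-count").setMaster("yarn-client")
        sc = SparkContext(conf=conf)

        # Hypothetical input path; any file already sitting in HDFS works.
        lines = sc.textFile("hdfs:///data/events.log")
        print(lines.filter(lambda l: "ERROR" in l).count())

        sc.stop()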

    It's also worth noting that Hortonworks and Cloudera may not be "taking off as hoped" because the branded big-iron players are finally in the ring. They hide the (rather hideous) complexity and integrate well with any existing systems you have from those vendors. Teradata, for instance, has a Hadoop/Aster integration that's impressive and turnkey. They bought RainStor, and will soon have it integrated, and that's Spark-fast and hassle free. IBM's BigInsights is very impressive if you have the means.

    So, no, Hadoop is in no danger of being replaced. The value proposition that my $4.2M cluster outperformed two $6M "big name" vendor-supported appliances is undeniable, but it's only that stark when your dollar figures have an M suffix. What will probably occur, though, is that we'll end up replacing every component in Hadoop with a faster one, and MapReduce will become a memory as things like Spark and Hive/Tez move away from that methodology.
         

    • Re: (Score:1, Funny)

      Yarn on Hadoop where you can run Hbase, Cassandra, MongoDB, Rainstor, Flume, Storm, R, Mahout and plenty of other Yarn-compatible goodies.

      It's also worth noting that Hortonworks and Cloudera

      I know R. My wife has a Yarn store. WTF are those other things?

      • I've heard of MongoDB. It's Web Scale!! [youtube.com]

      • Funny, I was thinking they were all children's books: Cloudera, Horton Hears a Works, etc.

      • by sfcat ( 872532 )

        I know R. My wife has a Yarn store. WTF are those other things?

        It's a distributed exec for Java processes. That's really it. It has crappy monitoring built in that's unnecessary due to SNMP, but they built it in anyway because... well, I don't know why.

    • I agree that the problem is that most companies don't know how to run it and it's left to bigger organizations that 1) have the expertise in house and 2) actually need the added complexity.
      Understanding which pieces of the ecosystem you need, and how to deploy and run them in a production environment, can be daunting, not to mention all the different possibilities of which cloud provider to use, which services, etc.

      Cloudera and Hortonworks are capitalizing on it, basically helping to sort out this complexity.

      • by Rich0 ( 548339 ) on Wednesday May 13, 2015 @09:22PM (#49686773) Homepage

        I agree that the problem is that most companies don't know how to run it

        I think a bigger problem is that most companies don't even know what big data actually is. It is a big buzzword. I hear managers talking about it all the time. Half the time they're talking about some database table with a few hundred thousand records in it. Other times they're talking about some repository full of documents or binary files that might be terabytes in size, but it is just random stuff. They don't actually have questions in mind that they want to answer, and ultimately that is what tools like Hadoop are about.

        I've heard "big data" applied to problems that are basically just file shares or the like.

        Then if a company really does have a problem where Hadoop and such is useful, they want to buy some product off the shelf that solves that particular problem, and usually such products don't exist. Or they want to hire a bunch of random rent-a-coders and have them solve the problem, and they go about solving it with single-threaded solutions written in .NET or whatever the commodity solution in use is at the company.

        Sure, your Facebooks and Googles and Netflixes and Amazons know what they're doing. Your average GE or Exxon or Pfizer generally doesn't do that level of comp sci.

        • by jbolden ( 176878 )

          You are overestimating the difficulty at this point. This is not compsci anymore and hasn't been for many, many years. It isn't even hard administration. It is probably easier to get a big data system running in 2015 than it was to use Oracle in 1995.

          As for your examples, you went way too big. GE is a huge DevOps shop; they know what Big Data is. Exxon has massive supercomputing datasets; I would bet they were doing big data long before it got cool. Pfizer has an IT department that is some of everythin

          • by Rich0 ( 548339 )

            You are overestimating the difficulty at this point. This is not compsci anymore and hasn't been for many, many years. It isn't even hard administration. It is probably easier to get a big data system running in 2015 than it was to use Oracle in 1995.

            I think you're misunderstanding my point.

            Sure, it is easy to install Hadoop, and run it.

            The hard part is figuring out WHAT to run on it.

            • by jbolden ( 176878 )

              That's easy, the big 5:

              1) Datasets too big to use an RDBMS
              2) 360 view of customers (CRM consolidation, sales systems consolidation...)
              3) Security data from network security devices
              4) Stream in huge amounts of operational data (GPS on employees, physical sensors, machine health...) and do integrated data analysis
              5) Data warehouse consolidation

    • So you are basically saying that Hadoop will eventually fall into disuse, but HDFS (the Hadoop Distributed File System) will linger on with new platforms built on top of it? Or do you believe that HDFS will also be replaced eventually?

  • by Culture20 ( 968837 ) on Wednesday May 13, 2015 @05:37PM (#49685743)
    I thought Spark worked from within Hadoop. Is that like using emacs to run vi?
    • Re:Rival? (Score:5, Informative)

      by Anonymous Coward on Wednesday May 13, 2015 @05:47PM (#49685793)

      They need to refer to the pieces of Hadoop. HDFS is the storage piece and many things can interface with it; it isn't great but is often good enough, especially if you just have a couple of local disks per node. YARN is the scheduler piece; it is mostly awful performance-wise but is fairly easy to use... in the long run it'll lose to something like Mesos, I think. MR is the map reduce piece that everyone thinks of when you say Hadoop. Almost everything will run quicker in Spark (still using a map/reduce methodology) than in Hadoop MR.

      As a side note, I don't know anyone who still writes MR jobs directly; they are all using Pig or HiveQL.
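
      As a rough illustration (not from the comment above), "writing MR jobs directly" often means a Hadoop Streaming pair like the word count below, split across a mapper script and a reducer script; the framework sorts the mapper output by key before the reducer sees it. It is a correct but verbose way to express what is a one-liner in Pig or HiveQL, which is much of why people prefer those.

          # mapper.py -- Hadoop Streaming mapper: emit "word<TAB>1" per word
          import sys
          for line in sys.stdin:
              for word in line.split():
                  print("%s\t1" % word)

          # reducer.py (a separate file) -- input arrives sorted by key
          import sys
          current, count = None, 0
          for line in sys.stdin:
              word, n = line.rstrip("\n").split("\t")
              if word != current:
                  if current is not None:
                      print("%s\t%d" % (current, count))
                  current, count = word, 0
              count += int(n)
          if current is not None:
              print("%s\t%d" % (current, count))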

      • Re:Rival? (Score:4, Interesting)

        by careysub ( 976506 ) on Wednesday May 13, 2015 @06:15PM (#49685951)

        They need to refer to the pieces of Hadoop. HDFS is the storage piece and many things can interface with it; it isn't great but is often good enough, especially if you just have a couple of local disks per node. YARN is the scheduler piece; it is mostly awful performance-wise but is fairly easy to use... in the long run it'll lose to something like Mesos, I think.

        That's a good call. With Cloudera and HortonWorks both adding new components to the Hadoop stack, the number of components has exploded in the last year or two, and that can be a bad thing. The complexity of the whole ecosystem is getting horrendous, with a typical configuration file doubling from roughly 250 to 500 configuration items in the last year, almost all of them undocumented (unless you read the code, which scarcely qualifies as "documented"). For a practical deployment you are pretty much forced to use a commercial stack to get something up and running in a manageable fashion. And then there is the fact that the HDFS foundation is showing its age.

        MR is the map reduce piece that everyone thinks of when you say Hadoop. Almost everything will run quicker in Spark (still using a map/reduce methodology) than in Hadoop MR.

        Spark on Mesos is looking mighty awesome.

        As a side note, I don't know anyone who still writes MR jobs directly; they are all using Pig or HiveQL.

        MapReduce is still viable for stable production jobs, but not in a dynamic requirements environment.

        Although HiveQL is alive and kicking, the complete replacement of Hive Server with Hive Server 2, while possibly an improvement in usability overall (I am not convinced), trashes your skill investment in the now-obsolete Hive stack component. Maybe I am just grousing, but I start having reservations about technology planning in the data center when a key stack component changes so much in a relatively short period of time.

        • You are absolutely right about the complexity of the ecosystem, but in my experience every Java-based platform eventually evolves such complexity (it is like an XML fetish).

  • Is this a question for Hadoop employees or slashdot? If there's something better, why does it matter to anyone other than the company developing Hadoop if it's relevant?
    • by jbolden ( 176878 )

      Hadoop is open source. The companies building it are LinkedIn, Yahoo, Facebook and then the Hadoop vendors: Hortonworks (tightly tied to Microsoft), IBM, Cloudera (enterprise support vendor)...

    • by Ksevio ( 865461 )
      Hadoop is open source software so it's more significant if it's in decline than a closed commercial alternative.
  • by Luthair ( 847766 ) on Wednesday May 13, 2015 @05:56PM (#49685845)

    Is security really that big of a deal? Isn't the intent to run it on a private network to crunch numbers behind the scenes?

    We don't ask about the susceptibility of safety deposit boxes to crowbars and dynamite, they're inside a vault.

  • Did I trip into a time warp and come out a decade in the past?
    Who the fuck is actually talking about hadoop or map reduce in 2015? The same retards that were creaming their little cunts about it in 2005?

    Even when you ignore the joke that is Java, hadoop is unwieldy, unreliable shit if you actually care about storing and retrieving correct, synchronized data.
    If you're fine with throwing all of your data in a pot and getting some sort of result that looks mostly correct, then knock yourself out and use hadoop.

    • by Tablizer ( 95088 )

      If your data needs to be correct, define it and its relationships, then use SQL. You will have to pay someone decent money to do this correctly.

      PHBs have to learn the hard way. They want it cheap, big, and now. Security & reliability issues are something they try to blame on somebody else using their well-honed spin skills.
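
      As a toy sketch of the "define it and its relationships, then use SQL" point (illustrative only; the table and column names are invented), using Python's built-in sqlite3:

          import sqlite3

          con = sqlite3.connect(":memory:")
          con.execute("PRAGMA foreign_keys = ON")
          con.execute("""CREATE TABLE customer (
                             id   INTEGER PRIMARY KEY,
                             name TEXT NOT NULL)""")
          con.execute("""CREATE TABLE orders (
                             id          INTEGER PRIMARY KEY,
                             customer_id INTEGER NOT NULL REFERENCES customer(id),
                             total_cents INTEGER NOT NULL CHECK (total_cents >= 0))""")

          con.execute("INSERT INTO customer VALUES (1, 'Alice')")
          con.execute("INSERT INTO orders VALUES (1, 1, 4200)")

          # A row that violates the declared relationship is rejected outright,
          # which is the kind of correctness the comment above is arguing for.
          try:
              con.execute("INSERT INTO orders VALUES (2, 99, 100)")
          except sqlite3.IntegrityError as e:
              print("rejected:", e)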

    • by jbolden ( 176878 ) on Wednesday May 13, 2015 @10:03PM (#49686953) Homepage

      Hadoop didn't exist in 2005. The 1.0 release was December 2011; the earliest versions I know of were floating around in 2007.

      As for using SQL, Hadoop supports SQL (mostly). The problem Hadoop addresses is data sets too big for RDBMS engines to handle. It has nothing to do with developer skill; it has to do with the type of database engine and how the data is being handled.

      • Re: (Score:2, Funny)

        by Anonymous Coward

        Hadoop didn't exist in 2005.

        Unless you work in recruitment.

      • Hadoop was created in 2005 and named after a toy elephant. It was an open source implementation of some shit Google wrote some papers on.
        The "Apache Hadoop" branded package hit RTM in 2011. Apache only got involved because of all the retards mindlessly jumping onto it. Those retards jumped onto it because they were told it was based on Google's work.

        As for datasets being too big for RDBMS engines to handle, WTF are you talking about? MS SQL can handle all the data you throw at it and has complete cluste

        • Something fitting within the maximum supported size of a database does not mean that the performance of data manipulation in that database will meet the business criteria within the available budget.

    • Did I trip into a time warp and come out a decade in the past?
      Who the fuck is actually talking about hadoop or map reduce in 2015? The same retards that were creaming their little cunts about it in 2005?

      Even when you ignore the joke that is Java, hadoop is unwieldy, unreliable shit if you actually care about storing and retrieving correct, synchronized data.
      If you're fine with throwing all of your data in a pot and getting some sort of result that looks mostly correct, then knock yourself out and use hadoop.

      If your data needs to be correct, define it and its relationships then use SQL. You will have to pay someone decent money to do this correctly.

      None of these complaints seems to keep people from using Splunk... unstructured data soup isn't going anywhere at any scale; we'll just call it different things.
      I can't even fathom a world where all the data we analyze in Splunk could have been fed into Oracle and turned into usable reports. All of our users would have to be Oracle DBAs.

    • by Anonymous Coward

      +2 Interesting? More like -5 Ignorant.

      RDBMSs are not a workable solution for the kinds of problems Big Data is trying to solve. You need something else. There is no such thing as a "simple" Big Data solution.

      The Java-based Big Data solutions are really the only ones that exist in the world, other than those that were developed in-house years ago by companies who had to deal with huge-scale problems in the past.

      So if your solution for Big Data is Oracle (RDBMS), you don't belong in this conversation.

      • If you're using a term like "Big Data", you don't belong in the fucking building.
        Relational databases are perfectly suited to extremely large and complex datasets. You just have to intelligently design your database. You can't just throw noise into a pot and expect useful results. Hadoop (map reduce) tries to do exactly this. If you care about correctness, completeness, and synchronization of data, it's trash.

  • by sfcat ( 872532 ) on Wednesday May 13, 2015 @08:56PM (#49686633)
    The problem with "big data" is that there are no vendor specs and the implementations are sometimes questionable. There is one provider that does it better: SQLStream (http://www.sqlstream.com), which has a streaming DB controlled via SQL. In addition to normal tables, you have streams, which are relationally typed conduits through which data flows, and windows, which are time- (and row-) based groups of tuples that can be used in aggregate queries with all the standard SQL functions (there's also Java UDXes and MED support). Designing your middleware on top of a SQL engine is a much better design pattern than doing it all with hand-wired Java. All this and about 100x the throughput of a Hadoop program. Disclaimer: I'm an engineer at SQLStream.
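
    For readers wondering what "streams" and "windows" mean in practice, here is a rough Python sketch of the idea only (this is not SQLStream's actual API or SQL dialect): tuples flow through continuously, and an aggregate is maintained over a sliding time window rather than over a stored table.

        from collections import deque

        WINDOW_SECONDS = 60

        window = deque()      # (timestamp, value) tuples currently inside the window
        running_sum = 0.0

        def on_tuple(ts, value):
            """Fold one incoming tuple into a 60-second sliding-window sum."""
            global running_sum
            window.append((ts, value))
            running_sum += value
            # Evict tuples that have aged out of the time window.
            while window and window[0][0] <= ts - WINDOW_SECONDS:
                _, old_value = window.popleft()
                running_sum -= old_value
            return running_sum   # the windowed aggregate, like SUM(...) over a window

        # e.g. call on_tuple(arrival_time, order_amount) for each event as it arrives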
    • I read your post but I still have no idea what your 'streams' are, or why anyone would want to use them.
  • by rockmuelle ( 575982 ) on Wednesday May 13, 2015 @08:59PM (#49686651)

    A scripting language with a good math/stats library (e.g., NumPy/Pandas) and a decent RAID controller are all most people really need for most "big data" applications. If you need to scale a bit, add a few nodes (and put some RAM in them) and a job scheduler into the mix and learn some basic data decomposition methods. Most big data analyses are embarrassingly parallel. If you really need 100+ TB of disk, set up Lustre or GPFS. Invest in some DDN storage (it's cheaper and faster than the HDFS system you'll build for Hadoop).

    Here's the breakdown of that claim in more computer-sciencey terms: Almost all big data problems are simple counting problems with some stats thrown in. For more advanced clustering tasks, most math libraries have everything you need. Most "big data" sizes are under a few TB of data. Most big data problems are also I/O bound. Single nodes are actually pretty powerful and fast these days. 24 cores, 128 GB RAM, 15 TB of disk behind a RAID controller that can give you 400 MB/s data rates will cost you just barely 5 figures. This single node will outperform a standard 8-node Hadoop cluster. Why? Because the local, high-density disks that HDFS encourages are slow as molasses (30 MB/s). And...

    Hadoop has a huge abstraction penalty for each record access. If you're doing minimal computation for each record, the cost of delivering the record dominates your runtime. In Hadoop, the cost is fairly high. If you're using a scripting language and reading right off the file system, your cost for each record is low. I've found Hadoop record access times to be about 20x slower than Python line read times from a text file, using the _same_ file system for Hadoop and Python (of course, Hadoop puts HDFS on top of it). In Big-O terms, the 'c' we usually leave out actually matters here - O(1*n) vs. O(20*n). 1 hour or 20 hours, you pick.

    If you're really doing big data stuff, it helps to understand how data moves through your algorithms and architect things accordingly. Almost always, a few minutes of big-O thinking and some basic knowledge of your hardware will give you an approach that doesn't require Hadoop.

    tl;dr: Hadoop and Spark give people the illusion that their problems are bigger than they actually are. Simply understanding your data flow and algorithms can save you the hassle of using either.

    -Chris
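
    To make the "scripting language plus basic data decomposition" claim concrete, a small sketch along those lines (the file paths and log format are hypothetical): one worker per file, partial counts merged at the end, which is the embarrassingly parallel shape most of these counting jobs have.

        from collections import Counter
        from multiprocessing import Pool
        import glob

        def count_statuses(path):
            """Count HTTP status codes in one log file (a plain counting problem)."""
            c = Counter()
            with open(path) as f:
                for line in f:
                    fields = line.split()
                    if len(fields) > 8:
                        c[fields[8]] += 1   # status-code column in a common log format
            return c

        if __name__ == "__main__":
            # Embarrassingly parallel: map over files, reduce by summing Counters.
            with Pool() as pool:
                total = sum(pool.map(count_statuses, glob.glob("logs/*.log")), Counter())
            print(total.most_common(10))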

    • by sfcat ( 872532 )

      Here's the breakdown of that claim in more computer-sciencey terms: Almost all big data problems are simple counting problems with some stats thrown in. For more advanced clustering tasks, most math libraries have everything you need. Most "big data" sizes are under a few TB of data. Most big data problems are also I/O bound. Single nodes are actually pretty powerful and fast these days. 24 cores, 128 GB RAM, 15 TB of disk behind a RAID controller that can give you 400 MB/s data rates will cost you just barely 5 figures. This single node will outperform a standard 8-node Hadoop cluster. Why? Because the local, high-density disks that HDFS encourages are slow as molasses (30 MB/s). And...

      Hadoop has a huge abstraction penalty for each record access. If you're doing minimal computation for each record, the cost of delivering the record dominates your runtime. In Hadoop, the cost is fairly high. If you're using a scripting language and reading right off the file system, your cost for each record is low. I've found Hadoop record access times to be about 20x slower than Python line read times from a text file, using the _same_ file system for Hadoop and Python (of course, Hadoop puts HDFS on top of it). In Big-O terms, the 'c' we usually leave out actually matters here - O(1*n) vs. O(20*n). 1 hour or 20 hours, you pick.

      Optimization is usually about creating a small inner loop at the expense of setup cost. You can see this in compilers/languages (creating an optimized binary vs a script interpreter), in databases (prepare vs execute), and in these types of big data systems. Hadoop can't and doesn't optimize its inner loop very well at all due to its basic programming interface. It stores each row in an array of Java objects. A better design would process buffers of data with non-copying access libraries to hide this ab
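
        The "prepare vs execute" version of that trade-off in miniature (my sketch, using Python's built-in sqlite3, not anything from the post):

            import sqlite3

            con = sqlite3.connect(":memory:")
            con.execute("CREATE TABLE events (ts REAL, value REAL)")
            rows = [(float(i), i * 0.5) for i in range(100000)]

            # Pay the setup cost once: the statement is prepared a single time and
            # the inner loop only binds parameters and steps the engine.
            con.executemany("INSERT INTO events VALUES (?, ?)", rows)

            # The slow shape re-parses and re-plans the SQL text for every record:
            #   for ts, value in rows:
            #       con.execute("INSERT INTO events VALUES (%f, %f)" % (ts, value))
            con.commit()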

  • BSD is dying for how long again? It's still around and putting out regular releases [openbsd.org]. For open source projects, popularity contests are much less important. With a massive existing user base, Hadoop will be actively maintained for a long time. So if you're already familiar with it and it serves the needs of your project, go right ahead.

  • by Required Snark ( 1702878 ) on Wednesday May 13, 2015 @11:09PM (#49687219)
    Both Pointy Headed Bosses and Slashdot loooove talking about tools. As the posts generally show, both PHBs and Slashdotters have no clue about what Big Data is used for. It's all about the buzzwords and technology, not about use and utility.

    There are no references to any algorithms. Rank ordering? Nope. Social graph analytics? No. Netflix style recommendations? Uh-uh. Statistics? None.

    Without talking about data sets, algorithms and expected results, yammering about tools is meaningless. Hot air.

    But who cares, because you all get to call each other stupid, and try and prove that you are the biggest baddest tech weenie on the block. From here it seems that you don't even know where the block is. You don't even seem to know which direction you need to go to get to a street. (Like the implied car reference there?)

    I'm beyond unimpressed. It's obvious that no one has a clue what they are talking about. Go off and learn something, and then maybe you will be able to write a post that isn't a waste of time. Other than that, STFU and get off my lawn.

    • I agree. There is a distinct lack of discussion that outlines where Hadoop shines versus an RDBMS and these other tools. I did some reading and it seems like a database system does better with data that is organized and has distinct relationships between data sets. Hadoop and parallel processing seem to work better for data that is highly unstructured and for which you need to delve deeply to find relationships and create ad hoc reports.

      Some have mentioned that one of the reasons for interest in Hadoops

      • Actually, the biggest problem with RDBMS and similar tools is the fact that you are expected to mutate data in place, and mash it into a structure that is optimized for this case. Most of the zoo of new tools are about supporting a world in which incoming writes are "facts" (ie: append-only, uncleaned, unprocessed, and never deleted), while all reads are transient "views" (from combinations of batch jobs and real-time event processing) that can be automatically recomputed (like database indexes).
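
        A tiny Python sketch of that facts-versus-views split (names invented for illustration): writes only ever append to the log, and any read model can be recomputed from it, the way a batch job or an index can always be rebuilt.

            from collections import defaultdict

            # Facts: an append-only log of raw events; nothing is updated in place.
            facts = []

            def record(event):
                facts.append(event)        # e.g. {"account": "a1", "delta": 25.0}

            # View: a transient read model recomputed from the facts on demand.
            def balances_view():
                view = defaultdict(float)
                for e in facts:
                    view[e["account"]] += e["delta"]
                return dict(view)

            record({"account": "a1", "delta": 25.0})
            record({"account": "a1", "delta": -10.0})
            print(balances_view())         # {'a1': 15.0}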
    • by Bob9113 ( 14996 )

      Both Pointy Headed Bosses and Slashdot loooove talking about tools. As the posts generally show, both PHBs and Slashdotters have no clue about what Big Data is used for. It's all about the buzzwords and technology, not about use and utility. There are no references to any algorithms.

      Heh. I've been doing big data since 2000. Fifteen years experience in a field that's five years old, I like to say. And let me say this: You nailed it. Your whole post, not just the part I quoted. I've used the tools, from Colt t

    • by Schnee ( 743890 )
      +1. Without analysis, big data is just a bunch of data
    • Except, if you are talking about a centralized database tool, you already know that the default design of "everybody write into the centralized SQL database" is a problem. Therefore, people talk about alternative tools; which are generally designed around a set of data structures and algorithms as the default cases. A lot of streaming based applications (ie: log aggregation) are a reasonable fit for relational databases except for the one gigantic table that is effectively a huge (replicated, distributed
  • From 2010 until early this year I was responsible for Big Data technical marketing at Microsoft; I recently joined AWS. I won't comment on any of the specifics for my current or former employer, but it's a fact that other NoSQL technologies have a higher adoption rate. It's clear that the traditional data warehouse had limitations, and that Hadoop is not replacing the EDW. The largest companies are using proprietary technologies, not adopting Hadoop. Hadoop 2.0 is much better; you should use it if you have the
  • Meaning the hype around big data has settled and it's back to business. I'd say there are fewer than 10 companies worldwide for whom big data actually might make sense. Others clean and aggregate their data in such a way that it's actually useful... I don't want my bank guessing my balance with big data statistics; I want them to know it. And so do most other people.

  • Betteridge's law of headlines finally proven wrong?

  • Has anyone considered Joyent's Manta [joyent.com] ?

    This is distributed object storage with integrated compute.
    Data is stored on a cluster of SmartOS hosts,
    and processed directly on each host inside an OS container (a SmartOS zone), with no data movement.

    Lot of APIs available: R, command-line, python, ruby, node.js etc..

    Available on their cloud and as an on-premises commercial product, open-sourced [github.com] last November (simultaneously with SmartDataCenter).
