
Multi-Core Chips And Software Licensing 248

i_r_sensitive writes "NetworkWorldFusion has an article on the interaction between multi-core processors and software licensed and charged on a per-processor basis. Interesting to see how/if Oracle and others using this pricing model react. Can multi-core processors put the final nail in per-processor licensing?"
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • I doubt it (Score:1, Insightful)

    by Anonymous Coward on Tuesday July 20, 2004 @11:14PM (#9756230)
    I'm sure higher-end software will charge per physical chip if nothing else.
  • no (Score:5, Insightful)

    by dark404 ( 714846 ) on Tuesday July 20, 2004 @11:20PM (#9756266)
    Most likely per-"Physical Processor" will be changed to per-"Physical Processor Die" since the multi-cores still share a die.
  • this is all BS. (Score:4, Insightful)

    by rokzy ( 687636 ) on Tuesday July 20, 2004 @11:28PM (#9756320)
    demanding more money for multi-core is ridiculous. if you're going to do that, why not charge more for faster CPUs? why should it cost twice as much to use, for example, a 2-core 1GHz CPU as a 1-core 2GHz CPU?

    on the other hand it may push more people to OSS.
  • maybe not (Score:4, Insightful)

    by jdkane ( 588293 ) on Tuesday July 20, 2004 @11:28PM (#9756323)
    At issue is that software vendors such as Oracle and Microsoft that license software on a per-CPU basis are likely to consider each processor a separate CPU, a practice that means double the licensing costs for enterprise users.

    Well, these rules are obviously not written in stone. "likely" is speculative. Let's wait and see what they *actually* decide to do. Rules can change as technology changes. The enterprise users should speak up about this issue and provide feedback.

    Obviously Oracle considers an n-core chip as n processors. But they won't be able to compete if another database company licenses the opposite way. Then again, maybe they'll all follow each other just for the sake of quick $.

  • by Fooby ( 10436 ) on Tuesday July 20, 2004 @11:30PM (#9756339)
    Writing multithreaded applications (or SMP-capable operating systems) that work well is hard work. It's always going to make sense for proprietary software vendors to charge extra for software that takes advantage of additional processors. Unless SMP and/or dual-core becomes ubiquitous, I see something like per-processor licensing sticking around, at least until the mythical day when free software eclipses proprietary software actually comes about.

    And I think single-core, single-CPU systems will stick around for a long time, if not for the indefinitely foreseeable future. CPUs get faster all the time, and since it's much easier to engineer single-core, single-CPU systems, they will remain the preferred solution for the low end. Look at something as basic as pipelining, an ancient technology in terms of processor design; there is still a place for non-pipelined processors at the very bottom of the chain, where microcontrollers are concerned.
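    A tiny, generic illustration of the "hard work" the parent means (the names here are mine, not from any product): even the simplest shared state needs explicit synchronization before extra processors help rather than hurt.

```python
import threading

class Counter:
    """Shared counter guarded by a lock. Without the lock, concurrent
    '+= 1' updates can interleave (read, add, write) and silently lose
    increments -- the classic multithreading pitfall."""
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:  # serialize the read-modify-write
            self.value += 1

def hammer(counter, threads=4, per_thread=10000):
    """Run several threads incrementing the same counter, then join them."""
    def work():
        for _ in range(per_thread):
            counter.increment()
    ts = [threading.Thread(target=work) for _ in range(threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return counter.value  # exactly threads * per_thread, thanks to the lock
```

    Getting this right for a toy counter is easy; getting it right (and fast) across a whole database engine is the work vendors argue they should be paid extra for.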

  • by Dark Lord Seth ( 584963 ) on Tuesday July 20, 2004 @11:33PM (#9756357) Journal

    That's because HyperThreading is a neat and very low-level trick that makes it appear like there are two processors. A dual-core processor doesn't use any tricks and physically contains two processing cores on one chip. Of course, this could lead to some very interesting things, such as a dual-core AMD proc using one shared on-chip memory controller, or Intel procs with dual cores AND hyperthreading for a total of 4 procs.

    I'm looking forward to dual-cores.

  • Oracle & Intel HT (Score:1, Insightful)

    by Anonymous Coward on Tuesday July 20, 2004 @11:35PM (#9756369)
    I'm a network admin for a govt org. I'm about to buy a bunch of Oracle server licenses for a new records management system project. I specifically asked our Oracle govt sales rep about this issue and he unequivocally stated (and put in writing in the form of a formal price quote) that the Intel Xeon HT processors count as one processor per physical CPU each. He went on to explain that for other big-name Unix platforms, like certain IBM RS6000 boxes which have multiple processors included in their "cpu module" (i.e. ship with 6 procs in a module, but you may only buy the box with 2 or 4 actually enabled), Oracle does indeed demand a per-processor license for even the dormant processors, because all it takes is a phone call and a fee to IBM to run a firmware config utility to activate those dormant processors. If Oracle reneges on this deal, then I'll flat-out tell them to kiss my hiney, and kiss the $80K deal goodbye, since the app I'm buying will run against MS SQL Server just fine too.
  • Re:I doubt it (Score:2, Insightful)

    by jbplou ( 732414 ) on Tuesday July 20, 2004 @11:38PM (#9756380)
    Last time I looked, Linux wasn't a DBMS. Oracle, SQL Server, and DB2 all have per-processor licensing. How will Linux stop this?
  • hee hee (Score:3, Insightful)

    by MrChuck ( 14227 ) on Tuesday July 20, 2004 @11:53PM (#9756472)
    Yeah, postgres and linux do well on a pair of redundant 32-CPU machines that are being HAMMERED, running with 32GB of memory in use and more waiting.

    I love the view that Linux can replace all machines, that there's no place for proprietary software.

    Now, I'll mostly agree where Windows is concerned, because too often Windows is being cobbled together and shoved into the data center. (My servers need a windowing system just to boot? I have machines I've never seen or touched that I've installed from 12,000 miles away and run for years.)

    And yeah, BSD fills lots of places in the infrastructures, but BSD and Linux didn't come up with CrayLink or NUMA. And there's something kind of nice about when your $10million company has a problem with the $100,000 server that I can make a call and have a bunch of people answer who are PAID to run around and make my problem their high priority.

    But yeah, that my PDA runs Postgres and smokes the trading floor servers I used to put up 10 years ago is pretty cool.

  • Re:Alternatives (Score:2, Insightful)

    by Waffle Iron ( 339739 ) on Wednesday July 21, 2004 @12:06AM (#9756554)
    So what models of licensing do you WANT that will keep the vendor and the buyer in business and happy?

    Why not have your software measure how much real work it's doing? If, over time, it exceeds the amount of processing the user paid for, then it starts to throttle itself back. That would be a lot more accurate than going with a crude measure like "number of CPUs" anyway.
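    A minimal sketch of that self-metering idea (entirely hypothetical; the class name, rate unit, and scheme are mine, not any vendor's actual licensing mechanism): the application records work units as it completes them, and sleeps whenever it gets ahead of the rate the customer paid for.

```python
import time

class MeteredThrottle:
    """Throttle work to a licensed rate: after each unit of work is
    recorded, sleep until the cumulative total is 'paid up' at the
    licensed units-per-second rate."""
    def __init__(self, paid_units_per_sec, clock=time.monotonic, sleep=time.sleep):
        self.rate = paid_units_per_sec
        self.clock = clock   # injectable for testing
        self.sleep = sleep   # injectable for testing
        self.start = clock()
        self.done = 0.0

    def record(self, units):
        """Account for finished work; block if we're ahead of the paid rate."""
        self.done += units
        # Earliest time at which this much work is covered by the license.
        paid_up_at = self.start + self.done / self.rate
        now = self.clock()
        if paid_up_at > now:
            self.sleep(paid_up_at - now)  # throttle back to the paid rate
```

    The appeal of the scheme is that it bills actual throughput, so it is indifferent to whether that throughput came from one fast CPU, two slow cores, or hyperthreading.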

  • Correct me if I'm wrong, but this way you would lose almost all of the benefit of multiple cores. At least, you would if you'd run an OS inside that VM/VPL, since not only would you have to have both a host and a guest OS (more licenses), but the guest would not be able to take full advantage of the hardware (by definition of the VM), which means more complexity with (be realistic) lower performance. Not running an OS inside this VM/VPL is silly, since it is then not a VM at all, and the VPL would be doing exactly what a normal OS does (shuffling threads), making its existence somewhat absurd. Bah. Leave that to marketing.

    Although the valid point has been made elsewhere that it takes effort to make SMP-efficient apps, I think the multi-CPU licensing idea is in many cases crap, because the OS should make where processes run transparent to the application.

    I think what you want is a new HAL paradigm that makes whatever massively parallel von Neumann machine we run look like a single processor, and *function as one* (I know about mosix -- I mean with performance proportional to size). I agree that this could be a good idea. Maybe. In a decade.
  • by chathamhouse ( 302679 ) on Wednesday July 21, 2004 @12:18AM (#9756609) Homepage
    Whoa. You're comparing RIAA tactics to non-free (as in _libre_ and dollars) software vendors.

    Your comparison is totally inappropriate.

    With per-cpu licensing, the assumption is that the software can do more for you on a multi-cpu system, hence you pay more for it. There's nothing terribly dodgy about this.

    After all, when you're paying for performance, the vendor (and the buyer) wants to find a useful billing metric that's easy for everyone to understand. Anyone who's dealt with Veritas's 20 or so tiers will appreciate this.

    Per cpu is the way to go then. The customer maximizes their investment when running on the fastest CPUs available, which isn't normally a big deal when the cost of the software far exceeds the cost of 3.2GHz Xeons or equivalent Athlons.

    Per-cpu also solves the issue of pricing a single-cpu x86 (little $), versus a 32+ cpu sparc box (big $), versus 32x single-cpu x86 clusters.

    So, when multiple-core chips come out, they'll essentially be multi-CPU. Easy: use them, pay more.

    Because of competition from free ($/libre) software, licensing arrangements have gotten a lot more sane in the past couple of years. Vendors are trying to stay away from that line in the sand where it becomes cheaper to re-train, re-build, re-deploy than to re-license.

    This is very much unlike the RIAA, MPAA, and their friends in other countries, who see fit to take a much more extortive stance. Remember that most vendors let you move a per-CPU license around to different OSes and architectures, something that surely can't be said of the entertainment industry (oh, you own this on videocassette? You can have the DVD for media & packaging costs, or just download the content from http://videos.com).

  • license economics (Score:3, Insightful)

    by Doc Ruby ( 173196 ) on Wednesday July 21, 2004 @12:19AM (#9756611) Homepage Journal
    You're right in questioning the home user's cost:benefit analysis in terms of revenue, although many people do use, e.g., Windows XP to make money, or save money, at home. But I haven't heard of Microsoft requiring non-business customers to pay a per-processor license - are there any (working) dual-processor gaming machines which cost more for their Windows license than their single-processor versions? AFAIK, WinXP Home shuts off all but one processor to keep corporate customers from buying it for less than Pro. So it's really just a sloppy way to split the market based on ability to pay. If it affected a large enough boundary market, Microsoft would adjust their pricing to exploit it better.

    Linspire is governed by the same basic dynamics. They're going to charge what the market will bear, but the market won't bear a Windows price for their product in 2004. Whether they keep their pricing model if they become a platform option on par with Windows in the market will remain to be seen. If they stay cheap, they will expand their market more - what the market will bear tends to resemble the ability of water to seek its own level.
  • Uh, okay (Score:3, Insightful)

    by NanoGator ( 522640 ) on Wednesday July 21, 2004 @12:21AM (#9756627) Homepage Journal
    I don't mind paying Intel a little more for dual core machines. I don't mind paying Microstar extra for a motherboard that supports that processor. I don't mind paying Microsoft extra for using dual core processors. But... on a per app basis? So.. I'm paying for 2x the performance, right? What if I buy a machine with ~twice the megahertz?

    Maybe I'm just knee-jerk reacting here. I'm just not all that impressed with this new scheme to wring money out of people, even if they are big corps etc. I mean, if the software did something special with more processors, that'd be a little different. I just don't want the double-dipping to happen. Hardware makes the speed.

    Okay, I'm done redundantly ranting. I'm just annoyed with the prospect in a year or two of adding new machines to the render farm and then having to 'upgrade' the software.
  • by Thagg ( 9904 ) <thadbeier@gmail.com> on Wednesday July 21, 2004 @12:32AM (#9756678) Journal
    i_r_sensitive is extremely optimistic if he feels that multi-core processors are going to mean the end of per-processor licensing. I would think that most software licensors are looking toward multi-core chips as the gravy train finally pulling into town.

    When you think about it, any licensing deal is a contract between a software provider and a software user. If the price doesn't make sense, then the contract won't happen.

    Depending on the cost of the processor chips, the computer chassis they plug into, and the license cost -- per processor licensing could save people money when they move to multi-core machines -- assuming that the two-core machine really is twice as fast at the application as two single-core machines. If the chips don't cost much more, you save the hardware, energy, and cooling costs of the second chassis. This could be a big win.
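    The parent's break-even reasoning, with made-up numbers (all prices here are illustrative assumptions, not real quotes):

```python
def total_cost(licenses, license_price, chassis, chassis_price):
    """Crude total: per-processor licenses plus the boxes to put them in.
    Energy and cooling, which also favor fewer chassis, are ignored."""
    return licenses * license_price + chassis * chassis_price

# Two single-core boxes vs. one dual-core box, licensed per core either way.
two_singles = total_cost(licenses=2, license_price=20000, chassis=2, chassis_price=5000)
one_dual = total_cost(licenses=2, license_price=20000, chassis=1, chassis_price=5000)
savings = two_singles - one_dual  # the second chassis you didn't buy
```

    Under per-core licensing the license bill is identical either way, so the dual-core machine wins by the price of the chassis it replaces -- provided, as the parent notes, that it really delivers the performance of the two separate boxes.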

    This is one of those cases where the market will decide. In [my] visual effects business, company policies are all over the map. Pixar allows you to run RenderMan on dual-proc machines with a single license. I believe (I could be wrong; we have only 2-proc machines) that Shake will run on however many processors you have in one box using just one license. Other software requires a separate license for each processor.

    But really, when I say "software requires", that's wrong and stupid. It's the contract you have with the software provider that requires it, and contracts are often quite malleable.

    Thad Beier

  • Innovation (Score:4, Insightful)

    by superpulpsicle ( 533373 ) on Wednesday July 21, 2004 @12:51AM (#9756763)
    It's a complicated subject that gets even more complicated as time goes on. Like the hyperthreaded Xeon chips that count as multiple processors in Windows, but are really one physical chip. What if they were emulating stuff through VMware? Now one chip is really running multiple OSes. Etc., etc.

    No licenses today can contractually prepare for innovative stuff in the future. That's why 90% of hi-tech lawyers should quit and leave us techies alone.

  • by BiggerIsBetter ( 682164 ) on Wednesday July 21, 2004 @01:03AM (#9756815)
    If I had modpoints, I'd mod you up. It's as silly as charging more for *the same car* depending on how many passengers you want to carry.

    0: I need a car.
    1: Sure, how about this little one? Only $14000!
    0: Nice, my wife will love it!
    1: It's for your wife?
    0: No, but I give her a ride to work each morning.
    1: Oh, you want to drive with your wife in it too? That'll be another $6000.
    0: Huh? What do I get for the extra $6000?
    1: Well, we remove the factory installed passenger door lock that your key doesn't fit.
    0: That's it? I could do that myself!
    1: Yes, but we require you to sign this form giving us permission to check your car whenever we like to make sure you haven't bypassed our security and aren't driving with unauthorised passengers. And if we suspect you have been doing so, we'll prosecute to the fullest extent of the law for misuse of our product.
    0: But if I buy it, it's MY car!?
    1: Yes, but the design and processes are still ours. You're buying a license to use the implementations provided with the car, and unapproved use with a passenger is therefore illegal. The car is yours, but we still own its usage...

    Yes, arbitrary licensing and the current commercial software business model is complete BS.
  • by nothings ( 597917 ) on Wednesday July 21, 2004 @01:19AM (#9756911) Homepage
    Your implied claim ("hyperthreading isn't really two processors") only makes sense with an overly simplistic view of what a "processor" is.

    Let's say the basic components in a processor are: instruction fetch, instruction decode, load/store units (memory save/load), various execution units (that do the adds, multiplies, etc.), and a register file. Current hyperthreading allows for relatively fine-grained switching between threads, so I believe there are two separate register files, but all the other units are shared. (Are there two MMUs and TLBs? I'm not sure, but somehow they allow hyperthreading between two unrelated processes, supposedly.) Already we have two of something (the register file, and maybe the memory management hardware).

    There's a continuum of possibilities. What if there are two of everything except the execution and load/store units? Note that the whole machine is massively pipelined, so there are multiples of these even when there is just one processor. So do you have two processors which share execution units, or one processor with super-hyperthreading?

    Assuming you consider it the former, then we can mix it up. Maybe there's two instruction fetch units, but a single instruction decoder. Etc. etc. Now, you could pick one thing, say instruction decode, and say 'there must be two of these to be considered two processors'. Oops, I forgot to mention, superscalar processors already can decode multiple instructions at once (just not from multiple instruction streams), and even so, different people are going to pick different definitions; there's no clear differentiator.

    A pure two-core approach is just easier/cheaper to design, since you basically design the cores separately, or really, design one and clone it. But you can probably get more performance for the same chip area by pushing those two cores together and allowing them to share resources, even though that will look more like hyperthreading in terms of design. Normally you think of hyperthreading as being less efficient than two truly separate processors, yet I claim this more hyperthreading-like design gets higher performance than two separate processors; I can't see how you wouldn't be better off sharing the 2N execution units rather than using a fixed N in each core.

    In the end it boils down to (roughly) "I can get so many FP and integer units on chip; what's the best way I can feed instructions from any number of threads to maximize their usage?"
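    That definitional fuzziness is also a practical problem for anyone who has to count "processors" for a license. On Linux, sysfs exposes each logical CPU's thread siblings; here is a sketch of collapsing those lists into a core count (the input strings mirror the thread_siblings_list format, e.g. "0,2" meaning logical CPUs 0 and 2 share one core, but the data below is made up):

```python
def physical_cores(sibling_lists):
    """Collapse per-logical-CPU sibling lists into the number of distinct
    cores: two logical CPUs that list each other as siblings share one
    core, so identical sibling sets are counted once."""
    return len({frozenset(int(cpu) for cpu in s.split(",")) for s in sibling_lists})
```

    A 2-core hyperthreaded chip reports 4 logical CPUs but only 2 distinct sibling sets; which of those two numbers the license counts is exactly the question this thread is arguing about.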
