Multi-Core Chips And Software Licensing 248

i_r_sensitive writes "NetworkWorldFusion has an article on the interaction between multi-core processors and software licensed and charged on a per-processor basis. Interesting to see how/if Oracle and others using this pricing model react. Can multi-core processors put the final nail in per-processor licensing?"
This discussion has been archived. No new comments can be posted.

  • by Iesus_Christus ( 798052 ) on Tuesday July 20, 2004 @11:17PM (#9756247)
    If the efforts of other corporations bent on protecting their intellectual property (RIAA) are any indication, per-processor licensing will move to per-core licensing. If the RIAA can force you to pay multiple times for the same song (which you, unfortunately, cannot move between preferred mediums), then it would make sense that software companies bent on collecting money would make you pay multiple times for one processor. On the other hand, they are somewhat different issues: usage of music would be governed under fair use (in theory), while usage of software (at least in terms of per-processor licensing) would be governed by the EULA or another contract between the corporation and customer.
  • Re:I doubt it (Score:5, Interesting)

    by jarich ( 733129 ) on Tuesday July 20, 2004 @11:21PM (#9756275) Homepage Journal
    "Can multi-core processors put the final nail in per processor licensing?"

    no, but i bet linux can.

    Oracle runs on Linux.

    Oracle charges per CPU.

    Your point was?

  • by DaKrzyGuy ( 25850 ) on Tuesday July 20, 2004 @11:21PM (#9756276)
    As long as IBM is making mainframes there will be per-processor fees...and they have been around for 40 years, so I see at least another 40. Heck, now they even charge different amounts for a processor depending on what you are going to run on it.
  • Buy Robot (Score:4, Interesting)

    by Doc Ruby ( 173196 ) on Tuesday July 20, 2004 @11:23PM (#9756288) Homepage Journal
    Businesses charge the maximum they can, for maximum total profit: "what the market will bear". Per-processor prices are just a way to negotiate how much money the customer can make from the software, and therefore how much of their revenue is available to pay the software supplier. Just like when an employee negotiates their income, they are negotiating for a share of their employer's revenue to which their work contributes. I'd like to see a software licensing model that treats the software's work as automated labor and negotiates accordingly, like some kind of profit sharing. People don't get paid up front; why should the software company be? That also allows a timeframe for a "test drive" during which both parties can get benchmarks on the actual value of the software.
  • Alternatives (Score:3, Interesting)

    by MrChuck ( 14227 ) on Tuesday July 20, 2004 @11:45PM (#9756420)
    I worked at a company where we busted our butts making core-enterprise type software.

    To help envision it, let's say it's a firewall: the firewall has no real concept of "users"; it routes packets. (It's not actually a firewall, but the situation is close enough.)

    Now our basic question, which we reluctantly answered with per-processor licensing, was how to charge for it.

    If you buy our software and your company of 20,000 people is RELYING on it, you'd pay more than if your company of 50 people were RELYING on it.

    We could have priced in the middle, but then companies under 2,000 people would feel (rightly) ripped off while the GMs got a steal.

    Charge per "user behind it"?
    Charge by your corporate revenue?
    "Pay what you feel is about right"?

    One not-so-minor goal was to be able to make a living for 40 people and continue to develop a product whose open source alternatives had, by and large, come up pretty short.

    So what models of licensing do you WANT that will keep the vendor and the buyer in business and happy? (A rough comparison of the options above is sketched below.)

    (And yes, I've slipped in a 4-CPU license at the 1-2 CPU price at a place with old, slow machines in use. We tried to do "right".)
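
    A back-of-the-envelope comparison of those options, as a rough sketch; every number below is invented purely to show the shape of the problem, not taken from any real price list:

    ```python
    # Hypothetical pricing models from the comment above; all rates are
    # made up for illustration.
    def per_cpu(cpus, price=10_000):
        return cpus * price

    def per_user(users, price=20):
        return users * price

    def pct_of_revenue(revenue, pct=0.001):
        return revenue * pct

    for name, users, cpus, revenue in [("50-person shop", 50, 2, 5e6),
                                       ("20,000-person GM", 20_000, 16, 2e10)]:
        print(name,
              "| per-CPU:", per_cpu(cpus),
              "| per-user:", per_user(users),
              "| 0.1% of revenue:", pct_of_revenue(revenue))
    ```

    The tension the comment describes falls straight out of the numbers: any flat per-CPU price that keeps the 50-person shop on board is a steal for the 20,000-person company, while per-user or revenue-based pricing scales with reliance but is much harder to audit.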

  • Re:I doubt it (Score:5, Interesting)

    by globalar ( 669767 ) on Tuesday July 20, 2004 @11:46PM (#9756424) Homepage
    Oracle [oracle.com] charges for cores individually. (see the Processor section)

    Perhaps a compromise will result. Eventually a 2-CPU license could replace the single-CPU license outright; at that stage licenses could be bundled as 2-CPU, 4-CPU, etc. As multicore becomes the norm, 1-CPU licenses would naturally phase out.

    This would allow companies to keep their per-core licensing scheme. Customers would get the feeling of a deal by buying a multicore license. Perhaps the market would push the cost of a 2-CPU license down to what a single-CPU license is worth today.

    Hyperthreading (HT) is another matter, architecturally and performance-wise.
  • I think it is interesting that Windows running on a 2-CPU machine requires a 2-CPU license, but, say, 5 instances of VMWare running on a single CPU, each hosting an instance of Windows, require five licenses (six if the instances of VMWare are themselves running on Windows).

    Also, what if there were a VMWare-like program that simulated an SMP machine? Would running Windows on it require a multiple-CPU license, even if the emulator itself was running on a single CPU?
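
    A minimal sketch of the per-instance counting described above; the rule encodes this comment's reading of the situation, not the terms of any actual Microsoft EULA:

    ```python
    # License counting under the per-instance reading above (the rule is
    # the commenter's interpretation, not verified EULA language).
    def windows_licenses_needed(guest_instances, host_is_windows):
        return guest_instances + (1 if host_is_windows else 0)

    print(windows_licenses_needed(5, host_is_windows=False))  # 5
    print(windows_licenses_needed(5, host_is_windows=True))   # 6
    ```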

  • Re:I doubt it (Score:3, Interesting)

    by The Snowman ( 116231 ) * on Tuesday July 20, 2004 @11:55PM (#9756481)

    I'm sure higher-end software will charge per physical chip if nothing else.

    I am sure that newly licensed software will explicitly state whether it means physical chips or cores, but remember, companies exist to make money. By licensing per core instead of per physical chip, they make more money. The software is the same no matter how many chips; only the price varies.

    The real issue is how current licenses handle multiple cores per chip. This may wind up in the courts, or licensees may wind up being extorted for extra money they probably do not owe.

    Despite being dead, BSD scales well with SMP and runs SMP apps very well, plus it is free. I know what license I will use...

  • Re:I doubt it (Score:3, Interesting)

    by halowolf ( 692775 ) on Wednesday July 21, 2004 @12:02AM (#9756521)
    We shouldn't forget that competitive products can also bring down the price. There have been a number of dust-ups between DB2 and Oracle, for instance; all we need is a competitor that significantly undercuts Oracle on per-processor licensing, and customers will switch to a different database platform.

    Losing money normally gets a company's attention; it suggests that customers think the licensing has become too expensive for Oracle to be worth considering.

    I haven't looked into database pricing for a long time (ignoring MySQL-type "free" databases), but from what I remember, Oracle was one of the more expensive ones. Is that still the case?

  • Re:hee hee (Score:1, Interesting)

    by Anonymous Coward on Wednesday July 21, 2004 @12:04AM (#9756543)
    The Altix 350 incorporates the same high-performance shared-memory SGI® NUMAflex™ architecture and optimized Linux tools originally implemented in the award-winning Altix 3000. It supports up to 16 processors in a single system image, and features the industry-leading 6.4GB/second SGI® NUMAlink™ interconnect.

  • Re:Alternatives (Score:3, Interesting)

    by topham ( 32406 ) on Wednesday July 21, 2004 @12:12AM (#9756581) Homepage
    And I, as the end user, can trust the measurement of the software's CPU usage how?

  • Re:license economics (Score:2, Interesting)

    by afidel ( 530433 ) on Wednesday July 21, 2004 @12:25AM (#9756646)
    Nah, the biggest thing keeping businesses from running Home Edition is the fact that it cannot join a domain. This isn't an issue for small businesses, but neither is the lack of multi-CPU support. BTW, there are basically zero games that take real advantage of a second CPU; the reasons are varied but basically come down to the GPU being the limiting factor, multi-threaded code being harder to write and debug, and, finally, a lack of demand.
  • by nettdata ( 88196 ) on Wednesday July 21, 2004 @12:33AM (#9756684) Homepage
    However, Oracle is free to change their licensing once again.

    Oracle Licensing is like mountain weather... if you don't like it, wait 10 minutes and it'll change.

    Seriously, though, Oracle changes their licensing more than any other software company I've ever dealt with.

    I won't be surprised to see their licensing change after they get some push-back from their customers.

    The other thing they DO have a history of, though, is NOT helping customers out when it comes to a license change. I've seen customers sign the deal on a Monday, only to have new pricing come out on the Tuesday. If they'd waited a single day, their software licensing would have been around half of what they paid.

    Joy.
  • MIPS rating (Score:2, Interesting)

    by kiwirob ( 588600 ) on Wednesday July 21, 2004 @01:15AM (#9756880) Homepage
    I can't remember exactly, but back when I was working as an IBM mainframe software engineer, I had the feeling that IBM and CA, who provided various software for our mainframes, charged for some software based on the MIPS (million instructions per second) rating of the virtual machine it was running on. Why don't software companies just do the same thing? Establish a performance benchmark and charge based on that. That way you can use single-, dual-, or multi-core processors, or multi-CPU machines, and not have to worry about all this licensing drama. If your real machine or "virtual machine" is benchmarked at x MIPS, you pay y dollars; who cares what architecture you are running?
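
    As a rough sketch of what that might look like, with the MIPS tiers and dollar figures invented purely for illustration:

    ```python
    # Hypothetical benchmark-based pricing. The tiers and prices below
    # are made up for illustration, not any vendor's real rates.
    PRICE_TIERS = [
        (500, 5_000),      # up to 500 MIPS    -> $5,000/yr
        (2_000, 15_000),   # up to 2,000 MIPS  -> $15,000/yr
        (10_000, 60_000),  # up to 10,000 MIPS -> $60,000/yr
    ]

    def license_fee(benchmarked_mips):
        """Price by measured throughput, ignoring CPU/core/socket counts."""
        for ceiling, price in PRICE_TIERS:
            if benchmarked_mips <= ceiling:
                return price
        raise ValueError("above the largest tier; negotiate an enterprise deal")

    # The same fee applies whether the MIPS come from one fast core,
    # two slower cores on one die, or a multi-CPU box.
    print(license_fee(1_200))  # -> 15000
    ```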
  • This is old news (Score:2, Interesting)

    by owsleyd ( 656706 ) on Wednesday July 21, 2004 @01:39AM (#9757037)
    Both HP and IBM have had dual-core chips for a while now. There are a number of advantages to moving to dual-core processors. The most important is that they improve performance without generating as much heat as two single-core processors. Another advantage is that the cores can share a cache, which has some very distinct advantages in multiprocessor environments: the cores can check the shared cache without interrupting each other. By improving the performance of the processors, server vendors can actually cut software costs on a per-processor basis, as fewer processors are required to perform the same workload.

    The real issue for software licensing will come when virtualization becomes more widely used in the RISC and Intel space. How will software vendors charge for two-tenths of a processor? This will be the real challenge from a cost perspective, as there will be a number of applications that really only require that much of a processor.
  • by TheLink ( 130905 ) on Wednesday July 21, 2004 @01:45AM (#9757065) Journal
    But my Gov charges different road tax/license fees depending on the car engine's cubic capacity.

    It does make the RX7 road tax rather cheaper. And I wonder how they'd deal with fuel cell electric cars.
  • by Anonymous Coward on Wednesday July 21, 2004 @06:15AM (#9757939)
    Multi-core, on the other hand, gives multiple independent physical processors that just happen to fit into one socket.

    True, but I doubt that a multi-core chip will be on par with a similar dual-CPU setup; you still need to get the heat away from that single chip. It's very possible you will only get about the same 15-30% boost in speed you get from HT.

    From what I understand, multi-core designs [ibm.com] have all cores on a single piece of silicon at the center of the CPU, just like uni-core CPUs.

  • by walt-sjc ( 145127 ) on Wednesday July 21, 2004 @07:36AM (#9758193)
    I actually ran into per-processor licensing with database connector software on Linux. A Xeon shows up in Linux as two processors due to hyperthreading. Of course, hyperthreading is not as fast as two distinct CPUs either. It threw the salesman for a loop; he had no idea what the license would be. It turned out they were way overpriced anyway, and a FOSS driver worked fine.

    Oracle was licensing based on power units a while back. Any idea if they are still doing that? From what I understand, they basically benchmarked certain machines and priced the software based on the performance of the box rather than the pure number of CPUs. That solves the issue completely. Of course, we use MySQL and Postgres anyway, with a smattering of MS SQL Server (yeah, I know, but it IS a pretty good DB, and some apps need it).
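
    On the hyperthreading point, a rough sketch of how you can tell logical from physical CPUs on Linux by parsing /proc/cpuinfo; this assumes a kernel that reports a "physical id" field for each logical processor (HT-aware kernels do, older ones may not):

    ```python
    # Count logical vs. physical CPUs by parsing /proc/cpuinfo. Assumes
    # the kernel emits a "physical id" line per logical processor.
    def cpu_counts(path="/proc/cpuinfo"):
        logical, physical_ids = 0, set()
        with open(path) as f:
            for line in f:
                key, _, value = line.partition(":")
                key = key.strip()
                if key == "processor":
                    logical += 1
                elif key == "physical id":
                    physical_ids.add(value.strip())
        # Fall back to the logical count if "physical id" is absent.
        return logical, len(physical_ids) or logical

    logical, physical = cpu_counts()
    print(logical, "logical CPU(s) on", physical, "physical package(s)")
    ```

    On a hyperthreaded single-socket Xeon like the one above, this would report 2 logical CPUs on 1 physical package, which is exactly the distinction the salesman couldn't price.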

  • by Epistax ( 544591 ) <<moc.liamg> <ta> <xatsipe>> on Wednesday July 21, 2004 @07:48AM (#9758243) Journal
    I was recently involved in a conversation about the usefulness of dual-core machines as home machines. The typical home machine only really gives focus to one CPU-intensive program at a time, max. Intel and AMD are obviously moving in that direction (and it doesn't stop at dual-core), and the reason is a little surprising. According to an In-Stat article published recently, Intel is doing it to overcome leakage current/power. As the process technology gets progressively better, leakage power has become progressively worse. I do not understand how designing machines with two cores is supposed to help this. Even when one core is not in operation, leakage will still occur (hence "leakage").
  • Per person pricing (Score:1, Interesting)

    by Rudy-Omega ( 524540 ) on Wednesday July 21, 2004 @08:36AM (#9758519) Journal
    Why not just take a tip from Sun's new pricing model: offer an infinite right to use, with pricing based on a per-employee cost.

    Y employees * $X = resulting software cost

    Do this on a year-over-year basis and you have recurring revenue.
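
    A toy illustration of that model; the $100-per-employee rate is invented, not Sun's actual figure:

    ```python
    # Per-employee pricing as sketched above; the rate is hypothetical.
    RATE_PER_EMPLOYEE = 100  # dollars per employee per year (made up)

    def annual_cost(employees):
        return employees * RATE_PER_EMPLOYEE

    # Revenue recurs yearly and tracks headcount, not hardware:
    for year, headcount in [(2004, 500), (2005, 550), (2006, 600)]:
        print(year, annual_cost(headcount))
    ```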
  • by LWATCDR ( 28044 ) on Wednesday July 21, 2004 @09:39AM (#9758975) Homepage Journal
    You may be right about one CPU-intensive program at a time, but it may be running more than one intensive task at a time. If a program is written with hyperthreading and/or SMP in mind, you can split it into multiple threads. For a game you could have one thread handle AI and another handle whatever graphics and gameplay work the video card does not.
    When encoding video, one thread could handle the images and another the sound.
    There are lots of times when a home system could use more than one processor. Most systems already have more than one CPU; they just tend to be specialized. The GPU on your video board is one. The DSP in good audio cards is another. I really do not like the idea of dumping more load on the CPU. Things like onboard audio and winmodems are what I consider to be bad ideas.
  • Re:Alternatives (Score:3, Interesting)

    by chthon ( 580889 ) on Wednesday July 21, 2004 @09:53AM (#9759100) Journal

    By asking the vendor to open the source of the measurement system, and by using programs like ethereal or iptraf to compare against what is actually being measured.

  • by yakovlev ( 210738 ) on Wednesday July 21, 2004 @12:53PM (#9760984) Homepage
    The thing you have to realize is that in modern processors, what few execution units exist are starved already. Adding more doesn't really make that much difference. The performance problems come from the caches, and we already build the fastest L1 caches we can for the single processor case.

    While your statement "I can get so many FP and integer units on chip; what's the best way I can feed instructions from any number of threads to maximize their usage?" is mostly correct, it really doesn't fully recognize just how hard the processor works to feed instructions into those execution units. A more accurate description would be: "I can get so many transistors on a chip; what's the best way I can maximize the number of instructions executed (amount of work done), by any number of threads, on those transistors?" Currently the best way is to have a few execution units, and a LOT of cache.

    Getting back to your original point, in general, a processor is any number of threads that share an L1 cache. Whether that processor shares execution units with another one is really irrelevant, and probably wouldn't offer the performance benefits necessary to make the added complexity worth it.

    There are designs for which this wouldn't apply, but they would be "throughput computing" designs with big, slow L1 caches that have *dismal* uniprocessor performance. With poor uniprocessor performance the "work done" per instruction executed starts to go down, so these designs have their own set of problems.

"May your future be limited only by your dreams." -- Christa McAuliffe

Working...