Hardware Technology

NVIDIA's nForce Professional and Tyan's Words 138

CoffeeJunked writes "There's a lot of buzz about dual-core CPUs and with the release of the nForce Professional chipset from nVidia, there's a lot of buzz about the future of SMP machines as we know them. LinuxHardware.org has just published a couple of articles that get to the heart of the new chipset and what board manufacturers will be doing with them. The first article covers the chipsets and boards, while the second article is an interview with Tyan about what to expect from them this year. It's a good read all around."
  • by RebelWebmaster ( 628941 ) on Friday January 28, 2005 @12:40AM (#11500736)
    Where are the SATA connectors?!?!?! I find it amazing that the K8WE only has 2 and the K8SER 4. While we're on the topic, having at least 1 PCIe x1 slot would be nice. These high end server boards are being outclassed by nForce4 SLI motherboards. (And for the record, using more than 4 SATA ports is very doable)
    • by lachlan76 ( 770870 ) on Friday January 28, 2005 @12:46AM (#11500760)
      High end servers sure aren't gonna be using SATA...
    • by Anonymous Coward
      Well, the nForce Pro 2200 and 2050 each have 4 SATA ports, so a 4-chip solution could have 16 SATA connectors on the motherboard. Why a motherboard maker doesn't try to fit in the connectors, I do not know.
    • The K8WE has 4 (Score:4, Informative)

      by attemptedgoalie ( 634133 ) on Friday January 28, 2005 @01:18AM (#11500871)
      The picture doesn't label the other two. They're down by the SCSI controller, pointing forward instead of up. They're also on the same RAID as the ones in the picture.

      Trust me.

      (I have one of these boards at my desk.)

    • If you really knew anything about high-end workstation and server design, you would never ask why there are not more than 4 SATA connectors. When it comes to servers, SATA drives are used for storage of data that is not accessed often, and SCSI drives are used for data that needs to be readily available. Too bad serial SCSI is not out yet, but that will come too. Plus, 4 SATA connectors give you well over a terabyte of storage, and that would be enough for the kind of application that you are thinking about. After
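    For what it's worth, the terabyte figure above is easy to sanity-check (a rough sketch; the per-drive capacities are my assumption about early-2005 drives, not something stated in the comment):

```python
# Back-of-the-envelope: four SATA ports filled with the largest
# consumer SATA drives of early 2005 (roughly 300-400 GB each;
# the drive sizes are an assumption, not from the parent post).
ports = 4
low_gb, high_gb = 300, 400
total_low = ports * low_gb    # 1200 GB
total_high = ports * high_gb  # 1600 GB
print(total_low, total_high)  # both comfortably over a terabyte
```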
    • You'd use dedicated proper RAID boards with them, maybe, instead of soft-RAID silliness?

      • Why the fuck, oh why, oh why, can't these damn onboard RAID or hardware RAID chipsets provide a standard IDE emulation interface, so that volume one, consisting of a RAID 1 mirror of physical disks 1 and 2, appears as one logical disk to the operating system? WHY!?!?!?!?!

        I'm set to dump my Promise SuperTrak for an IDE enclosure with built-in mirroring that presents the mirror as a single IDE disk. I'm sick of being unable to do kernel upgrades because my vendor's driver is randomly incompatible with certa
    • Yeah, the Asus SLI motherboard has 8 SATA ports... anyway, the x16 slots are compatible with x1 cards, so no harm done there. In fact you can turn an x16 card into an x1 one by taping off the extra connectors. Saw it done somewhere on graphics cards, to assess exactly how much bandwidth they ACTUALLY used from those x16 lanes :)
    • Could someone give me a link to something that uses a PCI-E x1 slot? So far all the PCI-E cards I've seen have been graphics cards using the x16 slot.
    • Did you see the PCI-X slots? That's where you put your high-end SCSI RAID controller, and get much faster access than you'll ever see over the SATA ports.
    • The onboard SATA isn't as good as the add-in cards, and most people will add these cards if they can.

      The reason there are more SATA ports is that you may want to support 4 drives without having to use your PCI-X slot, because you need it for a Myrinet card or other add-in cards. The K8WE gives you a lot more options.
  • by Anonymous Coward on Friday January 28, 2005 @12:42AM (#11500739)
    So, they are designing a chipset for servers, which will run Linux or BSD, but they refuse to provide docs or hardware to Linux and BSD developers, meaning their shit is always poorly supported. Hooray.
    • by Anonymous Coward
      What makes you think that nVidia is designing these boards to run anything but Windows? They're still afraid that someone is going to copy their hardware through their driver source. Ridiculous.
    • by Anonymous Coward on Friday January 28, 2005 @03:24AM (#11501257)
      I'm a bit concerned about the nForce4. From what I've read already, there are 3 models: a "normal" nForce4, an "Ultra" nForce4, and an "SLI" nForce4. But although you can't use SLI on the non-SLI models, there are ways to enable SLI, on at least the Ultra model ( http://www.anandtech.com/printarticle.aspx?i=2322/ [anandtech.com] ).
      To quote the article, "Just as quickly, we learned that nVidia was not happy with this "SLI hack" and they changed their drivers quickly so that "semi-SLI would not work with current and later Forceware drivers." It appears that the later Forceware drivers check the chipset ID and if the driver sees "Ultra", then SLI is not enabled. MSI decided to kill the "semi-SLI" board because it would be a nightmare supporting a board that would only run with older nVidia SLI drivers."

      So, how will this be (un)supported by the open-source community? Is nVidia doing to chipsets what they did to graphics cards? Everyone remembers how they locked out RGB overlays and unified front+back buffers on the GeForce4 cards; although the chips had the functionality built in, the drivers would disable these features and save them for the more expensive Quadro cards (there were some quick fixes for this on Windows, mainly RivaTuner and SoftQuadro4).
      Does this mean that now they're going to lock out functionality available on the chipset to maximize profit? I can't imagine how (Linux) kernel developers will support a chipset which relies on closed drivers to enable or disable specific functionality, and judging by nVidia's attitude in the graphics card department (which has a point, up to a certain extent nevertheless), I can't imagine nVidia releasing the specs for open-source drivers for this chipset, thereby losing the income from the SLI model, which would become redundant.
      Do we now have to taint the kernel with chipset drivers? If so, I'm out; this is certainly a chipset to avoid.
      • Just want to point out that they're not using the drivers to disable features on the lower chipsets by default. If you read the article you linked to, you have to do a hardware mod to enable the functionality first; they're then disabling support for these hacked chipsets in their drivers. So it sucks, but it's not like they're relying on cheap software hacks to differentiate their products; rather, they're using them to prevent people from removing their cheap hardware hacks.
    • However, since they release drivers (that work great) for Linux (dunno about BSD, I don't use it), how is this a problem? OK, since the NIC is part of the chipset I need to have the drivers before I can get network; no big deal, use a separate machine and a blank CD.

      Where's the problem?
  • Free Drivers (Score:5, Interesting)

    by gustgr ( 695173 ) <(gustgr) (at) (gmail.com)> on Friday January 28, 2005 @12:50AM (#11500780)
    Finally, NVIDIA's SLI has been a hot topic here because, as of yet, we haven't seen Linux drivers that support this hot new feature. When we talked to NVIDIA about this we were finally given a time-line which stated that it may be a couple of months still.

    If the drivers were free software, someone skilled enough would hack in the missing features. Isn't it about time nVidia changed its mind and released the sources?
    • Re:Free Drivers (Score:4, Interesting)

      by Jah-Wren Ryel ( 80510 ) on Friday January 28, 2005 @01:05AM (#11500824)
      If the drivers were free software someone skilled enough would hack the missing features. Isn't about time to nVidia change its mind and release the sources?

      Tell that to David Kirk [extremetech.com], nVidia's chief scientist, whose "sense is that developers on those platforms are quite happy with our efforts" serves as his justification for not going open source. Plus some totally bizarro bullshit about "hackers tak[ing] bad advantage of raw hardware interfaces."

      It is telling that he did not pull out the old, tried and true "competition sensitive" bullshit that so many hardware vendors have been hiding behind since day one.
      • How about the old "we have licensed tech in there that disallows us from opensourcing it" line.

        That one made a bit of sense.
        • How about the old "we have licensed tech in there that disallows us from opensourcing it" line.

          That one made a bit of sense.

          It does if they explicitly state what the "licensed" tech is that's blocking the open-sourcing. If they don't, then it's just BS.

          ---

          Are you a creator or a consumer?

        • Re:Free Drivers (Score:2, Insightful)

          by Vanders ( 110092 )
          That makes a little bit of sense for 3D graphics drivers, maybe. It doesn't make the slightest bit of sense for a Gigabit ethernet controller, a SATA controller or even an audio DSP. These sorts of components are being churned out by different manufacturers across the world, and most of them have freely available documentation.

          Even more bizarre, nVidia contributed gigabit patches to the forcedeth driver. Yet they not only continue to produce their own closed driver, they still refuse to release specs.
      • by Anonymous Coward on Friday January 28, 2005 @01:37AM (#11500928)
        "We use our drivers to cheat on benchmarks, and if we released info for people to write a driver, it would show our hardware's not as good as we pretend."
      • Plus some totally bizarro bullshit about "hackers tak[ing] bad advantage of raw hardware interfaces."

        Actually it's not all that bizarro. DMA allows hardware devices to access memory without the CPU knowing about it. So a malicious user who can get a graphics card (or a NIC or any other DMA device) to manipulate memory through the DMA mechanism could circumvent mechanisms like PaX and other security measures to some degree. And with GPUs getting more accessible to programmers, that risk is increasing. Im

        • I knew exactly what he was referring to, and it is still a dumbass point, for a variety of reasons:

          a) It relies on security through obscurity, so if a dedicated hacker reverse-engineers a vulnerability in the nVidia proprietary driver, it will never get fixed unless the vulnerability is used in a virus or something widespread enough to get noticed by nVidia, versus getting the many-eyeballs effect and possibly nipping it in the bud before an exploit is ever created.

          b) As you point out, applicable to, in lesser
  • by FiberOpPraise ( 607416 ) on Friday January 28, 2005 @01:21AM (#11500878) Homepage
    Just a few years ago, Nvidia was practically unheard of in the motherboard market. They slowly crept in with the release of the nForce/nForce2/nForce3/nForce4 chipsets. Having an integrated video card and chipset is somewhat advantageous, despite the driver troubles that Linux users face. Nvidia is slowly gaining market share in motherboard chipsets, and I see this as a good thing. My nForce systems are working great and so far everything has been smooth. If Nvidia keeps up the great work and frequent updates of their chipsets, I will be a satisfied customer. How do you feel about Nvidia's presence in the motherboard market?
    • However, you forget that Nvidia hasn't actually integrated a GPU in their core logic since the nforce2 chipset. ATI, Intel, SIS and sometimes VIA release IGP solutions for every chipset revision. Perhaps Nvidia found that IGP sales hurt their discrete solutions? </tinfoil hat>
      • by MojoStan ( 776183 ) on Friday January 28, 2005 @04:45AM (#11501517)
        However, you forget that Nvidia hasn't actually integrated a GPU in their core logic since the nforce2 chipset... Perhaps Nvidia found that IGP sales hurt their discrete solutions?

        Perhaps. But I think another possibility is that the nForce3 chipset was not meant for "budget/mainstream" users, but for "enthusiasts." As we all know, enthusiasts don't want integrated graphics that share memory with the system.

        The nForce4 chipset, on the other hand, does look like it's aimed at budget/mainstream users as well as enthusiasts. But with PCI Express and TurboCache [nvidia.com], NVIDIA might have a cheap solution that's better than integrated graphics.

        PCI Express x16 has more bandwidth than AGP (4 GB/s upstream and downstream) and allows writes directly from the GPU to system RAM. This allows a non-integrated graphics card to share memory with the system without the huge performance hit that AGP would have caused.

        Instead of integrated graphics, maybe NVIDIA is planning to "bundle" their cheap TurboCache cards with nForce4 motherboards. That seems cool to me.
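        As a side note, the 4 GB/s figure quoted above falls out of the PCIe 1.x lane arithmetic (a sketch; the 2.5 GT/s signaling rate and 8b/10b encoding are from the PCI Express 1.x spec as I understand it):

```python
# PCIe 1.x: 2.5 GT/s per lane with 8b/10b encoding (8 payload bits
# per 10 bits on the wire), full duplex.
signal_rate = 2.5e9            # transfers/s per lane
payload_fraction = 8 / 10      # 8b/10b encoding overhead
lane_bps = signal_rate * payload_fraction  # usable bits/s per lane
lane_mbytes = lane_bps / 8 / 1e6           # 250 MB/s per lane per direction
x16_gbytes = 16 * lane_mbytes / 1e3        # 4.0 GB/s per direction
# For comparison, AGP 8x tops out around 2.1 GB/s, shared between directions.
print(lane_mbytes, x16_gbytes)
```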

        • Instead of integrated graphics, maybe NVIDIA is planning to "bundle" their cheap TurboCache cards with nForce4 motherboards.

          I concur. It doesn't sound cool to me, but it does sound like progress.
    • by Anonymous Coward
      "How do you feel about Nvidia presence in the motherboard market?"

      I feel a disturbance in the force as suddenly a thousand motherboard makers are snuffed out.*

      *Jokes aside, the problem isn't with the chipset, but with the attitude towards specs and other information that developers need that's being propagated. Today it's Nvidia. Next it could be VIA. Then Intel after them, and so on down the line, to where if you want OSS to run on that hardware, it will not be on our terms, but theirs (DRM, trusted computing).
    • Just a few years ago, Nvidia was practically unheard of in the motherboard market... How do you feel about Nvidia presence in the motherboard market?

      Before NVIDIA entered the chipset market with nForce, I didn't seriously consider buying AMD Athlon CPUs because I thought the previous "consumer" chipsets (VIA, SiS, ALi) sucked ass. Maybe I'm being a little harsh about the pre-nForce Athlon "cheapsets." However, I felt a lot more comfortable using the relatively reliable and robust Intel chipsets, even thou

      • by Anonymous Coward
        Waaaay agreed! I had previously only run Pentiums because the AMD systems I built for other people (pre-nForce) were always flakey (although I have had good luck with VIA/Pentium platforms. Go figure.).

        Then I built an AMD box with an nForce chipset.

        And it rocked!

        And I built another one. And another one. And another one. Some weirdnesses with the nForce chipset aside, the boxes were as stable as any of my Pentium boxes (maybe even more stable). That's including Intel chipsets in Intel boxes.

        So now I'm wa
    • You would find it ironic, then, that the integrated video on the Tyan server board is an ATI Rage.
  • Can I config a dual-P4 machine to run X clients on one CPU, and my X server on the other CPU, with the nVidia machine displaying the server output? That's the kind of Linux multiprocessing I like.
    • by mvdw ( 613057 )
      You don't have to. It all happens transparently: I have a dual-CPU Athlon MP setup at home, and I can confirm that it happens just like that. Each process starts on the processor with the least load.

      • Me too. I'm not moving to dual Opterons until Windows and more applications are 64-bit/multi-threaded. Overall, I've found dual CPUs to be a bit overrated, but I don't use the machine as a server, just for lower-level CAD as the most stressful use. Dual CPUs won't help while digitizing video (i.e., you won't be able to do something else while it happens).
        • That, IMHO, is untrue. Remember that most modern OSes, like Windows NT from 3.1 through Windows Server 2003 and recent incarnations of the Linux kernel, are fully preemptible; ANY user will benefit from the addition of a second CPU or dual-core processor.

          I can state emphatically that with the most demanding graphics application I have ever used, Pro/ENGINEER (CAD/CAM), I was successfully able to run a full regression suite while building the software tools that build Pro/ENGINEER.

          You will definitely be able to di

          • I've only ever used the machine with AutoCAD (overkill, really) but I might be installing PDS and Design Review.

            My experience with digitizing old VHS tape of my son two years ago was that I couldn't touch anything while the process was happening. Perhaps everything wasn't configured properly and now that I think back the problem might have been with converting the avi files to another format.
    • by XMichael ( 563651 )
      Quote: Can I config a dual-P4 machine to run X clients on one CPU, and my X server on the other CPU, with the nVidia machine displaying the server output? That's the kind of Linux multiprocessing I like.

      Of course you can; this isn't an NVidia, Intel or AMD thing, it's a Linux thing. The operating system is responsible for deciding which processor to assign the work to.
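      A minimal sketch of that "Linux thing" (Linux-only; this uses Python's wrappers around sched_getaffinity(2)/sched_setaffinity(2) — normally the kernel balances processes across CPUs by itself, and the affinity mask just restricts which CPUs a process may use):

```python
import os

pid = 0  # 0 means "the calling process"

# Ask the kernel which CPUs this process is currently allowed to run on.
allowed = os.sched_getaffinity(pid)
print("allowed CPUs:", allowed)

# Pin this process to a single CPU, verify, then restore the original mask.
one_cpu = {min(allowed)}
os.sched_setaffinity(pid, one_cpu)
assert os.sched_getaffinity(pid) == one_cpu
os.sched_setaffinity(pid, allowed)
```

      From a shell, the equivalent is `taskset -c 0 <command>`, which is how you would pin the X server by hand, though as the parent says, you rarely need to.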
  • Normal view: http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2327 [anandtech.com]

    All in one page/"print" version: http://www.anandtech.com/printarticle.aspx?i=2327 [anandtech.com]

    Lots of interesting possibilities. Seems to me that given a motivated/visionary motherboard maker, the only real limits are based on the form factor. Is there a super-ATX out there that would allow for, say, 8 PCIe slots, 16+ hard drives, and all the rest of the goodies, all in one case?

    Some will ask if there really is a need for this. Anandtech's Derek Wilson points out that having all the onboard disk controllers could add up to substantial savings; apparently expansion-card controllers are quite pricey.

    Now, if only those Opteron 8XX processors didn't cost $8XX... (or thereabouts... you get the idea!)
  • by jgarzik ( 11218 ) on Friday January 28, 2005 @01:58AM (#11501006) Homepage
    As a for-what-it's-worth from a Linux driver author...

    nvidia SATA status [linux.yyz.us] and other Linux SATA info [linux.yyz.us].

    nvidia wrote the SATA driver that's currently in the Linux kernel, and has generally been helpful in addressing problems that arise in it.

    Although the ethernet driver ("forcedeth") was indeed reverse-engineered, nvidia eventually lent their support behind the effort: they contributed gigabit ethernet support to the driver.

    The video stuff is still closed, of course.

    Jeff
  • by BrookHarty ( 9119 ) on Friday January 28, 2005 @02:09AM (#11501036) Journal
    Dual core support on Tyan's Opteron platforms, is a feature we are very much looking forward to providing to all of our current and future customers. Unfortunately while its not possible at this time to directly comment on whether support will be implemented on the S2885, S2895 or other models from Tyan, customers should be pleased to know we are working to ensure compatibility on platforms going forward.

    Dual cores are such a major upgrade; why buy any SMP motherboard now, when in 2 months it can't support the next generation of SMP CPUs...
    • I think I read somewhere that dual-core CPUs from AMD are supposed to fit in the same sockets.
    • You'd be a brave man to invest in first-generation dual-core hardware...
    • Sounds like they're just covering their arses to me.

      I see no particular reason why they couldn't add dual-core support to their BIOS - the hardware is pin-compatible, their customers are certainly going to want it, and AMD would probably help them ahead of anyone to get it right.

      But, as always, buy hardware for only what it does today, and you'll never be disappointed tomorrow.

    • by drw ( 4614 )
      AMD has gone to great lengths to make sure their dual-core processors work in current Opteron motherboards. The worst case would be that you would need to upgrade your BIOS, but the power requirements for these chips will be under the maximum that AMD has been telling motherboard makers to support.

      The only downside is that they will always be behind in regards to clock speed compared to their single-core processors. I think somewhere in the 2.0 GHz range at initial launch.
  • by setagllib ( 753300 ) on Friday January 28, 2005 @02:09AM (#11501037)
    ...who previously had an nVidia Go 5200FX (or whatever order those tokens are meant to come in), and now a Radeon 9000, I can only say I'd rather have out-of-tree drivers that work perfectly for a good card than half-baked drivers for an average card (where good/bad are measured in usability, not necessarily performance).

    The Radeon under Linux (and I assume anywhere with an XOrg server) is a huge pain. It doesn't manually switch output displays with Fn+F8 like it should, and xv [the direct output mode, not the graphics program] only goes to the lappy panel, never to an external monitor. It might be a really trivial change in the driver source, but in the meantime it's an unnecessary frustration.
    • Maybe it's a matter of your X config or the hardware itself. I use a Compaq notebook with a Radeon 9000/9200 under xorg and radeonfb. The external monitor port just happened to be on when I first tried it. In fact, hitting Fn+F4 doesn't even turn off the signal. (btw, there's also the 'fglrx' driver, but I haven't tried it.)
  • If you check out the Tezro from SGI you'll notice it has 4 PCI-X controller chips to get the throughput high enough for realtime editing of multiple streams of HD 4:4:4. I wonder if any of these configurations can handle that kind of throughput?
  • In a world full of cheap, rushed-to-production "enthusiast" motherboards packed to the gills with all kinds of generic no-frills crap (i.e. cheap onboard sound, useless onboard NICs and two or even three cheap software RAID controllers), it's nice to know that my favorite motherboard manufacturer is still producing quality rock-solid no-nonsense motherboards.

    I've been a big fan since the Pentium II days. Nary a reboot or even a hiccup with these motherboards.

    The only thing that concerns me is the Nvidia chip
    • I don't know what you're talking about. Bundling nice and important features (increasing value) is a good thing. Cheap onboard sound? Most of these motherboards have 8-channel digital sound. A comparable Creative (no thanks!) sound card costs almost as much as the motherboard alone. Useless onboard NICs? I don't know which ones you've tried, but I have yet to see one give me problems, from crappy ECS K7S5A motherboards to nice GBit LAN on Asus boards. They just work. Cheap software RAID? If you want hardware RAI
      • You never actually listened to the "output" of onboard sound, did you? I still like my PCI sound cards for the better sound (and yes, they cost almost as much as the mobo), but if you listen on better speakers, you will hear the difference. And I saw no 16x SATA on these two boards you mentioned. And what should I do with 2 GbE NICs connected via the PCI bus??? This IS crap!
        • Who said ANALOG onboard sound? (does anybody still use that? all S/PDIF here). S/PDIF-wise, unless you count jitter, there is no difference - actually, the nForce chipsets have the lead on this point. Speakers are a non-issue. About the 16x SATA, I guess I shouldn't have trusted what the store I checked mentioned ("ASUS A8N-SLI Deluxe Dual DDR 2x PCI-E, 16x SATA RAID, 2x GB Lan"), but it still has quite a few connections. Which is always nice to have... 2 GbE NICs? They are useful. We have servers with redundant N
          • I too have many of my servers with GbE NICs, but only PCI-X (hooray on my Macs *g*). Tell me how you want to use S/PDIF for multi-channel output. As far as I know, no chipset can do realtime encoding to DTS or Dolby Digital.
            • PCI-E GbE NICs aren't cheap, and we're really just starting to have boards that have the slots. The normal PCI ones I have (at home) still work very well, and it definitely beats having 100BT instead. I'm all for taking expensive hardware and making it a commodity on most motherboards, like they already did with USB2 and Firewire. As for the S/PDIF output, you're wrong. You can either pass AC3 or DTS audio from such a source to its output, or play normal stereo sources as 2.0 - even on $20 cards. The added s
  • Very fast machines (Score:3, Interesting)

    by tinrobot ( 314936 ) on Friday January 28, 2005 @02:20AM (#11501066)
    I have a pre-release Dual Opteron/NForce machine from an unnamed manufacturer sitting right here next to my desk. We haven't finished benchmarking, but so far, it's wicked fast.
  • On one hand, they keep mentioning SLI, SLI, SLI.
    On the other hand, these mobos are server mobos, loaded with stuff I frankly could do without, like SATA2 (IIRC, the fastest hard drives out there are barely 50% of the way to saturating a SATA1 link), Firewire, 8 memory slots and PCI-X.

    What the SLI crowd needs is a simple mobo with a simple feature set: a couple of PCIe x1 slots, the two full x16s, the USB, audio, double GbE & the works as offered by the 2200, and a couple of 939-pin sockets coming from a decent
      What the SLI crowd needs is a simple mobo with a simple feature set: a couple of PCIe x1 slots, the two full x16s, the USB, audio, double GbE & the works as offered by the 2200, and a couple of 939-pin sockets coming from a decent mobo maker like Gigabyte that doesn't charge double for its badge (read: ASUS, TYAN, etc.)

      Please explain. The Opteron uses Socket 940. Newer Athlon 64s use Socket 939. The Athlon 64 won't work in an MP system unless some fancy hack is performed (if at all).

      What exactly are you a
    • Tyan isn't targeting you. Tyan is targeting people who want to build PC workstations. These are people who have uses for multiple gigabytes of RAM (think simulations) or PCI-X slots (SCSI RAID controllers). Tyan isn't really targeting PC gamers here. They're obviously trying to appeal to the market, but Tyan has traditionally gone for the workstation and low-end server crowd (people who do work with their computers) with their Thunder series.

      Asus is generally the maker of boards in your target range.
    • A 2200 chip doesn't have the necessary 32 PCIe lanes, nor does it have dual GbE. Only way to get that is with a 2200 and a 2050 as well, and you won't see that under $200.

      Besides, if SLI gaming is your big thing, two full x16 slots is overkill, and won't affect your framerates more than 1% at best. Two x8 slots will be plenty for anything around or on the horizon.

      Sounds to me like you want one of the existing nForce4 SLI boards from Gigabyte, with a drop-in dual-core Opteron to go with it. All you need,

  • Did anyone else read the summary title as NVIDIA's nForce Professional and Cyan's Worlds? No more Uru for me...
  • Wow?

  • Just check out the work Yinghai Lu has been doing on LinuxBIOS for Tyan boards. He even has it working for nVidia Crush K8s based Tyan boards.
  • The S2885 they talk about is unstable.

    I know; I am running it. It has a lot of problems with AGP. A lot of games crash it. It's not very compatible with RAM (check their website to find "authorized" RAM models), and the drivers that were supposed to fix this made it worse. They don't even uninstall cleanly! Their BIOS is still in beta! This motherboard was released years ago. There is no excuse for this; it's a $500 motherboard
