Sun to Give Niagara Servers to Reviewers 182

abdulzis writes "Sun Micro's Jonathan Schwartz says that Sun is giving away free servers to bloggers who do a good job reviewing their servers. From the blog article: 'if you write a blog that fairly assesses the machine's performance (positively or negatively), send us a pointer, we're likely to let you keep the machine.'" Mr. Schwartz, if you're reading this, feel free to send us one with "Attn: CowboyNeal" on the label.
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Now I know what to do tomorrow.
    • What do you need one for, anyway? You've already gotten FP with what you've got.
    • Well, all the better for you. I don't think CmdrTaco will get one as /. is about as fair and balanced as Fox News.
  • by Anonymous Coward on Thursday February 23, 2006 @11:02PM (#14790250)
    Sun to Give Nigeria Servers to Reviewers
    hello my friend! i am a humble nigerian prince with millions of dollars and have selected you to....
    • by Anonymous Coward
      and this, folks, is why i try not to post anything funny logged in. +3 funny, +0 karma. -1 overrated, -1 karma. being funny just aint worth the risk
    • Sun to Give Nigeria Servers to Reviewers
      hello my friend! i am a humble nigerian prince with millions of dollars and have selected you to...


      It came across to me as "Sun to give Nigerians Servers" and I shrieked in horror at the empowerment Nigerian scammers were about to receive.

      Then I realized that they had just fallen prey to one of those emails.
    • Sun to Give Viagra Servers to Reviewers

      Now I see why the SUN logo is bluish.

  • by raehl ( 609729 ) <raehl311.yahoo@com> on Thursday February 23, 2006 @11:02PM (#14790255) Homepage
    Pinochet used to have this deal for journalists too - if you wrote an article that fairly reviewed the Chilean government, he wouldn't kill you.
  • Comment removed based on user account deletion
  • Server vs PC (Score:3, Interesting)

    by BadAnalogyGuy ( 945258 ) <BadAnalogyGuy@gmail.com> on Thursday February 23, 2006 @11:03PM (#14790262)
    In this day and age of super fast personal computers, what is to differentiate a server from a PC?

    Is it the CPU architecture? That can't be the case because many servers run on plain old x86 motherboards.

    Is it the OS? You could split Windows machines into Windows Server and non-Server versions, but many places stick Linux on as the OS, which blurs the line completely.

    Is it the speed? A decade ago, we were looking at servers which weren't half as fast as our low end PCs today. If it is speed, do we have some magical cutoff which just keeps moving forward?

    So I get a server from Sun. Does that just mean I get a fast computer with a shitty audio and video card? Limited expansion slots?

    I'd rather get a PC.
    • Re:Server vs PC (Score:3, Informative)

      by mmclure ( 26378 )
      The primary differentiator is not CPU power, but I/O bandwidth. Even with SATA drives, PC architectures still don't handle the I/O bandwidth that servers can handle. That's the same reason mainframes are still around - although raw CPU power on a mainframe is not as much as on a server or even a workstation, they can throw data around like nobody's business.
      • Give me a break. Bandwidth? What bandwidth? Network I/O? Throw in as many Gb network cards as you need. Disk I/O? Throw in a relatively cheap RAID card and 5+ drives in RAID 5, and you too can throw around as much data as your system can handle. (Hint, in a PC system, you can easily exceed the CPU data bandwidth capability with well-designed RAID5 systems, even on "economy" PCs) Memory bandwidth? That's a much harder problem, and you only get major gains when going to big iron.

        At this point, servers are wha
        • Re:Server vs PC (Score:3, Informative)

          by _Quinn ( 44979 )
          Yes, bandwidth. Up until the PCIe bus, a pair of GigE ethernet cards saturated a PC's expansion bus. Until AMD built memory controllers into their chips, servers (read: non-x86 UNIX) crushed PCs in memory bandwidth. Until NCQ, SCSI drives crushed IDE drives in effective bandwidth.

          So basically, yes, until very recently, there were very large and substantial bandwidth differences. They've gotten smaller. More important, however, are the "lights-out management" features. If you can't reinstall the OS fro
          • As you just noted, all those bandwidth issues are no longer the case for off-the-shelf PCs. There are some pluses for non-PC machines, but for most uses those pluses become less relevant every couple of months, as the next wave of PC hardware comes out.

            Also, for little cash you can buy some of the same equipment for PCs that is used in specialty servers. And with some knowledge, PCs can be set up to emulate many of the hot-swap features of big iron, and sometimes even exceed them. (It's not trivial, but
    • When done right, servers should have considerably faster I/O and considerably higher reliability than desktops. And yeah, shitty audio and video. Video, if you've got it on a server, is there for administration, not for playing games. And who could hear anything from audio on a box that belongs in a noisy server room?
      • And on a "real" server, you have neither audio nor video :-) Audio is pointless, and Video is replaced by a serial console.

        Serial consoles have always "just worked" and have been a standard means of console connectivity on Suns and most machines in their category. (Heck, even Apple Xserve machines support serial consoles.) In the PC world, it is less common at the BIOS level, but that depends on what sort of PC you buy. My friend's 1U Dell server (which I'm hosting for him) does support serial console s
    • Re:Server vs PC (Score:3, Insightful)

      by elmegil ( 12001 ) *
      I'd rather get a PC.

      I think there's a good reason your name is "BadAnalogyGuy". Can you say "you're not Sun's target market"? There are plenty of bloggers who aren't just some slashdot reader sitting in his parents' basement, but actually use real equipment in real datacenters and they're the ones Jonathan is probably trying to reach out to (can't read his mind after all). By all means, get the tool you need. Server class x86 systems are typically way louder than you'll want to play World of Warcraft o

    • by Fished ( 574624 ) <amphigory@@@gmail...com> on Thursday February 23, 2006 @11:14PM (#14790318)
      Obviously, you've never been a sysadmin in an enterprise environment. First of all, I don't give a shit what kind of audio or video card a server has. In fact, if it's my server, it doesn't even have a monitor or speakers. Instead, it has a serial cable plugged into a terminal server, and that's all. All your fancy video card does is burn power and make heat that I have to spend money to pump out of the rack.

      The difference between a server and a PC is:

      1. A server is designed to serve data, and has nothing on it that I don't need. That means that that damn video card that's not even hooked to a monitor can't break and take my website down with its million-dollars-a-day revenue.
      2. A server is designed to serve data reliably, and has enterprise class components. That means no cheap-ass western digital hard drives. If you don't think there's a difference, you've never used Enterprise hardware.
      3. A server is designed to serve data cheaply. This means low TCO, not low purchase price. Which means an OS that pushes the most bits per cpu, while requiring the least system administrator time. Is Solaris that OS? Debatable, since time has ensured that Apache is highly optimized for Linux. But if you can't run Linux on these yet, you will be able to soon. However, the CPU architecture on these is pretty highly parallel, and Solaris may work better than Linux. Sun is presenting some impressive numbers for these. And they're cheap (as servers go).
      In other words, this may be a good time to buy SUNW, at least if you can grow a beard.
      • Well stated. Funny that the anonymous coward called you names. I didn't think you were being arrogant at all. Since I work on the i/pSeries firmware, it's good to be reminded there are Enterprise-minded people who read this site once in a while. A large percentage of Slashdotters have no clue beyond the "server" they built from spare parts.
        • That server from spare parts is fine for a home system and/or a departmental server (that is _not_ for mission crit data). For example I support a development lab and we have a set of SPAs that have some weird requirements for data transport (it's that or floppy) so they sit on an isolated network with an IBM workstation (running SOLinux and SAMBA). Technically this is acting as a server, though I stress that all files are deleted in 24 hours (they are really just moved to a folder, and aged out slowly, b
      • by King_TJ ( 85913 ) on Friday February 24, 2006 @12:41AM (#14790662) Journal
        Yeah, I basically agree - although what are these "enterprise class" hard drives you refer to? Last time I checked, companies like Sun were charging outrageous prices for hard drives that were just your run-of-the-mill Seagate SCSIs in proprietary hot-swap trays.

        Sure, you wouldn't build an "enterprise server" with SATA just yet, but I'd say some form of SATA2 (or who knows, maybe SATA3?) will be the future replacement for SCSI. The hard drive makers are consolidating and, IMHO, will soon reach a point where everything is either "budget priced" (e.g. junk, suitable for PC resellers to use in low-cost systems for consumers and SOHO settings), or "better quality", which is used for everything from the largest enterprise systems to hobbyist PCs built with performance and quality parts in mind.

        Right now, you pay a ridiculous premium for all things SCSI, simply because it's a dying standard, only used and respected by those building large servers for people with deep enough pockets to pay the prices without question. SCSI has disadvantages though, including the difficulty in making the high-density cables and connectors. (Ever try crimping a connector onto a SCA-80 cable, for example?)

        The drives themselves tend to be built from pretty much the same parts as their SATA counterparts, lately. They can just stick a different type of controller board on the bottom and call it SATA vs. SCSI. We're no longer in the era where companies like Micropolis and Fujitsu built obviously better-constructed and better warrantied drives intended for server use only.
        • Micropolis

          Now there is a name I haven't heard in a while. I still remember the 700MB ESDI tank that was my second server's drive. (First one had three 160 MB WrenIII ESDIs).
          ESDI, now that was performance.
          -nB
        • Right now, you pay a ridiculous premium for all things SCSI, simply because it's a dying standard

          WRONG

          The main cost driver for SCSI/Fiber drives is testing.

          The drives themselves tend to be built from pretty much the same parts as their SATA counterparts, lately. They can just stick a different type of controller board on the bottom and call it SATA vs. SCSI

          WRONG

          Before leaving the factory, the platters on every single enterprise class drive receive extensive testing. That is why SCSIs still have a 5 year war

          • by nacturation ( 646836 ) <nacturation&gmail,com> on Friday February 24, 2006 @02:03AM (#14790928) Journal
            Before leaving the factory, the platters on every single enterprise class drive receive extensive testing. That is why SCSIs still have a 5 year warranty from Seagate, because every single drive has been tested and meets certain criteria.

            In case you weren't aware, Seagate's SATA drives also come with 5 year replacement warranties.
             
            • by msobkow ( 48369 ) on Friday February 24, 2006 @08:05AM (#14791908) Homepage Journal

              I'm not aware of any manufacturer outside the milspec arena that guarantees to test every component individually.

              Modern manufacturing is statistical. You test n components out of each lot of 1000. If more than m fail, the lot is "rejected". In the case of high-cost manufacturing, the "rejected" lot will be individually tested so any good pieces can be salvaged.

              If you want tested components, the "grey" refurb/retest units are the ones that have actually been tested. Those which "passed" the lot sampling were not individually tested.

              Warranties are also purely statistical. They don't guarantee the drive will actually last that long, they just provide MTBF numbers, figure 24x7 server operation, and that provides the number of years the drive is expected to survive. You still get occasional failures, hence RAID-5/6 storage servers.
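
              A back-of-the-envelope sketch of that MTBF arithmetic (the figures below are invented for illustration, not any vendor's spec), showing why an array still sees failures even when each drive's quoted MTBF sounds enormous:

              #include <stdio.h>

              int main(void)
              {
                  /* invented figures, purely for illustration */
                  double mtbf_hours     = 1200000.0;     /* quoted MTBF for one drive */
                  double hours_per_year = 24.0 * 365.0;  /* 24x7 server operation     */
                  int    drives         = 12;            /* drives in one RAID shelf  */

                  /* expected failures per year, for one drive and for the whole shelf */
                  double afr_one   = hours_per_year / mtbf_hours;
                  double afr_shelf = afr_one * drives;

                  printf("one drive: %.2f%% chance of failing in a year\n", afr_one * 100.0);
                  printf("12-drive shelf: expect a failure roughly every %.1f years\n", 1.0 / afr_shelf);
                  return 0;
              }

              Multiply that across a data center full of shelves and the occasional failure becomes a routine event, which is the whole argument for RAID-5/6.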

              • Thinking about how the manufacturing and MTBF stats actually work, I think the real difference between enterprise and PC-class systems is that enterprise systems assume everything is going to fail sooner or later, and make allowance for it. PC systems are disposable components with downtime acceptable during replacement.


          • The previous poster said that the drives are pretty much identical except for the controller board. You shouted WRONG, and then said nothing to prove him wrong. Testing the drives does not have anything to do with them being nearly identical.

            And do you have a problem with ignorance? It's just a long word for "I don't know". Perhaps you leaped out of your mother's womb knowing everything, but the rest of us are learning as we go along.
        • There are differences in the hardware of mainframes, Unix/AS400 servers, PC servers and PCs. Most differences surround reliability (redundancy etc.) and parallel processing (multiple CPUs, multiple specialized processors controlling IO, etc.) Here are a couple of examples:
          IO controller cache with error correcting checksum in memory and redundant power supply to ensure zero loss of data short of taking a sledgehammer to the thing.
          Mainframe CPU - parity checking with automatic transaction rollback on e
        • Sure, you wouldn't build an "enterprise server" with SATA just yet, but I'd say some form of SATA2 (or who knows, maybe SATA3?) will be the future replacement for SCSI.

          Yeah, or maybe it'll be Serial Attached SCSI [wikipedia.org].

          OT aside: I tried to comment on this article, because they claim that SAS allows "small 2.5 inch hard drives" using SCSI - but I have owned no less than three laptops which used 2.5 inch SCSI drives. I wonder what the connector looked like, I never opened any of them. One of them was an IB

      • I would like to add to the above post that Sun equipment has more than just a simple serial cable output. Lights out management (Sun's name for out of band management via RJ-45 serial port and Ethernet) is a must have for anyone that does serious enterprise server administration. Console ports allow you to power on and off the machine, and run diagnostics even if the machine is otherwise dead. Sure you can get it for some PC servers, often via an expensive add-on card, but every Sparc machine has this built
        • It's standard on all HP servers these days too. I believe Dell have a RAS console as standard, although I'm not big on Dell servers so I could be wrong.

          This isn't a SPARC thing though, even the Sunfire v20z has it as well and that is Opteron. Of course, technically it was made by Newisys and not Sun. I've not seen the newer Sun Opteron server's LOM yet, but from the specs it looked the same.

          • Parent never claimed it was a SPARC thing, but a Sun thing. IIRC even the old-ass Sun/386 (whatever it was called, I forget) had the same familiar Sun boot loader.
            • GP did claim it was a Sparc thing:

              Sure you can get it for some PC servers, often via an expensive add-on card, but every Sparc machine has this built-in from the desktops to the servers. Until PC servers break from the legacy BIOS, and add features like this as standard equipment they will just be PCs that happen to be running a server OS.

              See but every Sparc machine has this built-in, after saying that only some PC servers have it. That isn't really fair, as I think these days you'll struggle to find a serv

      • Obviously, you've never been a sysadmin in an enterprise environment..

        I couldn't read beyond the parent's first paragraph without being reminded of this video [gamekillers.com].
      • If it's my server, I don't bother with the stupid serial cable - no serious server today ships without some kind of lights-out management. Most even support SSH, remote display, and media emulation for doing recovery/installs remotely.
      • "A server is designed to serve data reliably, and has enterprise class components. That means no cheap-ass western digital hard drives. If you don't think there's a difference, you've never used Enterprise hardware."

        I think Google would argue with you there. They designed their business around not using expensive hardware, but instead the principals of RAID applied across all of their hardware (they believe it's cheaper to have a LOT of less reliable, cheaper systems than a few, super reliable systems).
        • the principals of RAID applied across all of their hardware

          What, so they ground up David Patterson, Garth Gibson and Randy Katz [wikipedia.org] to a puree and smeared them all over their data center? Hmmm, a blood sacrifice, could explain their success, although it does conflict with their official "do no evil" policy....

          Or did you mean they applied the principles of RAID across their enterprise? Now that would make more sense, but it's not what you said.
        • Google has a big, complicated solution. It works, because they can lose nodes without having to care. On the other hand, deciding where data has to live in order to be distributed widely enough both (a) to survive hardware failures and (b) to fulfill demand has got to be one serious bitch of a decision.

          Clusters vs. Megalithic hardware. There's reasons for both. Sounds almost like a commercial:

          Some things you just can't parallelize.
          For everything else, there's Beowulf.

        • Google can afford to lose data. Financial groups in organisations can't.
        • In this specific case, the Sun Niagara servers are high-power, high-throughput machines, better suited to a mega-dollar installation where speed is critical, or to your business where uptime seems to be the more desired feature.

          From what I read, these servers are only $5k. That's not in the super-cheap realm, but it's not expensive either. These servers are also supposed to get a lot more throughput, meaning that maybe the combined processing power of 10 cheap systems in a cluster might be less than 1 Niaga
      • I thought it was "To Serve Man".
    • Re:Server vs PC (Score:2, Insightful)

      by Brandybuck ( 704397 )
      So I get a server from Sun. Does that just mean I get a fast computer with a shitty audio and video card? Limited expansion slots?

      I don't know anything about these Niagara servers, but if they're anything like other Sun servers, here's what you'll get: a power supply that will last longer than two years; a motherboard with a chipset and layout designed for high data throughput; hard drives that are hot-swappable and will handle years of heavy use without crapping out; etc. In short, they're designed for
    • In this day and age of super fast personal computers, what is to differentiate a server from a PC?

      Most of the time: The case.
    • Re: (Score:2, Informative)

      Comment removed based on user account deletion
      • One T2000 server (T1/Niagara based) running at 1 GHz trounces a two-way Xeon server at 2.8 GHz for serving dynamic websites (PHP in this case). This was discussed at OSNews this week. [osnews.com]

        Sun really needs to get the message out about the T1 servers. I'd like to make some money off of my SUNW shares sometime this decade...

      • The advantage you get from the Niagara servers depends very much on the workload you put on them. Remember, this is the first stop on the move into "Throughput Computing", with the Niagara architecture being "network facing", while the subsequent Rock architecture will be "data facing".

        What this means in practice is that Niagara has blistering performance for stuff which is basically integer intensive, but ain't so exciting for anything which is floating point intensive. But for the right workloads, a single
    • - Server mobos usually have 2 or more separate PCI buses (not slots, but 2 channels with dedicated bandwidth). This separates high-traffic subsystems (HDD, NIC) from the rest of the system. A single PCI bus causes a lot of contention in a server environment.
      - Server mobos often don't have AGP but have an integrated cheapo GPU. This saves some wiring which would otherwise need an extra layer on the mobo. Who needs 3D gaming on a goddamn server anyway?
      - Servers are usually MP. They usually have a larger no. of applications/threa
    • If you're an average Joe, a good PC will make a good web/email/file server. But if you're a corporate IT department, and your server runs anything "mission critical," then you get a purpose-built server. Which should have, depending on your specific requirements:

      - Blinkenlights. Not pretty neon-and-blue-LED light shows, but honest-to-god diagnostic lights for disks, NICs, and CPUs. On the front, where you can see them.

      - Hot-swappable everything. Every component, from individual CPUs to the power supplies, s
    • Re:Server vs PC (Score:3, Informative)

      by timeOday ( 582209 )
      In this day and age of super fast personal computers, what is to differentiate a server from a PC?
      The Niagara is about the most specialized server chip around: it can't run a single thread especially fast, but it can run 32 of them concurrently! That makes it a server chip if ever there was one.
    • Re:Server vs PC (Score:5, Informative)

      by ChrisGilliard ( 913445 ) <christopher.gill ... m minus language> on Friday February 24, 2006 @12:49AM (#14790686) Homepage
      So I get a server from Sun. Does that just mean I get a fast computer with a shitty audio and video card? Limited expansion slots?

      Since this particular server is a Niagara Server, it has the Ultrasparc T1 chip [wikipedia.org]. That's the big difference. This chip has 8 cores and each core can run 4 threads at the same time for a total of 32 threads of execution. So, IF you're running a web or application server, you will be able to support a LOT more users than with a single-core or even dual-core processor, for about the same price as a high-end Wintel or Lintel box. Also, this chip uses a fraction of the power that a PC uses. Since servers are always on, this is a big deal for saving money in a data center. The total power consumption is about 70 watts. The Intel chips use more than 100 watts. I don't know about expansion slots or video card actually, but if you care about that on this box, you're missing the point.
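
      As a rough sketch of what 32 hardware threads buy you (illustration only, not Sun's code; the worker loop and counts are made up): spin up a pool of 32 POSIX threads and let each one grind through its own independent stream of requests, which is exactly the kind of embarrassingly parallel server workload the T1 is aimed at.

      #include <pthread.h>
      #include <stdio.h>

      #define NTHREADS 32 /* one software thread per T1 hardware thread */

      /* stand-in for a worker handling its own stream of independent requests */
      static void *serve_requests(void *arg)
      {
          long id = (long)arg;
          long handled = 0;
          for (int i = 0; i < 100000; i++)
              handled++; /* pretend each iteration is one request */
          printf("worker %2ld handled %ld requests\n", id, handled);
          return NULL;
      }

      int main(void)
      {
          pthread_t workers[NTHREADS];
          for (long i = 0; i < NTHREADS; i++)
              pthread_create(&workers[i], NULL, serve_requests, (void *)i);
          for (int i = 0; i < NTHREADS; i++)
              pthread_join(workers[i], NULL);
          return 0;
      }

      On a single- or dual-core box those workers just time-slice; on a chip with 32 hardware threads they can all make progress at once, which is the whole point.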
    • One thing that's simple to understand and obvious just from looking at basic specs is that servers are built to handle a lot of data and a lot of tasks at once. Instead of one or two fast CPU cores they have 4 or 8 (The idea with the Niagara is to consolidate all of these onto a single die, making the whole machine cost considerably less). Memory sizes are around 16-32 GB (servers have used 64-bit alpha/ultrasparc/itanium architectures for years in order to support this). I haven't seen a PC motherboard tha
    • "Is it the speed? A decade ago, we were looking at servers which weren't half as fast as our low end PCs today. If it is speed, do we have some magical cutoff which just keeps moving forward?"

      I'm not sure what you consider a server, but even the low to midrange servers I deal with (including the 10 year old ones) blow away any PC in transaction volume and data throughput. You have no idea how much hardware exists in these servers (mainframe, as400 and unix) aside from the CPU to do all of the tasks requ
    • I will use the classic car analogy for you :)
      PC = Corvette.
      Workstation = Porsche.
      Server = Semi.
      A server has to run forever and move as much data as it can for as little money as it can. Imagine if every time you had to reboot your PC, install a new driver, replace a hard-drive, or replace your power supply it cost you $10,000 a minute. That is the enterprise world that servers live in.
      As far as video cards or sound cards? Servers don't need sound cards and often don't need video cards. You use ssh or some
    • When you first look at a server, you see a PC. I wondered about that too: why are we paying $10,000+ each for x86 IBM servers?

      Look at a nice big server, minus the CPU, RAM and disks. It's cheap. So the basic structure is not TOO much more expensive than a desktop's.

      Next open it up and take a look around. It's designed to keep running. I run Linux and Solaris at home and most of my downtime is caused by either a hardware problem or hardware change. Servers have redundant everything, plenty of space to upgrade and designed
    • A server is a machine that serves more than one user. A Personal Computer is a machine that is generally used by just one person at a time. The types of applications that are better suited to multi-user environments (serving web pages, databases) are different from single-user apps (word processors, web browsers, games). Server machines tend to have hardware that is optimized for multi-user apps. This usually means memory, I/O bandwidth, storage space (if local and not SAN or NAS), and fast integer ope
  • Bold Move (Score:3, Interesting)

    by Comatose51 ( 687974 ) on Thursday February 23, 2006 @11:06PM (#14790276) Homepage
    I'm keeping my eyes on SUNW. I've been eyeing that stock for a long time now. Sun has a lot of valuable assets. Their intellectual assets and knowledge are first class. I think some analysts don't understand the value of it and count Sun out too early. They also have a ton of cash that gives them a lot of time and resources to develop a good long term strategy and take risks like this. It's not as incredible/stupid as it sounds. This shows confidence in their own product. What is $5000 to SUNW? Say they send them to 100 reviewers (probably less since we tend to concentrate on a few popular sites) who basically help them get the word out. Sun losts $5mil. That's a drop in the bucket, less expensive than a Superbowl ad but with more credibility among those who count. Their return will be many times that cost. More importantly, once a relationship with a customer is established, more products will follow. It's getting the foot in the door that's tough. My company is a customer and their reps are very willing to work with you, unlike some other vendors.
    • Sorry bro, $5000 * 100 = $500,000, not 5mil. Valid points though.
    • What is $5000 to SUNW? Say they send them to 100 reviewers (probably less since we tend to concentrate on a few popular sites) who basically help them get the word out. Sun losts $5mil.

      I can see you're using a pentium to do your math.
    • What is $5000 to SUNW? Say they send them to 100 reviewers (probably less since we tend to concentrate on a few popular sites) who basically help them get the word out. Sun losts $5mil.

      I think you need to re-do your math. You really mean "Sun losts $500,000." Better yet, run a grammar check while you're at it... ;-)
  • They could get this participation just by asking for it [in #debian]. I bet what's going on here is Sun is hopping on the wagon of companies that are "reaching out" to The Community, just like Yahoo did recently by handing out those relatively obscure web programming tools. I'm not sure why this is a valuable thing to do.
    • No, they are sending you the hardware on the assumption that you will buy it. If you keep it past the terms of the agreement, you've bought it.

      It's more like the Mafia where they do one little nice thing for you and you're beholden to them for the rest of your life.
  • I wonder what they'll send if someone submits a poor review (not just "negative") - a pre-paid return label and a ticking consolation prize?
  • by account_deleted ( 4530225 ) on Thursday February 23, 2006 @11:16PM (#14790326)
    Comment removed based on user account deletion

    • Tabs were killed by the compression filter

      From mucho experience posting code to the Slashdot filter, you also need to use an " & l t ; " ["left tag escape character" or whatever - oh, and lose the spaces] so that your less-than sign " < " doesn't get mis-interpreted as the beginning of an HTML tag and thereby deleted:

      for (int x = 0; x 5000000; x++)

      becomes

      for (int x = 0; x < 5000000; x++)
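
      If you post code often enough, it's easier to run it through a trivial escaper first. A sketch (mine, not anything Slashdot ships; it only handles the characters the filter mangles):

      #include <stdio.h>

      /* print s with the characters the comment filter eats escaped as HTML entities */
      static void escape_for_post(const char *s)
      {
          for (; *s; s++) {
              switch (*s) {
              case '&': fputs("&amp;", stdout); break;
              case '<': fputs("&lt;", stdout); break;
              case '>': fputs("&gt;", stdout); break;
              default: putchar(*s);
              }
          }
      }

      int main(void)
      {
          escape_for_post("for (int x = 0; x < 5000000; x++)\n");
          return 0;
      }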

  • 1. Give away hardware for free to geeks who live with their parents and blog from the basement
    2. ???
    3. Profit

    If I were a Sun shareholder (which thank god I'm not) I'd be pissed.
    • 1. Have high up person working for Sun talk about giving away hardware for free to geeks...
      2. Get posted on Slashdot
      3. Profit

      If I were a Sun shareholder (which I wish I was) I'd be pleased.
  • by SuperBanana ( 662181 ) on Thursday February 23, 2006 @11:26PM (#14790369)
    Mr. Schwartz, if you're reading this, feel free to send us one with "Attn: CowboyNeal" on the label.

    But you wouldn't be able to run World of Warcraft on it...

    • I doubt they can run WoW on their current servers.

      http://slashdot.org/faq/tech.shtml#te050 [slashdot.org]
      What kind of hardware does Slashdot run on?

      Type I (web server)
      PIII/600 MHz 512K cache
      1 GB RAM
      9.1GB LVD SCSI with hot swap backplane
      Intel EtherExpress Pro (built-in on motherboard)
      Intel EtherExpress 100 adapter

      Type II (kernel NFS with kernel locking)
      Dual PIII/600 MHz
      2 GB RAM
      (2) 9.1GB LVD SCSI with hot swap backplane
      Intel EtherExpress Pro (built-in on motherboard)
      Intel EtherExpress 100 adapter

      Type III (SQL)
      Quad Xeon 550 MHz,
  • if you write a blog that fairly assesses the machine's performance (positively or negatively), send us a pointer

    ok

    SERVER *SUN;
    char HappyNerd[1337];
    HappyNerd = sendToBlog(*SUN);

    ....
    Don't forget the & on the other side :)

    • What you did would not have sent a pointer. If "SERVER" was anything other than an integer-type, this wouldn't even have compiled under C. And if you did send "*SUN" to the "sendToBlog" function, there is no way that function could get the address, since using "&" there would just return an address pointing to the temporary copy on the stack, which is not what was intended.

      What you meant was this:

      SERVER *SUN;
      char HappyNerd[1337];
      HappyNerd = sendToBlog(SUN);

      ....
      void sendToBlog( SERVER *s )
      {
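
      For what it's worth, here's a minimal version that actually compiles, keeping the joke's made-up SERVER type and sendToBlog name (both hypothetical) and passing the pointer itself rather than a dereferenced copy:

      #include <stdio.h>

      typedef struct { const char *model; } SERVER; /* stand-in for whatever a SERVER is */

      /* takes the pointer itself, so the callee sees the caller's object */
      static void sendToBlog(SERVER *s)
      {
          printf("Reviewing: %s\n", s->model);
      }

      int main(void)
      {
          SERVER sun = { "Sun Fire T2000" };
          sendToBlog(&sun); /* pass the address; no temporary copy on the stack */
          return 0;
      }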
  • by mytrip ( 940886 ) on Thursday February 23, 2006 @11:31PM (#14790392) Homepage Journal
    I've been doing AIX admin work for years. IBM has long had a program to let people try out their stuff first, which I thought was very compelling. Most people wound up buying rs/6000 gear because it simply toasted other unix boxes. IBM actually let a dot com I worked for try out a fully loaded M80 ($250,000), 2 B80s and an F80 and we bought the M80 and 2 B80s because their Java implementation and 64bit copper chips toasted Sun at the time and IBM was willing to put their money where their mouth was... Sun has to be very confident that this will generate much-needed positive press and reviews for them.
  • by Anonymous Coward on Thursday February 23, 2006 @11:45PM (#14790451)
    What we have here is an eight-core CPU, running four threads per core. Total power consumption at peak? 80 watts. (CPU only, of course; the system itself will need more -- 220 to 400 watts, depending on the specs.) Clock speed is "only" 1 GHz or so. One floating point unit on the entire chip.

    So for scientific work, or other stuff that's seriously hammering the FPU, it's going to be a dog. Sun has never denied this. You're not going to take weather simulations and throw them on this thing; it'd be a waste of money. But for other applications -- database; web server; maybe financial simulations -- there's a hell of a lot of grunt, for very little power consumption.

    Sun has effectively opened up a new niche. Anything you have written for Sparc before will still run on this thing, but if you can manage to get a good degree of parallelism in your workload, it will positively fly.

    In my opinion (not having seen one of these in action), it's going to be either a massive flop, or a massive win for Sun. My money's on a massive win. They've thought long and hard about common workloads, and have come up with a CPU optimised for those workloads, without too much overhead from making a "general purpose" CPU that can handle anything you throw at it reasonably well. I can't help but wonder how long it will be before we see similar designs out of IBM and Intel.

    The other question I have is: what's the IO on these systems like? Poor IO would cripple it, but again, it depends on your workload. The T1000 has a single expansion slot (PCI-E), but four gigabit ethernet ports; the T2000 has three PCI-E and two PCI-X with four gigabit ethernet. On paper, it looks good; time will tell, though, if the systems live up to the expectations.
    • Don't forget the on-chip encryption - and now you're really flying! Dave Miller [kernel.org] has got Ubuntu Linux [ubuntu.com] running on this thing too.

      Niagara version 2 has taped out and will have 8 floating point units (or so I hear). It should arrive in early 2007,

      The later "Rock" processor offers true SMP capabilities, as a Sparc IV+ replacement for the really big boxes. (But expect a Fujitsu Sparc processor to fill in the gap while we wait for this).

      PS I hold a few SUNW shares

    • They've thought long and hard about common workloads, and have come up with a CPU optimised for those workloads

      In order for this chip to take over the world, it needs to push developers to parallelize their applications more. That's a good possibility, since every chipmaker is moving toward multiple cores, etc., and so developers need to change their ways eventually. If this chip is what Sun says it is, it may give developers that real push into parallel applications.

      In 5 years, it's possible that making ev
      • Niagara isn't designed for a single highly parallelised workload, it's designed for cases where you have hundreds or thousands of autonomous threads which can be horizontally scaled - think Google, or eBay, or Wikimedia. If your current architecture needs dozens of middleware and web servers, Niagara is likely to massively reduce the amount of power needed for this function, here and now. Some of the most striking results have been for web facing middleware running on top of a JVM, as so many of these things do, wit
        • From the processor's perspective, how is a parallelized workload different than autonomous threads? How are they different at all, except for the points in a parallel workload that require communication between the threads?
          • Hmm, maybe I could have been clearer. I see two distinct situations:

            a) a workload which is essentially a single task which has been parallelised
            b) a workload which is a single server handling a whole bunch of different requests

            Now, when the parent posting suggested a need for app developers to parallelise their applications, I would see that as falling into category (a), while the bulk of middleware servers are actually running flat out doing type (b) workloads. The big difference is that for a type (a) wor
      • In 5 years, it's possible that making everything parallel will be a basic principle just like making modular code.

        Uh, no. Parallelism is just a special case of concurrent programming [wikipedia.org], and trust me, that will never be as basic as modular programming. Not that it won't be important, what with cheap multicore systems. But breaking your program down into threads will always be much harder than breaking your program down into modules. You will see more use of compilers and runtimes that handle common multithr

    • Just a quick reply here... I've been beta testing the T2000 for 2 months now, and received our shipment of 13 for production recently (eBay have been buying all that they can get their hands on!). On the slots, there are 2 PCI-X and 2 PCI-E slots. However at the moment 1 of the PCI-X slots is taken up with a SAS disk controller - this controller will be built onto the motherboard in the next hardware update (March to April time), so freeing up the other PCI-X slot. On the benchmarking front, it's pretty i
    • So for scientific work, or other stuff that's seriously hammering the FPU, it's going to be a dog. Sun has never denied this. You're not going to take weather simulations and throw them on this thing; it'd be a waste of money. But for other applications -- database; web server; maybe financial simulations...

      Provided, of course, you don't make the common mistake of using floating point to represent currency values! Of course, if you do that, your numbers will come out all wrong anyway. Makes me wonder abou
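
      The classic illustration of that mistake (just a sketch of the rounding problem, nothing Niagara-specific): add a dime ten times in binary floating point and you don't get exactly a dollar, whereas counting whole cents in an integer is exact.

      #include <stdio.h>

      int main(void)
      {
          /* binary floating point cannot represent 0.10 exactly */
          double total = 0.0;
          for (int i = 0; i < 10; i++)
              total += 0.10;
          printf("ten dimes as doubles: %.17f\n", total); /* not exactly 1.0 */

          /* the usual fix: count whole cents in an integer type */
          long cents = 0;
          for (int i = 0; i < 10; i++)
              cents += 10;
          printf("ten dimes as cents:   %ld\n", cents); /* exactly 100 */
          return 0;
      }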

  • Yeah. Read the comments on the post.

    If they decide not to let you keep it, which the agreement apparently doesn't say they will, you have 5 days to get the unit back to Sun at your own expense.

    It's not unlike the trial magazine subscription where you get the first six months free, but can cancel just by sending the seventh issue back. They say they never got it, and stick you with a year's bill.

    Here you'd have to pay through the nose for insured, confirmation-of-delivery shipping back to Sun.
  • by Anonymous Coward
    https://www.sun.com/secure/servers/coolthreads/tnb/agreements/index.jsp [sun.com]

    Try and Buy Agreements

    Australia, Austria, Belgium, Canada, Denmark, Finland, France, Germany, Netherlands, New Zealand, Spain, Sweden, Switzerland, United Kingdom (Great Britain), United States of America
  • I'll be only too happy to give Sun's customer service a piss-poor review!
    We ordered one of Sun's new Galaxy servers and after nearly 4 months, it still hadn't shown up, with nothing resembling an adequate explanation coming from Sun. I had to chase them to get them to promise new delivery dates that they then failed to follow through on. So we cancelled our order and went and ordered a couple of boxes from a competitor! We wanted a hardware support contract to go along with it, but if they can't
  • Oh, *Niagara*... my bad! :)
  • Free N1_agra click here!
  • Too bad the newer SAS SCSI drives don't have a proper RAID controller yet. The company I work for was just about ready to get one until I found out you need a 10k storage array, worth more than the whole server.
  • by Ancient_Hacker ( 751168 ) on Friday February 24, 2006 @07:01AM (#14791722)
    Ahem, it's not a free server, if you read the fine print.

    You get a LOANER server. At the end of 30 days, you have the option of buying it, or mailing it back, insured, at your expense, or taking the chance they like your bribed-for review. For 99% of the people that read Slashdot, that means you're out 60 bucks. That's a *long* way from getting a free server.

  • Sweet, I'm starting my new tech review blog right away, it's called Niagra For Me, and it's hosted at http://niagraforme.blogspot.com/ [blogspot.com]. Everybody go visit it and leave a comment so it will look popular and get me a Niagra server!
