
Modular Smartphones Could Be Reused As Computer Clusters

itwbennett writes "The promise of modular smartphones like Google's Project Ara is that buyers will be able to upgrade components at will — and now Finnish company Circular Devices has come up with a use for discarded computing modules, which they're calling Puzzlecluster. Drawings of the Puzzlecluster architecture show a chassis with slots for the reused modules, which can then be interconnected with others to create the cluster. Just one unit could also be used as a desktop computer."
This discussion has been archived. No new comments can be posted.

  • Thank God I'm not rich enough to own a bunch of stock in companies that make traditional PCs. I think the desktop and laptop companies are about to crash big time. Mini PCs that sell for under $100 already can take care of most user needs. Check out the Banana Pi as an example.
    • Sorry, but a Banana Pi is not a desktop computer. It is a motherboard at best. Most computer users do not have the skills and/or do not want to spend the time to turn a Banana Pi into a functional computer. Most people just want to open a few boxes, plug things together and have it work.

    • by gweihir ( 88907 )

      That is not a PC. That is an embedded ARM system. And really, there is no problem with the PC industry. The days of growth are over, but that is _not_ a problem and everybody sane expected it. A far smaller PC industry 20 years back managed to have several manufacturers for each component and several models for each, and prices were comparatively lower than today.

    • Check out the HP Stream 7 tablet - $100 for a 7" tablet with full Windows 8.1 and 12 months of Office 365 for the tablet and ANOTHER PC (2 installations). But, oh wait - it comes from a traditional computer manufacturer - will they go under or ride the wave?

  • Next thing you know, you'll tell me that the modern smartphone has more processing power and data storage than all the spacecraft we've sent to other planets combined, and all the computers we built up to the year 2000.

  • Rudy Rucker has some pretty crazy stories [rudyrucker.com] that are always a blast to read (even though, or maybe because, you wonder what he was smoking when he wrote them).

    One of those stories, Hormiga Canyon [rudyrucker.com], has his protagonist build a computer cluster out of old cell phones, even using the phone's built-in voice recognition to control the cluster.

    Does that count as Prior Art? :)

  • pointless (Score:5, Insightful)

    by itzly ( 3699663 ) on Monday January 26, 2015 @02:44PM (#48907345)

    Clusters of underpowered processors are not nearly as useful as a single powerful processor.

    • That rather depends on the use you're putting them towards, doesn't it?

      Cell phone processors might tend to be slow, but they're rather power efficient per operation. Always good in a data center, especially if the single powerful processor gets a lot fewer operations per watt.

      I can see it being useful for highly parallelized tasks. Google searches, serving HTML pages and even video streams, re-compressing audio/video streams*, etc...

      • by unrtst ( 777550 )

        Cell phone processors might tend to be slow, but they're rather power efficient per operation. Always good in a data center, especially if the single powerful processor gets a lot fewer operations per watt.

        So what's the answer? Can networking a bunch of these low-power cell phone CPUs together (along with their supporting components) end up producing more (useful) operations per watt than a new and beefy CPU? I bet the answer is no, and that was (part of) itzly's point.
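The question above comes down to simple arithmetic, which can be sketched as follows. Every figure here is a hypothetical assumption for illustration, not a measurement of any real phone SoC or server CPU:

```python
# Back-of-envelope: useful operations per watt for a cluster of salvaged
# phone modules vs. a single modern server CPU. All numbers are
# illustrative assumptions, not benchmarks.

phone_gflops = 10.0        # assumed peak GFLOPS of one salvaged phone SoC
phone_watts = 2.5          # assumed power draw per module under load
network_efficiency = 0.5   # assumed fraction of peak retained after
                           # interconnect and coordination overhead
glue_watts_per_node = 1.0  # assumed share of chassis/network power per module

server_gflops = 500.0      # assumed throughput of one modern server CPU
server_watts = 150.0       # assumed power draw, CPU plus supporting board

phone_useful_per_watt = (phone_gflops * network_efficiency) / (
    phone_watts + glue_watts_per_node)
server_per_watt = server_gflops / server_watts

print(f"phone cluster: {phone_useful_per_watt:.2f} useful GFLOPS/W")
print(f"server CPU:    {server_per_watt:.2f} GFLOPS/W")
```

Under these assumed figures the server comes out ahead once networking overhead and chassis power are charged to the phone modules; the phone side only wins if its raw efficiency advantage is large enough to cover that overhead.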

      • by itzly ( 3699663 )

        Cell phone processors might tend to be slow, but they're rather power efficient per operation

        Not particularly. The latest Intel designs are better. Also, the CPU needs big and fast memory, and plenty of I/O bandwidth to do useful stuff. Not typically the stuff you find on a cell phone.

        • Also, the CPU needs big and fast memory, and plenty of I/O bandwidth to do useful stuff. Not typically the stuff you find on a cell phone.

          Maybe not your cellphone...

          At what cost, compared to salvaged cell phone CPUs? Secondly, the 'needs' you list rather depend on the task they're being asked to do. There are still lots of tasks out there that aren't particularly CPU dependent.

          Oh, now that's an idea: Said CPUs tend to be fairly ruggedized. What if we're talking about micro-servers intended for use in neighborhood locations for whatever function?

          As Kenh says, "maybe not your cellphone".

          My cell phone is more powerful than the old domain controller at one of my pr

    • by Anonymous Coward

      That's great, but a cluster of underpowered processors that you have will beat a single powerful processor that you don't have any day of the week.

  • by fuzzyfuzzyfungus ( 1223518 ) on Monday January 26, 2015 @02:53PM (#48907447) Journal
    Assuming that the obsolete compute modules are of standard size/pinout (or, more likely, that compute chassis are only produced for phones that ship in sufficiently massive volume to assure a supply of board-donors), this scheme would work; but I have to imagine that a phone SoC would make a pretty dreadful compute node: Aside from being a bit feeble, there would be no reason for the interconnect to be anything but abysmal. For efficiency's sake, SoCs tightly integrate all the parts that need to chat at high speed with one another (along with whatever else fits, just to save board space), and only such interfaces as are absolutely necessary are brought out of the package, much less broken out on the board in some sort of civilized connector. In terms of dedicated interfaces you'll have some dubiously appropriate wireless stuff, a USB slave or host/slave interface, and a few GPIOs. The only options with really serious bandwidth or low latency would probably involve creative (and not necessarily possible, depending on the situation) abuse of camera and screen interfaces.

    For all those nice, tractable, problems that behave well on loosely coupled nodes, each individually quite feeble, I guess it'll work; but that certainly doesn't include most of the really obnoxious computational crunching problems.
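The interconnect objection can be put into rough numbers. USB 2.0's 480 Mbit/s signalling rate is real; the achievable-throughput factor and the per-node compute figure below are assumptions for illustration:

```python
# How compute-bound must a workload be before a USB-class interconnect
# stops being the bottleneck? The 480 Mbit/s USB 2.0 signalling rate is
# real; the other figures are assumptions.

usb2_bits_per_s = 480e6
usb2_bytes_per_s = usb2_bits_per_s / 8 * 0.7  # assume ~70% achievable
                                              # after protocol overhead
node_flops = 10e9                             # assumed 10 GFLOPS per phone SoC

# Floating-point operations the node can perform per byte it can
# exchange with its neighbours over the link:
flops_per_byte = node_flops / usb2_bytes_per_s
print(f"{flops_per_byte:.0f} FLOPs per byte of interconnect traffic")
```

Under these assumptions a workload must do on the order of hundreds of arithmetic operations per byte communicated to avoid stalling on the link, which is fine for embarrassingly parallel jobs and hopeless for tightly coupled solvers.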
    • Yeah, yeah, yeah. That's all correct.

      But you can make a Beowulf cluster out of this.

      And that has to count for something.

    • by lkcl ( 517947 )

      Assuming that the obsolete compute modules are of standard size/pinout (or, more likely, that compute chassis are only produced for phones that ship in sufficiently massive volume to assure a supply of board-donors), this scheme would work; but I have to imagine that a phone SoC would make a pretty dreadful compute node: Aside from being a bit feeble, there would be no reason for the interconnect to be anything but abysmal.

      the nice thing about a modular system is that just as the modules may be discarded from the phones and re-purposed (in this case the idea is to re-purpose them in compute clusters), so may the modules being used in the compute clusters *also* be discarded when better, more powerful processors become available... and re-purposed further once again down a continual chain until they break.

      now, you may think "phone SoC equals useless for compute purposes", but this simply is *not true*. you may for example colocat

  • Wow (Score:5, Funny)

    by grimmjeeper ( 2301232 ) on Monday January 26, 2015 @02:58PM (#48907481) Homepage

    Imagine a Beowulf cluster of these.

  • by itamblyn ( 867415 ) on Monday January 26, 2015 @03:08PM (#48907565) Homepage
    Until we reach a point where compute per watt stabilizes, it is highly unlikely that anyone would be interested in using old components to build a cluster. The fact that the parts would all be slightly different would be a headache too.
    Older gear typically uses more power / FLOP, and is slower, so your time-to-solution takes a hit too.
    If we get to the point where the power usage / FLOP for an N+1 device is basically the same as N, then you might see people do this, so long as they are okay with waiting longer for a result. Until then, don't hold your breath.
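The compute-per-watt argument above can be made concrete with a toy energy-to-solution model. The job size and both sets of hardware figures are made-up assumptions for the sake of the sketch:

```python
# Toy energy-to-solution model: an older, less efficient device takes
# both longer and more total energy to finish the same job. All figures
# are illustrative assumptions.

job_flops = 1e15  # assumed total work in the job (1 PFLOP)

def energy_to_solution(gflops, watts):
    """Return (seconds, joules) to finish the job on one device."""
    seconds = job_flops / (gflops * 1e9)
    return seconds, seconds * watts

old_time, old_joules = energy_to_solution(gflops=10.0, watts=5.0)     # salvaged module
new_time, new_joules = energy_to_solution(gflops=500.0, watts=100.0)  # current CPU

print(f"old: {old_time / 3600:.1f} h, {old_joules / 1e6:.2f} MJ")
print(f"new: {new_time / 3600:.1f} h, {new_joules / 1e6:.2f} MJ")
```

With these assumed numbers the new device wins on both time-to-solution and total joules, which is the sense in which old gear stays unattractive until efficiency per FLOP plateaus between hardware generations.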
    • by ihtoit ( 3393327 )

      if compute efficiency were really an issue, we'd all be using RasPis and running RISC OS. As it is, we are all, each and every one of us, using what is available to us "until something we can afford which is in our own mind better comes along". Right now, personally speaking, the most efficient thing for me to do is keep my AMD APU until it burns out and THEN worry about specifying my next hardware purchase. I'm not about to go buy the latest greatest >1.0-efficient process platform just because it's

      • Compute efficiency between new and old machines (say 4 years apart) is a big deal in the power and cooling budget of a datacenter. I'm not talking about your personal desktop, I'm talking about big iron. Trust me, power costs matter at that scale.
        • by ihtoit ( 3393327 )

          I get the difference between a 40W brick for a laptop and half a rack of isolators and switchgear delivering 17kW for a five Petabyte cluster, I've dealt with both. I don't imagine for one minute that TFA is talking about competing with Big Iron either in terms of power efficiency or in terms of raw computing power. It's talking about using existing hardware that would otherwise find its way into landfill simply because Johnny Facebook has no further use for it after buying his iPhone 20z, for processes whe

    • Until we reach a point where compute per watt stabilizes, it is highly unlikely that anyone would be interested in using old components to build a cluster.

      The first clusters WERE 'old components'. Not all of us had budgets that allowed the latest technology. Some of us didn't HAVE a budget, just a roomful of 'old components'.

  • by Paul Fernhout ( 109597 ) on Monday January 26, 2015 @03:27PM (#48907705) Homepage

    I suggested this at IBM Research around 1999, and built a proof-of-concept speech-controlled 3X3 display wall of old ThinkPads otherwise destined for "the crusher". Wow, was my supervisor surprised (to put it mildly) when he got back from a two week vacation, as I had built it when he was away so he could not say "no". :-) Another contractor in the lab described his reactions to me though, and helped calm him down. :-)

    A couple of regular employees associated with the lab had helped me get the equipment. Every laptop had to be officially tracked with an owner and even locked down to comply with IBM policy, even though they had been discarded/scrubbed and were heading for destruction. Ignoring time costs, the laptop locks were in a sense the most expensive part of the project, given that pretty much everything else was recycled, and a regular employee coworker got them for me out of his own budget (thanks, David!). Another regular employee helped with the networking aspects and tracking (thanks, Mel!).

    The people at IBM who dealt with old equipment were very interested in the idea. Who wants to see usable equipment get scrapped? And there was so much older equipment from such a big company, plus from leases and such. But I guess, within Research itself, the project was not that exciting to people focused on "new" things.

    I even wrote up a mock commercial for such display walls, with a female executive mother working from home in front of a huge display wall; her little daughter comes by to say hello, and the mom has programmed something fun to show up on the wall for her daughter.

    Before we got treadmill workstations, my wife also liked the idea as a way to keep fit -- that you would be walking around all day in front of this display wall you were talking to, rather than sitting in one place and typing.

    ThinkPads were interesting in that they could fold flat, so you could layer them on top of each other. However, I also suggested back then that ThinkPads could eventually be designed for reuse in this specific way.

    But as just a contractor, and about then hitting the 1.5 year limit for contractors at IBM Research (a rule to prevent them being ruled as employees), the idea sort of fizzled. There were some preliminary negotiations about hiring me as a regular employee, but I probably asked for too much, as I had mixed feelings then about the all-embracing IP agreements that IBM had and similar things (although I really liked the speech group -- great people), and I also had hopes even then to get back to the educational and design software my wife and I had been writing. I did go back a couple more times to IBM as a contractor, but it was for other groups unrelated to speech. Anyway, so that idea faded away.

    The display wall looked a bit like part of a Jeopardy set, and you would tell it what specific screens you wanted to do what with. Another speech researcher asked me to set it up in a new lab when I was leaving. So I can wonder if, indirectly, the idea floating around sparked something at IBM Research eventually related to Watson and Jeopardy? :-)

    My major use case for the wall was as a design tool for complex engineering projects, like a self-replicating space habitat. However, I also tried to get the IBM Legal department interested in using such a speech-activated display wall for reviewing legal documents and tracking cases, with such systems, backed by a supercomputer, becoming a perk for IBM lawyers, but I did not get far with that either.

    I'm now past the expiration of my non-disclosure agreement on such things that I did or learned at IBM Research back then, thankfully! :-)

    Anyway, one could probably do much the same with discarded cell phones...

    • Having worked at IBM Research and wondering if your contribution played a role in them developing Watson... You should check out this book. [amazon.com] I'm reading it now and am enjoying reading about how the team(s) developed all the tech beneath Watson in preparation for the televised match.
      • Thanks for the pointer! I doubt I'll find my name there. Also, I said the 3X3 display wall panel may have sparked an interest in combining speech research and Jeopardy (perhaps, in an unconscious way?) -- but Watson itself is a much broader system. I wanted to work on such systems then, and talked a bit about "wouldn't it be nice if..." like with a display wall connected to a supercomputer for solving tough problems, but I said nothing detailed as to how it would really work, beyond creating a simple syst

  • by GameboyRMH ( 1153867 ) <gameboyrmh@@@gmail...com> on Monday January 26, 2015 @03:35PM (#48907765) Journal

    The big problem with building a cluster out of anything but bleeding-edge processors is that the flops-per-watt is going to suck so much compared to a new cluster that you might not save any money over buying that new cluster.

  • imagine... (Score:1, Funny)

    by spectrum- ( 158197 )
    ...a Beowulf cluster of these! :)
  • by Anonymous Coward

    This is the same concept as dropping a bunch of nearly obsolete IDE and SATA drives in a NAS for cheap storage. It just does not scale. The cost of the "glue" (NAS) tying it all together is more expensive than the cost of a new large capacity hard disk. The cost of the dock for the processors and displays is more than the cost of a decent motherboard, processor, and UPS.

    The only way this would work would be if we had a single standard interface that lasted for more than a decade, and the "glue" devices got

    • by ihtoit ( 3393327 )

      depends, what's your data worth to you sitting on a drive that's not spinning while you wait for your shiny new WD Red?

      (I use "obsolete" commodity components in building my NAS gear. My current one uses a 2-port SATA riser on a Via Eden board, mounted in a Shuttle XPC case (equipped with a 100W PSU) and running LAMP docuwiki headless. Total hardware worth: £12 for the SATA riser, plus about £200 for the mainboard and case back in 2006. Just because it's obsolete doesn't mean I should si

  • by kenh ( 9056 ) on Monday January 26, 2015 @04:51PM (#48908275) Homepage Journal

    ...the Raspberry Pi board, you know, that $30-35 "PC" that now needs a keyboard, mouse, SD card, TV, case and power supply to be usable as a desktop...

    What makes this project not cost-efficient, IMHO, is going to be the collection and testing of recycled modules.

  • by Anonymous Coward

    Oh boy. I can't wait to deal with the overhead of distributed memory on underpowered, outdated processors. Such excitement.

