Modular Smartphones Could Be Reused As Computer Clusters
itwbennett writes "The promise of modular smartphones like Google's Project Ara is that buyers will be able to upgrade components at will — and now Finnish company Circular Devices has come up with a use for discarded computing modules, which they're calling Puzzlecluster. Drawings of the Puzzlecluster architecture show a chassis with slots for the reused modules, which can then be interconnected with others to create the cluster. Just one unit could also be used as a desktop computer."
Re: (Score:2)
citations needed.
Re: (Score:2)
The security problem is mostly solved. Or at least it is possible and economically feasible to make breaking in prohibitively hard. The "cheapest bidder" and "Microsoft"/"Adobe"/etc. and "cheapest possible programmer" problems are not. For software to improve to acceptable levels of security, my guess would be that you would need to sack 95% of programmers and 95% of their bosses.
Re: (Score:2)
Re: (Score:2)
From my experience, that is a rather simplistic point of view. Sure, there will be some programmers that are actually only lacking the time to learn about security, but I have seen security being messed up time and again when it was an explicit requirement and the people doing it had "secure" in their job title.
Oh Boy! (Score:2)
Re: (Score:2)
Sorry but a Banana Pi is not a desktop computer. It is a motherboard at best. Most computer users do not have the skills and/or do not want to spend the time to turn a Banana Pi into a functional computer. Most people just want to open a few boxes, plug things together and have it work.
Re: (Score:2)
Do you know any SBC that can drive two HDMI displays independently?
Re: (Score:2)
x86 ones with an Intel or AMD CPU can.
Re: (Score:2)
A bit overkill for what I need (driving two LCDs as poster displays).
Re: (Score:2)
It sure is.
For stationary images, the best option would be a "USB graphics card" (that's what I call a USB-to-HDMI adapter).
If an SBC comes with USB 3.0, then it's even better. Weirdly, my googling turns up cheaper USB 3.0 adapters than 2.0 ones, as if the newer tech is higher volume and therefore cheaper.
But while the hardware is less overkill that way, the cost of an SBC + USB adapter gets close to the cheapest dual-output PC (NUC, new fit-PC). Bigger hurdle: I have no clear idea of the CPU, GPU and OS requirements.
Re: (Score:2)
Those USB/HDMI converters are almost as expensive as a Raspberry Pi.
So, if we forget the dual monitor output, what's out there with SD card access and an HDMI port? Anything with a lower price tag than a Raspberry Pi that's still worth buying?
Re: (Score:2)
A bottom-of-the-barrel 'HDMI stick computer' might be it. You're saving on the HDMI cable at that point. Or even the software built into a low-end TV from an unknown brand: a very recent one might come with a USB port and video playback / music playback / picture gallery functions. I've seen a low-end 32" set like that, as weird as "low end" and "32 inch" belonging in the same sentence may sound.
Re: (Score:3)
That is not a PC. That is an embedded ARM system. And really, there is no problem with the PC industry. The days of growth are over, but that is _not_ a problem and everybody sane expected it. A far smaller PC industry 20 years back managed to have several manufacturers for each component and several models from each, and prices were comparatively lower than today.
Re: Oh Boy! (Score:2)
Check out the HP Stream 7 tablet - $100 for a 7" tablet with full Windows 8.1 and 12 months of Office 365 for the tablet and ANOTHER PC (2 installations). But, oh wait - it comes from a traditional computer manufacturer - will they go under or ride the wave?
Oh please, you act as if they're computers (Score:2)
Next thing you know, you'll tell me that the modern smartphone has more processing power and data storage than all the spacecraft we've sent to other planets combined, and all the computers we built up to the year 2000.
Re: (Score:2)
A modern smartphone can barely compete with a desktop PC from 2000 (CPU-wise anyway; smartphones do have much better GPUs).
Gee, is that all? I remember doing quite a lot on my desktop in 2000.
I wouldn't be surprised at all if a 1GHz Pentium 3 could beat a dual core 2GHz ARM CPU. Sure the P3 would be chewing 30W and the ARM only 6.
I'm betting it would depend on which benchmark you were running.
Re: Oh please, you act as if they're computers (Score:2)
Re: (Score:1)
Urr.. Android?
Re: (Score:2)
Oh! Shiny! sorry, I need to go buy something, this online offer is limited duration!
Re: (Score:2)
There are engineering apps. There are fluid dynamic apps [google.com], wind tunnel simulation apps [google.com] and graphic calculator apps [google.com].
In the more practical domain there are function generators [google.com] and oscilloscopes [google.com].
It's just that those aren't downloaded as often as Candy Crush.
Re: (Score:2)
Specific use cases that would be classed as mundane for a ten-million-dollar rack of blades are easily done with a box of donated mobile phone parts otherwise destined for the landfill. I've been saying this for a couple of days now; you're the first one to actually post something in agreement with what I've said.
Sounds like something out of Rucker's sci-fi (Score:2)
Rudy Rucker has some pretty crazy stories [rudyrucker.com] that are always a blast to read (even though, or because, you wonder what he was smoking when he wrote them).
One of those stories, Hormiga Canyon [rudyrucker.com], has his protagonist build a computer cluster out of old cell phones, even using the phone's built-in voice recognition to control the cluster.
Does that count as Prior Art? :)
pointless (Score:5, Insightful)
Clusters of underpowered processors are not nearly as useful as a single powerful processor.
Depends on use (Score:2)
That rather depends on the use you're putting them towards, doesn't it?
Cell phone processors might tend to be slow, but they're rather power efficient per operation. Always good in a data center, especially if the single powerful processor gets a lot fewer operations per watt.
I can see it being useful for highly parallelized tasks. Google searches, serving HTML pages and even video streams, re-compressing audio/video streams*, etc...
Re: (Score:2)
Cell phone processors might tend to be slow, but they're rather power efficient per operation. Always good in a data center, especially if the single powerful processor gets a lot fewer operations per watt.
So what's the answer? Can networking a bunch of these low-power cell phone CPUs together (along with their supporting components) end up producing more (useful) operations per watt than a new and beefy CPU? I bet the answer is no, and that was (part of) itzly's point.
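You can put rough numbers on that question. The figures in this back-of-the-envelope sketch are purely illustrative assumptions, not benchmarks of any particular chip, but they show why the per-node overhead (RAM, interconnect, power conversion) tends to erase the SoC's on-paper efficiency advantage:

```python
# Back-of-the-envelope GFLOPS-per-watt comparison.
# Every figure below is an illustrative assumption, not a benchmark.

PHONE_SOC_GFLOPS = 10.0      # assumed throughput of one salvaged phone module
PHONE_SOC_WATTS = 2.0        # assumed SoC power draw
PHONE_NODE_OVERHEAD_W = 3.0  # RAM, interconnect, power conversion per node (assumed)

DESKTOP_GFLOPS = 200.0       # assumed throughput of one modern desktop CPU
DESKTOP_WATTS = 90.0         # assumed package power

def phone_cluster_gflops_per_watt(nodes: int) -> float:
    """Efficiency of a cluster of salvaged modules, counting per-node overhead."""
    total_gflops = nodes * PHONE_SOC_GFLOPS
    total_watts = nodes * (PHONE_SOC_WATTS + PHONE_NODE_OVERHEAD_W)
    return total_gflops / total_watts

print(f"phone cluster: {phone_cluster_gflops_per_watt(20):.2f} GFLOPS/W")
print(f"desktop CPU:   {DESKTOP_GFLOPS / DESKTOP_WATTS:.2f} GFLOPS/W")
```

With these made-up numbers the desktop CPU still comes out slightly ahead; swap in different assumptions and the answer can flip, which is exactly why "it depends on the workload and the overhead" is the honest summary.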
Re: (Score:3)
Cell phone processors might tend to be slow, but they're rather power efficient per operation
Not particularly. The latest Intel designs are better. Also, the CPU needs big and fast memory, and plenty of I/O bandwidth to do useful stuff. Not typically the stuff you find on a cell phone.
Re: Depends on use (Score:2)
Maybe not your cellphone...
Re: (Score:2)
At how much cost compared to salvaged cell phone CPUs? Secondly, the 'needs' you list rather depends on the task they're being asked to do. There are still lots of tasks out there that aren't particularly CPU dependent.
Oh, now that's an idea: Said CPUs tend to be fairly ruggedized. What if we're talking about micro-servers intended for use in neighborhood locations for whatever function?
As Kenh says, "maybe not your cellphone".
My cell phone is more powerful than the old domain controller at one of my pr
Re:Depends on use (Score:5, Insightful)
unless you're talking about a shop in middle Africa or even Outback, China that proposes to utilise such a system in a mesh network to bring remote communities one step closer to being Facebook zombies.
Hell, for that matter - how much processing power do you need to run a DHCP router?
Or a DVR?
Or a home automation system? Something as simple as an automatic garage door opener?
An RFID reader?
There's a BUNCH of uses for low power/small iron that Big Iron would be utterly WASTED on. The aforementioned is not, by any means, exhaustive.
Re: (Score:1)
unless you're talking about a shop in middle Africa or even Outback, China that proposes to utilise such a system in a mesh network to bring remote communities one step closer to being Facebook zombies.
Hell, for that matter - how much processing power do you need to run a DHCP router? Or a DVR? Or a home automation system? Something as simple as an automatic garage door opener? An RFID reader?
There's a BUNCH of uses for low power/small iron that Big Iron would be utterly WASTED on. The aforementioned is not, by any means, exhaustive.
Keep in mind that modern Android phones have quad-core processors and will continue to grow in performance.
Re: (Score:2)
Yeah, modern Android phones have no chance in hell of keeping up with a dual-core APU in a laptop, though - in benchmarks. They might be faster in specific use cases, like being phones, but database crunching? Hope you've got a good fridge.
Re: (Score:1)
As someone who did some research on these sorts of ARMv7 clustered computers, when I compared GFLOPS/watt to a desktop GPU the AMD GPUs KILLED the ARMv7 system(even when Neon/Adreno/Mali ARM GPUs were factored in) on a $$$/GFLOPS/watt ratio.
Desktop GPUs are just better (unless you need compact). If you want mA standby current: ARM is great. If I've learned anything from Litecoin mining, it's that $$$/GFLOPS is WAY less important than GFLOPS/watt, which is why the fact that used cell phone parts are practically
Re: (Score:1)
That's great, but a cluster of underpowered processors that you have will beat a single powerful processor that you don't have any day of the week.
I suppose... (Score:3)
For all those nice, tractable, problems that behave well on loosely coupled nodes, each individually quite feeble, I guess it'll work; but that certainly doesn't include most of the really obnoxious computational crunching problems.
Re: (Score:3)
Yeah, yeah, yeah. That's all correct.
But you can make a Beowulf cluster out of this.
And that has to count for something.
Re: (Score:2)
But will it run Linux?
Re: (Score:3)
Assuming that the obsolete compute modules are of standard size/pinout (or, more likely, that compute chassis are only produced for phones that ship in sufficiently massive volume to assure a supply of board-donors), this scheme would work; but I have to imagine that a phone SoC would make a pretty dreadful compute node: Aside from being a bit feeble, there would be no reason for the interconnect to be anything but abysmal.
The nice thing about a modular system is that, just as the modules may be discarded from the phones and re-purposed (in this case the idea is to re-purpose them in compute clusters), so may the modules being used in the compute clusters *also* be discarded when better, more powerful processors become available... and re-purposed further down a continual chain until they break.
now, you may think "phone SoC equals useless for compute purposes" this simply is *not true*. you may for example colocat
Wow (Score:5, Funny)
Imagine a Beowulf cluster of these.
Compute per watt (Score:3)
Older gear typically uses more power / FLOP, and is slower, so your time-to-solution takes a hit too.
If we get to the point where the power usage / FLOP for an N+1 device is basically the same as for N, then you might see people do this, so long as they are okay with waiting longer for a result. Until then, don't hold your breath.
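To make the time-to-solution point concrete, here is a small energy-to-solution calculation. All device figures are made up for illustration: the point is only that a slower but nominally "efficient" node can take far longer and still burn more total energy on the same job.

```python
# Energy to solution = power draw x runtime.
# All numbers are hypothetical, chosen only to make the trade-off visible.

JOB_GFLOP = 1_000_000.0  # total work in the job (assumed)

nodes = {
    "old salvaged node": {"gflops": 10.0, "watts": 8.0},    # assumed
    "current node":      {"gflops": 200.0, "watts": 90.0},  # assumed
}

for name, spec in nodes.items():
    seconds = JOB_GFLOP / spec["gflops"]
    kwh = seconds * spec["watts"] / 3.6e6  # joules -> kWh
    print(f"{name}: {seconds / 3600:.1f} h to finish, {kwh:.2f} kWh consumed")
```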
Re: (Score:2)
if compute efficiency were really an issue, we'd all be using RasPis and running RISC OS. As it is, we are all, each and every one of us, using what is available to us "until something we can afford which is in our own mind better comes along". Right now, personally speaking, the most efficient thing for me to do is keep my AMD APU until it burns out and THEN worrying about specifying my next hardware purchase. I'm not about to go buy the latest greatest >1.0-efficient process platform just because it's
Re: (Score:2)
Re: (Score:2)
I get the difference between a 40W brick for a laptop and half a rack of isolators and switchgear delivering 17kW for a five-petabyte cluster; I've dealt with both. I don't imagine for one minute that TFA is talking about competing with Big Iron either in terms of power efficiency or in terms of raw computing power. It's talking about using existing hardware that would otherwise find its way into landfill simply because Johnny Facebook has no further use for it after buying his iPhone 20z, for processes whe
Re: (Score:2)
Heat? We're talking about technology reuse in places such as rural India and middle Africa where daytime temperatures often exceed 45C. I don't think they're going to be overly concerned about something they're (a) not pushing that hard anyway - use cases for these things are going to be about as mundane as GDOs and routers - and that (b) costs them next to nothing to obtain and deploy. If your use case requires investment in fluid-pumped cooling, I would say that your use case also calls for something with a bit
Re: (Score:1)
Until we reach a point where compute per watt stabilizes, it is highly unlikely that anyone would be interested in using old components to build a cluster.
The first clusters WERE 'old components'. Not all of us had budgets that allowed the latest technology. Some of us didn't HAVE a budget, just a roomful of 'old components'.
You could also make display walls (Score:5, Interesting)
I suggested this at IBM Research around 1999, and built a proof-of-concept speech-controlled 3X3 display wall of old ThinkPads otherwise destined for "the crusher". Wow, was my supervisor surprised (to put it mildly) when he got back from a two week vacation, as I had built it when he was away so he could not say "no". :-) Another contractor in the lab described his reactions to me though, and helped calm him down. :-)
A couple regular employees associated with the lab had helped me get the equipment. Every laptop had to be officially tracked with an owner and even locked down to comply with IBM policy, even though they had been discarded/scrubbed and were heading for destruction. Ignoring time costs, the laptop locks were the most expensive part of the project in a sense given pretty much everything else was recycled, and a regular employee coworker got them for me out of his own budget (thanks, David!). Another regular employee helped with the networking aspects and tracking (thanks, Mel!).
The people at IBM who dealt with old equipment were very interested in the idea. Who wants to see usable equipment get scrapped? And there was so much older equipment from such a big company, plus from leases and such. But I guess, within Research itself, the project was not that exciting to people focused on "new" things.
I even wrote up a mock commercial for such display walls with a female executive mother working from home in front of a huge display wall, and her little daughter came by to say hello, and the mom had programmed something fun to show up on the wall for her daughter.
Before we got treadmill workstations, my wife also liked the idea as a way to keep fit -- that you would be walking around all day in front of this display wall you were talking to, rather than sitting in one place and typing.
ThinkPads were interesting in that they could fold flat, so you could layer them on top of each other. However, I also suggested back then that ThinkPads could eventually be designed for reuse in this specific way.
But as just a contractor, and about then hitting the 1.5-year limit for contractors at IBM Research (a rule to prevent them being ruled as employees), the idea sort of fizzled. There were some preliminary negotiations about hiring me as a regular employee, but I probably asked for too much, as I had mixed feelings then about the all-embracing IP agreements that IBM had and similar things (although I really liked the speech group -- great people), and I also hoped even then to get back to the educational and design software my wife and I had been writing. I did go back a couple more times to IBM as a contractor, but it was for other groups unrelated to speech. Anyway, so that idea faded away.
The display wall looked a bit like part of a Jeopardy set, and you would tell it what specific screens you wanted to do what with. Another speech researcher asked me to set it up in a new lab when I was leaving. So I can wonder if, indirectly, the idea floating around sparked something at IBM Research eventually related to Watson and Jeopardy? :-)
My major use case for the wall was to use as a design tool to make complex engineering projects, like a self-replicating space habitat. However, I also tried to get the IBM Legal department interested in using such a speech-activated display wall for reviewing legal documents and tracking cases, with using such systems backed by a supercomputer becoming a perk for IBM lawyers, but also did not get far with that.
I'm now past the expiration of my non-disclosure agreement on such things that I did or learned at IBM Research back then, thankfully! :-)
Anyway, one could probably do much the same with discarded cell phones...
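For what it's worth, the tiling side of such a wall is easy to sketch. Here is a minimal Python example (Pillow assumed installed; the filename and the way each panel fetches its tile are hypothetical) that cuts one large image into a 3x3 grid, one tile per recycled screen:

```python
# Split one large image into a 3x3 grid of tiles, one per recycled display node.
# Minimal sketch: assumes Pillow is installed and "wall_image.png" exists;
# how each node fetches and shows its tile is left out.
from PIL import Image

def make_tiles(path: str, rows: int = 3, cols: int = 3) -> None:
    img = Image.open(path)
    w, h = img.size
    tile_w, tile_h = w // cols, h // rows
    for r in range(rows):
        for c in range(cols):
            box = (c * tile_w, r * tile_h, (c + 1) * tile_w, (r + 1) * tile_h)
            img.crop(box).save(f"tile_{r}_{c}.png")  # panel (r, c) displays this file

make_tiles("wall_image.png")
```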
Book about Watson & Jeopardy (Score:2)
On sparks and credit and muses etc. (Score:2)
Thanks for the pointer! I doubt I'll find my name there. Also, I said the 3X3 display wall panel may have sparked an interest in combining speech research and Jeopardy (perhaps, in an unconscious way?) -- but Watson itself is a much broader system. I wanted to work on such systems then, and talked a bit about "wouldn't it be nice if..." like with a display wall connected to a supercomputer for solving tough problems, but I said nothing detailed as to how it would really work, beyond creating a simple syst
Epic power wastage (Score:3)
The big problem with building a cluster out of anything but bleeding-edge processors is that the flops-per-watt is going to suck so much compared to a new cluster, that you might not save any money over buying that new cluster.
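Putting illustrative numbers on that (all prices and wattages below are assumptions, not quotes): even hardware that costs nothing to acquire carries an electricity bill, and over a few years that bill can exceed the price of a newer, more efficient machine.

```python
# Three-year running-cost comparison: "free" salvaged cluster vs. one new node.
# Every price and power figure is an illustrative assumption.

HOURS = 24 * 365 * 3      # three years of continuous operation
PRICE_PER_KWH = 0.25      # assumed electricity price

def total_cost(purchase_price: float, watts: float) -> float:
    return purchase_price + (watts / 1000) * HOURS * PRICE_PER_KWH

# Assume 25 salvaged modules at 8 W each are needed to match one new 90 W node.
print(f"salvaged cluster: ${total_cost(0, 25 * 8):.0f}")
print(f"new node:         ${total_cost(600, 90):.0f}")
```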
imagine... (Score:1, Funny)
Been tried (Score:1)
This is the same concept as dropping a bunch of nearly obsolete IDE and SATA drives in a NAS for cheap storage. It just does not scale. The cost of the "glue" (NAS) tying it all together is more expensive than the cost of a new large capacity hard disk. The cost of the dock for the processors and displays is more than the cost of a decent motherboard, processor, and UPS.
The only way this would work would be if we had a single standard interface that lasted for more than a decade, and the "glue" devices got
Re: (Score:2)
depends, what's your data worth to you sitting on a drive that's not spinning while you wait for your shiny new WD Red?
(I use "obsolete" commodity components in building my NAS gear. My current one uses a 2-port SATA riser on a Via Eden board, mounted in a Shuttle XPC case (equipped with a 100W PSU) and running a LAMP docuwiki headless. Total hardware cost: £12 for the SATA riser, plus about £200 for the mainboard and case back in 2006. Just because it's obsolete doesn't mean I should si
Because this is so completely unlike... (Score:4, Insightful)
...the Raspberry Pi board, you know, that $30-35 "PC" that only needs a keyboard, mouse, SD card, TV, case and power supply to be usable as a desktop...
What makes this project not cost-efficient, IMHO, is going to be the collection and testing of recycled modules.
Obligatory Beowulf Comment (Score:1)
Oh boy. I can't wait to deal with the overhead of distributed memory on underpowered, outdated processors. Such excitement.