Power

Time-Shifted Computing Could Slash Data Center Energy Costs By Up To 30% (arstechnica.com) 66

An anonymous reader quotes a report from Ars Technica: Recently, two computer scientists had an idea: if computers use energy to perform calculations, could stored data be a form of stored energy? Why not use computing as a way to store energy? What if information could be a battery, man? As it turns out, the idea isn't as far-fetched as it may sound. The "information battery" concept, fleshed out in a recent paper (PDF), would perform certain computations in advance when power is cheap -- like when the sun is shining or the wind is blowing -- and cache the results for later. The process could help data centers replace up to 30 percent of their energy use with surplus renewable power.

The beauty of the system is that it requires no specialized hardware and imposes very little overhead. "Information Batteries are designed to work with existing data centers," write authors Jennifer Switzer, a doctoral student at UC San Diego, and Barath Raghavan, an assistant professor at the University of Southern California. "Some very limited processing power is reserved for the IB [information battery] manager, which manages the scheduling of both real-time computational tasks and precomputation. A cluster of machines or VMs is designated for precomputation. The IB cache, which stores the results of these precomputations, is kept local for quick retrieval. No additional infrastructure is needed."

In the model Switzer and Raghavan created to test the concept, the IB manager queried grid operators every five minutes -- the smallest time interval the operators offered -- to check the price of power to inform its predictions. When prices dipped below a set threshold, the manager green-lit a batch of computations and cached them for later. The system was pretty effective at reducing the need for expensive "grid power," as the authors call it, even when the pre-computation engine did a relatively poor job of predicting which tasks would be needed in the near future. At just 30 percent accuracy, the manager could begin to make the most of the so-called "opportunity power" that is created when there is excess wind or solar power. In a typical large data center, workloads can be predicted around 90 minutes in advance with about 90 percent accuracy, the authors write. With a more conservative prediction window of 60 minutes, "such a data center could store 150 MWh, significantly more than most grid-scale battery-based storage projects," they say. An equivalent grid-scale battery would cost around $50 million, they note.
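In outline, the scheme described above is a control loop: poll the grid price on a fixed interval, precompute and cache predicted work whenever the price dips below a threshold, and serve later requests from the cache when the prediction was right. A minimal sketch of that loop follows; the function names, prices, and task predictor are illustrative stand-ins, not the authors' implementation.

    import random
    import time

    PRICE_THRESHOLD = 0.03   # $/kWh below which power is treated as "opportunity power"
    POLL_INTERVAL_S = 300    # the paper's model queried grid operators every 5 minutes

    ib_cache = {}            # task key -> precomputed result (the "information battery")

    def grid_price() -> float:
        """Stand-in for a real-time price query to the grid operator."""
        return random.uniform(0.01, 0.08)

    def predict_tasks(n=3):
        """Stand-in for the workload predictor: guess tasks likely to be requested soon."""
        return [f"transcode:video{random.randint(0, 9)}" for _ in range(n)]

    def run_task(key: str) -> str:
        """Stand-in for the actual computation (transcoding, model training, etc.)."""
        return f"result-of-{key}"

    def serve_request(key: str) -> str:
        """At request time, use the cached precomputation if the guess was right."""
        return ib_cache.pop(key, None) or run_task(key)

    def manager_loop(iterations=12):
        for _ in range(iterations):
            if grid_price() < PRICE_THRESHOLD:
                # Cheap (likely renewable-surplus) power: precompute and cache.
                for key in predict_tasks():
                    ib_cache.setdefault(key, run_task(key))
            time.sleep(0)    # a real deployment would sleep POLL_INTERVAL_S here

    if __name__ == "__main__":
        manager_loop()
        print(serve_request("transcode:video3"))

The 30 percent accuracy figure from the summary maps onto predict_tasks() here: even when most guesses are wrong, the hits are served from the cache and the misses simply fall back to on-demand computation.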

  • by Flexagon ( 740643 ) on Wednesday February 09, 2022 @07:12PM (#62254395)
    ... and all of the excitement this "feature" has caused in major CPU architectures? It'll be interesting to see how this plays out.
    • Re: (Score:3, Funny)

      by Tablizer ( 95088 )

      You're just speculating.

    • Comment removed based on user account deletion
    • by AmiMoJo ( 196126 )

      This is nothing like speculative execution. Speculative execution uses otherwise un-used parts of the CPU to execute instructions that might possibly be needed later, in order to improve performance.

      Here they are suggesting that servers might speculatively do work that is likely to be needed, at times when energy is cheap and abundant. The examples they give are machine learning and video transcoding, but they don't give much detail on how these things might be speculated to be needed. If someone uploads a

    • Because you have a hardon against speculative execution and you compare everything in your life to it?

      Like seriously this has more to do with running your pool pump off peak than it does with speculative execution.

    • It does. Instead of trying to save time, it tries to save on energy bills.

  • Invention (Score:5, Insightful)

    by Anonymous Coward on Wednesday February 09, 2022 @07:13PM (#62254399)
    did I hear "nightly batch jobs" ?
    • by Tablizer ( 95088 ) on Wednesday February 09, 2022 @07:34PM (#62254439) Journal

      did I hear "nightly batch jobs" ?

      and let's put them on a new kind of server called a "mainframe"; sounds important and reliable. Further, we'll make a language optimized for biz & CRUD batch jobs; it would be business oriented. I got it: "Common Business Oriented Language": COBOL! Make it English-like so it's easier to learn, or at least convince management it is. And use all-capital key-words to go with our "important and reliable" theme.

      We'll swipe from Grace Hopper's draft language and not give her credit so we don't have to pay her, like all the other ladies we stiffed, including Rosalind Franklin. It's our manly privilege. COBOL will be big and last many many decades, I just feel it!

    • by AmiMoJo ( 196126 )

      It's different to batch jobs. Batch jobs are requested in advance; the user knows that they want the data the day after.

      This paper suggests speculative processing based on guessing what the user is likely to want. I once worked on a system that produced a lot of graphs, which were often not viewed by anyone. The graphs were produced when the user requested the page. The paper suggests that the graphs could be produced when energy is cheap and then stored. On the one hand the server does some unnecessary wor

      • You do realize it's the same concept. Running jobs at certain times.
      • by q_e_t ( 5104099 )
        That is generally different to typical batch job requests in which you'd create a directed acyclic graph, and then set overall cost and time metrics (or cost and urgency) and let the scheduler then work out the best options for running that total workload. The TFS seems to be adding more speculative execution and a much more dynamic view of power costs. It's definitely something that's been discussed for a long time, but not something I recall having seen a practical system for before this, so kudos to th
    • Yes, it's basically that. It's generalised into speculatively pre-calculating things the user might want, as well as the more traditional batch job. But it's much of a muchness really.

      What I can see in this is that instead of writing a Cron time schedule as "17 1 * * *", you'd write something more like "can start after 9pm, must be done by 6am". The underlying scheduler then (from historical measurements) can work out that your batch job will take 3 hours, so it must start at (say) 2.30 at the latest. From

      • by q_e_t ( 5104099 )

        Some cloud providers are offering a measurement of the carbon your cloud account has cost (along with your invoice)

        We did this at a high performance computing site about 15 or more years ago for a while.

    • by q_e_t ( 5104099 )
      This was discussed relative to batch job schedulers about 20 years ago. Unless there is something I am missing from TFS this doesn't seem especially new in this space unless they've written plugins for the popular schedulers to make it easy to implement. Since few high performance computing sites seem to have differential day/night pricing for their electricity feed I don't think anyone has done much work on the plugins, or at least I've never looked for them.
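For what it's worth, the "window instead of a fixed cron time" idea a few comments up is straightforward to sketch: given a deadline, a runtime estimate from historical runs, and an hourly price forecast, pick the cheapest start hour that still finishes on time. The forecast and the three-hour runtime below are made-up numbers for illustration, not anything from TFA.

    from datetime import datetime, timedelta

    def cheapest_start(earliest: datetime, deadline: datetime,
                       runtime: timedelta, price_forecast) -> datetime:
        """price_forecast maps hourly start times to $/kWh; pick the cheapest feasible one."""
        latest_start = deadline - runtime
        candidates = [t for t in price_forecast if earliest <= t <= latest_start]
        if not candidates:
            raise ValueError("no feasible start time in the window")
        return min(candidates, key=price_forecast.get)

    # "can start after 9pm, must be done by 6am", and the job historically takes ~3 hours
    earliest = datetime(2022, 2, 9, 21, 0)
    deadline = datetime(2022, 2, 10, 6, 0)
    forecast = {earliest + timedelta(hours=h): price
                for h, price in enumerate([0.06, 0.05, 0.03, 0.02, 0.02, 0.04, 0.05, 0.07, 0.08])}
    print(cheapest_start(earliest, deadline, timedelta(hours=3), forecast))  # midnight wins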
  • by superdave80 ( 1226592 ) on Wednesday February 09, 2022 @07:13PM (#62254401)

    would perform certain computations in advance when power is cheap -- like when the sun is shining or the wind is blowing -- and cache the results for later.

    Wouldn't you be 'storing' the computations, and then waiting for a drop in power prices to perform the computations? Even the article itself gives an example of this:

    computer scientists can queue up the training data and let the information-battery manager decide when to run the training.

    • by Fallen Kell ( 165468 ) on Wednesday February 09, 2022 @07:29PM (#62254433)
      Yeah. I was trying to figure out how in the world we could "predict" future requests from the end user using the computers/data center. I mean, aside from scheduled/cron jobs (which are probably scheduled for other reasons, such as analysis of events that occurred across the day/week/month or collection of logs/data during a time period, and thus can't be run ahead of time because they need data from a period that hasn't happened yet), the only other type of workload that could be "predicted" are things sent to batching/queuing systems.

      The big problem I see with this is that many of those batching/queuing systems don't have the capacity to run everything all at once (hence why jobs are batched/queued). So to get more energy efficiency, you instead shut down processing when costs are high and ramp it up when costs are low, but the result is that end user job throughput now takes even longer, meaning you need more processing capacity to offset this. The other issue is that many of the types of jobs that are batched/queued may run for long periods of time. So the jobs or operating systems need checkpointing, so that a run can be paused and the system shut down when power costs go up, then resumed where it left off once power costs drop back down...

      I see just a ton of headache for the IT department fielding tickets from the users asking why their jobs are taking longer and are not done yet...
      • by hazem ( 472289 )

        Yeah. I was trying to figure out how in the world we could "predict" future requests from the end user using the computers/data center.

        I could see this working well in HPC situations. They already have job queues and schedulers that watch for available processing and memory resources for running jobs. This could likely be easily adapted to include variable energy costs in the scheduling priorities.

        In the corporate work I've done, I think it would be less applicable since most of the jobs that get scheduled have dependencies on previous jobs as well as needing to be run at certain times of day; and most systems are running batches 24x7, w

      • users asking why their jobs are taking longer and are not done yet...

        Obvious solution: Tell them if they want their jobs done sooner they have to pay more.
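The checkpointing concern raised above is essentially a preemption problem: do the work in resumable chunks, persist progress between chunks, and yield whenever the price crosses a threshold. A minimal sketch, with a simulated price feed and an in-memory checkpoint standing in for durable storage:

    import random

    PRICE_THRESHOLD = 0.04            # $/kWh; pause work above this price
    checkpoint = {"step": 0, "partial": 0}

    def grid_price() -> float:
        return random.uniform(0.01, 0.08)   # stand-in for a real price feed

    def run_until_expensive(total_steps=1000, chunk=50):
        """Do work in chunks, checkpointing between chunks; stop if power gets pricey."""
        while checkpoint["step"] < total_steps:
            if grid_price() > PRICE_THRESHOLD:
                return False              # paused; caller retries when power is cheaper
            for _ in range(chunk):        # one resumable unit of work
                checkpoint["partial"] += checkpoint["step"]
                checkpoint["step"] += 1
        return True                       # finished

    while not run_until_expensive():
        pass                              # in reality: sleep until the price drops
    print(checkpoint)

The user-visible cost is exactly the one described above: wall-clock completion time stretches to cover the cheap-power windows, so capacity has to grow if throughput is to stay the same.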

  • I'm not a business genius, but if electricity costs more or less depending on the time, then shouldn't you have been running those cron jobs when it was cheaper? I mean, I know it's highly technical with terms like "time" and "running", but it seems like this was a major oversight.

    • Because then everyone uses the same cron time schedule and the problem is just moved to a different time of day.

      • by mi ( 197448 )

        Because then everyone uses the same cron time schedule and the problem is just moved to a different time of day.

        Some tasks either do not allow a delay at all, or are profitable enough to justify running now.

        For everything else, it makes sense to run when the computations cost less — if you can...

        Reminds me of the joke about a blind golfer invited to play at night: the fees are much lower, and he cannot see anything anyway...

    • by darkain ( 749283 ) on Wednesday February 09, 2022 @07:35PM (#62254441) Homepage

      We already are. They're called AWS Spot Instances.

  • Decades-old idea with a new name. It is interesting that their paper uses the very old example of squares again. Check out here: https://en.wikipedia.org/wiki/... [wikipedia.org]
  • They Wrote a PAPER! which is really what science is about.

  • The ENERGY cost of a calculation is the same. Only the $$ amount is changed. It's not a battery. It's just buying cheaper power. Doesn't mention the additional storage costs of the cached data - ram? SSD? Spinning disk? Spit it into a long copy loop over global fiber and really bung things up? This is stupid.
    • The ENERGY cost of a calculation is the same.

      Yes, but the energy comes from wind instead of coal.

      Only the $$ amount is changed.

      Indeed. Do you think that isn't important?

      It's not a battery. It's just buying cheaper power.

      It has the same end result as a grid-scale battery.

      Doesn't mention the additional storage costs of the cached data

      Perhaps because this is 2022 and storage costs are negligible.

      ram? SSD? Spinning disk?

      The storage time will be in hours, so spinning disk.

      This is stupid.

      It is an old idea with new lipstick, but otherwise a smart technique.

      • It's not a battery. It's just buying cheaper power.

        It has the same end result as a grid-scale battery.

        Not at all. Electricity is fungible. Computations are not. If I store electricity overnight when it's cheap, I can use it for whatever I decide to use it for the next day. If I make computations overnight I have to use *those* computations or just throw them away, I can't decide tomorrow that I should have computed something else.

        This is stupid.

        It is an old idea with new lipstick, but otherwise a sma

  • It knows what you'll need before you need it. And why wouldn't we just do everything faster rather than storing it?

  • by wakeboarder ( 2695839 ) on Wednesday February 09, 2022 @08:12PM (#62254517)

    Asked themselves in the 60s, "what if information could be a battery man?"

  • Why can't we just answer all the questions now and never use computers again? Or use data to bring the future to today?

  • From a grid perspective, this is just demand response. A battery, in contrast, is a production-shifting asset.

  • I've worked in 3D animation.
    We sometimes use remote render farms when we need to meet a deadline and the computers we have locally are insufficient, I'm sure plenty of other fields use cloud-based compute or whatever the current buzzwords are.
    This is just adding an electricity price coefficient to the remote node operation time.
    For animation it probably wouldn't make much of a difference because we're always against the deadline, once the render is complete we have to iterate on it, show it to the client, f

    • by pjt33 ( 739471 )

      This is just adding an electricity price coefficient to the remote node operation time.

      I don't think so. My understanding is that it guesses what scene you'll ask it to render tomorrow, renders it today, and caches it so that if you do ask for it tomorrow it can send it straight back.

      • Specifically for rendering 3D scenes there's no point in adding any guesswork, either a scene is ready or it isn't, and if it's ready it's going to be submitted to the queue and have a priority number.

        For more general computation, you still need some indication of what's going to be coming down the pipeline in the future, which I'd say is pretty equivalent to adding jobs to a queue.

        Anyway, this seems to assume an abundance of hardware that's otherwise sitting idle, which just seems like a problem of resourc

  • And then I shop around for who has done the precalculations to answer it inexpensively. It could be sort of like talking to older relatives where I ask my question and get the answer to the one they have been thinking about. So, knowing who to ask is always beneficial.
  • Any sufficiently large "cloud" does this already. It is called "batch" execution, and some latency-insensitive jobs will be run at "off-peak" time, which highly correlates with "off-peak" energy costs as well; specifically, at night.

    What they seem to be doing is running partial calculations, and pausing if costs go higher. Even this is old news, probably as old as IBM's first mainframe computers. Today any cloud provider will give you "spot" instances, which do more or less the same thing.

    Also,

  • by DrMrLordX ( 559371 ) on Wednesday February 09, 2022 @10:28PM (#62254737)

    How many datacentres have hardware that's active during the day but is idle overnight?

    • by AmiMoJo ( 196126 )

      Most I would imagine. Many servers exist simply to be closer to where the users are, to reduce latency. Content Distribution Networks, caches, web servers, DNS servers, database servers, streaming game servers etc.

      Often they are running in virtual machines too, so the same underlying hardware can run another low priority VM that soaks up unused capacity.

      • by q_e_t ( 5104099 )
        Many of those are selling availability off-peak for other tasks. It's how AWS got started, after all.
    • How many datacentres have hardware that's active during the day but is idle overnight?

      Many. While the software companies which use datacentres typically have high and fairly constant 24-hour load requirements overall, that load varies greatly by location, and as such their workloads are often split up by location to reduce latency. While you're sleeping, largely so is your datacentre.

    • How many datacentres have hardware that's active during the day but is idle overnight?

      My experience has been that most general-purpose datacenters are like that. At my current and previous jobs, we have always seen fluctuations in power consumption, computing capacity and network throughput during business/peak hours. Everyone is hitting servers, services and infrastructure as they do work. This is not limited to software development industries. Customer service centers, claims/insurance departments and hospitals, they all have peak hours during daylight.

      In some cases, it is so pronounced

      • Okay. Given that, how realistic is it to time-shift jobs from peak usage hours to off-peak, given that many of these jobs are time-sensitive/on-demand?

    • Ross, is that you?
  • A processing cluster that runs only half of the time must be twice as big. It might save energy costs, but higher capital costs should not be ignored.
    • Or you buy the capacity when needed. Many Big Pharma companies run simulations on public clouds, often using thousands of instances for only a few days, and then release them all once they're done.

  • This is dumber than Energy Vault and solar freakin' roadways.

    Instead of something expensive and inefficient that could release magic smoke, use pumped energy storage (PES).

    This is someone's invention wet dream.

  • by bradley13 ( 1118935 ) on Thursday February 10, 2022 @01:32AM (#62254885) Homepage

    Predicting loads is easy, but predicting the computations that will make up those loads? Not so much.

    The paper refers to this as speculative execution, which it certainly is. Very speculative. Imagine: most employees start work in an hour - let's predict which database queries they will generate. Your success rate better be high, or you will have just wasted your time.

    • > most employees start work in an hour

      There's your window right there! Precompute their authorized login, you then just send them the auth token, no computation needed!

      • by giuntag ( 833437 )
        > Precompute their authorized login, you then just send them the auth token

        So, something which used to only move between the server's ram, its network card, and the end user's browser is now stored on disk for a non-negligible amount of time. You'd better rethink your whole attack perimeter when doing that change, and ask every single vendor of security libraries you use to rework their threat model. Good luck with that... :-D

        Hash calculations aside, let's try to imagine what cost saving there might be f
    • Predicting loads is easy, but predicting the computations that will make up those loads? Not so much.

      Depends on the nature of the computations, or rather, operations. ETLs, data processing systems and CI pipelines, they all end up showing some sort of performance/requirements characteristics over time. Good NOCs can tell if something is going sideways if they detect that an ETL job begins to use, say, 15% more CPU or see a decrease in response time, even when that same job is running using the same resources and at the same time windows as it has had for the last 12 months.

      With good and consistent teleme

  • of pre-calculated values with enough significance for the application, that I used in programming microcontrollers, just not to waste any time doing the same calculations over and over again.

    It is an interesting concept perhaps on a large scale as well.
  • Isn't this exactly like cloud spot pricing which is used to process workloads at some unknown time?
  • by Virtucon ( 127420 ) on Thursday February 10, 2022 @10:27AM (#62255659)

    For those of us who used early timesharing services, e.g., GE Timesharing Services, this is nothing new. Primetime CRU costs forced a lot of shops to move the processing of big jobs to after-hours when rates were 75% less. Yes, all across 4800 baud modems too.

    • Shh... don't disturb the delicate geniuses that think they've discovered something new, or the lemmings that follow in their wake.
  • ...
    3. Just mine the latest shitcoin at offpeak hours

  • I often see natural gas flaring to the sky at places where it is drilled and pumped - even on the outskirts of major cities where it gets pumped in.

    I never understood why utility companies didn't just build their own crypto mining rigs next to solar fields and windmill farms to fire up at times of overproduction: build a PC that can run slowly or quickly without damage from moment to moment, and process crypto mining for all the electricity in excess of exact need at that moment.

  • Right. Because people don't mind waiting until tomorrow for their web page to load.

    • by q_e_t ( 5104099 )
      If your query is 'show me all of Adele's latest hits' then a cached entry may be sufficient. The options are then whether you calculate it on first request for $0.02 in cost, then cache, or predict it will be requested and that it can be done two hours earlier and cached at a cost of $0.01.
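The trade-off in the comment above reduces to expected cost: precompute and cache only when the probability of the request, times the on-demand price, exceeds the cheap-power price plus whatever it costs to store the cached result. A toy version using the $0.02 / $0.01 figures above, with made-up request probabilities:

    def worth_precomputing(p_requested: float, cost_on_demand: float,
                           cost_precompute: float, cost_storage: float = 0.0) -> bool:
        """Precompute only if the expected on-demand cost exceeds the precompute cost."""
        return p_requested * cost_on_demand > cost_precompute + cost_storage

    print(worth_precomputing(0.9, 0.02, 0.01))   # True: a likely query, precompute it
    print(worth_precomputing(0.3, 0.02, 0.01))   # False: too unlikely to pay off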
  • ROTFL. This reads like nothing more than old-fashioned timesharing, and job scheduling. Next up: maybe JCL wasn't so bad after all.

  • If you (or your system) can schedule jobs so that low-priority work gets benched when there's not enough power available, wouldn't you get more or less the same result by letting your cores run freely at times of cheap power and running fewer cores at limited speed during expensive slots?
