HP Looks To Improve Power Management Coordination

tringtring writes "Computer World reports on an HP Labs researcher who foretells a future in which power management features will be built into the processor, memory, server, software and cooling systems. Coordination will be paramount. 'What happens if you turn all these elements on at the same time?' the principal research scientist at HP Labs asks. 'How do I make sure that the system doesn't explode?' This future is the vision of Parthasarathy Ranganathan, the man behind the "No Power Struggles" project at Hewlett-Packard. Power management systems will have to operate holistically, without one component conflicting with another, Ranganathan says. Ranganathan is just one of many researchers at the tech industry's biggest labs studying how future data centers will handle increasing demands for processing capability and energy efficiency while simplifying IT."

  • Amen. (Score:5, Insightful)

    by BronsCon ( 927697 ) <social@bronstrup.com> on Sunday March 02, 2008 @09:26PM (#22619292) Journal
    My 10-year-old HP laptop gets 5hr 45min on a freshly charged battery. The one I'm sitting at right now barely gets 2hr. It's about time they got back to where they were.
    • Re:Amen. (Score:5, Insightful)

      by dreamchaser ( 49529 ) on Sunday March 02, 2008 @09:41PM (#22619368) Homepage Journal
      So you have 10 times the computing power (to be conservative) but just over a third the battery life of your old unit. It's called a tradeoff. You can't compare apples to oranges.
      • Re:Amen. (Score:4, Insightful)

        by BronsCon ( 927697 ) <social@bronstrup.com> on Sunday March 02, 2008 @09:56PM (#22619438) Journal
        I also have a battery with twice the volume and a chemistry with 4x the energy density of the old laptop's battery. Not to mention that this battery is much newer and in better overall condition. I have 8x the battery capacity and ~3x the CPU power (1600MHz vs 475MHz). Factor in that the older laptop has an internal floppy drive as well as a DVD drive (the newer laptop lacks the floppy drive) and a hard disk that draws 5 watts more than the one in the newer laptop. I should be seeing nearly thrice the battery life by your logic.

        You made the assumption that I had a 4750MHz CPU, the same peripherals, the same size battery with the same battery chemistry, and that similar peripherals use the same amount of power. You also failed to account for the power management systems present in current laptops, which did not exist 10 years ago. Yet another thing you failed to account for is the supposed increase in efficiency (and decrease in overall power consumption) claimed by PC manufacturers, especially with regard to laptops. You even forgot to account for the age of the battery: 10 years vs. a week-old warranty replacement of a less-than-nine-month-old battery.

        I have a battery with 8x the capacity in a system with less hardware, a supposedly more efficient CPU which is only about 3x faster, components which claim lower power consumption, and overall better power management than my 10-year-old laptop from the same manufacturer. Why am I seeing 1/3 the battery life of the old system rather than the 3x increase logic and mathematics tell me I should be seeing?

        Someone, somewhere, is lying and it's not me.

        Oh, and... first post! :)
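
        (A back-of-the-envelope sketch, in Python, of the runtime arithmetic argued above. Every figure below is either the poster's claim or an assumed illustrative value, not a measurement; the 25 Wh old-pack capacity in particular is made up.)

          # Rough runtime estimate: runtime = battery capacity / average draw.
          old_capacity_wh = 25.0                   # assumed capacity of the old pack (Wh)
          new_capacity_wh = old_capacity_wh * 8    # poster claims ~8x the capacity

          old_runtime_h = 5.75                     # 5hr 45min observed on the old laptop
          old_avg_draw_w = old_capacity_wh / old_runtime_h

          # If the new machine drew ~8/3 the power of the old one, 8x capacity over
          # ~2.7x draw would still leave roughly 3x the runtime.
          expected_new_runtime_h = new_capacity_wh / (old_avg_draw_w * 8 / 3)

          observed_new_runtime_h = 2.0
          implied_new_draw_w = new_capacity_wh / observed_new_runtime_h

          print(f"old average draw:     {old_avg_draw_w:.1f} W")
          print(f"expected new runtime: {expected_new_runtime_h:.1f} h (if draw grew ~2.7x)")
          print(f"implied new draw:     {implied_new_draw_w:.1f} W (from the observed 2 h)")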
        • The clock speed doesn't matter unless you're comparing the same architectures, which they are not. The performance differential is not what you said.
        • by rm999 ( 775449 )
          "You made the assumption that I had a 4750Mhz CPU"

          You made the assumption that to be 10x faster, a CPU needs to run at 10x the clock speed. The top Intel CPU on this chart is 3200 MHz, and is 10x faster than the bottom one (2800 MHz).

          Remember - Moore's law is about transistor density, not transistor speed.
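
          (A trivial illustration of the point: performance scales roughly with instructions-per-clock times clock, so a modest clock bump plus a large IPC gain can still land at ~10x. The IPC values below are invented for illustration.)

            # Performance ~ IPC * clock. IPC values here are made up.
            old_clock_mhz, old_ipc = 2800, 1.0     # hypothetical "bottom of the chart" CPU
            new_clock_mhz, new_ipc = 3200, 8.75    # hypothetical top CPU with far higher IPC

            speedup = (new_clock_mhz * new_ipc) / (old_clock_mhz * old_ipc)
            print(f"clock ratio {new_clock_mhz / old_clock_mhz:.2f}x, overall speedup {speedup:.0f}x")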
        • by hakey ( 1227664 )
          If you are really interested in why, then this classic paper on chip scaling (S. Borkar, "Design Challenges of Technology Scaling," IEEE Micro, pages 23-29, July 1999 [berkeley.edu]) explains why today's chips consume so much power.
        • Why am I seeing 1/3 the battery life of the old system rather than the 3x increase logic and mathematics tell me I should be seeing?
          It's the processor: you have to power that huge pipeline. It's also the motherboard. And for a quad core, heck, that's 95W.
        • by maxume ( 22995 )
          By volume, lithium-ion batteries only have 1-2x the energy density of NiMH:

          http://en.wikipedia.org/wiki/Rechargeable_battery#Battery_types [wikipedia.org]

          (and closer to 3x by mass). It would be better to compare the stated capacities rather than your assumptions.

          I imagine the newer screen is also faster and brighter, both of which increase power draw (LED backlights improve brightness per watt, though, so if one of those is involved...). So you aren't lying, but you aren't being very careful.
      • No, but we can (and will) compare watts to power!
    • Re: (Score:1, Troll)

      by casualsax3 ( 875131 )
      Where is that, a 33MHz chip and a drive that spins at 500 RPM? All for TWICE the battery life? No thanks.
      • Re: (Score:3, Insightful)

        by Anonymous Coward
        The real question is: what the hell is software doing with all these resources? Why is it always on the shoulders of hardware to improve power specs? I have an idea: how about not requiring billions of processor cycles to support the 12 layers of indirection, redirection, abstraction, obfuscation, and 12 megs of NOPs just to change the color of an icon? It is mind-boggling to think about what a modern processor does; I suspect most of it is crud left over from poor software decisions that we must drag around fo
        • There is absolutely nothing that can be done about this now. Software and abstractions are a lost cause.

          In the whole picture, hardware is just another layer of abstraction, built of more interacting layers. But today's hardware comes from orders of magnitude fewer suppliers than software, and is much more tightly controlled and built to spec.

          Another thing: hardware engineers are usually taught in universities. Software "engineers" are usually not.

          • by jfim ( 1167051 )

            Another thing: hardware engineers are usually taught in universities. Software "engineers" are usually not.

            This depends on where you are. In Canada, the title of engineer is protected by law (see Wikipedia [wikipedia.org] or Engineers Canada on MCSEs [engineerscanada.ca]).

            As for abstractions, they allow other things that were simply impossible before. Abstractions allow tuning a design on criteria such as maintainability, extensibility, supportability, etc. Yes, making software more maintainable can reduce performance, but it also reduces t

            • by cgenman ( 325138 )
              Would you rather pay more for software that has fewer features but is faster?

              I'd rather pay more for software that had the same amount of features but fewer years of crufty hack layered upon crufty hack.

              Quite simply, we're talking about Windows here (and maybe Norton). Mac OS 7 did a great job of providing both abstraction and speed in a maintainable environment on a 68030: a chip so slow that you wouldn't notice it if it were working as a co-processor on a modern machine.

              Vista, on the other hand, requires a p
              • by jfim ( 1167051 )

                I'd rather pay more for software that had the same amount of features but fewer years of crufty hack layered upon crufty hack.

                You seem to be downplaying the costs that are incurred when throwing away working code to build new code. Non-trivial code takes a lot of time and effort to build. For example, let's look at Mozilla [wikipedia.org]. The Wikipedia article mentions the decision to scrap the codebase somewhere in 1998. When did the 1.0 version of Mozilla come out? 2002, four years later.

                It clearly is not a viable opt

    • Re: (Score:2, Insightful)

      All they have to do is look at the work being done on the XO by OLPC, because that is exactly what they are doing to get their extra-long battery life.
    • What kind of laptop is it? My girlfriend always used a PowerBook, and got about 2 hours with it... then she had to get a Dell, and was amazed to discover it got 6+ hours on a charge. To which I responded '...yea... your Mac didn't?'
      • My Toshiba gets ~2.5 hours with the backlight dimmed.
      • Both are HP. I thought I made that clear in my original post. I should also state that the CPU is less than 2x as fast as the older laptop when running on battery (it scales back to 800MHz), and I have the backlight set to drop to 40% when running on battery. Wireless is configured to drop from 54Mbps to 24Mbps and to halve its transmit power when running on battery.

        The 10 year old laptop does none of this, everything runs at full power, full brightness, full speed, all the time.
    • Re: (Score:3, Funny)

      by robogun ( 466062 )
      Wait till you plug in an HP All In One printer. You'll get 15 desktop icons and a bunch of Taskbar quick launch icons. With 30 new high priority processes using half your CPU and all your memory, your battery life will drop to minutes, assuming your machine even meets the OS requirements.

      I would not recommend letting HP write power management software.
  • by rde ( 17364 ) on Sunday March 02, 2008 @09:27PM (#22619296)
    "What happens if you turn all these elements on at the same time?" the principal research scientist at HP Labs asks. "How do I make sure that the system doesn't explode?"

    That's certainly a worry for me. The last thing I want when I turn on a "processor, memory, server, software and cooling systems" is for the system to explode. Being a dedicated slashdotter, and therefore Linux user, I have little worry that the software will cause any manner of combustion event, but I'd never really considered the dangers of using a processor and memory at the same time. I was thinking of getting more RAM, but given that I'm already running a dual-core, perhaps I should hold off on the extra gig until I hear from HP.
    • Stealing your thoughts right from mid-stream, I quote: "That's certainly a worry for me. The last thing I want when I turn on a 'processor, memory, server, software and cooling systems' is for the system to explode. Being a dedicated slashdotter, and therefore Linux user". That LINUX user crack is really uncalled for, under these circumstances :)
  • I can see the arguments between brands already:
    "-Your chip is sucking all the power and making mine look bad!
      -No, yours is!"

    I mean, we have enough problems with benchmarking as it is; I can't see how they would make that kind of "coordination" work when not all pieces of the computer are of the same brand. Sure, you can test which component draws the most power, but they can always say the others aren't sending enough info, etc...
    • I don't care about squabbling over who's better, I care about proprietary, not easily replaceable parts. Whatever moron at HP said "hey, let's start putting specialized hardware in our computers" should be fired. It's like Dell's crap that you can't replace with standard parts, so that they can charge 3x the real price for replacement parts bought directly from them. Guess how much a replacement motherboard was for a 6-year-old Dell Dimension? $130! (I bought one used on eBay for $50, though.)
  • by Spazmania ( 174582 ) on Sunday March 02, 2008 @10:17PM (#22619524) Homepage
    Step 1 is user control for turning up the cooling features. If the user determines that the fans should run faster, then the fans should run faster, regardless of what the "holistic" system thinks.

    Seriously, this is the single biggest problem with the current HP DL360. The fans turn down to 30% and the memory overheats. A simple BIOS option to set the minimum fan speed to 60% would solve this.
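
    (A minimal sketch of the kind of policy being asked for here: let the automatic controller do what it likes, but never drop below a user-configured floor. The names, thresholds and ramp below are invented for illustration and are not HP firmware behaviour.)

      # Hypothetical fan policy: automatic control with a user-set minimum duty cycle.
      USER_MIN_DUTY = 60       # percent; the imagined BIOS "minimum fan speed" option
      ABS_MAX_DUTY = 100

      def fan_duty(temp_c: float) -> int:
          """Map a temperature reading to a fan duty cycle, respecting the user floor."""
          if temp_c >= 85:                 # hot: run flat out regardless of anything else
              auto = ABS_MAX_DUTY
          elif temp_c >= 60:               # warm: ramp linearly from 30% toward 100%
              auto = 30 + int((temp_c - 60) / 25 * 70)
          else:                            # cool: the firmware would otherwise idle at 30%
              auto = 30
          return max(auto, USER_MIN_DUTY)

      for t in (45, 70, 90):
          print(t, "C ->", fan_duty(t), "% duty")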
    • As long as there is an override for when you set a max speed and the system gets too hot.
    • Automatic is better (Score:4, Interesting)

      by EmbeddedJanitor ( 597831 ) on Sunday March 02, 2008 @10:34PM (#22619626)
      The crap design you mention is just that: a crap design. It is possible to make a good automatic design.

      How many cars these days have manual chokes, advance/retard, mixture settings, etc.? None. They are all automatic. Give a user a knob and they will fiddle with it and break the system.

      • The downside of not having those "manual" systems is that the user, no matter how well-versed they are, cannot adjust the system to do what they want.

        Yes, your analogy is very valid for the Average Joe in terms of cars, but when a real user needs to make their car or truck do more, they have no way of doing it. If I want to give my truck more gear ratios for better mileage, I just slap on an over/underdrive. Plus, automatic isn't always better, such as is the case with four-wheel drive.

        The very real, AND HO
  • 'What happens if you turn all these elements on at the same time?' the principal research scientist at HP Labs asks. 'How do I make sure that the system doesn't explode?'
    I can't say as I've ever worried about that.
  • Don't make it dependent on ACPI...
  • by zappepcs ( 820751 ) on Sunday March 02, 2008 @10:26PM (#22619578) Journal
    There is something to this. In a data center, if you have a brownout or a full power drop, the strain on the power systems to restore power is what can only be described as epic.

    When you take a 1400-amp backup system and drop it up and down like a yo-yo in a lightning storm, stress tends to bring out the worst of Murphy's Law. If all the components in a data center were orchestrated, that could be mitigated. It could be mitigated to nearly 'not a worry' status.

    Monitors? Low priority in most cases. Redundant supplies? In some cases, bring them up separately. Cooling fans could be delayed by some seconds depending on usage. It may seem like negligible power use, but on startup each system will draw its max current, and when all do at the same instant, the peak draw can be overwhelming. In fact, computers themselves could bring up hardware in an orchestrated manner to reduce the startup surge.

    In addition, by adding power management it's possible to reduce data center power use. If you monitored temperatures and turned off fans when not needed: less power used, less heat generated, less cooling needed overall. If all hardware were built in such a way that, for example, the unused ports on a quad NIC could be powered off after configuration... NICs could be the last thing to be powered up.

    This type of design is practically rocket science. If you look at systems that go into space, you will see that they count every milliamp of current draw and manage it with precision. Power use is a big concern for spacecraft.
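
    (A toy sketch of the orchestration idea: stagger power-on so the combined inrush never exceeds a budget. The component list, inrush currents and budget are invented for illustration; a real controller would also wait on actual sensor feedback.)

      import time

      # Hypothetical inrush currents (amps) a rack controller might sequence.
      boot_order = [
          ("redundant PSU B", 40),
          ("disk shelves", 120),
          ("servers", 300),
          ("cooling fans", 80),
          ("quad NICs", 10),     # brought up last, per the comment (at the cost of WoL)
      ]

      INRUSH_BUDGET_A = 200      # never switch on more than this in a single step

      def staggered_power_on(items, budget):
          batch, load = [], 0
          for name, inrush in items:
              if batch and load + inrush > budget:
                  yield batch            # bring this batch up, let the surge decay
                  batch, load = [], 0
              batch.append(name)
              load += inrush
          if batch:
              yield batch

      for batch in staggered_power_on(boot_order, INRUSH_BUDGET_A):
          print("powering on:", ", ".join(batch))
          time.sleep(0.1)                # stand-in for waiting out the inrush surge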
    • by ClamIAm ( 926466 )
      If all the components in a data center were orchestrated

      Aw, now I want power redundancy systems that play the 1812 Overture as they fight epic brownout conditions. That would be sweet. Although, it would use a bit more power...
      • Believe it or not, I like to have audio indication on many systems. I am at the point now where hearing certain sounds in conjunction with other events lets me know instinctively what is happening. I'm reasonably certain that one or two power outages would have a symphony of things going on with your proposal, and that symphony makes it easier to determine what is happening than reading several thousand kilobytes of log files. I like the idea. Even just knowing when something 'not normal' has happened by audible s
    • Nic cards could be the last thing to be powered up.
      That would break Wake-on-LAN (WoL).
    • In addition to this, by adding power management, it's possible to reduce data center power use also. If you monitored temp and turned off fans when not needed, less power used, less heat generated, less cooling needed overall. If all hardware were built in such a way the hardware on a quad nic card that is not used could be powered off after configuration... as an example. Nic cards could be the last thing to be powered up.

      If heat and energy usage were that much of a problem then the laws of capitalism dic

  • Dropping the AC-to-DC PSU in each system and replacing it with a DC-to-DC one will drop heat and power use.
    • by Detritus ( 11846 )
      How so? You've replaced the AC input to the power supply with a DC input, at a lower voltage and with increased transmission losses. Whatever the input, it gets converted into AC in the voltage regulator.
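
      (Rough numbers behind that objection: resistive loss in the feed cabling is I²R, so delivering the same wattage at a much lower voltage means much higher current and much higher loss. The load and cable resistance below are illustrative assumptions.)

        # I^2 * R cabling loss for the same 500 W load delivered at different voltages.
        load_w = 500.0
        cable_r_ohm = 0.05           # assumed round-trip resistance of the feed cabling

        for volts in (230.0, 48.0, 12.0):
            current = load_w / volts
            loss = current ** 2 * cable_r_ohm
            print(f"{volts:5.0f} V feed: {current:5.1f} A, ~{loss:5.1f} W lost in the cable")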
  • How do I make sure that the system doesn't explode?
    Don't connect the detonating charge, unless of course you just can't handle another reboot...
    • Don't connect the detonating charge...
      Or in this case, don't complete the electrical circuit if you have a Sony battery.
  • by ejoe_mac ( 560743 ) on Sunday March 02, 2008 @11:13PM (#22619800)
    So when you're purchasing power from the grid and you're metered not on use but on peak draw, this will save you a LOT of money. Coordinating the power-on of a number of systems that draw a lot at startup versus their normal draw (think turning on 100 laser printers all at the same time!) keeps that peak down.
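
    (Back-of-the-envelope illustration of the demand-charge point; the printer wattages and the tariff below are made up.)

      # Peak-demand arithmetic for 100 laser printers, illustrative numbers only.
      printers = 100
      startup_w, idle_w = 900.0, 30.0       # assumed fuser warm-up draw vs. idle draw
      demand_charge_per_kw = 12.0           # assumed $/kW of monthly peak

      all_at_once_kw = printers * startup_w / 1000
      staggered_kw = (startup_w + (printers - 1) * idle_w) / 1000   # one warming up at a time

      print(f"all at once: {all_at_once_kw:.1f} kW peak -> "
            f"${all_at_once_kw * demand_charge_per_kw:,.0f}/month demand charge")
      print(f"staggered:   {staggered_kw:.1f} kW peak -> "
            f"${staggered_kw * demand_charge_per_kw:,.0f}/month demand charge")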
  • You do not want to spin up a bunch of motors all at once if you can avoid it.
  • Someone is finally dealing with the mess that is power cord management.

    Oh wait... nevermind.
  • Improving power management in the hardware is a good idea, but the real problem is probably simpler. Currently, PCs use a power management protocol that doesn't seem to be easy to understand and is, in certain cases, just badly implemented. It really gets on my nerves when I buy a new motherboard and there is no way to get the system to go to sleep. I am not sure whether to blame this on Windows, the hardware, or a bad specification.

    Can anyone tell me whether EFI (the replacement for BIOS) provides a better way of talking with the hardware for power management needs?
    • Can anyone tell me whether EFI (replacement of BIOS), provides a better way of talking with the hardware for power management needs?
      I think EFI still uses ACPI for power management, so it's the same old fail.
