# 'Approximate Computing' Saves Energy

#### Soulskill posted 1 year,2 days | from the 1+1=3-for-sufficiently-large-values-of-1 dept.

hessian writes *"According to a news release from Purdue University, 'Researchers are developing computers capable of "approximate computing" to perform calculations good enough for certain tasks that don't require perfect accuracy, potentially doubling efficiency and reducing energy consumption. "The need for approximate computing is driven by two factors: a fundamental shift in the nature of computing workloads, and the need for new sources of efficiency," said Anand Raghunathan, a Purdue Professor of Electrical and Computer Engineering, who has been working in the field for about five years. "Computers were first designed to be precise calculators that solved problems where they were expected to produce an exact numerical value. However, the demand for computing today is driven by very different applications. Mobile and embedded devices need to process richer media, and are getting smarter – understanding us, being more context-aware and having more natural user interfaces. ... The nature of these computations is different from the traditional computations where you need a precise answer."' What's interesting here is that this is how our brains work."*

## meanwhile... (3)

## i kan reed (749298) | 1 year,2 days | (#45729633)

The majority of CPU cycles in data centers goes to looking up and filtering specific records in databases (or maybe parsing files, if you're into that). They can possibly save energy on a few specific kinds of scientific computing.

## Numerical computation is pervasive (4, Informative)

## l2718 (514756) | 1 year,2 days | (#45729711)

This is not about data centers and databases. This is about scientific computation -- video and audio playback, physics simulation, and the like.

The idea of doing a computation approximately first, and then refining the results only in the parts where more accuracy is useful, is an old one; one manifestation is multigrid [wikipedia.org] algorithms.

## Re:Numerical computation is pervasive (2)

## DutchUncle (826473) | 1 year,2 days | (#45729973)

## Re:Numerical computation is pervasive (0)

## Anonymous Coward | 1 year,2 days | (#45730345)

No.

## Re:Numerical computation is pervasive (0)

## Cryacin (657549) | 1 year,2 days | (#45730429)

## Re:Numerical computation is pervasive (2)

## mrbluze (1034940) | 1 year,2 days | (#45730575)

Welcome to half assed computing.

Which half?

## Re:Numerical computation is pervasive (5, Interesting)

## raddan (519638) | 1 year,2 days | (#45730195)

The only new idea here is using approximate computing specifically to trade high precision for lower power. The research has less to do with new algorithms and more to do with new applications of classic ones.

## Re:Numerical computation is pervasive (1)

## tedgyz (515156) | 1 year,2 days | (#45730617)

Holy crap dude - you hit the nail on the head, but my brain went primal when you brought up the "Big O".

## Re:Numerical computation is pervasive (1)

## raddan (519638) | 1 year,2 days | (#45730881)

## Re:Numerical computation is pervasive (-1)

## Anonymous Coward | 1 year,2 days | (#45730299)

This is another "good enough" trade-off. We saw it with NoSQL, and look how well NoSQL worked for the ACA backend. In theory, this type of computing will be used for things that are "good enough", similar to using a genetic algorithm for the travelling salesman problem. However, in real life, because it is cheaper, we will see this used on things where it shouldn't be, such as financial transactions.

I'm tired of halfway measures becoming the norm in computing, and how they creep in, just because it is cheaper.

## Re:Numerical computation is pervasive (1)

## egcagrac0 (1410377) | 1 year,2 days | (#45730567)

we will see this used on things where it shouldn't be, such as financial transactions.

I, for one, am OK with 1/10000th of a dollar accuracy.

Heck, within 1/640th of a cent ought to be good enough for anybody.

## Re:Numerical computation is pervasive (1)

## parkinglot777 (2563877) | 1 year,2 days | (#45730713)

Then you are creating a problem if this goes into financial transactions. If each transaction is truncated by up to 1/10000 of a dollar, how much would the total be off after 1 million transactions? Also, even a one-cent discrepancy in accounting needs adjustments in multiple places to reconcile. You are thinking too narrowly.

Back to the GP: I do NOT see how this would be put into financial transactions anywhere. Also, it is a rule of thumb to store the fractional part separately from the whole-currency value. Simple transaction computation would not need the kind of approximate scientific computation described in the article.

## Re:Numerical computation is pervasive (1)

## egcagrac0 (1410377) | 1 year,2 days | (#45730913)

how much would it be off by 1 million transactions?

That depends entirely on the dataset.

If all the transactions are dealing in units no smaller than $.01, you should see no error truncating to the nearest $.0001, no matter the number of transactions.

If you're worried, just create an account called "Salami" and post the rounding errors there.
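As a sketch of the disagreement above, in Python's `decimal` module (amounts invented for illustration, not from the thread): truncating cent-denominated amounts to 1/10000 of a dollar is indeed lossless, but any amount finer than that grid leaks value, and the leak compounds:

```python
from decimal import Decimal

# Transactions denominated in whole cents survive truncation to 1/10000
# of a dollar with zero error...
cent_amounts = [Decimal("12.34"), Decimal("0.99"), Decimal("100.00")]
truncated = [a.quantize(Decimal("0.0001"), rounding="ROUND_DOWN")
             for a in cent_amounts]
assert truncated == cent_amounts  # no information lost

# ...but amounts finer than the truncation grid (e.g. per-unit pricing)
# lose up to $0.0001 each, and the loss compounds.
price = Decimal("0.00015")        # hypothetical per-unit price
kept = price.quantize(Decimal("0.0001"), rounding="ROUND_DOWN")
loss_per_txn = price - kept       # $0.00005 dropped every time
print(loss_per_txn * 1_000_000)   # → 50.00000 (dollars, over 1M transactions)
```

So both commenters are right about different datasets: the grid matters, not the transaction count.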

## we will just drop that leftover parts of cent to o (1)

## Joe_Dragon (2206452) | 1 year,2 days | (#45730931)

we will just drop that leftover parts of cent to our own account.

## Re:Numerical computation is pervasive (0)

## Anonymous Coward | 1 year,2 days | (#45730863)

Won't work for currencies like Bitcoin, where a chunk down in the 10^-8 digit range can have solid value in the near future as the currency matures and increases in value.

## Re:meanwhile... (1)

## K. S. Kyosuke (729550) | 1 year,2 days | (#45729739)

Even OLAP could probably profit from this. Sometimes it doesn't matter whether the response to the question "does my profit increase correlate strongly with my sales behavior X" is "definitely yes, by 0.87" or "definitely yes, by 0.86"; the important thing is that it isn't "most likely no, by 0.03".

Also, in the era of heterogeneous machines, you ought to have a choice in that.

## Re:meanwhile... (4, Informative)

## ron_ivi (607351) | 1 year,2 days | (#45730035)

The majority of CPU cycles in data centers is going to be looking up and filtering specific records in database

Approximate computing is especially interesting in databases. One of the coolest projects in this space is Berkeley AMPLab's BlinkDB [blinkdb.org]. Their canonical example should give you a good idea of how and why it's useful.

Their benchmarks show that approximate computing to within 1% error is about 100X faster than Hive on Hadoop.
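The idea behind sampling-based approximate query engines like BlinkDB can be hedged into a toy sketch: answer an aggregate from a small random sample and attach an error bar, instead of scanning every row. The data, sizes, and names below are all illustrative, not BlinkDB's API:

```python
import random
import statistics

# Answer AVG() from a 1% random sample and report a confidence interval,
# instead of scanning every row. The "table" is made-up data.
random.seed(0)
table = [random.gauss(100.0, 15.0) for _ in range(100_000)]  # pretend column

sample = random.sample(table, 1_000)                         # scan 1% of rows
est = statistics.mean(sample)
err = 1.96 * statistics.stdev(sample) / len(sample) ** 0.5   # ~95% interval

exact = statistics.mean(table)   # what the full scan would have returned
print(f"approx AVG = {est:.2f} +/- {err:.2f} (exact {exact:.2f})")
```

The speedup comes from reading 1% of the data; the price is the error bar, which shrinks as the square root of the sample size.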

## Re:meanwhile... (4, Informative)

## ron_ivi (607351) | 1 year,2 days | (#45730079)

BlinkDB: Queries with Bounded Errors and Bounded Response Times on Very Large Data

## Re:meanwhile... (5, Funny)

## lgw (121541) | 1 year,2 days | (#45730071)

Currently Slashdot is displaying ads for me along with the "disable ads" checkbox checked. Perhaps "approximate computing" is farther along than I imagined!

## Re:meanwhile... (1)

## camperdave (969942) | 1 year,2 days | (#45730127)

Currently Slashdot is displaying ads for me along with the "disable ads" checkbox checked. Perhaps "approximate computing" is farther along than I imagined!

We've had approximate computing since the earliest days of the Pentium CPU.

## Re:meanwhile... (1)

## FatdogHaiku (978357) | 1 year,2 days | (#45730723)

We've had approximate computing since the earliest days of the Pentium CPU.

My favorite joke of that era was

I am Pentium of Borg. Arithmetic is irrelevant.

Division is futile.

You will be approximated!

## Re:meanwhile... (1)

## nospam007 (722110) | 1 year,2 days | (#45730181)

" Perhaps "approximate computing" is farther along than I imagined!"

Indeed, Excel has been doing it for 20 years.

## Re:meanwhile... (0)

## Anonymous Coward | 1 year,2 days | (#45730751)

The reason Excel is a large program is that the CPU is imperfect. And Excel gives more correct calculations than some programming languages!

## Re:meanwhile... (5, Funny)

## formfeed (703859) | 1 year,2 days | (#45730599)

Currently Slashdot is displaying ads for me along with the "disable ads" checkbox checked. Perhaps "approximate computing" is farther along than I imagined!

Sorry, that was my fault. I didn't have my ad-block disabled. They must have sent them to you instead.

Just send them to me and I will look at it.

## It's a nice thought (0)

## Anonymous Coward | 1 year,2 days | (#45729659)

But it's ultimately impossible to build a computer that calculates with arbitrary precision. The closest approximation would be to have a pair of FPUs, one for lower precision and one for higher precision. Many GPUs already function this way.

## Re:It's a nice thought (1)

## Drethon (1445051) | 1 year,2 days | (#45729719)

## Re:It's a nice thought (1)

## mrchaotica (681592) | 1 year,2 days | (#45730171)

Congratulations, you've just described fixed-point arithmetic [wikipedia.org] .
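For readers unfamiliar with the term, here is a minimal fixed-point sketch (names and amounts invented here, non-negative values only): dollars held as integer ten-thousandths, so addition is exact integer math with no binary-float rounding surprises:

```python
SCALE = 10_000  # ten-thousandths of a dollar

def to_fixed(s: str) -> int:
    """Parse a non-negative decimal string into fixed-point units."""
    dollars, _, frac = s.partition(".")
    frac = (frac + "0000")[:4]          # pad/truncate to 4 fractional digits
    return int(dollars) * SCALE + int(frac)

def fixed_to_str(v: int) -> str:
    return f"{v // SCALE}.{v % SCALE:04d}"

# Addition is plain integer arithmetic, so it is exact.
print(fixed_to_str(to_fixed("19.99") + to_fixed("0.01")))  # → 20.0000
```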

## Re:It's a nice thought (3, Funny)

## K. S. Kyosuke (729550) | 1 year,2 days | (#45729751)

## Re:It's a nice thought (1)

## viperidaenz (2515578) | 1 year,2 days | (#45729837)

Except that's what these researchers are doing. They're building new instructions that perform faster but produce lower precision results.

## Re:It's a nice thought (1)

## Anonymous Coward | 1 year,2 days | (#45730291)

Pshaw, I had one in the 4th grade. It was called a "slide rule" and I used it because I suck at memorization. Who needs multiplication tables when you have a handy tool the teacher doesn't even know how to use?

## Re:It's a nice thought (2)

## russbutton (675993) | 1 year,2 days | (#45730935)

## Re:It's a nice thought (1)

## bobbied (2522392) | 1 year,2 days | (#45730333)

But it's ultimately impossible to build a computer that calculates with arbitrary precision.

Excuse me, but not quite. Assuming you don't mean absolute precision, we already choose among multiple precisions based on need, speed, or memory footprint. We have floating-point representations of multiple sizes, as well as integers of varying widths. Plus, there is nothing preventing you from doing X-bit floating-point calculations if you wanted to.
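The point about multiple precisions fits in a few lines of Python: round-tripping a value through IEEE-754 binary32 (via the `struct` module) discards what binary64 keeps:

```python
import struct

x = 1.0 + 1e-10                # distinguishable from 1.0 in binary64
# Round-trip through IEEE-754 binary32 (~7 significant decimal digits):
as32 = struct.unpack("f", struct.pack("f", x))[0]

print(x - 1.0 > 0.0)   # → True  (binary64 keeps the tiny increment)
print(as32 - 1.0)      # → 0.0   (below binary32 resolution near 1.0)
```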

## Analog (5, Interesting)

## Nerdfest (867930) | 1 year,2 days | (#45729675)

This is also how analog computers work. They're extremely fast and efficient, but imprecise. They had a bit of traction in the old days, but interest seems to have died off.

## My Dad targeted naval antiaircraft missiles with- (0)

## Anonymous Coward | 1 year,2 days | (#45730021)

- analog computers.

A Talos missile, a two-stage design with a solid-fuel booster and an air-breathing ramjet sustainer, could take out a wildly evasive supersonic North Vietnamese-piloted Soviet MiG with an analog computer calculating the missile trajectory.

## Re:My Dad targeted naval antiaircraft missiles wit (1)

## wisnoskij (1206448) | 1 year,2 days | (#45730295)

Seems like a totally impractical system.

You are probably already using an algorithm that produces approximate results, add on top of that a computer that makes mistakes routinely in the name of speed.

You would think that sometimes the stars would just align and you would get a result that is just completely wrong.

## Re:My Dad targeted naval antiaircraft missiles wit (0)

## Anonymous Coward | 1 year,2 days | (#45731309)

This was during Vietnam.

Digital computers would have been too slow.

## Re:Analog (2)

## wavedeform (561378) | 1 year,2 days | (#45730261)

## Re:Analog (1)

## egcagrac0 (1410377) | 1 year,2 days | (#45730669)

Absolute accuracy may not be possible, but plenty-good-enough accuracy is achievable for a lot of different types of problems.

The same can be said of digital computers.

## Re:Analog (1)

## ezdiy (2717051) | 1 year,2 days | (#45730373)

## Re:Analog (0)

## Anonymous Coward | 1 year,2 days | (#45730417)

Analogue computers are a nice idea.

In the closed position a transistor carries zero current and therefore doesn't give off heat.

In the open position a transistor has zero voltage across it and therefore also doesn't give off heat.

In a partially open position a transistor has nonzero current and nonzero voltage across it, and therefore gives off heat.

So using a transistor digitally is very energy efficient: the transistor is either fully closed or fully open, and energy is wasted as heat only during the transitions.

In an analog computer, a transistor spends most of its time partially open, wasting energy as heat.

Cooling an analog computer would be a significant issue. On the other hand, switching frequency would not add any more heat to the system; power consumption would be as bad as it gets even at idle.

## Re:Analog (0)

## Anonymous Coward | 1 year,2 days | (#45730605)

Note that ECL logic also uses non-saturating transistors and burns power independently of frequency.

All receivers on differential buses (PCIe) are fundamentally analog parts, since they amplify a small differential signal up to internal logic levels. Analog electronics is also much more fun than digital (I do both), and you can't do anything digital at 60 GHz and higher.

## Re:Analog (3, Informative)

## bobbied (2522392) | 1 year,2 days | (#45730493)

This is also how analog computers work. They're extremely fast and efficient, but imprecise. It had a bit of traction in the old days, but interest seems to have died off.

Analog is not imprecise. Analog computing can be very precise and very fast for complex transfer functions. The problem with analog is that the output function is hard to change, and the derived output is subject to drift from things like temperature changes or induced noise. So the issue is not precision.

## Re:Analog (1)

## wisnoskij (1206448) | 1 year,2 days | (#45730503)

Since in an Analogue computer every bit now contains an infinite amount of information, instead of just one, I imagine it would be incredibly fast.

And since every decimal is already stored in approximate form, in a normal computer, I cannot imagine it being that different.

## Re:Analog (1)

## wonkey_monkey (2592601) | 1 year,2 days | (#45730893)

Since in an Analogue computer every bit now contains an infinite amount of information, instead of just one, I imagine it would be incredibly fast.

What is this I don't even.

## Re:Analog (1)

## wisnoskij (1206448) | 1 year,2 days | (#45731027)

Binary: 1 bit can be 2 values, and contains that absolute minimal amount of information possible (true or false).

Decimal: 1 bit can be one of 10 different values, so five times more information is present in a single bit. So information is sent and computed far faster.

Analogue: 1 bit can be an infinite number of values, so infinitely more information can be sent in a single bit. So information is sent and computed far, far faster.

## Re:Analog (1)

## Ferrofluid (2979761) | 1 year,2 days | (#45731449)

Decimal: 1 bit can be one of 10 different values, so five times more information is present in a single bit.

No, that's not what a bit is. 'Bit' is short for 'binary digit'. A bit can, by definition, only hold one of two possible states. It is a fundamental unit of information. A decimal digit comprises multiple bits. Somewhere between 3 and 4 bits per decimal digit.
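The "between 3 and 4 bits" figure above is just log2(10):

```python
import math

# One decimal digit distinguishes 10 states, i.e. log2(10) bits of
# information -- between 3 and 4, as the comment says.
bits_per_decimal_digit = math.log2(10)
print(round(bits_per_decimal_digit, 3))  # → 3.322
```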

## Re:Analog (1)

## DerekLyons (302214) | 1 year,2 days | (#45730999)

On the contrary - they can be extremely precise. Analog computing elements were part of both the Saturn V and Apollo CSM stack guidance & navigation systems for example. Analog systems were replaced by digital systems for a wide variety of reasons, but accuracy was not among them.

## Accuracy isn't important anymore (4, Insightful)

## EmagGeek (574360) | 1 year,2 days | (#45729681)

We're teaching our kids that 2+2 equals whatever they feel it is equal to, as long as they are happy. What do we need with accuracy anymore?

## Re:Accuracy isn't important anymore (1)

## l2718 (514756) | 1 year,2 days | (#45729785)

I don't think you appreciate the point. In most cases, rather than multiplying 152343 x 1534324, you might as well just multiply 15x10^4 x 15x10^5 = 225x10^9. And to understand this, you need to be very comfortable with what 2+2 equals exactly.
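Checking the comment's arithmetic in code (the figures are the commenter's own; keeping one significant figure per operand lands within a few percent of the exact product):

```python
exact = 152343 * 1534324
approx = 15 * 10**4 * 15 * 10**5        # one significant figure each
rel_err = abs(exact - approx) / exact
print(f"{approx} vs {exact}: {rel_err:.1%} off")  # under 4% relative error
```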

## Re:Accuracy isn't important anymore (1)

## Em Adespoton (792954) | 1 year,2 days | (#45729851)

We're teaching our kids that 2+2 equals whatever they feel it is equal to, as long as they are happy. What do we need with accuracy anymore?

Indeed... what's 3/9 + 3/9 + 3/9 after all? Does it approach 1, or is it actually 1? Do we care? Are we happy?

## Re:Accuracy isn't important anymore (0)

## Anonymous Coward | 1 year,2 days | (#45729929)

We're teaching our kids that 2+2 equals whatever they feel it is equal to, as long as they are happy. What do we need with accuracy anymore?

Indeed... what's 3/9 + 3/9 + 3/9 after all? Does it approach 1, or is it actually 1? Do we care? Are we happy?

No, it's zero.

## Re:Accuracy isn't important anymore (0)

## Anonymous Coward | 1 year,2 days | (#45730477)

Indeed... what's 3/9 + 3/9 + 3/9 after all? Does it approach 1, or is it actually 1? Do we care? Are we happy?

Not trolling, clueless. What do people say that's equal to, other than 1. I can only see it equaling something else with some imprecise math.

## Re:Accuracy isn't important anymore (0)

## Anonymous Coward | 1 year,2 days | (#45730715)

Not trolling, clueless. What do people say that's equal to, other than 1. I can only see it equaling something else with some imprecise math.

.3 repeating * 3 = .9 repeating. There is a significant number of people in the world who think that .9 repeating != 1.

## Re:Accuracy isn't important anymore (2)

## Em Adespoton (792954) | 1 year,2 days | (#45731397)

3/9 is 0.3* in decimal, which is an infinitely repeating 3. Add 3 of those together, you get an infinitely repeating 9, which, while it approaches 1 using concrete values, is not precisely 1, for the standard definition of 1. However, using approximate computing or general notation, they're the same for all intents and purposes.

This gets even more interesting when you use a different base such as binary, that doesn't have the same issues with notational conversion as base 10. Base 12 is also useful here.

In my original comment, I was pointing out that we're already teaching partial answers, and we're also already doing approximate computing. Doing both intentionally though is a different matter altogether.

Time for a few mathematicians to completely refute what I said; it's mostly a thought experiment after all -- hence the "do we care?"

## Re:Accuracy isn't important anymore (1)

## Em Adespoton (792954) | 1 year,2 days | (#45731433)

Oh yes, and an alternative is to argue that 3/9 is in fact equivalent to 0.4 -- but 0.4 * 3 = 1.2, not 1. Or, you could argue that 3/9 is always 1/3 and has no decimal representation, as infinite sequences aren't actually representable (at which point sequences like pi become a bit of an issue, as they have no known finite representation in any number base -- that we know of).

## Re:Accuracy isn't important anymore (2, Insightful)

## Anonymous Coward | 1 year,2 days | (#45729959)

Where the hell did you get that from? Oh yeah, it's the talking points about the Common Core approach. Too bad that is nothing like what the Common Core says. Find a single place where any proponent of the Common Core said something like that, and I'll show you a quote mine where they really said "it is understanding the process that is important, of which the final answer is just a small part, because computation errors can be corrected."

## Future AIs (1)

## Anonymous Coward | 1 year,2 days | (#45729699)

I find it interesting that most science fiction portrays an AI as having all of the advantages of sentience (creativity, adaptability, intuition) while also retaining the advantages of a modern computer (perfect recall, high computational accuracy, etc.). This suggests that with a future AI, maybe that would not be the case; maybe the requirements of adaptability and creativity place sufficient demands on a system's resources (biological or electronic) that you couldn't have such a perfect combination.

Also, I'm really bored at work today, so speculation like this is my cure.

## Heard this before (1, Interesting)

## Animats (122034) | 1 year,2 days | (#45729705)

Heard this one before. On Slashdot, even. Yes, you can do it. No, you don't want to. Remember when LCDs came with a few dead pixels? There used to be a market for DRAM with bad bits for phone answering machines and buffers in low-end CD players. That's essentially over.

Working around bad bits in storage devices is common; just about everything has error correction now. For applications where error correction is feasible, this works. Outside that area, there's some gain in cost and power consumption in exchange for a big gain in headaches.

## Re:Heard this before (0)

## Anonymous Coward | 1 year,2 days | (#45730003)

Working around bad bits in storage devices is common; just about everything has error correction now. For applications where error correction is feasible, this works. Outside that area, there's some gain in cost and power consumption in exchange for a big gain in headaches.

The point here is that you use approximate computing even when error correction is not needed. For example, when your browser retrieves an image and scales it down according to the stylesheet, minor errors in the scaling algorithm can make it faster with minimally visible loss of quality.
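A toy version of that scaling trade-off (pixel values invented for illustration; 2:1 downscale of one row): exact box averaging versus the cheaper nearest-neighbor pick:

```python
row = [10, 12, 200, 202, 50, 54, 90, 94]   # one row of pixel intensities

# Exact 2:1 downscale: average each pair (an add and a shift per pixel).
exact = [(row[i] + row[i + 1]) // 2 for i in range(0, len(row), 2)]
# Approximate 2:1 downscale: just keep every other pixel.
approx = [row[i] for i in range(0, len(row), 2)]

print(exact)   # → [11, 201, 52, 92]
print(approx)  # → [10, 200, 50, 90]  (close enough for a thumbnail)
```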

## Re:Heard this before (0)

## Anonymous Coward | 1 year,2 days | (#45730101)

minor errors in the scaling algorithm can make it faster with minimally visible loss of quality.

Ah, so JPEG then :)

Scuse me while I duck out of here...

## Re: Heard this before (1)

## Robin Ingenthron (3442105) | 1 year,2 days | (#45730945)

## Finally some better 'Ai' (0)

## Anonymous Coward | 1 year,2 days | (#45729723)

FPS game enemys who are more random, unpredictable, close enough, and maybe just say 'fuck this' after they see you mow down 50 of their buddies.

## Re:Finally some better 'Ai' (0)

## Anonymous Coward | 1 year,2 days | (#45729845)

"enemys"...

There is no such word.

## Re:Finally some better 'Ai' (3, Funny)

## Diss Champ (934796) | 1 year,2 days | (#45730097)

It's just another example of the 'Approximate Spelling' technique. The parent poster is illustrating significant savings in mental energy.

## Cue the hoary old Intel Pentium jokes in 3...2...1 (2)

## thatseattleguy (897282) | 1 year,2 days | (#45729777)

A1: Successive approximations.

A2: A random number generator


Hey, folks, I can keep this up all day.

http://www.netjeff.com/humor/item.cgi?file=PentiumJokes [netjeff.com]

## Been there (4, Funny)

## frovingslosh (582462) | 1 year,2 days | (#45729781)

## Re:Been there (0)

## Anonymous Coward | 1 year,2 days | (#45730121)

Actually it was Pentium [wikipedia.org] which was a precursor for these processors.

## Fuzzy Logic anyone? (4, Informative)

## kbdd (823155) | 1 year,2 days | (#45729787)

While the concept was interesting, it never really caught on. Progress in silicon devices simply made it unnecessary. It ended up being used as a buzzword for a few years and quietly died away.

I wonder if this is going to follow the same trend.

## Re:Fuzzy Logic anyone? (1)

## phantomfive (622387) | 1 year,2 days | (#45730585)

## Didn't Intel already try that with the P5 (0)

## Anonymous Coward | 1 year,2 days | (#45729795)

nuff said

## I Use This (1)

## DexterIsADog (2954149) | 1 year,2 days | (#45729841)

Saves me energy, too.

## Maybe now I can get respect here when I say (0, Funny)

## Anonymous Coward | 1 year,2 days | (#45729847)

FSRT POST!!!

## Stop using floats (-1)

## Anonymous Coward | 1 year,2 days | (#45729957)

You don't need floating-point precision unless you're doing science. Not even in the realm of computer graphics, because sub-pixel precision is hilariously unnecessary.

## Old noos... (1)

## dskoll (99328) | 1 year,2 days | (#45729963)

Mai spel checkar allreddy wurks dis weigh....

## It's not hard (1)

## Red Jesus (962106) | 1 year,2 days | (#45730043)

To determine if a/b is greater than 1, it is sufficient to check if a > b. To determine if a/b is greater than c, it is sufficient to check if a > bc.

Multiplication already consumes less time and energy than division on modern computers. I do not see why they needed to modify their instruction set to realize such gains.
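The comment's trick, written out (with the caveat, not stated above, that it holds as written only for positive b):

```python
def ratio_exceeds(a: float, b: float, c: float) -> bool:
    """True iff a/b > c, using a multiply instead of a divide (needs b > 0)."""
    return a > b * c

print(ratio_exceeds(7, 2, 3))   # → True   (7/2 = 3.5 > 3)
print(ratio_exceeds(5, 2, 3))   # → False  (5/2 = 2.5 < 3)
```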

## Re:It's not hard (1)

## femtobyte (710429) | 1 year,2 days | (#45730217)

However, multiplying "simpler" numbers might be faster. For example, I can multiply 20*30 in my head faster than 21.3625*29.7482 (YMMV). Rounding 21.3625*29.7482 to 20*30 might be "good enough" for many purposes, and you can even go back and keep more digits for a small number of cases where it's too close to call with the approximation.

## Re:It's not hard (0)

## Anonymous Coward | 1 year,2 days | (#45730619)

He's not talking about computers, he's talking about *you*, a person, doing multiplication. It's a fucking example!

This is just a basic demonstration that a less precise question can be answered faster than a more precise one.

Computers would be doing much more complicated calculations using more sophisticated algorithms but the larger point stands. If an approximate answer is good enough, you can spend less time figuring it out.

## Re:It's not hard (1)

## wonkey_monkey (2592601) | 1 year,2 days | (#45730927)

I do not see why they needed to modify their instruction set to realize such gains.

It was just a generic example to give the casual reader a basic grasp of the idea, not a specific scenario they'll be applying their process to.

## Physics doesn't care about complete precision (0)

## Anonymous Coward | 1 year,2 days | (#45730069)

It cares about knowing just how precise you are.

That is, measurements are reported as 1.2 +/- 0.1.

## Re:Physics doesn't care about complete precision (1)

## camperdave (969942) | 1 year,2 days | (#45730421)

Physics doesn't care about complete precision

But if I don't know how precisely I know a particle's momentum, how can I tell how vague I have to be about its position?

## Analog (1)

## Princeofcups (150855) | 1 year,2 days | (#45730087)

It sounds like they just invented analog.

## Re:Analog (0)

## Anonymous Coward | 1 year,2 days | (#45730193)

Ya, but this time they can patent it.

## Great idea! What could possiblity go wrong... (0)

## Anonymous Coward | 1 year,2 days | (#45730139)

Computer:

- Let me approximate this cipher while I encrypt your bitcoin wallet private key...

## MPEG, JPEG, MP3, etc (0)

## Anonymous Coward | 1 year,2 days | (#45730143)

Isn't this the idea behind advanced video and audio compression, or any other "lossy" technique? You throw away data (precision) that isn't necessary for an acceptable experience.

It could be cool if you could arbitrarily turn down a processor's precision to save power.

## Half-precision (3, Interesting)

## michaelmalak (91262) | 1 year,2 days | (#45730219)

## Is that not what Approximation Algorithms are for? (1)

## wisnoskij (1206448) | 1 year,2 days | (#45730243)

And this is why we have thousands and thousands of approximation algorithms. Computers do the work perfectly precisely, except when we are talking about decimal numbers, and if you do not need perfect precision you just program in an approximate algorithm.

I do not think you will ever do any better than picking the best mathematical algorithm for your problem, instead of just relying on lazy computers.

## Re:Is that not what Approximation Algorithms are f (1)

## tlhIngan (30335) | 1 year,2 days | (#45731203)

No, it's not. Approximation algorithms use exact computations and model approximation. The problem is using exact computations - it costs a lot of power to do so.

If instead you just needed to approximate, you can enable "approximate" mode on the calculation and the system gets you an approximate answer, which costs about 50% of the energy it takes to do an exact one.

For calculations like video and audio, that means the GPU consumes much less power, as those applications are far more tolerant of approximate answers, and the results are discarded a short while afterwards anyway.

If you don't care for the exact value, then you enable approximate calculations and save the energy of having to do an exact calculation. This is different from using an approximation algorithm on a normal computer where you calculate everything exactly and then fake approximation.

And yes, even when you're doing approximate calculations, there are times you need to do exact calculations - e.g., if you're iterating over lines of video, your iterator needs to be exact while the actual data may only need to be approximate. The proper CPU architecture has to allow for this.
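A hedged sketch of that split, with the "approximate unit" faked in software by dropping low bits before a multiply (the bit width and pixel values are invented for illustration):

```python
def approx_mul(a: int, b: int, drop_bits: int = 4) -> int:
    # Drop a's low bits before multiplying: a software stand-in for a
    # cheaper, lower-precision multiplier. Error is under b * 2**drop_bits.
    return ((a >> drop_bits) * b) << drop_bits

frame = [7, 100, 201, 255]   # one "line" of pixel data (invented values)
gain = 3

out = []
for i in range(len(frame)):  # the iterator stays exact, per the comment
    out.append(approx_mul(frame[i], gain))

print(out)                        # → [0, 288, 576, 720]
print([p * gain for p in frame])  # → [21, 300, 603, 765]
```

The loop bookkeeping is never approximated; only the data-path multiply is.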

## Re:Is that not what Approximation Algorithms are f (1)

## wisnoskij (1206448) | 1 year,2 days | (#45731383)

If you don't care about the exact value, you can use a specific algorithmic approximation, which normally costs many orders of magnitude less computation time.

## Drones (1)

## srussia (884021) | 1 year,2 days | (#45730257)

'Researchers are developing computers capable of "approximate computing" to perform calculations good enough for certain tasks that don't require perfect accuracy, potentially doubling efficiency and reducing energy consumption.'

I, for one, welcome our new approximately accurate, longer-range drone overlords.

## Overclocked GPUs, ASIC, analog? (1)

## ezdiy (2717051) | 1 year,2 days | (#45730307)

However, the numbers for standard-cell ASIC designs don't seem very favourable: certainly not "doubling," much less energy saving (on the contrary, at a ballpark 10-30% overclock you reach the point of diminishing returns, and only if you don't care much about MTBF).

Now what would be interesting is actual "analog" computers, i.e., a number of states anywhere between 4 and infinity; there is literally too much wasted "potential" nowadays. NAND flash chips do this already, because they are about to hit the limits of cost-effective lithography (10nm?).

## Glad this is happening (1)

## Anonymous Coward | 1 year,2 days | (#45730361)

Fuzzy logic and all that jazz really should be used more in computing.

It could save considerable power, or allow for far more computation in a smaller space.

So much stuff in computing only requires decently accurate results. Some requires even less accurate results.

If something was off by one pixel during one frame when it was moving, big deal, no loss.

Not to mention how great it would be for the sake of procedural noise.

You want something that isn't too random, but is just one value messed up a little, throw it through a fuzzy command and out it comes with a random offset.

That'd now be two commands compared to the usual few it'd take to set a value to itself + a random value, then set the possible offsets for the random command.

Or how about procedural generation in games, it could be used in so many areas of animation, texturing and the like.

Or how about AI? It would work wonders for AI; it'd massively simplify the logic required to implement a simple expert machine.

It'd even make a real AI even easier to do, more so if you made these processors massively parallel.

Imagine a GPU of these, or even a set area of a GPU dedicated to fuzzy calculations. Might happen in the next 10 years, I sure hope so. (I'd think APUs might be a bigger thing by then though, or early 3D processors, who knows, so many routes it might take soon)

All I know is the future of processing is going to be FUCKING AWESOME in the coming few decades, it is going to transition so much that our computers will look like toasters.

Of course, not those smart ones. Does Anyone Want Any Toast [youtube.com]

## Computation is not the big energy drain (4, Interesting)

## Ottibus (753944) | 1 year,2 days | (#45730367)

The problem with this approach is that the energy used for computation is a relatively small part of the whole. Much more energy is spent on fetching instructions, decoding instructions, fetching data, predicting branches, managing caches and many other processes. And the addition of approximate arithmetic increases the area and leakage of the processor, which increases energy consumption for all programs.

Approximate computation is already widely used in media and numerical applications, but it is far from clear that it is a good idea to put approximate arithmetic circuits in a standard processor.

## Approximately once a month.. (2)

## DigitAl56K (805623) | 1 year,2 days | (#45730533)

.. this story or a slight variant gets reposted to Slashdot in one form or another.

## Re:Approximately once a month.. (0)

## Anonymous Coward | 1 year,2 days | (#45730675)

One of these days it will be the year of Linux on the desktop. Just as soon as games start being available on Linux, I think.

## Well, yeah (0)

## Anonymous Coward | 1 year,2 days | (#45730555)

It's a fairly common thing when perfect accuracy is not required. It's easier to check that the distance from Coord A to Coord B is less than X on each axis than to do the full Pythagorean calculation. It may seem a small increase in efficiency, but when it's being done for Z hundred entities (x-x) every 100ms it adds up fast.
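A quick sketch of the per-axis shortcut versus the exact check (function names are mine):

```python
import math

def within_box(ax, ay, bx, by, r):
    """Approximate: B within r of A on each axis (a square region)."""
    return abs(ax - bx) <= r and abs(ay - by) <= r

def within_circle(ax, ay, bx, by, r):
    """Exact: the full Pythagorean (Euclidean) distance check."""
    return math.hypot(ax - bx, ay - by) <= r

# The box test never rejects a point the circle test would accept,
# so it works as a cheap pre-filter before the exact check.
assert within_box(0, 0, 3, 4, 5) and within_circle(0, 0, 3, 4, 5)

# Near the corners the box over-accepts; that's the accuracy traded away:
assert within_box(0, 0, 4, 4, 5) and not within_circle(0, 0, 4, 4, 5)
```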

## DUPE- and it's nonsense anyway (0)

## Anonymous Coward | 1 year,2 days | (#45730639)

ALL maths generally done in the 'floating point' domain is calculated to some APPROXIMATE accuracy. If this worthless clown-shoe excuse of a professor had the first clue, he'd understand this fundamental fact of applied computer engineering.

32-bit floating point calculates with less power than 64-bit at the same throughput, with the same type of electronic solution.
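The 32-bit versus 64-bit point is easy to demonstrate: round-tripping a double through 32-bit storage loses roughly half the significant digits. CPython floats are 64-bit, so `struct` is used here to emulate the 32-bit format:

```python
import struct

# Round-trip a 64-bit Python float through the 32-bit IEEE 754 format
# to see how much precision the cheaper representation gives up.
def to_float32(x: float) -> float:
    return struct.unpack('f', struct.pack('f', x))[0]

pi64 = 3.141592653589793       # pi at double precision
pi32 = to_float32(pi64)        # pi squeezed into single precision

# Single precision keeps about 7 decimal digits, double about 16.
assert pi32 != pi64
assert abs(pi32 - pi64) < 1e-6
```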

Markov Chains and the like already handle the statistical concept of "maybe this" or "maybe that" at known statistical probabilities.

The biggest MOUTHS at University are, sadly, all too frequently self-promoting morons. They do not seek to impress their associates in the same field, but seek to seem 'clever' to a more credulous general academic audience, like their bosses.

And to you who are reading this but not understanding a word I say: try reading any decent primer on NUMERICAL ANALYSIS. India is famous for its mathematicians and its cultural respect for the field of maths, so sadly plenty of conmen use their Indian heritage to pass themselves off as some form of maths genius to unsuspecting fools. What was that con Slashdot promoted a little while ago? The new 'Indian' method for super compression of data, or was it the new 'Indian' method of storing extraordinary amounts of data in a pattern printed by an inkjet printer? I think both cons got serious attention here.

## Clive Sinclair did this in 1974. (4, Informative)

## hamster_nz (656572) | 1 year,2 days | (#45730749)

Due to ROM and cost limitations, the original Sinclair Scientific calculator only produced approximate answers, maybe to three or four digits.

This was far more accurate than the answers given by a slide rule....

For more info, have a look at this page: Reversing Sinclair's amazing 1974 calculator hack - half the ROM of the HP-35 [righto.com]
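For a feel of what three-to-four digit arithmetic looks like, here is a small significant-figures rounding helper (illustrative only; it mimics the output precision, not the calculator's actual algorithms):

```python
from math import floor, log10

# Round to a fixed number of significant figures, roughly the working
# precision the Sinclair Scientific delivered.
def round_sig(x: float, digits: int = 4) -> float:
    if x == 0:
        return 0.0
    return round(x, digits - 1 - floor(log10(abs(x))))

assert round_sig(3.14159265) == 3.142
assert round_sig(2.718281828, 3) == 2.72
```

Three or four significant figures is still a digit or two better than a typical slide rule, which is the comparison the comment makes.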

## Computing already approximate (0)

## Anonymous Coward | 1 year,2 days | (#45730951)

I'm sure it's a matter of degree, but as it is, computing is already approximate due to the finite precision of computer arithmetic. There are only 2^(bits) numbers that can be exactly represented on a computer when you've allocated "bits" bits to representing numbers. When you solve for the square root of 2 (call it sqrt(2)) on a computer, the answer you get back is not sqrt(2) but sqrt(2) + epsilon, where epsilon is some known bound on the error. When you use an ODE solver to numerically evaluate a differential equation, part of the settings (even if they're just the defaults) is the error tolerance. Similar statements apply to all types of numerical algorithms: solving nonlinear equations, optimization routines, etc. What are some of the key differences in this approximate computing approach that differentiate it from just cranking down the tolerance on standard algorithms? Higher robustness to errors, randomness, etc.?
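The sqrt(2) point can be seen directly: the computed root is only the nearest representable double, and squaring it back exposes the epsilon:

```python
import math

# sqrt(2) is irrational, so the computed value is the nearest
# representable 64-bit float, not the true root.
s = math.sqrt(2.0)

# Squaring it back does not return exactly 2; the residual is the
# "epsilon" bounded by the spacing of doubles near 2.
residual = abs(s * s - 2.0)
assert residual != 0.0
assert residual < 1e-15
```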

Nah, I didn't RTFA.

## I see what they did here (1)

## ihtoit (3393327) | 1 year,2 days | (#45731003)

1. collect museum-piece Pentium systems ...

2. exploit FDIV bug

3. submit blurb to Slashdot

4.

5. Profit!

## Approximate computer (1)

## Iniamyen (2440798) | 1 year,2 days | (#45731579)