
Increasing Wireless Network Speed By 1000% By Replacing Packets With Algebra

Soulskill posted about 2 years ago | from the throwing-textbooks-at-each-other-is-high-throughput dept.


MrSeb writes "A team of researchers from MIT, Caltech, Harvard, and other universities in Europe have devised a way of boosting the performance of wireless networks by up to 10 times — without increasing transmission power, adding more base stations, or using more wireless spectrum. The researchers' creation, coded TCP, is a novel way of transmitting data so that lost packets don't result in higher latency or re-sent data. With coded TCP, blocks of packets are clumped together and then transformed into algebraic equations (PDF) that describe the packets. If part of the message is lost, the receiver can solve the equation to derive the missing data. The process of solving the equations is simple and linear, meaning it doesn't require much processing on behalf of the router/smartphone/laptop. In testing, coded TCP resulted in some dramatic improvements. MIT found that campus WiFi (2% packet loss) jumped from 1Mbps to 16Mbps. On a fast-moving train (5% packet loss), the connection speed jumped from 0.5Mbps to 13.5Mbps. Moving forward, coded TCP is expected to have huge repercussions on the performance of LTE and WiFi networks — and the technology has already been commercially licensed to several hardware makers."
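For the curious, the "algebraic equations" in the summary are (random) linear network coding: every packet on the air is a linear combination of a block of source packets over a finite field, and the receiver solves the resulting linear system for whatever went missing. A toy sketch over GF(2) in Python - an illustration of the principle only, not the researchers' actual scheme; the coefficient vectors are fixed here for reproducibility, where a real coder would draw them at random:

```python
def xor_bytes(a, b):
    """Bytewise XOR of two equal-length byte strings (addition over GF(2))."""
    return bytes(x ^ y for x, y in zip(a, b))

def combine(coeffs, blocks):
    """Form one coded packet: XOR together the source packets whose GF(2)
    coefficient is 1."""
    out = bytes(len(blocks[0]))
    for c, b in zip(coeffs, blocks):
        if c:
            out = xor_bytes(out, b)
    return out

def decode(received, k):
    """Recover the k source packets from >= k linearly independent
    (coefficient vector, payload) pairs by Gaussian elimination over GF(2)."""
    rows = [(list(c), p) for c, p in received]
    for col in range(k):
        pivot = next(i for i in range(col, len(rows)) if rows[i][0][col])
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for i in range(len(rows)):
            if i != col and rows[i][0][col]:
                rows[i] = ([a ^ b for a, b in zip(rows[i][0], rows[col][0])],
                           xor_bytes(rows[i][1], rows[col][1]))
    return [rows[i][1] for i in range(k)]

packets = [b"data", b"pkts", b"sent", b"here"]            # k = 4 source packets
vectors = [[1, 0, 0, 0], [0, 1, 0, 0],                    # the packets themselves
           [0, 0, 1, 0], [0, 0, 0, 1],
           [1, 1, 1, 1], [1, 1, 0, 0]]                    # two redundant combinations
coded = [(v, combine(v, packets)) for v in vectors]       # 6 packets on the air

survivors = coded[1:]                 # the first packet never arrives
recovered = decode(survivors, len(packets))
```

Because the surviving five coefficient vectors still span GF(2)^4, the receiver reconstructs all four source packets without asking for a retransmit.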


This is cool. But... (5, Insightful)

Anonymous Coward | about 2 years ago | (#41744553)

...I don't see how it will solve "spectrum crunch" when every nibble of your LTE bandwidth is oversubscribed by 5 to 1. Whether you have 32 users doing 10 Mbps streams, or 320 users doing 1 Mbps streams, it's all accounted for. I'd certainly like to be one of the 10, but 20 MHz worth of spectrum at 16 symbols/Hertz is not a limitation you can change with fast/excellent forward error correction.

Re:This is cool. But... (5, Insightful)

jargonburn (1950578) | about 2 years ago | (#41744653)

I agree that it's not a magic bullet. The point is, however, that the overall throughput of the network was increased by better usage of the available resources! If the *effective* available bandwidth is increased, then the performance of everyone "nibbling" on that network will *also* presumably increase. Also, think how much more money carriers may be able to squeeze out of users without needing to invest more in infrastructure! [/sarcasm]

Re:This is cool. But... (1)

MyFirstNameIsPaul (1552283) | about 2 years ago | (#41744733)

Overall throughput? I'm skeptical. I have yet to consistently get 3G speeds out of even a 4G phone.

I've yet to get (2)

goldcd (587052) | about 2 years ago | (#41744915)

3G speeds out of my 3G phone.
HSDPA+ 'should' be capable of providing me with WAY more bandwidth than I could possibly need from my phone (42Mbps?).
I know this is a theoretical speed, but at least in the UK there is an order of magnitude difference between what you actually get using this tech, depending on your operator. I.e. the majority of telcos could up their average speed, but for cost reasons choose not to (or, more fairly, wouldn't expect the investment to pay off due to the complete lack of interest from the majority of their customers).
I completely fail to see why LTE will be any different for the consumer - the case for the telco is fantastic, as it removes the need to keep increasing their pt2pt backhaul, but consumers paying extra for 'LTE' now... eejits. If you want speed, switch to a decent 3G telco. If you want to save money, just wait a bit and select the telco that's small/flexible enough to bite the bullet and ask/pay for you to switch.

Re:This is cool. But... (5, Informative)

Firehed (942385) | about 2 years ago | (#41744937)

That's kinda the point. Crappy signal results in high packet loss. If you can recover lost packets through some recipient-side magic (clever math, apparently) rather than retransmission, you avoid the overhead of another roundtrip, and get higher bandwidth as a result. This cuts down massively on latency (huge win) and should also decrease network congestion significantly.

I'm trying to think of a way to put this in the requisite car analogy, but don't honestly know enough about the lower-level parts of the network stack to do so without sounding like an idiot to someone sufficiently informed. But I'm sure there's something about a car exploding and all traffic backs up for tens of miles along the way ;)

Re:This is cool. But... (2, Interesting)

Anonymous Coward | about 2 years ago | (#41745071)

We'll get the extra packets that were re-transmits for sure. But the throughput gain (if I understand their sketchy details correctly) is from dropped packets not causing a pause at the TCP window size limit waiting for the dropped packet. You can just keep streaming them and generally assume the other end is happy.

But this doesn't increase the available bandwidth of your transport network. And if every packet from 300 users is going out back-to-back with another user's packet, then it doesn't "fix spectrum crunch" any further than eliminating retries. They were estimating the fast-moving train had a 5% drop rate. Assuming your area's 4G is saturated (I'm looking at you, AT&T) with the same drop rate, you can expect 1) fewer pauses and a steadier transmission 2) 5% more bandwidth.

Applying this technology to the "long fat pipe" problem for ftp-style transfers to Europe however sounds like a grand plan. I hope it becomes an IEEE standard soon.

Re:This is cool. But... (0)

Anonymous Coward | about 2 years ago | (#41745271)

Another satisfied AT&T customer...


Re:This is cool. But... (4, Insightful)

Jeng (926980) | about 2 years ago | (#41744989)

Also, think how much more money carriers may be able to squeeze out of users without needing to invest more in infrastructure!

This might actually hurt them then because they charge by what was transmitted, not by what was received.

Re:This is cool. But... (2, Informative)

gr8_phk (621180) | about 2 years ago | (#41745135)

This might actually hurt them then because they charge by what was transmitted, not by what was received.

Yeah, but you have to consider how they do math. You assume they calculate profit based on usage and rates. The reality is they calculate the rate based on the desired profit and usage. So when you use less data (fewer retransmits) they will just charge you more for the bits that get through.

Re:This is cool. But... (1)

amRadioHed (463061) | about 2 years ago | (#41745141)

The point is that it will give them higher data rates for free, which they can then charge extra for.

Congratulations, Baldrick (-1)

Man On Pink Corner (1089867) | about 2 years ago | (#41744559)

You've invented data compression.

Now, off to the patent office!

Re:Congratulations, Baldrick (1)

Anonymous Coward | about 2 years ago | (#41744621)

This isn't data compression; it is, however, the same technique used for Par2 files on Usenet since, well, forever - not that new applications of existing technology shouldn't be lauded.

Re:Congratulations, Baldrick (0)

Anonymous Coward | about 2 years ago | (#41744679)

How is it not compression? It reduces the data size being transferred and is recoverable on the other end. Maybe I'm not an expert, but isn't that _exactly_ the definition of compression?

Re:Congratulations, Baldrick (4, Insightful)

TheSpoom (715771) | about 2 years ago | (#41744749)

It's an error-correction method that happens to have compression built-in.

Also, I really wish people would stop shitting on new technologies like they're some sort of oracle. This is awesome. Accept it.

Re:Congratulations, Baldrick (1)

gr8_phk (621180) | about 2 years ago | (#41745157)

I didn't see the built-in compression part. Error correction allows better utilization, but that's different from compression.

Re:Congratulations, Baldrick (5, Funny)

wonkey_monkey (2592601) | about 2 years ago | (#41745229)

This is awesome. Accept it.

I agree, but... this is Slashdot.

New technologies, but old technique. (3, Insightful)

Anonymous Coward | about 2 years ago | (#41745267)

My grandpa used to tell me stories of old men who pulled up into town offering medicinal cures for all sorts of diseases, old age, arthur-itis, bad vision, whatever.

"Medicine men" would orchestrate a "medicine show" with all sorts of claims and testimonies of the miracles of the precious snake oils they vended. Fistfuls of cash soon filled the air, and their cronies went through the crowd exchanging bottles of God-knows-what for the people's hard-earned cash. (Grandpa tells me the most common ingredients were assortments of oily vegetables with a smattering of creosote - the same stuff used to preserve telephone poles - thrown in for good measure, and maybe a splash of moonshine to give it a medicinal smell.)

These were simple old country folk he sold to - mostly farmers. They knew all about how to run a farm, and respected their doctor - and the charlatans of the medicine show knew full well how to monetize the people's faith in the technical jargon of chemistry.

A few days after the wagon left town, the people discovered they were no better off, and quite a bit poorer, after the medicine man left.

Woe to the medicine man if he visited a town a few weeks after another medicine man had pulled his scheme off.

I think we have seen so much stuff today whose sole purpose is monetizing bullshit that we are leery of accepting stuff we do not understand. I - for one - would love to understand Rossi's "eCat" cold fusion LENR device, but the shroud of secrecy around it, along with what YouTube video I have seen of it, has me believing it is just more snake oil, however much I would love to see something like this actually work.

As far as packet loss is concerned, if it's a problem, put each packet through ECC just as we do on CDs or DVDs - and match the ECC size to the packet size. To me that is obvious - and though I am not advanced in this field, I think I would be reasonable in claiming that is an obvious use of the technology.

Re:Congratulations, Baldrick (5, Informative)

Anonymous Coward | about 2 years ago | (#41745281)

There is no compression. It's RX error correction. This, seemingly, will reduce latency and increase effective throughput because you are now spending less time in an RX/TX retransmit cycle or a TX-timeout (TX-TO) retransmit cycle. As such, it allows more time for TCP window scaling to open up, even in the face of lost packets. In turn, a larger window means higher throughput with less protocol overhead.

I completely agree. It's awesome.

Re:Congratulations, Baldrick (4, Funny)

Anonymous Coward | about 2 years ago | (#41745289)

Also, I really wish people would stop shitting on new technologies like they're some sort of oracle. This is awesome. Accept it.

But if I keep shitting on new stuff like that, I can point back to my posts and say "LOOK! SEE? I SAID SO! I DID! ADMIT I WAS RIGHT FOR ONCE!!!!1!" if it fails. If it succeeds, though, I'm counting on nobody remembering my old posts so I can join in the praising of it later! It's a foolproof win-win for me, and that's all that matters!

Re:Congratulations, Baldrick (1)

Samantha Wright (1324923) | about 2 years ago | (#41744763)

Think of it like really fancy checksums. Most of the data we no longer have to transfer is redundant packets re-sent due to errors.

Re:Congratulations, Baldrick (5, Informative)

JDG1980 (2438906) | about 2 years ago | (#41744837)

How is it not compression? It reduces the data size being transferred and is recoverable on the other end.

No, it slightly increases the data size being transferred, thus allowing it to be recoverable on the other end if there are minor losses.

Here's an example of how it might work. Say you have a packet that holds 1024 bytes of payload data, plus a few extra for overhead. (Probably not realistic, but this is just to lay out the principles involved.) Now, you could send all 1024 bytes as straight data, but then if even 1 bit is wrong, the whole packet must be re-sent, adding latency. Instead, you send (say) only 896 bytes of actual data, and 128 bytes of recovery data. You break up the data into 64-byte blocks. Thus you have 14 blocks of actual data. The other 2 blocks consist of recovery data, generated by some sort of mathematical equation too complicated to describe here (and which frankly I don't understand myself). Here's the trick: on the receiving end, any 14 of the 16 blocks is enough to recover the whole 896-byte original datagram. Doesn't matter which 2 blocks are bad, as long as no more than 2 are bad, you can recover the whole thing.

This could be useful in an environment where packet loss is very high. A similar method is currently used when transmitting large binary files on Usenet, since many Usenet servers do not have 100% propagation and/or retention.
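The "any 14 of 16 blocks" property described above is characteristic of Reed-Solomon-style erasure codes (the same family par2 uses). A toy 4-of-6 version via polynomial evaluation over the prime field GF(257) - purely illustrative; the field, block count, and block sizes are arbitrary choices, not anything from the article:

```python
P = 257  # smallest prime above 255: every payload byte is a field element

def encode(blocks, n):
    """Per byte position, treat the k source bytes as coefficients of a
    degree-(k-1) polynomial and evaluate it at n distinct nonzero points.
    Any k of the n resulting shares determine the polynomial."""
    k = len(blocks)
    return [(x, [sum(blocks[i][pos] * pow(x, i, P) for i in range(k)) % P
                 for pos in range(len(blocks[0]))])
            for x in range(1, n + 1)]

def decode(shares, k):
    """Solve the (invertible) Vandermonde system mod P built from any k
    surviving shares, via Gaussian elimination, to recover the coefficients."""
    xs = [x for x, _ in shares[:k]]
    # augmented matrix: Vandermonde | received value columns
    M = [[pow(xs[r], i, P) for i in range(k)] + list(shares[r][1])
         for r in range(k)]
    for col in range(k):
        piv = next(r for r in range(col, k) if M[r][col])
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], P - 2, P)   # inverse via Fermat's little theorem
        M[col] = [v * inv % P for v in M[col]]
        for r in range(k):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(v - f * w) % P for v, w in zip(M[r], M[col])]
    return [bytes(M[i][k:]) for i in range(k)]

data = [b"some", b"data", b"in 4", b"blks"]                # k = 4 source blocks
shares = encode(data, 6)                                    # n = 6: survives any 2 losses
survivors = [shares[0], shares[2], shares[4], shares[5]]    # shares 1 and 3 were lost
recovered = decode(survivors, 4)
```

Exactly as in the parent's example, it doesn't matter *which* two shares are lost - any four of the six reconstruct the original blocks.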

Re:Congratulations, Baldrick (0)

Anonymous Coward | about 2 years ago | (#41745147)

so, it's RAID for file transfers?

Re:Congratulations, Baldrick (0)

Anonymous Coward | about 2 years ago | (#41745205)

It's called network coding

Re:Congratulations, Baldrick (5, Informative)

psmears (629712) | about 2 years ago | (#41744891)

How is it not compression? It reduces the data size being transferred and is recoverable on the other end. Maybe I'm not an expert, but isn't that _exactly_ the definition of compression?

It doesn't make it smaller - in fact, it will make the data larger. It gives improved performance because of the way TCP responds to dropped packets:

(1) Normally the receiver has to notice the dropped packet, notify the sender, and wait for the packet to be retransmitted - meaning that the data in question (and any data after it in the stream) is delayed by at least one round-trip. With this scheme, there is enough redundancy in the data that the receiver can reconstruct the missing data provided not too much is lost, improving the latency.

(2) TCP responds to packet loss by assuming that it is an indication of link congestion, and slowing down transmission. With wired links, this is a good assumption, and results in TCP using the full bandwidth of the link fairly smoothly. With wireless links, however, you can get loss due to random interference, and so TCP will often end up going slower than it needs to as a result. The error correction allows this to be avoided too.
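Point (2) is quantifiable: the classic Mathis et al. approximation says steady-state TCP throughput scales as (MSS/RTT)·sqrt(3/2)/sqrt(p), so hiding random wireless loss from TCP pays off as 1/sqrt(p). A rough sketch (the segment size, RTT, and loss rates are illustrative assumptions, not figures from the article):

```python
import math

def mathis_throughput(mss_bytes, rtt_s, loss):
    """Steady-state TCP throughput estimate (Mathis et al. approximation):
    rate ~ (MSS / RTT) * sqrt(3/2) / sqrt(p). Returns bits per second."""
    return (mss_bytes * 8 / rtt_s) * math.sqrt(1.5) / math.sqrt(loss)

# 1460-byte segments over a 60 ms RTT path (illustrative numbers)
raw = mathis_throughput(1460, 0.060, 0.02)      # 2% loss seen by TCP
coded = mathis_throughput(1460, 0.060, 0.0001)  # loss mostly hidden by the code
print(f"raw:   {raw / 1e6:.2f} Mbit/s")
print(f"coded: {coded / 1e6:.2f} Mbit/s")
```

Cutting the loss TCP perceives from 2% to 0.01% multiplies the estimate by sqrt(200), about 14x - the same ballpark as the 1 Mbps to 16 Mbps jump quoted in the summary.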

Re:Congratulations, Baldrick (0)

Anonymous Coward | about 2 years ago | (#41744627)

That's what I thought. If any old packet could be represented as an equation, then I'll just send you a quick text and you can solve for the 50MB file your ISP's email is blocking.

Re:Congratulations, Baldrick (5, Funny)

Anonymous Coward | about 2 years ago | (#41744853)

Just send a starting position in PI and a length.

Implementation is left as an exercise.

Re:Congratulations, Baldrick (3, Informative)

Anonymous Coward | about 2 years ago | (#41745031)

That works, actually!
The problem is that on average, the number of bits needed to express the starting position in pi is equal to the number of bits you are transmitting.

Re:Congratulations, Baldrick (2, Informative)

Anonymous Coward | about 2 years ago | (#41744637)

MIT, Caltech, Harvard, and other universities in Europe

I understand what they mean, but it's a silly way to write it. Maybe Slashdot should have someone who edits submissions.

Re:Congratulations, Baldrick (-1)

Anonymous Coward | about 2 years ago | (#41744661)

wrong

Re:Congratulations, Baldrick (-1)

Anonymous Coward | about 2 years ago | (#41744857)

Then tell us what wonders you have invented so far, smart ass...

PS: You're technically wrong as well.

Re:Congratulations, Baldrick (0)

HomelessInLaJolla (1026842) | about 2 years ago | (#41744871)

Right? I saw "replace packets with algebra" and I thought of the rotating spindle of the kernel, or even the rotating spindle of the core processor. Why do processors not single step very well in modern day? The packets have been replaced with algebra. The rotating spindles assist to feed the proper segments to the proper areas, actually querying for the result of any particular exact memory location is an ever-changing game of "guess what number I'm thinking of, lower, higher" which often changes while guessing a number.

Re:Congratulations, Baldrick (1)

bws111 (1216812) | about 2 years ago | (#41744979)

Seems to me it is as much data compression as RAID-5 is. Spread 2 packets of information across 3 packets. Lose any one of them and you are still OK. You didn't send any less data (in fact, you sent more) but you can tolerate lost packets now.

Note that I am NOT saying that is what they are doing, just giving an example where being able to lose data does not imply compression.
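The parent's 2-packets-across-3 scheme is single-parity XOR, just like RAID-5's parity stripe. A minimal sketch (illustrative only):

```python
def add_parity(p1, p2):
    """Spread 2 data packets across 3: the third is their bytewise XOR."""
    return [p1, p2, bytes(a ^ b for a, b in zip(p1, p2))]

def recover(trio):
    """Rebuild the single missing packet (None) by XORing the two survivors."""
    present = [p for p in trio if p is not None]
    rebuilt = bytes(a ^ b for a, b in zip(*present))
    return [p if p is not None else rebuilt for p in trio]

sent = add_parity(b"hello", b"world")
got = [sent[0], None, sent[2]]        # second packet lost in transit
restored = recover(got)
```

One extra packet is sent for every two, and any single loss in the group is recoverable without a retransmit; two losses in the same group still force one.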

Re:Congratulations, Baldrick (4, Informative)

fa2k (881632) | about 2 years ago | (#41745013)

Not data compression, more like ECC or forward error correction

RAID for packets (2)

ShanghaiBill (739463) | about 2 years ago | (#41745087)

You've invented data compression.

This is NOT data compression. It is more like "RAID for packets". If a packet is dropped, you can recreate it from the other packets instead of requesting it again.

Network coding. (5, Informative)

hdas (2759085) | about 2 years ago | (#41745177)

This is not simple data compression or error control coding. This is network coding, e.g., see http://en.wikipedia.org/wiki/Linear_network_coding [wikipedia.org] and how it can increase the capacity of the butterfly network over traditional packet routing schemes, counter to our intuition for flow networks/water pipes.

It is a fairly hot research topic that has been around for the last few years, but it is fairly revolutionary. It is still early days in terms of practical coding schemes and implementations.

Re:Congratulations, Baldrick (4, Insightful)

Bill_the_Engineer (772575) | about 2 years ago | (#41745273)

Actually, they came up with yet another method of Forward Error Correction (FEC). I haven't had time to read the article, and I look forward to seeing how it compares to Reed-Solomon or Reed-Muller codes (the Walsh-Hadamard code is used in CDMA).

This isn't exactly new, but I'm glad to see someone take the initiative to apply it to today's WiFi networks. The mentality as of late is that the speed is more than fast enough to deliver the data and the occasional resend. FEC is currently used where data rates are quite limited or latencies are such that retransmissions are prohibitively long.

Just like parity files (5, Insightful)

Ignorant Aardvark (632408) | about 2 years ago | (#41744585)

If you've ever used Usenet, and you've used parity files to recover missing segments of data, then you know exactly how this technique works.

Frankly, I'm surprised it took so long for someone to apply it to lossy network environments. It seems obvious in hindsight.

Re:Just like parity files (4, Interesting)

timeOday (582209) | about 2 years ago | (#41744793)

Forward error correction [wikipedia.org] is a pretty basic principle in encoding and has been used nearly since "the beginning" in the 1940s. FEC codes are used in several places up and down the protocol stack; WiMax uses Reed-Solomon [wikipedia.org] coding, for example. But I guess this implementation uses a better algorithm at a different level in the stack.

Re:Just like parity files (5, Informative)

Rene S. Hollan (1943) | about 2 years ago | (#41745081)

Wouldn't bet on it. Probably just reinvented the wheel.

I coded Reed-Solomon FECs for packet radio in the 1980s to combat picket-fencing for mobile data radios using Z80 CPUs.

Re:Just like parity files (2)

oodaloop (1229816) | about 2 years ago | (#41745121)

It seems obvious in hindsight.

So does everything. Or was that the joke?

Good but... (0, Offtopic)

Anonymous Coward | about 2 years ago | (#41744587)

Will it blend?

too specialized on a single protocol? (0)

Anonymous Coward | about 2 years ago | (#41744595)

So this only covers TCP packets? What about UDP, ICMP, NETBIOS and other IP based protocols?

Sounds like a solution that is way too specialized.

Re:too specialized on a single protocol? (0)

Anonymous Coward | about 2 years ago | (#41744717)

You clearly don't understand the difference between TCP and UDP. Hint: there's a reason they implemented this with TCP.

Re:too specialized on a single protocol? (1)

vlm (69642) | about 2 years ago | (#41745007)

The problem is probably that he missed that they "clump" packets together. If they hadn't done that, then this FEC scheme would work pretty well for UDP.

Any post-1980s telephone modem (needs trellis coding, from a 14.4 modem or better) already does FEC at layer 1... this is a scheme to reimplement that idea over wifi by doing it at layer 3, which seems superficially kinda dumb. A common rule of networking is to always push that kind of stuff as low in the OSI model as possible... I do believe that given an awful layer 1, a cruddy implementation at layer 3 will improve overall system throughput... it's still a bad design.

Re:too specialized on a single protocol? (0)

flowerp (512865) | about 2 years ago | (#41745253)

I did not miss anything here.

If I send a video stream as a sequence of UDP or RTP packets, clumping packets together to perform some kind of forward error correction is perfectly possible and reasonable.

When you invent some kind of solution to prevent packet loss on wireless links, it should apply to all kinds of IP traffic and not just one protocol.

Re:too specialized on a single protocol? (5, Funny)

broginator (1955750) | about 2 years ago | (#41744747)

You know what the best part about UDP jokes is? I don't care if you get it or not.

Re:too specialized on a single protocol? (3, Informative)

wierd_w (1375923) | about 2 years ago | (#41744817)

By definition, UDP sessions don't have delivery guarantees like TCP does. That's what TCP is for! It provides a mechanism for clients to ensure integrity and ordering of received packets. Netbios is an encapsulated protocol over IP, which uses TCP to ensure delivery. ICMP... really? Are you really asking for delivery correction on multi-packet ICMP? For real? You do realize that fragmented ICMP is a no-no, right, and that ICMP should be wholly contained in single-packet messages?

While it's true that it wouldn't work for UDP, the clearing of traffic normally consumed by TCP requests and responses would improve performance of UDP by making the medium more available, even if the coded TCP method has no direct implication for UDP. (It is up to the UDP session members to negotiate and handle lost datagrams, not the network stack. UDP is intended for custom user protocols that can't easily live inside a TCP/IP packet, like large video or streaming audio feeds. Normally these protocols can deal with loss, and the burden of ensuring 100% delivery comes at prohibitive performance cost, so UDP with acceptable loss is ideal.)

Awesome name (1, Interesting)

Anonymous Coward | about 2 years ago | (#41744607)

What the fuck were they thinking?

It's like if tomorrow I invent a new protocol for mobile phones and I call it GSM.

Or is this a fucking joke?

Re:Awesome name (0)

Anonymous Coward | about 2 years ago | (#41744713)

They named it "coded TCP", not TCP.

Re:Awesome name (2)

CanHasDIY (1672858) | about 2 years ago | (#41744815)

They named it "coded TCP", not TCP.

Yea, the phrasing was written in the most confusing way possible.

On second thought, I take that back - it could have been written in Esperanto.

Re:Awesome name (0)

Anonymous Coward | about 2 years ago | (#41744719)

It's a way of encoding TCP packets on the wire, not a whole new protocol. You just need routers converting to/from the coded form on segments of the network, or your network card driver encoding/decoding the packets.

Which makes it very useful as your existing clients (eg web browsers using HTTP) will work just fine completely transparently.

Re:Awesome name (1)

psmears (629712) | about 2 years ago | (#41744775)

What the fuck were they thinking?

It's like if tomorrow I invent a new protocol for mobile phones and I call it GSM.

Or is this a fucking joke?

Not a joke, but a badly-worded summary - the invention is called "coded TCP" (presumably because it's a version of TCP with error-correcting codes). I agree that the summary reads as if the protocol is called "TCP"...

Math! (2, Insightful)

Anonymous Coward | about 2 years ago | (#41744609)

To everyone who grew up saying that they never used math after high school, and it didn't have any meaningful use further than simple addition... you can kindly eat your own words now.

I'll just sit here and watch.

Re:Math! (0)

Anonymous Coward | about 2 years ago | (#41744667)

Whats a math?

Re:Math! (1)

Anonymous Coward | about 2 years ago | (#41744805)

Quiet, you bloo'y bri'ish git.

Re:Math! (1)

KernelMuncher (989766) | about 2 years ago | (#41744767)

Where's that politician who said people didn't need to study Algebra??

Re:Math! (2)

bws111 (1216812) | about 2 years ago | (#41745107)

Yes, clearly this is the first time math has ever been used in computing!

I'll just sit here and watch.

Watch what? All those people who didn't learn math happily using their computers with improved speed? Or do you think it is going to be up to the user to solve the equations to re-create the missing data?

Obvious (0)

Anonymous Coward | about 2 years ago | (#41744615)

Pretty sure this has been done before, but perhaps not specifically applied to wireless networks.

Introducing... (5, Funny)

sootman (158191) | about 2 years ago | (#41744625)

... the new Linksys 802.11x=(-b+/-sqrt(b^2-4ac))/2a router!

Re:Introducing... (0)

Anonymous Coward | about 2 years ago | (#41744875)

... the new Linksys 802.11x=(-b+/-sqrt(b^2-4ac))/2a router!

of course, without parentheses around the 2a, all of the packets get lost

Re:Introducing... (2)

mrbester (200927) | about 2 years ago | (#41745183)

Or they were imaginary to begin with.

forward encoding (1)

Anonymous Coward | about 2 years ago | (#41744631)

So they basically re-invented forward error correction [wikipedia.org].

Re:forward encoding (2)

jschultz410 (583092) | about 2 years ago | (#41744705)

I don't even think they re-invented FEC. Instead, they simply applied it to TCP in a transparent way. I'm sure they aren't the first people to do something along these lines.

Re:forward encoding (2)

AK Marc (707885) | about 2 years ago | (#41745277)

I've used Silver Peak before, and they use transparent packet FEC. These guys aren't the first, but they just moved the idea to new devices, an AP and wireless drivers. Not ground breaking, but may expand use to more people.

Re:forward encoding (0)

Anonymous Coward | about 2 years ago | (#41744709)

I'm really surprised that WiFi doesn't already do forward error correction. I suppose that in all but the most noisy environments, it must hurt more than it helps.

Re:forward encoding (1)

Relic of the Future (118669) | about 2 years ago | (#41744807)

Every environment is noisy now.

Re:forward encoding (1)

Anonymous Coward | about 2 years ago | (#41744811)

I'm really surprised that WiFi doesn't already do forward error correction. I suppose that in all but the most noisy environments, it must hurt more than it helps.

WiFi has always had forward error correction. I can't imagine transmitting digital data without it. The advance here, which is conveyed very poorly, is using forward error correction to recover from packet loss at the TCP layer.

Wireless (0)

Anonymous Coward | about 2 years ago | (#41744685)

Seems like they've solved the nasty issue with wireless: the near-constant packet loss really screws over TCP.
Losing a packet causes nasty delays while the error is bounced up and down the network/application stack. Wireless, due to its nature, nearly always loses packets.

Wonder what the overhead is.

Re:Wireless (1)

AK Marc (707885) | about 2 years ago | (#41745233)

I think it would be more efficient to use the AP as a TCP proxy, with TCP acceleration across it, to detect and correct for wireless errors without window size limitations and such.

Similar to existing data recovery? (1)

JDG1980 (2438906) | about 2 years ago | (#41744695)

With coded TCP, blocks of packets are clumped together and then transformed into algebraic equations (PDF) that describe the packets. If part of the message is lost, the receiver can solve the equation to derive the missing data.

It's been a while since I read the paper on exactly how it works, but isn't this basically the same principle as the par2 file recovery slices that have been used for Usenet binaries for quite some time?

Re:Similar to existing data recovery? (1)

AK Marc (707885) | about 2 years ago | (#41745251)

Yes, sounds like packet-PAR to me as well.

Error Correction Codes implementation? (4, Insightful)

Moskit (32486) | about 2 years ago | (#41744697)

The article is very light on details (except the "Libraries of Congress" things), but it looks like those guys implemented a kind of error correction code (ECC) to recover lost data through extra data found in other packets. This has been in use in various types of networks (optical, DSL, GSM) for years.

Of course it all comes down to how good the actual algorithm ("algebra") is in terms of overhead vs. extent/capability of error correction vs. introduced coding delay. There is always a trade-off, but a particular algorithm can take technology specifics (WiFi) into account and be optimized very well for a given task (whole packets lost, but not too often).

Journalists like to put BIG BUZZWORDS to well known things.

ECC is old (3, Interesting)

mcelrath (8027) | about 2 years ago | (#41744723)

So basically they're applying interleaved checksumming error correction (a la RAID5)? Good idea. What they didn't say is how much extra data was required to be sent by their solution. If they want to be able to recover 10% packet loss, presumably that means at least 10% more data sent, and there's still a failure point where the loss is greater than the checksum's size.

We've had these algorithms for decades. I've long been frustrated that checksums/ECC are not used at every single transmission and receiving point. Let's put this into the expansion bus, the memory bus (ECC), the filesystem (btrfs/zfs), and of course WiFi and wired networks. Unfortunately the race to the price floor resulted in everyone wanting to shave that 10% to make things cheaper. ECC was once commonly available in consumer hardware too; now you can only find it on ultra-specialized and ultra-pricey rackmount server hardware.

The 1980s assumption that the error rate is 1e-20, and so can be ignored, is demonstrably false in nearly every computer application today. We need to (re-)start designing error correction into everything. Hey, why not use adaptive error correction that increases the size of the checksum when the measured loss increases?
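The adaptive idea in that last sentence can be sketched with a little binomial arithmetic: measure the loss rate, then add just enough repair packets that a block survives with some target probability. A hypothetical sketch (my own function names, assuming independent packet erasures; nothing here comes from the article):

```python
from math import comb

def delivery_prob(k, r, loss):
    # P(block decodable) = P(at least k of the k+r packets arrive),
    # modeling each packet as independently lost with probability `loss`
    n = k + r
    return sum(comb(n, i) * (1 - loss)**i * loss**(n - i) for i in range(k, n + 1))

def pick_redundancy(k, measured_loss, target=0.999):
    # adaptive FEC: grow the repair-packet count until the block
    # survives the measured loss rate with the target probability
    r = 0
    while delivery_prob(k, r, measured_loss) < target:
        r += 1
    return r

# at 2% loss, a 16-packet block needs only a few repair packets
print(pick_redundancy(16, 0.02), "repair packets for 16 data packets at 2% loss")
```

This only sizes the redundancy; an actual erasure code (Reed-Solomon, fountain codes, etc.) is still needed to generate the repair packets.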

Re:ECC is old (0)

Anonymous Coward | about 2 years ago | (#41745019)

I've long been frustrated that checksums/ECC are not used at every single transmission and receiving point.

I'm glad they don't. ECC is not very good. It's fast and simple enough to put in a memory controller, but there are much better algorithms out there for telecommunications.

It's just FEC (5, Interesting)

Zarhan (415465) | about 2 years ago | (#41744737)

Forward error correction - algorithms like this are a dime a dozen.

The one thing that *does* surprise me is that nothing like it is built into the link layer of the 802.11 spec. The physical layer does whatever it can to garner signal from the noise, but there is no redundant data at higher layers at all.

All this has of course resulted in a gazillion papers on that very topic, hoping to see practical application soon.

Re:It's just FEC (5, Informative)

hdas (2759085) | about 2 years ago | (#41745065)

This is not plain FEC for point-to-point communication. This is based on network coding, e.g., see http://en.wikipedia.org/wiki/Linear_network_coding [wikipedia.org] and how it can increase the capacity in the butterfly network over traditional packet routing schemes, counter to our intuition for flow networks/water pipes.

Network coding has been a fairly hot research topic in information theory and coding theory over last few years. But it is fairly revolutionary in my opinion. It is still early days in terms of practical coding schemes and implementations.
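To make the network-coding idea concrete, here is a toy random linear code over GF(2), entirely my own simplified sketch (real systems, including the one in the paper, work over larger fields such as GF(256)): the sender transmits random XOR combinations of a block of packets, and the receiver recovers the block by Gaussian elimination once it has any K linearly independent combinations, no matter which particular transmissions were lost.

```python
import random

random.seed(1)  # deterministic demo

K = 3
data = [b"aaa", b"bbb", b"ccc"]  # block of K equal-length packets

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode():
    """One coded packet: a random nonzero GF(2) coefficient vector
    plus the XOR of the data packets it selects."""
    while True:
        coeffs = [random.randint(0, 1) for _ in range(K)]
        if any(coeffs):
            break
    payload = bytes(len(data[0]))
    for c, p in zip(coeffs, data):
        if c:
            payload = xor(payload, p)
    return coeffs, payload

def decode(eqs):
    """Gauss-Jordan elimination over GF(2).
    Returns the K data packets, or None if rank < K (need more packets)."""
    rows = [(list(c), p) for c, p in eqs]
    for col in range(K):
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if pivot is None:
            return None
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rc, rp = rows[r]
                pc, pp = rows[col]
                rows[r] = ([a ^ b for a, b in zip(rc, pc)], xor(rp, pp))
    return [rows[i][1] for i in range(K)]

# Receiver keeps collecting coded packets until the system is solvable;
# losing any particular transmission doesn't matter, only the count does.
eqs = []
out = None
while out is None:
    eqs.append(encode())
    out = decode(eqs)
assert out == data
```

The punchline is that no specific packet is irreplaceable: the receiver just needs enough independent combinations, so a lost packet never has to be identified and re-requested.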

"commercially licensed" (4, Interesting)

daniel.benoy (1810984) | about 2 years ago | (#41744835)

Man this is going to be so sweet in 25 years when the patents expire :D

I also hope they use this as an excuse to popularize SCTP.

Reinventing the wheel (1)

currently_awake (1248758) | about 2 years ago | (#41744899)

If your new error correction technology eliminates lost packets, and you lose 5% normally, then using this you gain 5% back not 10x. What they actually invented is data compression, and it's been around for decades.

Re:Reinventing the wheel (1)

wonkey_monkey (2592601) | about 2 years ago | (#41745159)

If your new error correction technology eliminates lost packets, and you lose 5% normally, then using this you gain 5% back not 10x. What they actually invented is data compression, and it's been around for decades.

It's not that simple, and it's not data compression.

http://hardware.slashdot.org/comments.pl?sid=3205219&cid=41744891 [slashdot.org]

Re:Reinventing the wheel (1)

bws111 (1216812) | about 2 years ago | (#41745279)

Um, no. First of all, data compression means you are sending less data, and they are not. They are sending more data in total, but can tolerate missing packets.

Second, no, 5% missing packets does not slow you down by only 5%. Worst case, the sender has to wait for a timeout to occur with no ack received before resending the packet - that is going to be a long time.
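A back-of-the-envelope way to see this is the well-known Mathis et al. approximation for TCP Reno's loss-limited throughput (my own illustration, with assumed MSS and RTT values; none of these numbers come from the article):

```python
import math

def mathis_throughput_bps(mss_bytes, rtt_s, loss):
    # Mathis et al. steady-state bound for TCP Reno:
    # rate <= (MSS / RTT) * C / sqrt(p), with C = sqrt(3/2)
    C = math.sqrt(3.0 / 2.0)
    return (mss_bytes * 8 / rtt_s) * C / math.sqrt(loss)

# assumed values: 1460-byte MSS, 50 ms RTT
for p in (0.02, 0.05):
    mbps = mathis_throughput_bps(1460, 0.05, p) / 1e6
    print(f"{p:.0%} loss -> roughly {mbps:.1f} Mbps ceiling")
```

With those assumptions the ceiling works out to roughly 2 Mbps at 2% loss and 1.3 Mbps at 5%, the same order of magnitude as the pre-coding speeds quoted in the summary, even though the raw link is far faster. That is the collapse coded TCP sidesteps.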

old concept new application (1)

Dan9999 (679463) | about 2 years ago | (#41744905)

While it uses complex algebraic expressions, this is somewhat like RAID or even the udpsender tool. Nice.

Thank you for your interest in this topic. (2, Interesting)

Sheetrock (152993) | about 2 years ago | (#41744907)

Efficiency in wireless communication is something of a purple elephant, mostly due to interference concerns that aren't at issue in wired Ethernet transactions. True, wired connections will have the occasional collision (though this is largely solved by modern algorithms and operating systems), but digital transmissions over an analog medium are difficult enough when they aren't running into each other in the air. And then you have other interference introduced by microwaves, whether from cell phones, microwave ovens, or sunspots. It's a very noisy environment!

The concept of using algebra is a unique step forward in this field. Most here would agree that if you're in a crowded cafe and trying to carry on a conversation, it's easier to shout "Pythagoras" than to talk about squares and triangles. But with computers it happens to be exactly the opposite, because they're designed to compute -- it's what they do and what they like to do. So feed one generalities and, often, it can come up with specifics, much like the Monty Hall Paradox.

The next step appears to be to move from algebraics to broad descriptions of the type of data you want to download. This is waiting on computers with a great deal more processing power and perhaps emergent AI, but there will come a time where instead of feeding a bunch of packets over a noisy channel the Internet will simply say to your computer "short film with 20-something actor wondering whether to marry now or enjoy life for a while longer" and your system will fill in the rest, completing the transfer mathematically. This is down the road a ways, but newer technology such as lossy compression for data is already available and potentially lucrative for those who are willing to think outside of the conventional box and try something with a few more holes in it.

Wait... (0)

Anonymous Coward | about 2 years ago | (#41744919)

Wait. Wait... wait.

I am going to use algebra for something in my life?

I just lost a bet 12 years ago.

it does use more spectrum (2, Insightful)

YesIAmAScript (886271) | about 2 years ago | (#41744929)

It's called forward error correction, and it requires sending additional redundant data so you can solve for what is missing. Sending that redundant data uses more spectrum for the same throughput, because you're sending more data. It may be worth it to avoid retransmissions when data is lost, but it definitely does use additional spectrum.

This is nothing new, your computer uses FEC on its storage (HDD or SSD) and maybe even on its RAM (if it has ECC RAM).

Finally, some use for High School Algebra! (0)

Pepebuho (167300) | about 2 years ago | (#41744945)

This makes me glad that I learned Algebra in high school. At last I can apply it!

doesn't add up (1)

illestov (945762) | about 2 years ago | (#41744947)

I've only read the abstract of the article, and this is probably a stupid question, but as I understand it this algorithm is designed to efficiently recover lost packets in the transmission. So when the article claims that "MIT found that campus WiFi (2% packet loss) jumped from 1Mbps to 16Mbps," shouldn't the increase in speed be only 2% and not 16x?

Re:doesn't add up (1)

Anonymous Coward | about 2 years ago | (#41745249)

The large effective bandwidth increase comes from being able to transmit continuously & reliably rather than waiting (and waiting, and waiting) for retransmissions due to packet loss.

we were being inefficient (4, Insightful)

OrangeTide (124937) | about 2 years ago | (#41744967)

The Shannon limit shows that there is only so much information that can fit in a channel.

Plenty of forward error correction codes (algebraic encodings) exist that let a channel approach the Shannon limit. Most of you have heard of Reed-Solomon or Hamming codes before.

NASA has used these since the 1970s to provide a more robust link, at the cost of using more of that link's bandwidth.

This is a little fancier than what I mentioned, but conceptually similar I imagine. The advantage of just using some existing forward error correction, perhaps combined with one of the popular compression algorithms, is that techniques that have been in use for the past 4 decades probably can't have enforceable patents placed on it.
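For the curious, the Hamming code mentioned above fits in a few lines of Python. This is the textbook Hamming(7,4) code (a historical illustration, not what coded TCP itself uses): 4 data bits become a 7-bit codeword, and any single flipped bit can be located and corrected.

```python
# Hamming(7,4), standard layout: positions 1..7 are p1 p2 d1 p3 d2 d3 d4,
# where parity bit p_i covers the positions whose index has bit i set.

def encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4  # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4  # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(c):
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based error position, 0 if clean
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1  # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
cw = encode(data)
cw[4] ^= 1                 # corrupt one bit in transit
assert decode(cw) == data  # receiver locates and fixes it
```

Reed-Solomon generalizes the same syndrome idea from single bits to whole symbols, which is how codes in that family can repair entire missing chunks rather than individual bit flips.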

Coded TCP (2)

XB-70 (812342) | about 2 years ago | (#41745001)

It's obvious that nothing new has been invented here. It's just that they have put one previous idea cleverly together with another. What they are trying to do is to create Coded TCP... IP!!

MIT, Caltech, Harvard, and other universities (0)

Anonymous Coward | about 2 years ago | (#41745017)

in Europe? Since when have they been in Europe?

I think the summary was meant to read

MIT, Caltech, Harvard, and several universities in Europe

:)

Moderately ridiculous sounding. (1)

Ancient_Hacker (751168) | about 2 years ago | (#41745025)

Ah yes, all we have to do is find an algebraic equation whose 1542 roots happen to match packet #1767 of the Angry Birds video. And whose coefficients take up less than 5%, 75 bytes lets say.

I thought up this compression scheme in 8th grade. Even then I knew there just had to be some basic problem with it.

Re:Moderately ridiculous sounding. (1)

wonkey_monkey (2592601) | about 2 years ago | (#41745195)

It's not compression. It's a fancy checksum, which means fewer packets have to be discarded as lost, meaning less time wasted waiting for resent packets and less chance of the network speed being negotiated down because of said lost packets.

If you divide every byte by 0 (1)

Ukab the Great (87152) | about 2 years ago | (#41745049)

Then the mathematical representation of the packet is effectively nothing, and even a computer with an unplugged network cable becomes infinitely fast.

This only works end to end (5, Interesting)

flowerp (512865) | about 2 years ago | (#41745055)

This is why I think this will not catch on easily. You can't just put a new router with their coding functionality into your home and expect this to work. It also needs support from the server hosting the content you want to access.

The way they designed their system is end to end, meaning that the internet server has to run a modified TCP stack and the client system (or, alternatively, your router in between) also has to understand this modified TCP dialect.

The chance of millions of Internet servers changing to a (likely) patented, proprietary version of TCP is ZERO.

This is why this idea will fail.

Christian

Re:This only works end to end (3, Informative)

flowerp (512865) | about 2 years ago | (#41745095)

And in case you want to read about the changes they made to TCP, here's the paper:
http://arxiv.org/pdf/0809.5022.pdf

The paper mentioned in the summary only does a performance evaluation for this TCP
dialect, but is light on details.

What the FEC? (2)

AK Marc (707885) | about 2 years ago | (#41745133)

Hasn't this already been solved before? There are tons of FEC implementations out there. Shoot, from their description, they just PAR'd them all together and transmitted the parity as an extra packet (or packets). They just used "algebra" to generate their PAR.

Cool that someone's using it. Dynamic parity/FEC tolerates loss much better than TCP's algorithm, but that doesn't make it new or novel. Silver Peak is a network acceleration device (like Riverbed) that integrates FEC, so this has been commercially available for years. Putting it on an AP (with a matched wireless client) isn't that interesting. I could have done it 2 years ago with Silver Peak VMs running on the AP and my computer, though not as practically.

Maybe I'm just jaded because FEC and parity are things I've been working with for 20+ years.

Re:What the FEC? (1)

hdas (2759085) | about 2 years ago | (#41745259)

Please see Network Coding: http://en.wikipedia.org/wiki/Linear_network_coding [wikipedia.org]. It is beyond simple FEC and can increase the capacity of networks over traditional packet routing schemes, counter to our intuition for flow networks/water pipes.

Cant wait! (1)

hesaigo999ca (786966) | about 2 years ago | (#41745301)

You mean we get to keep all the hardware as is, and not pay more to get more? I'm in. But alas, Canadian monopoly companies will see through this and implement some way of getting more money for the lost bandwidth they were able to charge us for... too bad it is not a credible business model.
