Hardware

Cheaper, More Powerful Alternative To FPGAs

holy_calamity writes "Technology Review takes a look at a competitor to FPGAs claimed to be significantly faster and cheaper. Startup Tabula recently picked up another $108m in funding and says its chips make it economical to ship products with reconfigurable hardware, enabling novel upgrade strategies that include hardware as well as software."
  • I do not think the FPGA works the way they think it works. For instance, an ARM processor is going to be faster and less power-hungry than an FPGA programmed as an ARM processor. It can't grow a Bluetooth radio or a GPS, etc. The FPGA is also on the same improvement cycle as any other part, so the newer phone will have the better FPGA. I am not saying having one in there is bad; it is nice for tweaks to the system, but it is not a magic bullet.
    • It can't grow Bluetooth, or GPS, yet.

      It can become the receiver for either, though, which means all you really need to do is set up multiple antennas that can be lengthened/shortened as needed.

      Now there's an experiment that needs more research: how do you design a "modular" antenna so that you can change which frequencies are received/transmitted, allowing for truly broad-spectrum operation?
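
      For a sense of the scale involved: the classic quarter-wave rule says an antenna element's length scales inversely with frequency, which is why one fixed element can't serve every radio. A minimal Python sketch, with illustrative numbers only:

        C = 299_792_458  # speed of light, m/s

        def quarter_wave_mm(freq_hz):
            """Length of a quarter-wavelength element, in millimetres."""
            return C / (4 * freq_hz) * 1000

        print(quarter_wave_mm(2.402e9))    # Bluetooth (~2.4 GHz): ~31 mm
        print(quarter_wave_mm(1.57542e9))  # GPS L1: ~48 mm
        print(quarter_wave_mm(100e6))      # FM broadcast: ~750 mm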

      • Re: (Score:3, Informative)

        by GlibOne ( 1203032 )
        See Fractal antenna [wikipedia.org]; no multiple antennas needed. The receiver part is a different story.
        It can't grow Bluetooth, or GPS, yet.

        Well, there are devices that can, which are basically just slightly "better" PSoCs.

        That's not the point. It doesn't seem like FPGAs/PSoCs could ever be as cheap as a dedicated solution. Even if there is a breakthrough in fab that brings FPGAs closer to their dedicated counterparts, those efficiencies should also apply to the dedicated process.

        Basically, FPGAs and PSoCs always involve some extra overhead for the flexibility. The overhead may diminish more and more, but as things scale up those small overheads...

    • by blair1q ( 305137 )

      No, you're not doing any funky RF on-chip, unless someone is making specialized FPGAs with the RF goodies baked-in.

      FPGAs are wizard for dev cycles, though, if your changes are only in the logical realm. No need to turn new boards; just reprogram the FPGA and get on with your life.

      This guy's real problem is that it's going to be as little as 1/N as fast as the N-times-bigger circuit he's replacing.

  • Mmmm.... 40+ years after going out of style as "Hopelessly Obsolete", Delay Lines return to the cutting edge.

    • by blair1q ( 305137 )

      Erm, no. This is kind of the opposite of delay lines. It's more like a pipeline, where each segment of the pipe is actually the same piece of silicon real-estate.

      Your data goes through the entire pipeline, getting munged at each step just as it would in a real pipeline, and comes out just how you want it.

      Problem is, with a pipeline I can have a different datum in each segment. With this, one datum has to go through all the steps before I can feed another datum into the pipeline.

      The pipeline gives me an N:1 speedup...
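
      To make that trade concrete, here is the arithmetic behind the speedup claim as a minimal Python sketch; the clock numbers are assumptions for illustration, not Tabula's specs. A folded design has to clock N times faster just to match the throughput of a conventional N-stage pipeline:

        n_stages   = 8
        clk_pipe   = 200e6   # Hz, assumed clock of a conventional pipelined fabric
        clk_folded = 1.6e9   # Hz, assumed clock of a folded fabric (8x faster)

        pipe_throughput   = clk_pipe               # pipeline: 1 result per cycle
        folded_throughput = clk_folded / n_stages  # folded: 1 result per N cycles

        print(pipe_throughput == folded_throughput)  # True: same rate, ~1/N the area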

  • I got kicked out of school with an EE degree, went into the software business (yeah, I know), and never looked back.

    Do they ship products, other than dev kits, with FPGAs?

    • by blair1q ( 305137 ) on Friday April 15, 2011 @07:48PM (#35835374) Journal

      Yup. Especially write-once FPGAs.

      Sometimes making an FPGA is cheaper than building an equivalent board. You can get preprogrammed cells for entire microprocessors now. And lots of other library cells. Build an entire custom computer into a single package, if you want.

      It's not dirt-cheap, but it's easy, and saves an assload on inventory.

      • by JamesP ( 688957 )

        I didn't know write-once FPGAs were that popular.

        Most products I've seen use either a serial EEPROM, flash, or code (up)loaded from another processor on the board.

        • They're not... as on most things, blair1q is an idiot.

        • If your only tool is an x86 motherboard, I guess everything looks like a flash BIOS?

          In other words, you might want to stick your nose in some medical equipment before you judge the popularity of FPGAs...
    • by Anonymous Coward

      Sure, but only some market segments.

      For example, FPGAs are never as power- or area-efficient as dedicated silicon gates, and area equates to manufacturing cost. This tends to mean that for volume consumer apps, a dedicated ASIC will win out in the end.

      The flip side is that FPGAs are much quicker and cheaper to develop for than an ASIC, so for specialist applications or small markets, FPGAs win out.

      These days FPGAs are starting to include more and more hardened blocks, such as PCIe interfaces, flash controllers, a...

    • by mrmeval ( 662166 ) <.moc.oohay. .ta. .lavemcj.> on Friday April 15, 2011 @07:59PM (#35835486) Journal

      All of our products have some sort of reprogrammable logic: PLDs, GALs, EPLDs, CPLDs, FPGAs, and some the designers should have been shot for making.

      Without it we would not be able to design a product and get it to market with any hope of turning a profit. It keeps engineering costs low, allows us to make changes for regulatory requirements, and allows end users to load new firmware and fix problems in the field.

      Some of our products are niche and low volume and some of our products are very high volume and we're growing.

    • by erice ( 13380 ) on Friday April 15, 2011 @08:01PM (#35835498) Homepage

      I got kicked out of school with an EE degree, went into the software business (yeah, I know), and never looked back.

      Do they ship products, other than dev kits, with FPGAs?

      All the time. They tend to be low-volume items with high unit cost. Cisco has been a big consumer of FPGAs forever. It's not even all that uncommon to find FPGAs in consumer electronics, though they tend to be very small parts used as glue logic.

    • Many I/O cards use FPGAs.

      I have seen them in interferometer controllers, motor servo boards, fast multi-I/O cards, etc. Most of the stuff is low-quantity, expensive gear ($5000+ per item), so it seems like it's easier to put in an FPGA than to create a new chip for a few hundred or thousand copies...

    • by the_raptor ( 652941 ) on Friday April 15, 2011 @09:53PM (#35836236)

      Plenty of tools like oscilloscopes now use FPGAs. Low-end FPGAs are a couple of dollars tops, which is cheaper than the purchase plus production costs for a bunch of discrete chips.

      A lot of hobbyist producers design with those low-end FPGAs because it can be cheaper to use one FPGA across a whole bunch of products rather than stocking equivalent discrete ICs (i.e., you can buy an FPGA in 1000 quantities and use it across 10 products).

      Of course this new product is just a cheaper FPGA, and their marketing claims are bullshit. Consumer electronics producers do not want upgradeable or repairable electronics. They want to be in the "fashion" business like Apple and sell new "upgrades" every year.

    • Lots of dedicated video encoder/compressor boxes on the market, I haven't seen one yet that wasn't FPGA based.

    • I have an EMU Proteus 2000 midi synthesizer that happens to have an FPGA in it. (I can't remember off the top of my head if it was a Xilinx Spartan or an Altera Cyclone, but I think it was one of those.)
  • With proper implementation, you could build chips that essentially are functional programs with this, and swap between programs as required. Fans of Haskell would likely realize some interesting benefits.
    • by gtall ( 79522 )

      You can do that with FPGAs; just go through a defunctionalization step, which spits out an FSM. See Bill Harrison's work at U. of Missouri.

    • Xilinx already supports this. You can load multiple different .bit files (the fully compiled FPGA file format) into flash and then just reprogram the FPGA as needed on the fly. Also, FPGAs are great for general glue logic and for massive numbers of individual I/O connections. They allow you very low-level control over signals that you just can't get from a microprocessor. They will definitely not replace a microprocessor for general program flow, but they give you much tighter control over signals and signal timing.
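
      As a rough sketch of what that multi-bitstream flow can look like from the host side (Python; every helper name here, flash_read, cfg_reset, cfg_write_byte, cfg_done, is a hypothetical placeholder rather than a vendor API):

        BITSTREAMS = {"dsp": 0x000000, "glue": 0x200000}  # assumed flash offsets
        IMAGE_SIZE = 0x180000                             # bytes per image, assumed

        def reconfigure(role, flash_read, cfg_reset, cfg_write_byte, cfg_done):
            """Clock one precompiled .bit image from flash into the FPGA."""
            cfg_reset()                              # pulse the config-reset pin
            image = flash_read(BITSTREAMS[role], IMAGE_SIZE)
            for byte in image:                       # bit-bang/SPI the image in
                cfg_write_byte(byte)
            if not cfg_done():                       # DONE pin high = configured
                raise RuntimeError("FPGA did not configure")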
  • by Anonymous Coward on Friday April 15, 2011 @07:47PM (#35835362)

    The real problem with FPGAs is the painfully byzantine tools you have to use to deal with them. The chips themselves are fine.

    There is a lot of room for disruption in the programmable logic tools industry. If this company is smart, they will focus on workflow and toolchain innovations, rather than becoming too distracted by shiny silicon baubles. Shorten the edit-simulate-synthesize-test cycle and you will make a lot of people happy.

    "FPGAs are very expensive because they are large pieces of silicon," says Teig, "and silicon [wafer] costs roughly $1 billion an acre."

    Then again, you should never argue with a man who buys his ink by the gallon, or his wafers by the acre.

    • by artor3 ( 1344997 )

      "FPGAs are very expensive because they are large pieces of silicon," says Teig, "and silicon [wafer] costs roughly $1 billion an acre."

      Then again, you should never argue with a man who buys his ink by the gallon, or his wafers by the acre.

      Especially when he's so incredibly wrong. Silicon costs more like $10 million per acre right now (I had to look up the conversion; it's a kinda weird unit). The reason FPGAs are expensive is all the crap you need to implant and deposit and remove in order to turn that silicon into a chip. And then you have the added cost of testing every single transistor in the chip to make sure that no little dust particle floated by and ruined it. That's where the size really hurts you, because one t...

      • I think it's closer to $100 million per acre, not $10 million, but what do I know? (That's based on ~$8000 per 12-inch wafer, which is an estimate I saw a year or two ago. Of course, maybe this guy is only getting 10% yield, in which case he's essentially right - and there's plenty of chips that are that bad, at least early in the product and process lifetime...)
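
        For anyone who wants to redo that conversion, here is the back-of-the-envelope version in Python, using the ~$8,000-per-300mm-wafer figure assumed above (it lands at roughly half a billion dollars per acre, between the two estimates in this thread):

          import math

          wafer_cost = 8000.0                    # USD per 300 mm wafer, assumed
          wafer_area = math.pi * 0.150**2        # m^2, ~0.0707 for a 300 mm wafer
          acre       = 4046.86                   # m^2 per acre

          print(wafer_cost * acre / wafer_area)  # ~4.6e8, i.e. ~$0.5B per acre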

    • NeoCad + DIY FPGA (Score:4, Interesting)

      by femto ( 459605 ) on Friday April 15, 2011 @09:59PM (#35836264) Homepage

      The disruption you mention almost happened in the early '90s. NeoCAD [findarticles.com] produced a complete competing toolchain for Xilinx FPGAs, including the place and route, for the then state-of-the-art 4000 series. Their software was better than Xilinx's, including things like a graphical layout editor. Xilinx was having none of it and bought NeoCAD. Quite a few NeoCAD features made it into the Xilinx software, eventually. Soon after that, Xilinx started publishing less information on their FPGAs' interconnect networks, and there has never been another attempt at writing such software.

      Personally, I think writing a clone of the Xilinx software, today, is the wrong thing to do. It would be less effort to design and manufacture an "open source" FPGA, and write the necessary software from scratch, than to reverse engineer Xilinx's place and route.

      • by boombox ( 188790 )

        I have been thinking about the same thing lately: creating an open-source FPGA with full tools, including synthesis, place and route, a low-level FPGA editor, and more.

        I was thinking, as a start, of making something similar to the old Spartan-1-type FPGA in a newer process. The targeted market would be in between CPLDs and the smallish FPGAs (100 to 1,000 LUTs).

        The reason for this is that CPLDs are simple to integrate on a board (no external non-volatile storage needed, only one supply voltage), but for small FPGAs you already...

    • I worked for a company that ignored the tool problem. You had to perform routing and timing in your head and with a CAD GUI. Not pretty.
    • Check out: http://www.maxeler.com/ [maxeler.com]

      They've been getting some pretty crazy results. If I understand correctly, they've got a completely innovative workflow, toolchain, and abstraction. I think they've even created their own simulation tools that give you cycle-accurate results 1000x faster than ModelSim.

  • No mention of how easy these things are to program. Timing constraints will be very tight, and what happens if clock skew carries signals across folds? Any success depends on how well the accompanying tools can implement the standard synthesis flow to support multiple levels.
    • by pavon ( 30274 )

      what happens if clock skew carries signals across folds?

      I assumed that data was registered between fold switches.

    • by Anonymous Coward

      No, they had to do their own synthesis, since the synthesis step has to be combined with the time-domain place/route to work. This is one of the reasons they have burned through so much money. We've not been able to establish a single application that the Tabula technology improves. This is all smoke and mirrors.

      And there are no FFs in the technology -- only latches. We have no idea how clock-domain crossing is accomplished.

  • by Anonymous Coward

    To me, after reading the papers, it looks like they reinvented (or reimplemented) Transputer architecture, but in a single chip, and with a different API.

    • by Anonymous Coward

      To me, after reading the papers, it looks like they reinvented (or reimplemented) Transputer architecture, but in a single chip, and with a different API.

      Uh, no. I'm sorry, but you probably have no clue what FPGAs are if you think that. Possibly transputers too.

      There is no "API". FPGAs aren't devices for executing software. They're giant arrays of lookup tables used to implement logic functions. If you have a 16-entry LUT indexed by 4 bits, you can program the LUT to implement any possible logic function on 4 input bits. If you've ever taken a digital logic course, they're nothing more than truth tables. These LUTs and a bunch of other hardware buildi

      • by makomk ( 752139 )

        Transputers were general purpose CPUs designed to be tied together in some kind of network (I don't recall the details). They looked nothing like FPGAs, and didn't do this kind of time-sliced trick.

        I seem to recall that one of the contemporary transputer-like designs used "CPUs" not much more intelligent than a LUT, though...

  • by Alotau ( 714890 ) on Friday April 15, 2011 @08:33PM (#35835744)
    For those of you who missed TFA, here is a juicy tidbit:

    Teig estimates that the footprint of a Tabula chip is less than a third of an equivalent FPGA, making it five times cheaper to make, while providing more than double the density of logic and roughly four times the performance.

    That is 6X more impressive than any other use of factors in a sentence... ever.

  • by Anonymous Coward on Friday April 15, 2011 @08:55PM (#35835880)

    The guy behind Tabula is behind a number of "failwins" in the electronics industry: a fail in that the technology ended up being pointless and rejected by the market, but a win in that his companies were all bought out by suckers for quite a bit of $$$$.

    Two examples:

    - The X Initiative (use 45-degree routing on chips): look at http://www.xinitiative.org now, 100% dead, and look at all the wonderful claims he (and his sucker followers) made, preserved on archive.org.

    - Simplex Solutions: built a large number of poor-quality EDA tools (poor because they never got adopted, and so never got the real bugs worked out and the features required for real work), but they looked very shiny, so they were sold to Cadence for a fairly large sum of money (relative to the low dev cost). All but one of the Simplex tools have been EOL'd by Cadence, and the survivor (now called Cadence QRC) will be thrown out just as soon as anyone cares enough to replace it with something better.

    You can bet that Tabula, if it succeeds at all, will be another failwin. It will be bought by Xilinx or Altera (the current FPGA duopoly), a couple of minor good ideas will be incorporated into future products, and the overwhelming majority of the Tabula technology will be promptly forgotten. ...Why? I hear you ask.

    The reason is simple: Steve Teig has realized that "spamming" technology really does work (for him). He has figured out that he can leave it up to much larger corporations to figure out, in their own sweet time, why 99% of his ideas sound great but are actually pointless, in the months and years after they are fooled into acquiring his techno-spam through an acquisition.

    From one of his many online bios [c-eda.org]:

    He holds over 220 patents. In 2002, he broke Thomas Edison’s record for the number of patents filed by an individual in a single year.

    Enough said.

    • Re: (Score:3, Interesting)

      by Anonymous Coward
      I am working in the EDA business, and based on the biography you provided, Steve Teig looks far from being a fraud.
      You point at the 'large number of poor-quality EDA tools' at Simplex (which I never used), but you certainly know that EVERY EDA tool (from Synopsys/Mentor/Cadence) has its LARGE set of bugs. Why? Because quality is driven by ASIC design companies, and those companies do not understand that putting pressure on tool prices hurts overall quality. Anyway.

      "in 1982, he invented compiled-
    • by meza ( 414214 )

      Wow, thanks for bringing my attention to such an interesting and fascinating guy. From what you write he sounds like a truly brilliant engineer and businessman. Of course, only a tiny proportion of all products make the dominating and long-lasting impact on the market that you seem to demand; think of the light bulb, the printing press, or the integrated circuit. However, an even tinier fraction of inventions ever make it to market or generate any revenue at all! This guy has been able to do that over and over aga...

  • Better writeup (Score:4, Informative)

    by ayvee ( 1125639 ) on Friday April 15, 2011 @09:12PM (#35836012)
    Link [eejournal.com] to a better writeup, one that doesn't attempt strained architectural analogies (ignore the first paragraph or three, but do look at the comments).
  • I remember Starbridge [slashdot.org] and their audacious claims, and this company sounds like it's trying to accomplish something similar, but isn't being as audacious with its claims.

    I do look forward to software reconfigurable hardware, but that does mean it brings a whole new meaning to the word "bricking."

  • by Macman408 ( 1308925 ) on Friday April 15, 2011 @10:45PM (#35836488)

    ...but it has fast context switching built in. And you can't control when the contexts switch; they always go in order (as they should, since they're all statically assigned and are different parts of a single problem, rather than separate problems).

    For those that don't know how FPGAs work, here's a basic crash course: they have lots of blocks, each one has a look-up table (say a 4-LUT; 4 inputs, 1 output). The LUT is basically a "read-only" RAM with 4 address bits (so 16 addressable locations), and one data bit. The RAM can be rewritten (this is what is done when they program an FPGA), but it's fairly slow. Tabula changes it up a bit so that each addressable location is 8 bits instead of 1 bit. Since transistors are basically free on an FPGA (they're wire-dominated), this doesn't cost much, and it means that they can time-share pieces of silicon for different purposes without the penalty of reprogramming the chip. Then, each cycle, it'll pick a different one of the 8 bits (though the address, or inputs to the 4-LUT, may be changing at the same time).
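
    Here is that time-shared LUT idea as a minimal Python sketch (illustrative only; the class and field names are made up, not Tabula's actual architecture). Each of the 16 locations stores 8 bits instead of 1, and a free-running 3-bit counter selects which bit, i.e. which logic function, the cell implements on a given subcycle:

      class FoldedLut4:
          def __init__(self, tables):           # tables: 8 truth tables of 16 bits
              assert len(tables) == 8 and all(len(t) == 16 for t in tables)
              self.tables = tables
              self.fold = 0                     # 3-bit subcycle ("fold") counter

          def tick(self, a, b, c, d):
              addr = (a << 3) | (b << 2) | (c << 1) | d
              out = self.tables[self.fold][addr]
              self.fold = (self.fold + 1) % 8   # contexts always advance in order
              return out

      # The same cell acts as an AND gate on fold 0 and an XOR gate on fold 1:
      and_t = [int(i == 0b1111) for i in range(16)]
      xor_t = [bin(i).count("1") & 1 for i in range(16)]
      cell = FoldedLut4([and_t, xor_t] + [[0] * 16 for _ in range(6)])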

    It's a fairly straightforward idea, though there's a fair amount of complexity added to the design tools.

    However, it's not free. You now have lots of high-speed logic, which is probably using tons of power; it's switching frequently, which uses tons more power; and even when it's not switching, it's probably fairly leaky, using even more power. Effectively, you have a 1.6 GHz chip, but to you it seems like it's only running at 200 MHz; on the other hand, it can do ~8 times more processing per unit of silicon area. You might also think of it as similar to the Pentium 4's integer units: they ran at twice the clock speed of the rest of the chip, so it seemed like there were twice as many of them (a single IU could do an add in the first half of a core clock cycle and a subtract in the second, computing two instructions per cycle).

    So this chip is basically trading latency for computing power. The more operations you need to do, the slower it will run, because it'll take more of their folds to implement your logic.

    • So it does nothing infinitely fast?

      Sounds like a fair tradeoff.

    • by davFr ( 679391 )
      200 MHz is still a very high frequency for logic mapped onto an FPGA; user-compiled logic does not reach this frequency on, let's say, a Xilinx Virtex-5.

      If Tabula can provide the equivalent of 8 FPGAs in a single circuit, it will be a huge win for system designers. Multi-FPGA systems have reduced performance because signals must be propagated between FPGAs via a limited number of I/O pads. A PCB integrating a single FPGA (rather than 8 FPGAs) would be cheaper to produce and more compact, while providing better...
    • by epine ( 68316 )

      That's not a bad overview, but you need to apply some intelligence to the power consumption figures.

      A|G || B|H
      -+- || -+-
      C|E || D|F

      • Now back to the original program. Some year, for April Fools' Day, /. needs to randomize Preview/Submit.

        That's not a bad overview, but you need to apply some intelligence to the power consumption figures.


        A|G || B|H
        -+- || -+-
        C|E || D|F

        My first instinct is to set up the eight-bit shift register as a pair of four-element squares: one clocking on the rising edge, the other on the falling edge. A mux at the bottom selects the left/right square on alternate cycles.

        Your clock is 800 MHz instead of 1.6 GHz. The time...

        • The only FPGA I've used in my own design was a Spartan DSP. Heinlein's magic box isn't going to do you much good implementing 18x18 Wallace trees or adding conventional compute cores.

          It's optimized for a very high LUT/pin ratio, in a small, hot package, discounting macro blocks.

          I was more enthusiastic about the mixed-signal ASIC technology [triadsemi.com] from Triad, but on my initial inquiry they haven't lowered the cost of full-custom analog ASICs at the low end. What they seem to offer is fairly expensive, but far less...

          • It's optimized for a very high LUT/pin ratio, in a small, hot package, discounting macro blocks.

            And that's a very good point. I've seen many, many designs that are far more I/O-limited than logic-limited. I actually did a design once that used a very simple, cheap PLD; out of roughly 100 I/O pins, I think we had 4 or 5 unused, but we were only using something like 7% of the available logic. (Granted, we tried to maximize the I/Os that were in use; a couple might not have actually been tied to any logic, but we had them connected so that if we decided we could build logic off of them later, we didn't...

  • What kind of process are they using? I imagine it will be a 40nm process or some similar feature size. What if we all just concentrated on making cheap short-run fabrication machines, maybe something that could do a 150nm feature size on pre-sliced wafers? That way I could quickly print something up in-house. Maybe my design could have some reprogrammability, but I can't see that being the biggest use for FPGAs. Even if post-shipping reprogrammability is feasible, I doubt many FPGA designs actually use i...
  • "The power consumption if these devices is relatively high, and likely too much for a device like a phone" Dead giveaway that this is a marketing story, not a real proven technological renovation.

"Look! There! Evil!.. pure and simple, total evil from the Eighth Dimension!" -- Buckaroo Banzai

Working...