
Stretch Announces Chip That Rewires Itself On The Fly

tigre writes "CNET News reports on a chip startup called Stretch which produces the S5000, a RISC processor with electronically programmable hardware so that it can add to its instruction set as it deems necessary. Thus it can re-configure itself to behave like a DSP, or a (digital) ASIC, and perform the equivalent of hundreds of instructions in one cycle. Great way to bridge the gap between general-purpose computing and ASICs."
This discussion has been archived. No new comments can be posted.

  • by KDN ( 3283 ) on Monday April 26, 2004 @03:09PM (#8975111)
    Can you imagine the virus you could write if you could change the instruction set of the cpu?
    • Can you imagine the virus you could write if you could change the instruction set of the cpu?

      Uh, no.
    • by NanoGator ( 522640 ) on Monday April 26, 2004 @03:21PM (#8975237) Homepage Journal
      "Can you imagine the virus you could write if you could change the instruction set of the cpu?"

      Forgive my ignorance, but why would this be any different than the virus you can write with the general purpose CPUs we have today? You could make the machine unreliable, but that wouldn't make for an effective virus distributing machine.
    • by CedgeS ( 159076 )
      Wow! The virus could execute arbitrary code! Just like if it could choose which of the existing instructions were executed by another processor. The core part of your virus could run faster, maybe in just one clock cycle!
      • How do you detect a virus that has control of the underlying hardware though...
        • Re:Insightful?! (Score:5, Interesting)

          by CedgeS ( 159076 ) on Monday April 26, 2004 @03:33PM (#8975380) Homepage Journal
          Easy - say the extra instructions are supposed to perform a matrix convolution. Call extra instruction 1 with some random matrix. If it doesn't calculate the same thing as a slow version run in the regular RISC part, you know extra instruction 1 has in some way failed and needs to be reprogrammed. Your virus scanner, OS, etc. should never use special instructions and should always run in the regular RISC part. (A quick C sketch of this cross-check follows below.)

          I highly doubt anyone is planning on making PCs with these. They are designed to be the processor in something like a data-logging/control system, surveillance video compression, etc. Your system will probably need no virus detection any more specific than the other, more general regression and test suites it will need during operation.
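
          A minimal sketch of that cross-check in C, using the matrix-convolution example above (the function-pointer argument stands in for however a custom instruction is actually invoked, which the article doesn't describe):

              #include <stdlib.h>

              /* Trusted reference, always run in the plain RISC pipeline. */
              static int ref_conv3x3(const int m[3][3], const int k[3][3])
              {
                  int acc = 0;
                  for (int i = 0; i < 3; i++)
                      for (int j = 0; j < 3; j++)
                          acc += m[i][j] * k[i][j];
                  return acc;
              }

              /* Cross-check the extension instruction against the reference
               * using a random probe input; returns 1 if they still agree. */
              int verify_ext(int (*ext)(const int [3][3], const int [3][3]))
              {
                  int m[3][3], k[3][3];
                  for (int i = 0; i < 3; i++)
                      for (int j = 0; j < 3; j++) {
                          m[i][j] = rand() % 256;
                          k[i][j] = rand() % 256;
                      }
                  return ext(m, k) == ref_conv3x3(m, k);
              }

          Running a probe like this before trusting any result keeps the detection logic itself in the fixed RISC core, which is the point of the scheme above.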

        • stop the madness. (Score:3, Insightful)

          by twitter ( 104583 )
          How do you detect a virus that has control of the underlying hardware though...

          The same way you detect a virus on any machine that has been compromised: with another machine and/or a thorough understanding of normal operation and running processes. Nothing new here. Evaluate the harm done by a potential compromise and take steps accordingly.

          There is no practical difference between a hardware and a software compromise and the remedy is the same. Indeed, for critical purposes, there's little difference b

    • You could partially solve this problem by running as a restricted user all the time, so that you couldn't change the instruction set.
    • I wouldn't bet on that.

      A minor change in the instruction set would likely render the OS dysfunctional - and while that would certainly get attention - it would not propagate very well.

      There is a math to viruses which requires them not to kill their hosts, and to do as little damage as they can manage. Damaging viruses get high priority on fix lists and would get shut down more quickly than less harmful viruses.

      I think a CPU change virus would be a rather self-defeating proposition.
  • by LostCluster ( 625375 ) * on Monday April 26, 2004 @03:10PM (#8975114)
    If this doesn't represent the death of the megahertz as a processor-benchmark standard, I don't know what will...

    Effective application speed was never based on a cycle count alone, because different processors can have better instruction sets for the given application. The main breakthrough here is that this chip leaves "user-definable" space in its instruction set so they can re-optimize the instruction set on the fly. Whatever you're running, its most commonly used functions can almost slide from being code to being "on the chip" and that's sure to speed up the experienced speed.

    Yeah, I know it's a /. cliche, but... imagine a cluster of these!
    • errmm... (Score:2, Funny)

      by torpor ( 458 )
      ... earth to slashdoid,

      being code to being "on the chip" and that's sure to speed up the experienced speed.


      first, where exactly is code run, if it isn't 'on a chip', and second, what? speed up the experienced speed?

      you mean, as opposed to something like 'pretended speed', which is what i imagine you were using to measure your rapid desire to let your undoubtedly 'speedy' fingers get through your slashdot post without thinking ...

      'experienced speed' indeed...
      • Well... (Score:4, Informative)

        by Ayanami Rei ( 621112 ) * <rayanami&gmail,com> on Monday April 26, 2004 @03:27PM (#8975309) Journal
        This is basically an FPGA married to a RISC processor. So if you have a bit of RISC code that can be simulated using the FPGA portion, and you have enough spare cells to add it, and it takes 10 clock cycles for the FPGA "user instruction" to dispatch, but it takes 200 to do it outright in the original RISC instructions, then you're experiencing a 20 to 1 speed increase for that bit. You speed up the function without overclocking. Actually what you've done is "trade off".

        He could have posted more clearly if he weren't trying for first post.
      • first, where exactly is code run, if it isn't 'on a chip', and second, what? speed up the experienced speed?

        When a function is defined in code, you have to use multiple processor cycles to complete the function. However, when the function is "on the chip", that entire function can be completed in just one assembly-level call to the processor.

        "Experienced speed" is of course a pseudo-benchmark because it can't be standardized, and its components highly specialized. It's how fast you can complete a set of
        • Re:errmm... (Score:2, Informative)

          by fitten ( 521191 )
          When a function is defined in code, you have to use multiple processor cycles to complete the function. However, when the function is "on the chip", that entire function can be completed in just one assembly-level call to the processor.

          But you cannot say that one "assembly level call" to the processor will take (even) fewer "processor cycles" to complete. Hint: very few instructions in even today's CPUs take a single clock cycle to execute; most take several. It's just that with pipelining, many instructions h
    • by Stripe7 ( 571267 ) on Monday April 26, 2004 @03:25PM (#8975293)
      This looks interesting, though at this generation it looks to be for dedicated applications. You code for your particular application and use their compiler, which restructures the CPU to optimize for that application. What it does not say is whether the hardware changes are read/write. If you release a maintenance patch to your application, do you have to swap in a new CPU for optimal performance? If the area is read/write, just how many times can you change the CPU instruction set? Can you change the CPU instruction set with something other than their compiler? That is, using a microcode release that rewrites the CPU. I would not want to load a compiler onto every one of my products.
  • Beware! (Score:5, Funny)

    by spudthepotatofreak ( 649917 ) on Monday April 26, 2004 @03:10PM (#8975115)
    Give these damn chips a while to evolve and you'll have Borg nanoprobes... Beware the nanoprobes!!
  • And it will ship with a free copy of Duke Nukem Forever, right?
  • by hatrisc ( 555862 ) on Monday April 26, 2004 @03:11PM (#8975124) Homepage
    we can have only one standard assembly language? the hell with java if that's the case.
    • by tuffy ( 10202 ) on Monday April 26, 2004 @03:24PM (#8975281) Homepage Journal
      we can have only one standard assembly language?

      That's already here. It's called "C".

    • The advantages of Java (and .Net, once Mono comes out) are not just portability but managed code as well, which helps protect you from things like buffer overflows. This applies as well to interpreted languages like Perl, Tcl, Python, etc.

      Where I see a real possibility is in taking the JVM/CLR/Parrot/etc. and putting part of THAT functionality on-chip. Imagine your bytecode or interpreted programs running as fast on this platform as a compiled program runs on your run-of-the-mill Intel or AMD processor!
    • While you're at it, to hell with anything that requires maintenance, or is more complex than "Hello World". Speed isn't everything.

      Given the choice between writing all of my programs in assembly, or being thrown face-first down a flight of stairs, I'd have to think about it.
    • There's a cool library called GNU Lightning [gnu.org] which will generate machine code at runtime, which is good for JITs and such. It isn't exactly what you're looking for, but it illustrates that having a standard assembly language (or, much more likely, several standard assembly languages!) isn't all that far off.
  • Whoa.. (Score:5, Funny)

    by Anonymous Coward on Monday April 26, 2004 @03:11PM (#8975127)
    Just imagine a Beowulf Clu...oh. Skynet. Right.

    Let's not do this one.
  • by Revolution 9 ( 743242 ) on Monday April 26, 2004 @03:11PM (#8975129) Homepage
    Cool. One step closer to Judgement Day.
  • yawn ... (Score:4, Insightful)

    by torpor ( 458 ) <ibisum.gmail@com> on Monday April 26, 2004 @03:12PM (#8975134) Homepage Journal
    ... wake me up when i can buy a thousand of them for $10 a piece ...

    [okay, okay, so it'll be -hell- fun to design codecs and other protocols that can switch their chipset dynamically, yeah, but i'd need 1000's of them deployed to have a real reason to do it...]
  • by Neil Blender ( 555885 ) <neilblender@gmail.com> on Monday April 26, 2004 @03:12PM (#8975139)
    "I see that you are (insert processor mumbojumbo.) Would you like me to reconfigure my instruction sets?"
  • Comment removed (Score:3, Interesting)

    by account_deleted ( 4530225 ) on Monday April 26, 2004 @03:12PM (#8975141)
    Comment removed based on user account deletion
    • by DaHat ( 247651 ) on Monday April 26, 2004 @03:24PM (#8975274)
      For the most part, with FPGAs you build the code from scratch; you give it its identity: how it works, what it does, and so on.

      This chip sounds like a hybrid between an FPGA and a run of the mill general purpose RISC processor. Being based on a RISC instruction set, you code for it as you would a normal processor, however if the compiler sees code which could take advantage of having more CPU support, it could add instructions to the FPGA like portion of the chip to enable better throughput.

      The short summary is: an FPGA is programmed from scratch; a standard RISC processor already has an instruction set which you program against.

      This could be quite handy for some of the embedded programming I do.
    • As far as I can tell, it is different in that you have essentially an FPGA-like chip on the same core as a regular CPU.
    • An FPGA is just a block of logic gates that can be connected after the original manufacture. Typically, they are used to implement simple logic cheaply and easily. This is more of an entire processor designed on a similar principle. I would guess that it includes registers, a clock, bus connection facilities, etc. If anything, this is closer to a CPLD, which combines i/o blocks, function cells and interconnection blocks to create somewhat more complicated (and often times sequential, as opposed to combination
    • How is this different from FPGA's?

      If I read the article correctly, the difference is in the compiler.

      When you write code for this processor, the compiler would figure out which operations would fit best in reprogrammable logic (the kind of loop sketched below), then configure the logic and compile to this custom instruction set all on its own. At runtime, the custom logic is loaded and the program executes.

      A traditional FPGA, while reconfigurable, is normally developed in Verilog or VHDL. Where reconfigurable logic is used in a micropr
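
      As an illustration of the difference in developer effort (the tool-flow details here are assumptions, not from the article): you write plain C and never touch Verilog or VHDL; it's the compiler's job to spot a hot loop like this one and map its body onto the fabric as a custom instruction:

          /* Ordinary C: a tight inner loop the Stretch compiler could,
           * in principle, pick out and move into reconfigurable logic. */
          void scale_and_clip(unsigned char *dst, const unsigned char *src,
                              int n, int gain)
          {
              for (int i = 0; i < n; i++) {
                  int v = (src[i] * gain) >> 8;               /* multiply and rescale */
                  dst[i] = v > 255 ? 255 : (unsigned char)v;  /* saturate */
              }
          }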

    • Well, it seems rather similar to the Virtex-II Pro; those have PowerPCs integrated on them, although they are rather expensive. And while the individual chips may not be all that expensive, the boards are.

      All in all, it seems like these have a development environment which helps the user port C/C++ programs to this platform. There have been quite a few of those chips/systems before, though. It will be interesting to see if this one can get off the ground where the others have failed.
    • by Christopher Thomas ( 11717 ) on Monday April 26, 2004 @03:43PM (#8975484)
      How is this different from FPGA's?

      Short answer: FPGAs let you build using basic gates and (very small) lookup tables. This lets you build anything you please, and fully optimize the number of functional units of each type that you have, but has a speed and size penalty.

      This chip is basically a RISC processor with an FPGA-type fabric bolted on as a co-processor, as far as I can tell from the detail-poor press release. By implementing most of the instruction pipeline as fixed, optimized hardware, it runs without any of the penalties of a purely FPGA-based implementation. When you have a number-crunching task that would benefit from a custom logic implementation enough to offset the performance penalty of implementing it in programmable logic blocks, the compiler configures the programmable logic into a suitable coprocessor which is stuck in as an extra branch of the instruction pipeline.

      How much benefit you get from this depends on what you're doing. Modern general-purpose microprocessors have enough vector instructions to handle most DSP-ish tasks without an abysmal speed penalty (just a large size and power penalty over a purely DSP-based implementation). Most computing tasks aren't limited by processing horsepower at all - they're either waiting for memory accesses to complete (even cache accesses are very slow compared to register accesses), or they're waiting for the target address of a branch to be decided (speculation and BTBs don't address this perfectly by a long shot). A reconfigurable processor would suffer from much the same type of problem. While using the programmable logic path for slice processing could remove some of the branching penalties (by following all paths and selecting the desired result), this would be at an even greater area and power cost.

      For specialized applications, it would be quite useful, of course.

      A quick glossary of terms being thrown around, for anyone confused:
      • FPGA - Field Programmable Gate Array.
        This is a combination of lookup tables, sum-of-products combinational logic blocks, and scratch-pad SRAM that you can hook up in nearly arbitrary ways to produce custom circuits at a gate level. Bulky and slow, but good at implementing algorithms efficiently. Configuration information is loaded from a serial PROM chip at startup, letting you change it relatively easily.

      • CPLD - Complex Programmable Logic Device.
        Like an FPGA, but stores configuration information internally, so you need to take out the CPLD and burn it to change configuration instead of re-burning the configuration PROM.

      • PLA/PLD - Programmable Logic Array/Device.
        Little cousin to CPLD. This is what you played with in second or third year. Typically these are just a sum-of-products combinational logic block with a register stuck on the end to latch the output. Useful as glue logic.

      • ASIC - Application-Specific Integrated Circuit.
        This is an integrated circuit that's half-made. A number of gates and registers and so forth have been fabricated on the chip, and the lowest few metal layers have been used for internal routing for these, but you get to define the upper metal layers to form arbitrary connections among these (either as the last fabrication step, or by laser-cutting a pre-fabricated wiring mesh to leave the geometry you want). Works much like a CPLD, but the design is decided at fabrication time and cannot be changed. Faster and less bulky than a CPLD implementation.

      • Standard cell design.
        This is a custom-fabricated integrated circuit that uses cells from a standard library of components, usually automatically placed and routed from a VHDL or Verilog description of what you want the chip to do. Faster than an ASIC if you have good place and route software, but more expensive in small quantities because you're making what amounts to a full custom chip. Design time is much less than a fully custom design would be, though (but verifying that the design description is correct is a royal pain).


      I hope this clears things up for anyone who was confused.
  • more info (Score:5, Informative)

    by morcheeba ( 260908 ) * on Monday April 26, 2004 @03:12PM (#8975142) Journal
    NetworkZone has a product review [analogzone.com] with some more insight. A good quote:

    ...the [300 MHz] Stretch even beats the Intrinsity FastMath processor running at 2 GHz

    Of course, there is no such thing as a universal solution and the Stretch processor does have its limits. One significant area is in "low touch" operations such as network processors. While it can certainly do the relatively simple packet inspection and transformation that switch fabrics and network processors normally handle, it is really much better suited to the heavy-duty calculation- and manipulation-intensive tasks found in "high touch" applications such as video compression. For example, H.263/264 motion estimation is capable of producing very high-quality video from a relatively small bit stream, but requires lots (and lots) of raw processing horsepower. Happily, the Stretch processor is only too happy to oblige, churning out a SAD (sum-absolute difference) operation on a tile-full of pixels for H.263 video in 43 ns (H.264 takes 83 ns).
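
    For reference, the SAD kernel mentioned in that quote is a very small loop in scalar C - exactly the kind of tight, data-parallel code a configurable fabric can collapse into a few wide operations (this scalar version is just an illustration, not Stretch's implementation):

        #include <stdlib.h>  /* abs() */

        /* Sum of absolute differences over a 16x16 tile, the core of
         * H.263/H.264 motion estimation: 256 subtract/abs/accumulate
         * steps in scalar code, one wide operation in custom hardware. */
        int sad_16x16(const unsigned char *a, const unsigned char *b,
                      int stride)
        {
            int sum = 0;
            for (int y = 0; y < 16; y++)
                for (int x = 0; x < 16; x++)
                    sum += abs(a[y * stride + x] - b[y * stride + x]);
            return sum;
        }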

    • even more info (Score:2, Informative)

      by Anonymous Coward

      EE Times has an article here [eetimes.com]. Apparently this chip has a competitor. There's also more details about the chip itself.

      (Anonymous because logging in at work)

  • by LostCluster ( 625375 ) * on Monday April 26, 2004 @03:12PM (#8975144)
    I think we're going to have to move the crypto benchmarks back a step when this tech comes out. Not very many of us have RISC chips that are optimized for MD5 or any of the other popular crypto formulas, but if the typical consumer PC had this technology, we could all effectively have an on-demand RISC for whatever we need at the moment sitting in our PCs.

    In short, the time-to-crack using consumer technologies for almost any form of crypto is about to drop. It won't "break" anything, but brute-force combinations will be able to be examined faster, meaning higher standards will be needed for the same level of protection you have today.

    Not surprisingly, these breakthroughs will always keep coming...
    • Luckily it will also immensely speed up encryption times. So, on the whole, probably a gain for the white hats rather than the black hats.
      • So? The only reason crypto works at the moment is because cracking is many orders of magnitude slower than encrypting/decrypting.

        Taking more time to encrypt/decrypt isn't a problem (does anyone here notice the difference between 2.5ms and 5ms?), but reducing the crack time by the same proportion means that codes that were built to last years might only last months, or even mere weeks, which is a real problem.
    • by Jerf ( 17166 ) on Monday April 26, 2004 @03:42PM (#8975475) Journal
      Along with jsac's comment (more processor power exponentially benefits encryptors but only linearly benefits crackers, so on the whole more power means a win for encryptors), I'd like to point out that this is only a setback for encryption inasmuch as encryptors claim that their encryption will keep your data safe for all time. Which is to say, at least for the reputable encryptors, this isn't a setback at all.

      If you insist on putting words in their mouths, then yeah, you might consider it a setback. But that's your misunderstanding, not theirs. All reputable encryptors have accounted for Moore's Law in their cost/benefit tradeoffs. Since it doesn't take much encryption power before it requires computers larger than the Universe to crack it via brute force (and since "cracks" on good encryption are really typically just ways of collapsing the search space, not procedures that give immediate answers, adding more bits will often require Universe-sized machines, too), this isn't that big a deal for encryption. Push your key size up and be done with it. Even conventional machines can handle that today; it just takes longer.
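
      To put one equation on that asymmetry (a back-of-the-envelope sketch, not from the article): brute-forcing an n-bit key costs on the order of 2^n trials, so an attacker with k-times-faster hardware is cancelled out by adding log2(k) bits to the key:

          \[
            \frac{2^{\,n+\log_2 k}}{k} \;=\; 2^{\,n}
          \]

      A chip that made cracking 1000x faster would therefore cost defenders only about 10 extra key bits.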
  • by AtariAmarok ( 451306 ) on Monday April 26, 2004 @03:13PM (#8975158)
    Is this the only technology they managed to salvage from the android's severed hand? Any interesting gears and motors at all?
  • by dhasenan ( 758719 ) on Monday April 26, 2004 @03:14PM (#8975163)
    How can something that normally takes "hundreds of thousands of instructions" be handled in a single instruction? Surely all the same mathematical operations must take place, except for some optimization. Or is it a matter of a certain structure for computation being created in a more permanent fashion rather than being dynamically formed upon demand? Then the operations could be performed in a single cycle. On the other hand, that portion of the processor would become useless to other tasks. Or am I misunderstanding this entirely?
    • Say you had to compute a 10000-entry sin/cos table (simple example). The processor would reconfigure itself to perform sin/cos operations in a single cycle (parallel ALUs etc.) and, if there were enough configurable circuits, perhaps multiple sin/cos table entries at once. That's where the speed advantage is - large blocks of repetitious calculations. With a sophisticated enough reprogramming AI, computationally intensive apps like video games could get a huge performance boost.
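      For scale, the scalar baseline for that job is just a loop like this (a hypothetical example, not from the article); the reconfigured chip's win would come from retiring many of these per-entry computations per cycle instead of one:

          #include <math.h>

          #define N 10000

          /* Scalar table build: one sinf and one cosf call per entry. */
          void build_tables(float s[N], float c[N])
          {
              const float TWO_PI = 6.28318530718f;
              for (int i = 0; i < N; i++) {
                  float theta = TWO_PI * (float)i / (float)N;
                  s[i] = sinf(theta);
                  c[i] = cosf(theta);
              }
          }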
    • by Chirs ( 87576 )
      You hit upon the answer in the latter portion of your post. Most cpus are generalists--they're fast at most things, but aren't optimized for anything. This kind of tech allows you to optimize your cpu for a particular task.

      If you have something that needs to do a simple operation on each member of a large data set, the chip could be configured as many tiny simple cores that are just smart enough to do that operation.

      Or if you needed to do a complicated math function, you could optimize the cpu for that
    • by radish ( 98371 )
      I studied "Custom Computing" as it was called at my university a few years ago. That was based around using FPGAs as the processor, but with the same idea of doing on-the-fly redesign of your hardware to suit the current problem.

      The basic idea is to move problems from the time space (i.e. do X then Y then Z taking T time to do it) to the physical space (i.e. do X next to Y next to Z taking S transistors to do so, but only one cycle). So your simple add operation in a regular microprocessor, which fetches t
    • Perhaps they mis-worded it.

      You can do lots of addition/subtraction instructions to get the result of a single multiplication instruction.

      Maybe they meant to say thousands of clock cycles can be reduced to one clock cycle, since you can have larger single instructions (e.g. square root over pi or something) programmed into the chip that only take one cycle?
    • It's a DSP/RISC processor (basically the same thing) with an on-chip FPGA. If you have some particular algorithm, you can put it on the FPGA to get a solution instead of having to use code. (This is a lot harder to explain than I thought it would be....)
    • In electrical terms, imagine a processor that has left some of its circuit space with a "This space for rent!" sign posted. Instead of every function being hard-wired like normal, there's a grid of switches that can be turned on and off in combinations in order to define a few new processor functions.

      Sure, you have to "call your shot" and define your new function before you can use it, but storing the function inside the chip rather than as code makes it a whole lot faster to use...
  • Finally (Score:2, Funny)

    by Anonymous Coward
    I can tell my computer to go fuck itself and it will.
  • by SlipJig ( 184130 ) on Monday April 26, 2004 @03:15PM (#8975172) Homepage
    IANAEE, but I was just wondering if this technology provides greater advantages to unique monolithic apps as opposed to apps targeted for virtual machines such as the JVM or CLR. Those VMs are general-purpose, and maybe apps that run on them would be "invisible" to the hardware reprogrammability... however I don't know how just-in-time native compilation might change that picture. Anyone with knowledge of this stuff care to enlighten?
    • Right now, this product isn't meant for PCs quite yet. Basically, the manufacturer's instructions are to write your program in standard C and then run it through their application, which will convert the most-used C functions into custom RISC instructions for the chip.

      So "virtual machines" is a situation this chip hasn't had to encounter yet. I'm guessing that a PC user would have to throw the switch manually to change which "processor image" is running at any given time...
  • by stephenry ( 648792 ) on Monday April 26, 2004 @03:15PM (#8975173)
    It's called DISC, Dynamically Reconfigurable-Set Computer. It has existed for a few years now. If I remember correctly, there is a group at Berkeley working in the area that has released a few nice papers on it.
  • by ajiva ( 156759 ) on Monday April 26, 2004 @03:16PM (#8975183)
    I remember a project where hardware engineers set up a CPU to modify itself until it learned to do a task by itself. It got to the point where the hardware was doing the right thing, but not because the hardware was reconfigured properly - the software was using minute nuances in the electricity flowing through it to get the job done. Even the hardware designers had no idea how it could possibly be working.
    • by itp ( 6424 ) on Monday April 26, 2004 @03:38PM (#8975431)
      It was an FPGA, and it wasn't the CPU modifying itself, it was a genetic algorithm designing a circuit that would perform a specific task (differentiate between two different ranges of input signals, IIRC).

      The interesting result was that the circuit designed by the GA didn't use conventional structures but instead, according to traditional circuit design theory, should not have functioned at all -- dead loops, etc. The behavior and result were tied to the physical FPGA being used to test and give feedback to the GA -- the minute nuances, as you referred to them -- and were not portable to even another instance of the exact same FPGA.
      • by bigbigbison ( 104532 ) on Monday April 26, 2004 @03:54PM (#8975597) Homepage
        I remember reading about this in either Popular Science or Discover magazine. I seem to remember that the head researcher took the chips to another building or room to show them off and they didn't work, then took them back to the room they came from and they worked again. They finally determined that the rooms had slightly different temperatures and the chips were so specific to that environment that changing the temperature even a tiny bit stopped them from working.
        Crazy stuff.
    • by jcorgan ( 30025 ) on Monday April 26, 2004 @03:59PM (#8975667)
      This was Adrian Thompson's [susx.ac.uk] doctoral thesis in 1996.

      He used a Xilinx FPGA and a genetic algorithm (implemented separately) to evolve a circuit which could distinguish (IIRC) two different frequency tones on the input as a logic level output. The "program" was allowed to interconnect the FPGA configurable logic blocks in any old sort of way internally and between CLBs. This would include ways which would cause logic designers to shudder in horror :), and did not include a clock input to the circuit at all.

      The result was a successful circuit that used a relatively small portion of the FPGA. But trying to work out how it accomplished the tone discrimination was impossible. There were sub-circuits that were isolated from the rest of the circuit but, when removed, would cause the circuit to fail. Thompson hypothesized that the circuits were taking advantage of "out of band" communication via electromagnetic or thermal influences on adjacent CLBs.

      Furthermore, the circuits turned out to be very specific to the ambient temperature during training and usage, as well as being specific to a particular FPGA used (a working circuit on one would fail on another.)

      In any case it was a fascinating small-scale exploration of what reconfigurable hardware and genetic algorithms could accomplish, when not constrained by the "clock driven sequential logic" paradigm nearly all human engineered circuits use.

  • damn!! (Score:2, Funny)

    by Mastadex ( 576985 )
    I'd like to welcome our new reprogrammed overlords...
  • by Anonymous Coward on Monday April 26, 2004 @03:18PM (#8975213)
    ...I sense another Transmeta coming on...

    Yes, sure, rewirable chips would be cool for certain applications, but how does one go about making them deal with multiple applications with multiple needs? You'd overload the CPU with a truckload of specialized instructions - which would probably slow it down. Granted, I see uses in things like mobile phones, but for multitasking machines, a 'jack of all trades' chip is the way to go.
    • Did you read the article? Do you think that the only users of computing power are multi-tasking machines? This doesn't compete with Intel; it competes with TI. It is for EMBEDDED products.

      As someone who has designed such products, I think the chip has a very good shot at succeeding if it does what it says. In fact it is EXACTLY what I need for several projects.

      Assuming it performs comparably to a TI DSP and costs only slightly more, I can make a cheaper product because I have fewer chips on board (just the

    • You have OS support. New instructions are a resource that the OS manages. Too many processes want to add their own instructions? Then when a context switch takes place the OS overwrites instructions for the outgoing context with instructions for the new one. Same as managing small amounts of RAM by swapping.
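      Sketched in C (all names here are hypothetical stubs; nothing about the S5000's real reconfiguration interface is in the article), the idea is just that the fabric image becomes one more piece of per-process context the scheduler saves and restores:

          #include <stddef.h>

          struct cpu_regs { unsigned long r[32]; };

          /* Per-process state: the custom-instruction image joins the
           * register file as something the scheduler must manage. */
          struct process {
              struct cpu_regs  regs;
              const void      *fabric_image;
              size_t           fabric_len;
          };

          static void save_regs(struct cpu_regs *r)            { (void)r; }
          static void restore_regs(const struct cpu_regs *r)   { (void)r; }
          static void fabric_load(const void *img, size_t len) { (void)img; (void)len; }

          /* Reload the fabric only when the incoming process uses a
           * different instruction image; reloads are slow, so processes
           * sharing an image context-switch at normal cost. */
          void switch_to(struct process *prev, struct process *next)
          {
              save_regs(&prev->regs);
              if (next->fabric_image != prev->fabric_image)
                  fabric_load(next->fabric_image, next->fabric_len);
              restore_regs(&next->regs);
          }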
  • by ebrandsberg ( 75344 ) on Monday April 26, 2004 @03:19PM (#8975218)
    From what I gathered, this allows the compiler to create an instruction that can do a lot of work in one instruction, NOT for the processor to decide to create an instruction. Think of it this way: if you know you need to do something like an array multiply many times, the compiler could create an instruction for it and then use it as needed (see the loop sketched below). The key to this is that the instruction set can be optimized on a per-program basis, so you don't waste gates on SSE2 instructions if you don't use them, etc.

    This would compare with FPGAs, I believe, in that most FPGA applications are fixed once loaded, although I know that there was talk on Slashdot about Stargate systems (http://slashdot.org/article.pl?sid=03/02/15/1629237&mode=nested&tid=126) using FPGAs for general processing before.
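
    For instance (an illustrative loop, not from the article), the array-multiply case looks like this in C; on a conventional CPU the body compiles to separate load/multiply/store instructions, while per the article the Stretch compiler could fuse it into one custom instruction and spend gates only on operations the program actually uses:

        /* Element-wise multiply: a fixed, endlessly repeated pattern,
         * which is exactly what's worth burning into the fabric. */
        void array_mul(float *out, const float *a, const float *b, int n)
        {
            for (int i = 0; i < n; i++)
                out[i] = a[i] * b[i];
        }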
    • FPGAs are not static. They can even be reconfigured during runtime. (Though it takes a lot of time, from the chip's point of view.)

      Search around for reconfigurable FPGAs and you'll find that there are several projects which do this. I know of three such projects off the top of my head (Stargate, RAW, Mitrion), so I wouldn't exactly call the idea new.
  • It sounds interesting enough that I wouldn't mind buying one to play with or port an OS to. Their numbers showing their 300MHz chip outperforming a 2GHz chip make sense if the instruction set has been changed for a single purpose. A coworker pointed out that task switching can't be that speedy. So this comes across as a general-purpose chip that can automatically tune itself to a specific purpose. Still, it can be useful.
  • The concept of a programmable hardware device isn't all that new, and the encoding and encryption they talk about speeding up is a typical application of PLDs. High-end routers use similar devices to optimize their tables, etc. Kuro5hin has a nice article for beginners: http://www.kuro5hin.org/story/2004/2/27/213254/152
  • FPGA (Score:2, Interesting)

    by tttonyyy ( 726776 )
    FPGAs have had processor IPs [xilinx.com] available for a while which, in theory, can be reprogrammed on the fly. But AFAIK no one does this. I doubt this will be any different.

    Hardware manufacturers that need special hardware operations (e.g. MPEG-2 decoding) use dedicated, custom hardware for large-volume production. Dynamically configurable hardware is expensive for large-scale production, and small-scale production will likely use FPGAs for a similar effect. I may be sceptical, but I doubt it'll catch on.

  • This is evolutionary, not revolutionary. Many chipmakers have offered microcontrollers and microprocessors with FPGA on chip. Often it is an extension of the I/O built into the processor, so it's not much different than an external FPGA on the processor bus. Please note that this is NOT like processors that run on the FPGA itself - these are separate from the FPGA portion of the chip.

    Stretch is different in a few ways:
    It pulls the FPGA closer to the core, so that it can be utilized almost as part of the pipeline. I say almost because of the following statement in the article:
    Inside the chip, the ISEF is coupled to the rest of the circuit by 128-bit buses and has 32 128-bit registers. It runs in parallel with other areas of the processor, effectively becoming a fully reconfigurable co-processor, and can be reprogrammed for new instructions at any time during operation.

    So it's still fairly separate from the processor core.

    But the core itself is high-performance (fast clock, a little faster than the average FPGA), and it has a very fast memory bus (again, faster than the average FPGA).

    The downsides are likely to be:
    1) Power cost and dissipation. Since it's a slow clock, the dissipation probably won't be bad, but it's not going into a small portable machine.
    2) Time to reconfigure. This isn't meant to be a general processor with task switching. Context and task switching are going to be expensive, and if you plan on running two concurrent tasks which both require special instructions, the entire processor will likely perform, on average, much worse than it would without the reconfigurable portion. Unless, of course, the processes were created to use the same set of special instructions, so the context switch isn't more expensive than it is for today's processors.

    So they are targeting it correctly, it seems: specialized areas with, in general, only one task/program running at a time. Multimedia players, for example, would be great here. A digital recorder/player would work well if both the encoding and decoding portions of the code were compiled together, so the special instructions created wouldn't have to be changed for either application, allowing playback while recording.

    -Adam
  • by ezraekman ( 650090 ) on Monday April 26, 2004 @03:25PM (#8975289) Homepage

    This sounds vaguely like the dream solution for developers. The article says:

    "It runs in parallel with other areas of the processor, effectively becoming a fully reconfigurable co-processor, and can be reprogrammed for new instructions at any time during operation."

    Does that mean it can handle booting multiple OSes simultaneously? If so, how long before someone writes an app that bridges multiple OSes, allowing the equivalent of emulation without the emulation? I don't know about the rest of you, but the potential of this chip sounds like a dream come true. And at $35-$100 per chip... it's cheaper than the processor for most systems anyway.

  • by mrplado ( 736237 ) * on Monday April 26, 2004 @03:27PM (#8975305) Homepage
    The first processor that can add to its instruction set while operating? I think there were a few microprogrammed processors in the 70s/80s with writable control store that could do exactly that. Anybody remember PERQ workstations? Now this new gadget appears to be able to extend itself by means of an embedded FPGA, instead of plain old microcode, so it's a bit like the Xilinx Virtex II PRO series (PowerPC core with big FPGA on one chip). The really innovative thing is that you don't have to program the FPGA in VHDL or Verilog, but the C++ compiler takes care of that.
  • Gaming? (Score:4, Interesting)

    by shirai ( 42309 ) * on Monday April 26, 2004 @03:28PM (#8975312) Homepage
    One of the best applications for this chip would be a programmable graphics card.

    Imagine the optimizations you could do for the next release of the Doom engine. A GPU that optimizes itself for specific games could own the market. Could be amazing.
  • by apirkle ( 40268 ) on Monday April 26, 2004 @03:28PM (#8975320)
    There is a much, much better article with lots more detail on EETimes.com [eetimes.com].
  • Woooo (Score:3, Interesting)

    by Cr3d3nd0 ( 517274 ) <Credendo AT gmail DOT com> on Monday April 26, 2004 @03:28PM (#8975324)
    I can just see this processor, mixed with a bit of Mark Tilden's analog AI research, really advancing Artificial Intelligence. For the uninitiated: Mark Tilden discovered that by tying together a group of only four or so transistors and sending a regular analog signal through it, he could get small robots to walk, and indeed do an amazing number of things, including optimizing their path and even remembering their solution for a small amount of time (about 3 or 4 seconds). Not only that, but when given a certain stimulus need (for example, make them solar-powered with only one area of light), they would compete with other bots to gain access to better light. Indeed, a lot of the behavior these little bots produce is so complex and lifelike that he has spent a long time just documenting it. Now give a set of these bots' circuits the ability to "optimize" the speed of the signal, and a few stimuli, and let it play. Suppose the stimulus was "human approval" - some input from a human indicating good or bad... Heck, what do I know, I'm no AI researcher, but it always sounded cool to me :-) For more information on Mark Tilden go to BEAM Online [beam-online.com]
  • That insanely complicated piece of software that can automatically figure out what it needs the chip to do at any given time for its own survival --
    oh yeah, we have those... PEOPLE! Now, can I get those neural processor connects and graft this thing to my head already?
  • Nothing radically new..

    The ability to dynamically reprogram on the fly in-circuit sounds cool though.
  • Or maybe the world is just running out of good project names.
    Project STRETCH
    http://en.wikipedia.org/wiki/IBM_7030
  • Pretty skimpy blurb - I suspect that the product is either a) vapourware or b) a lot more limited than is discussed in the article.

    From the article, I presume that the processor's microinstruction memory can be updated with special information embedded in the executable file. This is not as unique as you might think: virtually all Intel and AMD processors have the ability to have their microinstruction memory updated during the boot process - this is used to upload microinstruction updates/corrections wit
  • by Gyorg_Lavode ( 520114 ) on Monday April 26, 2004 @03:40PM (#8975452)
    The idea of programmable chips is nothing new; Xilinx etc. have been doing it forever. The idea of putting both a standard core with a generic instruction set AND a programmable core on the same chip is very interesting. It will, however, be a niche product. You aren't going to use it in your home computer, because your home computer does a broad range of things.

    This will be useful in the places they mentioned: places where you do a lot of processing that takes many generic instructions but can be translated into a single string of discrete instructions.

    The more I think about it, this is the direction processors are going. We keep moving processors towards RISC based cores. We keep adding specialized paths for things such as multimedia. Eventually we WILL have half the processor being a purely RISC core and half being programmable hardware for specialized computational intensive instructions. I retract my initial view.

    I do wonder, though, what the lifetime is on the hardware side. How many times can you reprogram the hardware before it starts to die? What is the error rate in reprogramming it? What happens when a few programmable transistors die?

  • This != New (Score:2, Informative)

    by sam_van ( 602963 )
    I've noticed some folks comparing this to Transmeta. While similar, there are a few more comparable architectures out there.

    Perhaps the most notable (in its conception, at least) was Seymour Cray's attempt at a Pentium Pro core + reprogrammable extensions (via FPGA or the like) at his post-Cray Research company. More recently, IBM licensed PowerPC cores for use by Xilinx. Up to four of those cores get thrown on the die with a Virtex-II FPGA (?); each of the cores has the ability to add opcodes in FPGA lan

  • Stretch claims that their CPU running at 300MHz has shown superior performance to a 2GHz box. We have no details of their testing and I wonder about the real world performance.

    Natural questions come to mind like how quickly does the chip configure itself to optimize for the application, does the configuration only occur at start of the application, how many chip-configuring applications can it run concurrently, will it optimize for interpreted languages, can some configurations be made "permanent" to accom

  • well sorta.

    Star Bridge Systems [starbridgesystems.com] has been selling computers that reconfigure their own logic (with the help of compilers) for about 5 years now. True, their solution isn't a single chip, but the idea of reconfigurable computing is not at all new, and Star Bridge's implementation appears to be even more flexible.

  • by Ars-Fartsica ( 166957 ) on Monday April 26, 2004 @03:46PM (#8975507)
    General-purpose CPUs are fast, ubiquitous, and cheap. While compelling, this new approach is in no sense a slam-dunk in the market. Stretch will have to show a compelling case for why this is a faster and cheaper alternative to the x86(-compatible) hegemony.
  • by arock99 ( 612650 ) on Monday April 26, 2004 @03:46PM (#8975509)
    Sounds like this would be a perfect processor for emulating consoles such as the SNES, XBOX, GameCube, PS2, etc etc or pretty much any other processor.
  • by Lust ( 14189 ) on Monday April 26, 2004 @03:53PM (#8975588) Homepage
    This reminds me of Field Programmable Gate Arrays [elecdesign.com]. Can someone explain the difference?
  • by TheAncientHacker ( 222131 ) <TheAncientHackerNO@SPAMhotmail.com> on Monday April 26, 2004 @04:39PM (#8976188)
    The original design for the Zilog Z-80000 (Not to be confused with the Z80000 that actually shipped and was an enhanced Z8001) was also dynamically self configuring and optimized its execution based on the frequency of use of instructions.

    Of course, that was only a little over 20 years ago.

    FYI: Since somebody is going to ask... The original Z80000 design was killed when Zilog stalled out as a general purpose processor maker and moved into embedded processors after the bugs in the initial run of Z8001 chips and IBM's selection of the Intel 8088.

  • by gupg ( 58086 ) on Monday April 26, 2004 @05:28PM (#8976771) Homepage
    It seems Stretch is not the only company that announced such a product today: EE Times article [eetimes.com].
    Also, keep in mind, customizable ISAs have been around for a while -- in Tensilica and ARC processors. These guys do it dynamically.
  • by cybergibbons ( 554352 ) on Monday April 26, 2004 @06:33PM (#8977516) Homepage
    I'm currently working on modular multiprocessor systems implemented on FPGAs, so this field is something I know something about.

    Altera produce an FPGA with one or more built-in ARM processors. This sounds very similar to the Stretch system, but the ARM processors' connection into the fabric of the FPGA is limited by the not particularly fast bus used with the processor. Stretch appear to have made the data transfer rate between the two parts of utmost importance, which is essential in high-throughput applications like this.

    Altera have also developed a softcore processor, that is, one implemented entirely on an FPGA. It is highly configurable - instructions can be added, cache and memory behavior altered, buses adapted, etc. Coupled with things such as the DSP blocks (trees of multiply-accumulates), a 50MHz processor can process data in a specific task at the same rate as a general-purpose processor running at ten times the speed.

    The work I'm doing is investigating the use of many of these processors on one FPGA. Levels of optimisation that cannot be done with conventional multiprocessor systems will be possible: changing the memory system to deal with specific algorithms, or the bus widths between certain processors, will allow much better performance.

    Stretch also seems to be making a difference by claiming to have easy-to-use, working development tools, which is one thing that Altera cannot really claim to have done.

"If I do not want others to quote me, I do not speak." -- Phil Wayne

Working...