Upgrades Hardware

PCI Express 3.0 Delayed Till 2011

Professor_Quail writes "PC Magazine reports that the PCI SIG has officially delayed the release of the PCI Express 3.0 specification until the second quarter of 2010. Originally, the PCI Express 3.0 specification called for the spec itself to be released this year, with products due about a year after the spec's release, or in 2010."

Comments Filter:
  • by jhfry ( 829244 ) on Thursday August 20, 2009 @01:51PM (#29135799)

    So the spec is complete, but we're not gonna tell you what it says!

    Doesn't make sense!

    • by jhfry ( 829244 )

      Oh wait... they didn't delay the spec... the spec is not ready yet. BIG DIFFERENCE!

    • by impaledsunset ( 1337701 ) on Thursday August 20, 2009 @01:57PM (#29135913)

      They are just giving time to Amazon's EC2/S3 to get compliant.

      • Or they worked out a deal with computer manufacturers to get an extra upgrade cycle. There'll be one this year, for people who just have to have Windows Vista SP2/3, and then another one next year for businesses that want PCI3...

    • by eldavojohn ( 898314 ) * <eldavojohn@noSpAM.gmail.com> on Thursday August 20, 2009 @01:59PM (#29135959) Journal

      So the spec is complete, but we're not gonna tell you what it says!

      Doesn't make sense!

      The article says they're working on getting it to be backward compatible with the current PCIe specs. You probably don't want to start building to the spec until that's in place anyway. You can find a lot of information on PCIe 3.0 [pcisig.com] in the FAQ on their site. If you're a member of PCI SIG, you might even be able to get the preliminary spec, who knows?

      • Re: (Score:3, Interesting)

        Comment removed based on user account deletion
        • AGP 1x and 2x were 3.3V, 4x was 1.5V and 8x was 0.8V.

          Afaict virtually all stuff that supported 0.8V supported 1.5V as well. So that left 1.5V/0.8V vs 3.3V as the main compatibility issue. There was a notching system that was supposed to indicate whether a card/motherboard supported just 3.3V, just 1.5V/0.8V, or both, and prevent incompatible combinations from mating. Unfortunately some manufacturers miskeyed their products.

          BTW PCI also had two voltages, though the lower voltage was generally only seen on prett

          • Comment removed based on user account deletion
            • "I am staring right now at a SFF 733Mhz Compaq where the PSU is shaped like a fricking triangle! "

              Now that is truly awesome! What drugs was/were the engineer(s) taking when they designed that!

              As for RAM, I've stuck different speeds together (not recommended) when Franken-building machines from old parts. It will work, if a bit quirkily. That was 5-10 years ago, though; I don't know how current machines and RAM will behave if you try that now.

              Then there was the Amiga's floppy drive. Even though the disks were t

              • The problem with the Amiga floppy was not really the drive itself but the floppy controller. You can read Amiga disks on a standard PC disk drive, but it requires either a special disk controller, or two disk drives and some REALLY clever software*.

                *Googling for why it requires two disk drives for a PC to read one Amiga disk will really show some great software hacking.

              • Comment removed based on user account deletion
        • Re: (Score:2, Interesting)

          by Hal_Porter ( 817932 )

          Personally I am quite glad they are delaying it until it is fully backwards compatible.

          Umm, dude, this is Slashdot. The correct response is "This new standard sucks. It would be 10x faster if they didn't worry about backwards-compatibility cruft" from a bunch of people who didn't understand the old standard but have been told it was really complex.

          A good example would be x64 replacing x86. Every single nerd on the internet knows that x86 is bloated and that x64 should have started from scratch, despite the fact that a look at a picture of the die of a modern processor shows that the actual CPU cor

          • It is one thing to switch to a new processor (say, Macs from Motorola to PPC to Intel), which is a good thing in the long run, and another to keep interconnects backwards compatible. Remember the furor over phasing out ADB, PS/2, serial, and parallel for USB? If USB 3 is not back-compatible, yikes. As for PCI, I'm not a graphics guy, so I don't know if back-compat is a big deal performance-wise or not.

    • Maybe it's like the USB 3.0 xHCI spec. Our spec is like beautiful, man. But you can't just download the PDF from a webpage. You need to get your boss to sign something and fax it and then post the originals. Bureaucratic Fucks.

      http://www.intel.com/technology/usb/xhcispec.htm [intel.com]

  • Who cares (Score:1, Insightful)

    by Anonymous Coward

    Just another reason to make everyone buy new motherboards. Add one more pin to the CPU while you're at it. Seriously, PCIe 1.1 or whatever is great for me, and I play Crysis at 1280x1024 with an old ATI X1900, by no means top of the line, on an FX-60 Socket 939 CPU. Eventually I'll buy an AM2 or AM3 or AM9 or whatever they're on next. These PCIe upgrades really don't offer much anyway. Mainly we need to get manufacturers to stop selling x8 electricals as x16s.
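
    (Incidentally, if you want to check what a slot actually negotiated, Linux exposes the link width and speed through sysfs. A minimal sketch, assuming a kernel recent enough to publish these attributes:)

      # List PCIe link width/speed for every device, as reported by sysfs.
      # If max_link_width says x16 but current_link_width says x8, you've
      # found one of those "x8 electricals" sold as x16.
      import glob, os

      for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
          info = {}
          for attr in ("max_link_width", "current_link_width",
                       "max_link_speed", "current_link_speed"):
              path = os.path.join(dev, attr)
              if os.path.exists(path):
                  with open(path) as f:
                      info[attr] = f.read().strip()
          if info:
              print(os.path.basename(dev), info)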

    • Re: (Score:3, Insightful)

      I'm guessing that PCIe 3 isn't really aimed at people playing games on single socket systems with outdated graphics cards. It probably isn't really aimed at desktops, at present.

      Cluster interconnects, high speed storage attachment, and various flavors of coprocessors are always hungry for more bandwidth.
      • Re: (Score:3, Insightful)

        by Amouth ( 879122 )

        but isn't that the point of making it a channelized system, where each channel is full duplex? They can just add more channels as needed.

        16x - 20x - 24x - 32x

        you can plug a 1x or 4x card in a 16x slot and have it work - hell, if you wanted to you could make a 3x card...

        adding more available channels on the slot is much less of a change to it than PCI-X was to PCI... and that actually turned out to work quite well...

        I'm all for increasing the speed of interconnects - but adding more lanes seems to work just as
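
        (Back-of-the-envelope, the lane math works out like this at PCIe 2.0 signaling - 5 GT/s per lane with 8b/10b encoding, so 500 MB/s of payload per lane per direction. A rough sketch, not official figures:)

          # Payload bandwidth vs. lane count at PCIe 2.0 signaling rates.
          RATE = 5.0e9                 # transfers/s per lane
          EFF = 8.0 / 10.0             # 8b/10b: 8 payload bits per 10 line bits
          per_lane = RATE * EFF / 8.0  # bytes/s per lane, one direction

          for lanes in (1, 4, 8, 16, 32):
              print("x%-2d: %4.1f GB/s per direction" % (lanes, lanes * per_lane / 1e9))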

        • Re: (Score:3, Interesting)

          Being able to add channels is certainly handy; but it isn't really a substitute for increasing speeds. If it were, we'd still be using PCI-X. Particularly in space-constrained systems (laptops, blades, etc.), running more connectors and more traces is neither easy nor cheap. Even in your basic desktop ATX boards, you'd be hard pressed to get much more than a 16x slot without impinging on the RAM slots, or the CPU cooler area, or some other part.

          For the moment, at least, our ability to drive wires faster at
          • Make a double-slot card that goes in two 16x slots if you need 32x?

            The only problem with that is almost all boards with multiple 16x sockets have them with a 1x socket in between...

          • Looks like a PCIe x32 connector is 210 mm long ( http://az-com.com/pages/pcie/pcie_pdf/ds-06-01.pdf [az-com.com] ), compared to 158 mm for x16.

            I'm finding it tricky to find the length of standard PCI connectors, and things are also complicated by the fact that PCI Express connectors go closer to the edge of the motherboard than PCI ones, but I'd guess it would reach back about as far as a 64-bit PCI slot does.

            Still, I agree it would be a routing nightmare (which means more layers and therefore more cost).

      • by afidel ( 530433 )
        8 GB/s (PCIe 2.0 x16) per connector is a hell of a lot. Dual-connector FCoE adapters are about the biggest bandwidth users in most servers today, and that's only 2 GB/s. For servers the big thing will probably be to reduce the number of PCIe busses by getting sufficient bandwidth out of an x4 connector, but it comes at the cost of much more expensive silicon and motherboard design. Not only that, but I like distributing load among multiple busses, as it reduces the havoc that one misbehaving device can cause
        • For some applications (like CUDA), it's better to be able to 'burst' at a higher speed (like that provided by PCI Express 3.0) than to sustain transfers at high speed (though obviously both would be best). I submitted the article because frankly I was a bit disappointed when I read it. I've been holding off on getting a new graphics card for CUDA development because I was hoping that PCI Express 3.0 would be out towards the end of the year. Now I have to debate whether to get one now, or wait f
  • what's in 3.0? (Score:5, Interesting)

    by convolvatron ( 176505 ) on Thursday August 20, 2009 @01:59PM (#29135961)

    the PCI SIG blurb says it's mostly cleanup and the removal of 5V support

    does anyone know of anything interesting in 3.0?

    • Re:what's in 3.0? (Score:5, Interesting)

      by symbolset ( 646467 ) on Thursday August 20, 2009 @02:09PM (#29136099) Journal

      Twice as fast again. x16 is 32 GB/s. They're looking to support 3 graphics cards per PC, which is cool if you're into that whole supercomputer-on-your-desk thing, but it's going to burn at least a kilowatt.

      I'm sad we haven't seen external PCIe implemented. It was in the v2 specification. The idea of an external interconnect with that much bandwidth probably made some heavy players nervous.
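
      (For reference, the 32 GB/s figure is the x16 link counted in both directions. Sketching the arithmetic with the published PCIe 3.0 numbers - 8 GT/s per lane, 128b/130b encoding:)

        # PCIe 3.0 x16 bandwidth, from the per-lane signaling rate.
        rate = 8.0e9              # transfers/s per lane
        eff = 128.0 / 130.0       # 128b/130b encoding efficiency
        lane = rate * eff / 8.0   # bytes/s per lane, one direction
        print("x16, one way: %.1f GB/s" % (16 * lane / 1e9))   # ~15.8
        print("x16, duplex:  %.1f GB/s" % (32 * lane / 1e9))   # ~31.5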

      • Re: (Score:2, Informative)

        by Anonymous Coward
      • Re: (Score:3, Informative)

        by AndrewNeo ( 979708 )

        As the AC above me referenced, National Instruments uses PCI-e for a lot of their backplane communications in their equipment.

      • Nvidia has an external PCIe Tesla. I've also seen external GPUs for laptops and I heard something about RED Rocket for laptops that hangs off the ExpressCard slot.

      • Re: (Score:3, Interesting)

        by seifried ( 12921 )

        They're looking to support 3 graphics cards per PC

        Interesting. I just read the specs on my motherboard, which has 4 slots for video cards; granted, with 4 slots used it's only 8x (which is OK since I live in 2D land), but with 3 or fewer in use they're all 16x (well, so it claims), so it would seem that's already covered.

      • Re: (Score:3, Informative)

        by bobcat7677 ( 561727 )
        There have been and still are a few implementations of external PCI Express. But they have all been prohibitively expensive and somewhat "special purpose". Besides ones already mentioned, there are also several product options from http://www.magma.com/ [magma.com]. Be prepared to drop a Grover Cleveland to get one.
        • There have been and still are a few implementations of external PCI Express. But they have all been prohibitively expensive and somewhat "special purpose".

          Yeah, they're called ExpressCards.

        • Re: (Score:2, Funny)

          by symbolset ( 646467 )

          Holy cow, that's what I was looking for, thanks! The Magma ExpressBox7: $2800 for 7 x4-electrical, x16-physical slots and an x4 host adapter with cable, rackmount. That's why I like Slashdot.

          This enables some interesting configurations of those 1TB PCIe attached SSDs.

      • Twice as fast again. x16 is 32 GB/s. They're looking to support 3 graphics cards per PC, which is cool if you're into that whole supercomputer-on-your-desk thing, but it's going to burn at least a kilowatt.

        No.

        Ever read those power consumption reviews with beefy high-end cards? Usually the computers (quad core, single high-end GPU) use 200-300 W under load. Much of that comes from the CPU/mobo/RAM/HDD/etc. If you add a few more cards, it's unlikely you'll even hit 500 watts.

        I picked up a Kill-A-Watt off Newegg a while back, and was surprised to find out my gaming computer only consumes ~100 watts from the wall. That's partly influenced by having a high-efficiency PSU, and partly by parts not consuming nearly as much
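
        (The Kill-A-Watt reads at the wall, too, so the DC-side load is lower still. A trivial sketch, assuming something like an 80%-efficient supply - a guess, not a measured figure:)

          # Wall draw vs. DC load under an assumed PSU efficiency.
          wall_watts = 100.0
          psu_efficiency = 0.80   # assumed; "80 PLUS" era supplies certify 80%+
          print("DC load: ~%.0f W" % (wall_watts * psu_efficiency))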

        • I picked up a Kill-A-Watt off Newegg a while back, and was surprised to find out my gaming computer only consumes ~100 watts from the wall.
          Is that an idle measurement or one under heavy load?

          • Idle. Heavy load peaks it up to about 150, depending on whether the CPU, GPU, or both are stressed.

            If I were to multitask and burn a DVD while video encoding on one core and playing Left 4 Dead, I have a feeling I could push it higher - but let's be honest... that isn't really average use. ;)

            And before anyone asks, I checked out the consumption of other stuff like lightbulbs, my monitor, microwave, etc. to make sure the Kill-A-Watt wasn't on the fritz.

            • Cool. Because we wouldn't want your measurements to be out of range of variance for a microwave or retail 60 Watt bulb. That would be bad.

              It has probably occurred to you that people using that "supercomputer on your desk thing" might have different use cases than yourself. You've probably also considered that since this is slashdot, you might be talking to someone with NIST certified test equipment rather than a Kill-A-Watt purchased from Newegg.

              It's cool that you're interested enough to buy your own t

              • Hehe, sarcasm. ;)

                I don't really care how accurate my Kill-A-Watt is, so long as it isn't reporting a 300 watt computer as using 150 watts. After testing various devices, I'm fairly satisfied that this isn't the case.

                I tried some 20 W energy-efficient bulbs, and they were consuming 25 watts each. :/ My 35 W monitor only consumes 28 W when on.

                Out of curiosity I also tried an old CRT TV. That thing was a monster! ;)

                I wish more people were interested too. Power demands are always going up - but if we can make thin

  • by Anonymous Coward

    This is when the first PCI Express 3 spec computer is installed into the LHC control system.

  • by Black Parrot ( 19622 ) on Thursday August 20, 2009 @02:08PM (#29136083)

    the PCI Express 3.0 specification called for the spec itself to be released this year

    Now we know how time loops are accidentally created.

    • Wareware wa onaji jikan wo, eien to loop shiteru no desu yo -- We've entered an endless recursion of time
  • In which calendar??
  • Stupid X-Fi Fatal1ty and its terrible drivers.
    • by MLS100 ( 1073958 )

      Did you really just now find out Creative drivers are shit?

      • I took a break from Creative for a long time. I owned the original SB many years ago, the AWE32, SB Live!s, and Audigy 1...

        After that I was done with Creative for some time. Those cards were all good and I still had some faith in Creative. For a long time they were a solid go-to company for sound cards... since the old DOS days.

        Recent years... I guess that's not true. I knew that when I went in on the X-Fi Titanium, but my SigmaTel onboard chip SUCKED. It had terrible driver support and broken functionality d

    • by Manip ( 656104 ) on Thursday August 20, 2009 @03:12PM (#29137033)

      Creative purchased their drivers from a third-party company and then just updated them over the years; this has literally been going on since their sound card products began. Once Vista came out with an entirely new sound infrastructure, nobody at Creative had the expertise to write a decent driver, so they cobbled one together (with Microsoft's help) from their old horrible drivers.

      Fact is, Creative sound cards aren't worthwhile because the drivers are so poor. Even if the sound hardware could potentially take load off of the CPU, you're more likely to spend endless hours messing with it, and even if it does work it won't work as effectively as one might hope.

      • I have heard that Creative purchased their drivers from a third party. I'm not sure that it's completely true, or any different from what Creative has done in the past. I'm pretty sure that for a long time Creative's products were all pretty much made and engineered overseas by tech companies they hired. I don't think Creative ever did any real driver development... short of maybe the original SB for DOS.

  • Epic fail with the title? A "till" is a cash register, something you put money into. Do they mean 'til, short for until?
  • 128b/130b encoding running at up to 8 GT/s, not that any current or near-term CPU has a bus half as fast as that. That's a lot of bandwidth. Are current graphics cards bottlenecked at all by the PCIe bus?
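
    (The encoding change is the interesting part: the clock only rises 1.6x over PCIe 2.0, but going from 8b/10b to 128b/130b is what gets the payload rate to roughly 2x. A quick sketch:)

      # Per-lane payload rate, PCIe 2.0 vs. 3.0.
      gen2 = 5.0e9 * (8.0 / 10.0) / 8.0      # 8b/10b at 5 GT/s
      gen3 = 8.0e9 * (128.0 / 130.0) / 8.0   # 128b/130b at 8 GT/s
      print("gen2: %.0f MB/s per lane" % (gen2 / 1e6))   # 500
      print("gen3: %.0f MB/s per lane" % (gen3 / 1e6))   # ~985
      print("speedup: %.2fx" % (gen3 / gen2))            # ~1.97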


  • "... we had to do the diligence required to move the date."

    Uh, I hate to break this to you, guy, but according to the dictionary, moving back the deadline is pretty close to the opposite of "doing diligence".

  • The capital equipment costs to buy IC testers that run up to 8 GHz are quite prohibitive. In this economy I don't think too many IC production facilities are willing to lay out the funds to buy equipment to test at this higher rate until they have cash flow coming in from the upturn. Until then, the test coverage of ICs that run at 8 GHz is minimal and will require bench test methods and "guarantee" by design. This delay, if not due to the capital equipment requirements of testing at 8 GHz, will allow supplie
