Serial SCSI Standard Coming Soon 328
rchatterjee writes "SCSI is very close to joining ATA in leaving a parallel interface design behind in favor of a serial one. Serial attached SCSI, as the standard will be known, is expected to be ratified sometime in the second quarter of this year according to this article at Computerworld. Hard drive manufacturers Seagate and Maxtor have already said that they will have drives conforming to the new standard shipping by the end of the year. The new standard will shatter the current SCSI throughput limit of 320 megabit/sec with a starting maximum throughput of 3 gigabit/sec. But before this thread turns into a SCSI fanboy vs. ATA fanboy flame war, this other article states that Serial Attached SCSI will be compatible with SATA drives so you can have the best of both worlds."
SASCSI (Score:3, Insightful)
Re:SASCSI (Score:3, Interesting)
Re:SASCSI (Score:2, Interesting)
Re:SASCSI (Score:5, Informative)
Re:SASCSI (Score:5, Informative)
Impedance, crosstalk (mentioned) and price.
It takes seconds to crimp a ribbon cable. Cheap and easy. You can even do it yourself!
Taking a bunch of twisted pair wires (which is what would be required to keep the impedance and crosstalk bearable) and soldering them onto connectors individually takes a lot more effort, and therefore costs more.
Not to mention fabbing individual strands of insulated wire and twisting them together costs more than running 5 wires parallel to each other and simply coating them all at the same time with PVC.
You can... (Score:2)
1) The connectors must still be huge
2) As a consequence, the connector -> cable area is big.
3) There's so many connectors, the cable is big and inflexible.
Basically, I couldn't fit the cables the way I wanted to have the disks, because they were so inflexible they collided with my GF4. So I had to rearrange the disks instead. With ribbon cables, it'd be much more of a mess but it would have worked. SerialATA is much better designed for this.
Kjella
Re:SASCSI (Score:3, Interesting)
Re:SASCSI (Score:2, Informative)
I have a 10M SCSI ribbon, and each pair is twisted. I think the main reason for ribbons inside the box is so you can crimp on a connector wherever you want. Oh, and in a Sparc20, the internal SCSI cable isn't a ribbon, it's a cable from the motherboard right up to where it connects to the disks, cable again to the CDROM.
So, IMO, there's no reason it can't be a ribbon, except for the convenience of crimping connectors wherever you want.
Re:SASCSI (Score:3, Insightful)
evil technology! (Score:5, Funny)
Re:evil technology! (Score:5, Funny)
Serial ATA Network Interface Controller = SATANIC
Re:evil technology! (Score:4, Funny)
"Does your computer support SATAN ?"
Next thing you know all new Dell computers will support SATAN.. imagine the ads, "Dude, you're connected with SATAN!"...
ok ok i'm done
bits vs. bytes (Score:5, Informative)
320 megabytes is about 2.5 gigabits ... which is a lot closer to 3 gigabits than the erroneous 320 megabits figure.
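The arithmetic is easy to sanity-check; a quick sketch in Python (decimal units throughout, as interface ratings typically use):

```python
# Ultra320 SCSI moves 320 megabytes/sec; converting to gigabits/sec:
mb_per_sec = 320
gbit_per_sec = mb_per_sec * 8 / 1000   # 8 bits per byte
print(gbit_per_sec)  # 2.56
```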
Re:bits vs. bytes (Score:3, Insightful)
Of course, a really fast connection may allow you to daisy chain and still get almost full transfer rates from each drive, but that's not really such a big deal, in particular when the cables are as small as they are for serial connections.
Re:bits vs. bytes (Score:4, Insightful)
SCSI is a bus. I have a box here with 5x10K drives at 49 MB/s each, easily able to saturate its Ultra160 bus. These days, that box is nothing special.
Re:bits vs. bytes (Score:2)
Re:bits vs. bytes (Score:3, Informative)
You can already get USB2 and FireWire cards that can do high speed transfers simultaneously on several connectors, and if this really takes off, there is no reason why you couldn't have a card with 8 or 16 independent channels (ultimately, of course, it gets silly because PCI can't keep up anymore).
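To put rough numbers on that PCI ceiling (the per-channel rate below is an assumed figure for illustration only):

```python
# Classic 32-bit / 33 MHz PCI tops out around 132-133 MB/s, so a card
# with several fast independent channels saturates the bus quickly.
pci_mb_s = 33_000_000 * 4 / 1_000_000    # 32 bits = 4 bytes/transfer -> 132 MB/s
channels = 8
per_channel_mb_s = 50                    # assumed sustained rate per channel
aggregate = channels * per_channel_mb_s  # 400 MB/s demanded
print(aggregate > pci_mb_s)              # the bus, not the drives, is the limit
```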
SCSI = ... (Score:4, Funny)
Re:SCSI = ... (Score:5, Funny)
I think it stands for
Some Can't Stand IDE
Re:SCSI = ... (Score:3, Funny)
Re:SCSI = ... (Score:2)
Ha!
How about:
Same Configuration, Spastic Interface
How parallel will it be? (Score:3, Interesting)
Ok, So I've noticed a couple of corrections. (Score:5, Insightful)
Re:Ok, So I've noticed a couple of corrections. (Score:2, Informative)
Re:Ok, So I've noticed a couple of corrections. (Score:2)
Re:Ok, So I've noticed a couple of corrections. (Score:4, Informative)
Re:Ok, So I've noticed a couple of corrections. (Score:2, Insightful)
Re:Ok, So I've noticed a couple of corrections. (Score:2, Informative)
http://www.hypertransport.org/
11.
Question:
At what clock speeds does HyperTransport(TM) technology operate?
Answer:
HyperTransport(TM) technology devices are designed to operate at multiple clock speeds from 200MHz up to 800MHz, and utilizes double data rate technology transferring two bits of data per clock cycle, for an effective transfer rate of up to 1,600Mb/sec in each direction. Since transfers can occur in both directions simultaneously, an aggregate transfer rate of 6.4 Gigabytes per second in a 16 bit HyperTransport(TM) I/O Link and an aggregate transfer rate of 12.8 Gigabytes per second in a 32-bit HyperTransport(TM) I/O Link can be achieved. To allow for system design optimization, the clocks of the receive and transmit links may be set at different rates.
----
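The quoted figures are straightforward to reproduce:

```python
# Reproducing the HyperTransport numbers quoted above.
clock_hz = 800e6                  # top clock rate
transfers_per_s = clock_hz * 2    # DDR: two bits per lane per clock
link_bits = 16                    # 16-bit link, each direction
gb_per_dir = transfers_per_s * link_bits / 8 / 1e9  # 3.2 GB/s one way
aggregate = gb_per_dir * 2        # both directions simultaneously
print(aggregate)  # 6.4 (GB/s); doubling again for a 32-bit link gives 12.8
```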
For the Pentium 4:
133MHz quad-pumped (533MHz effective), allowing up to 4.2GB/s of bandwidth.
But I guess that most of the traffic is mem-hd or hd-mem and thus does not need to go through the CPU. I think the latest Alpha design was to have 8 Rambus channels, giving plenty of bandwidth.
U320 SCSI (Score:2, Informative)
U320/LVD SCSI is capable of 320MB/sec, not 320mbps.
3gbps ~= 300MB/sec; therefore it would not be quite as fast as U320 SCSI.
Naturally 320MB/sec is the theoretical max bandwidth for the SCSI bus not the individual drives in the SCSI chain.
Live long and prosper
Re:U320 SCSI (Score:2)
Re:U320 SCSI (Score:2, Interesting)
However I did say:
3gbps ~= 300MB/sec which was meant to indicate it was "[very] approximately" 300MB/sec
300 MiB/sec would be 300*1024^2 = 314,572,800 bytes per second, while 3gbps = 3,000,000,000 (3 billion bits per second).
3,000,000,000/8 = 375,000,000 bytes ≈ 357 MiB/sec (1KiB = 1024^1 bytes, 1MiB = 1024^2 bytes, 1GiB = 1024^3 bytes)
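The same conversion as a sketch, keeping decimal and binary units apart:

```python
# 3 gigabits/sec expressed in decimal bytes and in binary (MiB) units.
bits_per_s = 3_000_000_000
bytes_per_s = bits_per_s / 8          # 375,000,000 bytes/s (375 MB/s decimal)
mib_per_s = bytes_per_s / 1024**2     # ~357.6 MiB/s
print(int(bytes_per_s), round(mib_per_s, 1))
```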
In the real world we also run into: encoding overhead, protocol overhead, errors, bus resets, cache misses, interference and many other factors which impact actual throughput.
FYI: Studies I have observed myself during a research project indicated that the maximum total throughput under GigE is approx. 80MiB/sec under ideal conditions, even though the nominal 1,000,000,000 bits/sec works out to about 119 MiB/sec on paper.
Of course it all varies depending on the network adapter used, packet size, processor "speed", RAM, Operating System [!!!], 64bit x 66MHz PCI vs. 64bit x 33MHz PCI vs. 32bit x 33MHz PCI, copper vs. MMF or SMF, HD vs FD, and about a bazillion other factors.
Believe it or not, at an undisclosed, fully accredited, state-owned University somewhere in the US they taught us in a senior level networking class of all places that due to those factors it is wiser to divide by 10 when converting bits to bytes.
Go figure! I am NOT making this up!
Peace and Long Life
good performance.. but at what price? (Score:5, Insightful)
Re:good performance.. but at what price? (Score:2)
Re:good performance.. but at what price? (Score:5, Insightful)
Not to say that ATA disks aren't reliable, but the components that are used in ATA disks are typically those that were outside the absurdly strict tolerances that are required for "enterprise-class" drives.
And yes, when it comes to speed, SCSI tends to rule the roost. Not only because you can throw 320MB/s down each individual channel, but you can toss enough devices on that channel to keep that overall speed sustained over longer periods of time.
Drives have very high burst speeds, but have it do lots of random data access constantly and watch speeds plummet. That's why a 10-disk striped array (with another 10-disks to mirror if you require redundancy, likely on another channel) tends to kick considerable ass. Because even if you're only sustaining say... 10MB/sec per disk, it's now 100MB/sec over the channel.
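The striping arithmetic above, as a sketch (the per-disk sustained rate is an assumed figure):

```python
# Aggregate throughput of a wide stripe: even a modest sustained rate
# per disk adds up across the channel.
disks = 10
per_disk_mb_s = 10             # assumed sustained rate under heavy random load
print(disks * per_disk_mb_s)   # 100 (MB/s across the channel)
```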
ATA storage is definitely cheap. If all that is required is just LOTS of storage, and performance and reliability isn't really critical, ATA is a pretty good choice. Of course then you could use robotic tape libraries as well.
SCSI also really ruled the server rooms because those expensive servers and storage systems simply didn't have ATA support. Period.
-----
Re:good performance.. but at what price? (Score:3, Insightful)
The future of reliable, enterprise-class hardware is not delicately engineered systems that cost a premium, but a large number of inexpensive, simple servers and drives. For disks, we already have that in the form of RAIDs. If a drive, or two, or three, fail, you just replace them.
And yes, when it comes to speed, SCSI tends to rule the roost. Not only because you can throw 320MB/s down each individual channel, but you can toss enough devices on that channel to keep that overall speed sustained over longer periods of time.
That is circular reasoning. If you pick separate channels for each device, then each channel can be slower. Besides, "tossing enough devices on that channel" makes the overall system less reliable, because a problem with any one of them may kill the whole channel. Furthermore, the more devices you toss onto a serial bus, the less efficiently it will be utilized relative to having a single device with the same total bandwidth requirements. Overall, you are probably better off using five separate USB2 or IEEE1394 connections than one of these serial SCSI connections.
Re:good performance.. but at what price? (Score:3, Insightful)
And if/when these drives go down and take your 2TB RAID array with them, who wears the blame for buying crap disks ?
RAID gives you some added security, it is *not* a silver bullet - even with hot-spares and several replacement drives handy, a simultaneous failure of 3 drives could potentially bring down nearly any RAID array.
Horray! (Score:5, Interesting)
Re:Horray! (Score:2)
What is the exact technical difference between a serial ATA and a serial SCSI drive? I read somewhere that the only difference between IDE drives and SCSI drives are the interfaces and electronics, while the actual storage mechanisms are identical. So if both can now work on a SCSI bus, what the heck is the difference between ATA and SCSI???
Bork!
Re:Horray! (Score:5, Funny)
Well DAMN! Did you even bother to read the Slashdot summary? You're right, the fact that they share a similar physical design doesn't mean that they will be compatible... It's the FACT that they WILL BE COMPATIBLE that tells you why they would be compatible, and lead to "the elimination of the distinction"
And I blockquote:
Re:Horray! (Score:2)
SATA vs SASCSI (Score:5, Informative)
One detail is that SAS is now point to point, just like SATA, and not a bus, but they also indicate that there would be boxes to split a single connection to a bunch of devices, sort of like network hubs. The protocol addresses 128 devices. It isn't clear whether a hub could have SATA devices hooked to it, or if that would require 1 serial channel per device from the host adapter. That is what I understood to be the case for SATA (need one port for each device, no hubs or sharing). The most important protocol difference should be that SAS is still multipoint, even if the connections are point-to-point, so both hosts and devices need to arbitrate for the bus, while SATA host adapters just send out commands and data and wait for the drive to respond on the reverse channel.
It wouldn't surprise me if devices eventually just supported both protocols, and maybe even auto-sensed the type of adapter on the other end. By the time these interfaces get common, I expect the cost differences to be negligible, which raises the question of why SATA would survive. Because the costs are sunk into the chipset designs, with almost no marginal cost difference, both system and drive makers will probably save more by reducing the size of their product lines to one product for both.
For more info (Score:3, Informative)
http://www.lsilogic.com/products/islands/sas_isla
Is this a trend? (Score:5, Interesting)
Anybody think we'll have a massive parallel trend in a few years?
Re:Is this a trend? (Score:5, Informative)
Re:Is this a trend? (Score:2)
That problem is on its way to being rectified. My company has a large LCD monitor that needs lots of data to drive it. We've got a $700 optical cable that takes its range to some ridiculous length like 30 meters. They've made a small adapter that converts the electrical impulses to light, and back to electricity again on the other end, so that the monitor itself doesn't have to be modified to use the cable.
Light doesn't cause this type of interference, so they'd be able to (in theory) apply this to other technologies as well. I don't think it'll be long before we see hard drives using something like it.
Re:Is this a trend? (Score:2)
The thing is, for short distances there is almost no reason to use fibre over copper. Ever notice that not many workstations have fibre gigabit ethernet? It's great for connecting two routers together that happen to be a mile apart, but over short distances it makes no sense.
Re:Is this a trend? (Score:2)
I don't. We're nowhere near saturating the potential bandwidth of a fiber with consumer electronics. So there's not a great benefit in putting a bunch of fibers next to each other to aggregate their bandwidth--especially since (regardless of the optical path) the electrical signals going into the electrical-optical converters would be subject to the same high frequency timing issues that're causing the push away from parallel busses in the first place.
Re:Is this a trend? (Score:3, Insightful)
I doubt it... (Score:2)
Kjella
Re:I doubt it... (Score:2)
But I do agree about the problems with parallel. Think about the interfaces called "parallel" and "serial", the old ports on the back of the computer. Sure, the LPT ports were faster, but they were very limited in the distance they could run because of interference.
Also, to get IDE over 33 MB/sec they had to add an extra ground wire between each data wire to keep the noise down. SCSI always had extra wires, but they had to go to twisted pairs (aka LVD) within the cables to get any distance.
But FC is here today; it supports high speeds and huge cable lengths on optical cables, and respectable lengths on copper.
Re:I doubt it... (Score:2)
Fiber optic hardware is more expensive. What I'd like to know is why *Firewire* doesn't serve the purpose.
Re:I doubt it... (Score:2)
I haven't used multiple Firewire devices on the same bus to find out how they perform. That is my main reason for using SCSI and FC now. I just hope any new standard that comes out doesn't suffer like ATA does with 2 devices on the same channel.
Re:I doubt it... (Score:2)
Fibre channel is godawful expensive. Its only practical use is for multi-attached devices and where long cable runs are necessary.
The same economics are behind copper Gig ether dominating fibre for everything but long haul links.
Re:I doubt it... (Score:2)
Hypertransport is a good example of a serial/parallel interface. To get more bandwidth, you add more links in parallel, each of which is a serial link capable of carrying the whole traffic on its own, just slower.
Re:I doubt it... (Score:2)
Is less than one clock pulse really a lot of latency? Of course there is latency in every electrical circuit as well.
The real problem with fiber is the cost. For short distances, electric signals over copper are cheaper.
It's too bad... (Score:3, Insightful)
Re:It's too bad... (Score:2, Insightful)
That bandwidth can be shared between many drives. The drive itself has cache, so it isn't always returning data from the platters. And it's gigabits, not gigabytes. Get a freakin clue.
Re:It's too bad... (Score:2)
The interface runs at 3Gbps, not 3GBps. A standard SCSI interface can support at least 7 drives. That's only about 45MBps per drive on a U320 channel, not counting protocol overhead. Quite a few SCSI drives can handle that speed.
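The fair-share arithmetic for a fully loaded U320 bus:

```python
# Per-drive share of a shared U320 bus, before protocol overhead.
bus_mb_s = 320
drives = 7
print(round(bus_mb_s / drives, 1))  # 45.7 (MB/s per drive)
```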
Re:It's too bad... (Score:2)
Re:It's too bad... (Score:2)
According to this [smh.com.au] you might be a planet soon.
Re:It's too bad... (Score:2)
Haven't we learned this already a dozen times now? Bandwidth is EASY. If you want a high-bandwidth link between NY and LA, charter a few trucks and fill them with DDS4 tapes. If you want a high-bandwidth disk subsystem, fill it with a few dozen drives. If you want more memory bandwidth, add another channel or three.
Latency, not bandwidth, is the problem in nearly all applications. You want a drive that can sustain 3GB/sec. Well, I'll give you a hypothetical drive that can transfer data instantaneously. With a 5ms access time, it can still only transfer 100KB/sec if it reads 512byte sectors randomly. A drive with half the latency but only 10MB/sec transfer rate could come within 95% of doubling the first drive's performance under those conditions.
Until you approach petabits/second, bandwidth is not a technical problem, it is a financial problem. I have to go, so thus endeth the lesson.
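The hypothetical drive above works out as follows: instantaneous transfer, 5 ms access time, random 512-byte reads, so latency alone caps throughput.

```python
# Random-read throughput of a drive limited purely by access time.
access_ms = 5
sector_bytes = 512
iops = 1000 / access_ms            # 200 random reads per second
throughput = iops * sector_bytes   # bytes per second
print(int(throughput))             # 102400 (~100 KB/s)
```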
Parallel Interface? (Score:2)
If I'm not mistaken, doesn't SCSI stand for "small computer serial interface"?
Re:Parallel Interface? (Score:3, Informative)
Re:Parallel Interface? (Score:5, Informative)
It's the Small Computer System Interface, descended from
SASI: the Shugart Associates System Interface
If it came from GNU... (Score:2)
Benefits of SCSI? (Score:2)
I keep hearing that SCSI drives are better for hardcore media editing and for servers, but I'm curious why. Is there a compelling advantage for desktop users (or even servers)?
I have to admit, I've got a box with two IDE drives and two CD/DVD drives, and I'm irritated that I can't keep my IDE ZIP drive installed or add another drive (transferring data is a pain in the butt...). It would be awfully nice just to throw another drive in the chassis, and add the free space to my existing partitions.
I dunno, I'll be in the market for a new desktop in the next year or so, so I'm trying to figure out now what the best hardware arrangement is.
Pizzle.
Re:Benefits of SCSI? (Score:3, Interesting)
Uhmmm ... you CAN have more than 4 IDE devices ... what you need is more IDE channels.
Each IDE channel can have only 2 devices, a master and a slave.
The more IDE channels you have, the more devices you can have. Currently, on my Motherboard, it has 4 channels, (2 for "standard" IDE connections, for 4 devices, and 2 for "RAID" IDE connections, for another 4 devices).
In fact, there are a couple of MOBO mfgs that have 6 channels (2 standard + 4 RAID channels; for maximum throughput you would have only 1 device per RAID channel). However, you don't need to configure a RAID array, and could have 12 IDE devices.
Currently, I have:
BTW, it's really nice not to partition anything, and have a whole drive dedicated to an OS.
Re:Benefits of SCSI? (Score:2)
Re:Benefits of SCSI? (Score:2)
Re:Benefits of SCSI? (Score:2, Informative)
Re:Benefits of SCSI? (Score:2, Informative)
Most board manufacturers include only two IDE channels because that's how many are generally built into north-bridge chipsets. The Abit boards mentioned above use an additional HighPoint HPT374 chip to provide FOUR extra IDE channels, for a total of TWELVE IDE devices, altogether.
If you want more IDE devices than your board supports natively, you can just buy PCI cards that have more IDE channels. Promise, SIIG, and Highpoint all make really cheap cards that have an extra two channels, or four more devices.
SCSI limitations are similar. You only get 15 devices PER BUS, but you can add as many devices into your system as you have PCI slots and IRQs for. You can buy an Adaptec 29160 card (dual busses) and plug 30 hard drives into it. Buy four of them, and can have more than 100 drives.
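The device-count arithmetic from the comment above:

```python
# How many drives a handful of dual-bus SCSI cards can address.
devices_per_bus = 15      # 16 IDs per bus, minus one for the host adapter
buses_per_card = 2        # e.g. a dual-bus card like the Adaptec 29160
cards = 4
print(devices_per_bus * buses_per_card * cards)  # 120
```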
Re:Benefits of SCSI? (Score:2)
I run both ATA and SCSI drives. My take is that if you're using small numbers of drives or just doing straight, simple high-bandwidth sequential reads, ATA is fine. SCSI will show when you have differing loads that are more real. Personally, I'm much happier with SCSI for just about anything. The fact is that ATA proponents can only compare against current SCSI technology by trying to be "good enough" for the job. They're not. It's all an issue of price vs. performance - but take out the issue of price, and SCSI wins.
Re:Benefits of SCSI? (Score:3, Informative)
For desktops, not really. For server, yes. SCSI, due to (generally) lower latencies, higher rotational speeds and a smarter interface destroys IDE in high-load multi-user style scenarios (lots of random reads & writes all over the disk). Very few (if any) desktop users generate the sort of usage patterns that allow SCSI to shine, so on the desktop it has little advantage (particularly taking into account the cost).
Most people who say SCSI gives them a good boost on their desktop machines are usually comparing quite new SCSI drives to quite old IDE ones, are dealing with poorly-configured IDE setups (more than one device on a channel) or are using an older, slower machine (probably with a crappy IDE controller). For the vast, vast majority of users (and that includes high-end users) SCSI offers little benefit.
Here's a shot (Score:4, Insightful)
SCSI is generally used to allow price discrimination by vendors. SCSI drives have a reputation for being more reliable, and much more expensive.
SCSI supports many more devices on a bus. This is a big deal to me -- it's a royal pain to buy another controller to add another device or two.
It's unlikely that the two will be merged any time soon, because there's tremendous financial incentive to prevent "enterprise-class" drives from becoming commoditized. SCSI is one of the industry's last useful tools to avoid this.
If you're getting a desktop, use ATA, almost certainly. If you're getting a server with a lot of drives, it may be worth your while to get SCSI, for the abovementioned benefits.
If I had some extra money and just wanted some extra reliability, I'd probably have a mirrored RAID pair of IDE drives, if I were building a desktop without a ton of drives.
Re:Benefits of SCSI? (Score:3, Interesting)
My current desktop setup is...
The additional cost to get the extra two IDE channels was $25 for a dual channel IDE RAID card. For a home machine, IDE is perfectly adequate for the main drives. I keep SCSI around in hopes of acquiring a reasonably priced backup solution at some point. (My current backup is to copy modified files to another machine in the garage with an eventual dump to DVD). If I need more storage in the near term, I'd probably pick up a firewire drive.
"Next year or so" the arrangement I'd choose would likely be entirely different. We'll see where serial ATA and SASCSI are at that point.
Firewire? (Score:3, Interesting)
Re:Firewire? (Score:5, Informative)
Firewire is a low-end consumer product... even with its successor (which is taking longer than expected to ship) running at 800Mbits/s (100 megabytes/second), it falls short of current SCSI technology running at 320MB/s. As such, no one would seriously consider Firewire for a large-scale server handling many gigabytes/terabytes of data. Firewire is just too slow a bus for big needs, but it fills its convenience niche in the consumer market. Everything has its own niche... that's why heavily marked-up servers/mainframes/supercomputers still exist instead of cheaper home machines, which just can't fill the requirements.
Re:Firewire? (Score:2)
Why not create a serial bus with many different speeds depending on the application required, make a wireless version too. Why do we need bluetooth, 802whatever, USB, firewire, serial ATA and now serial SCSI? it's just in the interests of hardware vendors to make all these different technologies.
Re:Firewire is not fast enough (Score:2)
I quote from the article you posted: The current generation supports transfer speeds of 800Mb/s (100MB/s, the same as most ATA controllers).
This discussion is about Serial SCSI which will have a peak throughput of 384MB/s. Clearly, firewire is insufficient.
Firewire? How about PCI Express? (Score:3, Interesting)
n.b.: Putting the controller logic back in the drive unit harkens back to the original Integrated Drive Electronics (IDE) approach.
Re:Firewire? (Score:2)
no longer pronounced "Scuzzy." (Score:4, Funny)
And the full acronym for "Serial attached SCSI" is SASCSI..
How exactly would we pronounce that? Sacksie? Sasky? Oh God, I bet it will be a silent C.
Yay, my computer iss really sspeedy now that I've upgraded to the new SSSASSSSSY DRIVE !@#!@^#^$^$#! [colonpee.com]
Jason Fisher.
Fanboy? (Score:2, Funny)
> fanboy vs. ATA fanboy flame war...
FWIW, the alternative name for fanboy is "fanboi". An even more disrespectful version of the term. (As if fanboy wasn't disrespectful enough for some people.)
A couple of notes (Score:4, Informative)
First, SAS uses a point-to-point topology similar to Serial-ATA instead of a shared bus like parallel SCSI. This means each drive gets the full bandwidth of its own link rather than sharing it (the bottleneck becomes the card itself).
Second, according to the SAS working group, SAS comes in three speeds: 150, 300 and 600 MB/s. I'm not sure where that 3 Gbps figure came from.
Third, unlike Serial-ATA or parallel SCSI, SAS is full duplex like fibre channel. This should have some interesting effects on latency.
Fourth, SAS uses the same physical connector as Serial-ATA and in fact can use Serial-ATA drives in legacy mode.
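One plausible way to reconcile the 3 Gbps and 300 MB/s figures is 8b/10b line coding, which SATA and Fibre Channel use; assuming SAS does the same:

```python
# 8b/10b coding carries 8 payload bits in every 10 line bits,
# so a 3 Gb/s line rate yields 300 MB/s of payload.
line_bits_per_s = 3_000_000_000
payload_bytes_per_s = line_bits_per_s * 8 // 10 // 8
print(payload_bytes_per_s)  # 300000000
```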
IBM's had this for several years, it's called SSA (Score:3, Informative)
coward
Ummmm.... (Score:3, Informative)
That's already ~2.5Gbits/sec.
And isn't there a SCSI640 working group, too?
-psy
I'm so confused. It's not Firewire? (Score:2)
No SAS drives on SATA (Score:3, Informative)
http://www.snwonline.com/whats_new/sas_and_sata
The article states that SAS drives won't work on a SATA channel, but a SATA drive will work on SAS.
I wonder if mobo makers like ASUS, ABIT, MSI and the like will choose to put SAS chips on the mobo instead of SATA, as a performance feature?
Let's hope so; it would sure open up a lot of options for upgrading a PC over time.
this is not the merge of scsi and ata (Score:3, Insightful)
Just some info about the cables (Score:3, Informative)
I for one will be doing my best to hunt down a supplier which makes precise lengths so I can have mine cut to size, as they aren't as easy to route as a ribbon cable (seriously!)
Plus if you have 6 devices that's SIX cables in the box instead of 3... one of the small shortcomings of SATA.
(When I first heard about it, I was under the impression it daisy-chained with an "in" and an "out" port - boy did I think that was FANTASTIC... but I was sorely disappointed when I discovered I was incorrect.)
Is Serial faster? (Score:3, Interesting)
Re:Mbit != MByte (Score:2, Informative)
Yup -- post is wrong, eds please amend (Score:3, Informative)
Existing SCSI is 320MB/s * 8 bits/byte = 2.56Gbps.
Moving to 3Gbits is evolutionary, not a huge jump.
I'm wondering what's going on here too -- WTF happened to Firewire? I remember thinking that everyone would be using it as a universal high bandwidth data bus, and for some reason it doesn't seem to be happening.
Re:Mbit != MByte (Score:4, Insightful)
I don't know what stupid scheme they are trying to create here -- interface-wise. SATA is a point-to-point configuration. SCSI has always been a bus configuration. If they go the p-t-p route, then it depends on the controller to be able to support the device on the other end -- SCSI crossing the physical interface or IDE/ATA/ATAPI crossing it. (Think parallel port ethernet dongle.) I'll have a hard time accepting p-t-p SCSI.
If they want to make SCSI more attractive, they should stop significantly overcharging for the technology. They can bulk-test "desktop" SCSI drives just as cheaply as IDE drives. They all use the same servo assemblies -- and in some cases, the same basic interface logic (obviously with different microcode).
Re:3 gigabit/sec! (Score:2, Insightful)
Wouldn't that be called parallel?
Re:why is serial better? (Score:2, Informative)
I think the problem(s) come when you have to take into account keeping parallel lines in synch with one another, accounting for lost bits, and breaking down/putting back together all the information at either end. This all adds up in overhead for a parallel connection, where a serial connection just lets the information go through the line with little or no pre/post processing or synching to worry about.
Re:why is serial better? (Score:2)
In addition to the other posters (who are correct), minute differences in wire length and composition mean that signals traveling near the speed of light often do not arrive close enough to the same time on parallel wires in the same cable. With a SCSI bus it is common for several signals to be in the wire at once.
Re:why is serial better? (Score:4, Informative)
Overcoming the differences in arrival time of signals in a parallel cable is not significantly more difficult than handling clocking (and maybe clock recovery) and buffering and serial-to-parallel conversion on a serial interface.
The main reason that parallel interfaces were popular years ago when things like SCSI were established was the electronics at the time just weren't very fast. The 74LS00 family logic that SCSI and parallel printer ports were designed around had a maximum clock rate of about 30MHz. Add in margin for cable noise and distortion and 5-10MHz was absolutely the most you could manage through any distance. So, if that wasn't fast enough for what you wanted to do, you used more wires in parallel.
These days, it's relatively easy to put multi-gigahertz logic onto chips, and the fewer wires in a cable and connector, the cheaper, so serial wins.
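A rough skew budget (illustrative numbers, not from any spec) shows why wide parallel buses became the hard part:

```python
# Signals propagate through copper at roughly 2/3 c, about 20 cm/ns.
# A 3 cm length mismatch between two wires in the same cable gives:
skew_ns = 3 / 20                 # 0.15 ns of arrival-time skew
# At an Ultra320-class transfer rate of 160 MT/s, one transfer window is:
window_ns = 1e9 / 160e6          # 6.25 ns
print(skew_ns, window_ns)        # skew already eats ~2.4% of the window
```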
Re:It's already here... (Score:2)
Re:Does SCSI now compete with firewire2 ? (Score:3, Insightful)
Is iSCSI a standard yet?