
Indiana University Dedicates Biggest College-Owned Supercomputer

samzenpus posted about a year and a half ago | from the getting-an-upgrade dept.

Education 83

Indiana University has replaced their supercomputer, Big Red, with a new system predictably named Big Red II. At the dedication HPC scientist Paul Messina said: "It's important that this is a university-owned resource. ... Here you have the opportunity to have your own faculty, staff and students get access with very little difficulty to this wonderful resource." From the article: "Big Red II is a Cray-built machine, which uses both GPU-enabled and standard CPU compute nodes to deliver a petaflop -- or 1 quadrillion floating-point operations per second -- of max performance. Each of the 344 CPU nodes uses two 16-core AMD Abu Dhabi processors, while the 676 GPU nodes use one 16-core AMD Interlagos and one NVIDIA Kepler K20."
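Taking the summary's numbers at face value, here is a quick Python sanity check of the CPU core count (GPU cores aren't counted, and the article doesn't give a per-node peak, so only the headline petaflop figure is spelled out):

```python
# Node and core counts as given in the summary for Big Red II.
cpu_nodes = 344   # each with two 16-core AMD "Abu Dhabi" processors
gpu_nodes = 676   # each with one 16-core AMD "Interlagos" CPU + one NVIDIA K20

cpu_cores = cpu_nodes * 2 * 16 + gpu_nodes * 1 * 16  # CPU cores only
print(cpu_cores)  # 21824 CPU cores in total

# "A petaflop" spelled out: one quadrillion floating-point operations/second.
peak_flops = 1e15
print(f"{peak_flops:.0e} FLOP/s")  # 1e+15 FLOP/s
```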


Dedication ceremony? (-1, Troll)

Anonymous Coward | about a year and a half ago | (#43579055)

Why is it having a dedication ceremony? What it is being dedicated to? The traditional answer is some deity; that seems unlikely in this context though Hephaestus might be a good choice. This is the 21st century. Please do away with this babbling nonsense.

Re:Dedication ceremony? (3, Insightful)

Cenan (1892902) | about a year and a half ago | (#43579069)

Or it could be something completely innocent, like cutting of tape and speeches and shit. You know, the kind of stuff you do when you show your stakeholders what their money went to. Stop your idiotic religious babbling.

Re:Dedication ceremony? (1)

Anonymous Coward | about a year and a half ago | (#43579369)

I dunno, Indiana does have those awful "In God We Trust" license plates.

Re:Dedication ceremony? (1)

Anonymous Coward | about a year and a half ago | (#43579491)

That's just an alternative to the Free Market sky fairy used elsewhere in the US.

Re:Dedication ceremony? (5, Funny)

Anonymous Coward | about a year and a half ago | (#43579337)

I realize that you don't "believe", but SkyNet will be real one day and we never know which super computer will be the first node in humanity's Beowulf Cluster of Death.

Maybe, just maybe, if we're nice to them and show them some respect they'll let us service their modules until we die of natural causes.

Biggest? Really? (3, Interesting)

wonkey_monkey (2592601) | about a year and a half ago | (#43579059)

Computers used to be a lot bigger.

Re:Biggest? Really? (2, Interesting)

Anonymous Coward | about a year and a half ago | (#43579083)

How can you tell? In TFA, you just have a photo of one corner of the building.

Re:Biggest? Really? (2)

reboot246 (623534) | about a year and a half ago | (#43579343)

That's a building? It looked more like a stack of 4x8 Styrofoam sheets. I guess that's what passes for architectural design nowadays.

Re:Biggest? Really? (1)

Anonymous Coward | about a year and a half ago | (#43579389)

Yea, hard to tell scale, but you can see a security camera and the outside lighting, so it seems to be one-story....

Re:Biggest? Really? (2)

cdrudge (68377) | about a year and a half ago | (#43579623)

IU is a public university. Would you prefer tax dollars get spent on a masterpiece of architectural design for a data center?

Re:Biggest? Really? (1)

gtall (79522) | about a year and a half ago | (#43579715)

Depends upon what you mean by public. Most of their money these days comes not from the state but from tuition, grants, etc. Many "public" unis are in the same position because lawmakers have increasingly found education to be not worth their while.

Re:Biggest? Really? (0)

Anonymous Coward | about a year and a half ago | (#43579861)

It probably is.

Those "masterpiece" buildings that some universities have that look pretty usually are nightmares to actually use.

Re:Biggest? Really? (3, Insightful)

Z_A_Commando (991404) | about a year and a half ago | (#43580605)

It's a Tier 3 data-center built to withstand F5 tornadoes and earthquakes. All the pretty glass stuff doesn't really survive in 300MPH winds. Also, the main receiving area in the back looks like something out of Jurassic Park. And in Bloomington, they think limestone is very pretty.

Re:Biggest? Really? (1)

Chrisq (894406) | about a year and a half ago | (#43579373)

Computers used to be a lot bigger.

But the largest computer ever built [wikipedia.org] (physically) executed only 75,000 instructions per second, and had 70K of memory (though those were 32-bit words)!
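For scale, the parent's memory figure works out to well under a megabyte; a quick check in Python, taking the quoted numbers at face value:

```python
# The parent's figures for the AN/FSQ-7: ~70K words of memory,
# each word 32 bits wide.
words = 70_000
bytes_total = words * 32 // 8  # 4 bytes per 32-bit word
print(bytes_total)  # 280000 bytes, i.e. less than 0.3 MB
```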

Imagine a Beowulf.. (2)

blackicye (760472) | about a year and a half ago | (#43579093)

Cray is still kicking around??

Re:Imagine a Beowulf.. (4, Informative)

fuzzyfuzzyfungus (1223518) | about a year and a half ago | (#43579229)

Cray is still kicking around??

I don't think they've kicked out an original processor design in ages; but they are still (among) those you talk to if you want something a little more tightly coupled, and/or a bit more 'turnkey' than "10,000 of whatever Dell is selling, and some 10GbE switches".

Re:Imagine a Beowulf.. (3, Informative)

RicktheBrick (588466) | about a year and a half ago | (#43579759)

Maybe it is because Seymour Cray died in 1996 of complications from an automobile accident. http://en.wikipedia.org/wiki/Seymour_Cray [wikipedia.org] I would think that most people with a passing interest in supercomputers would have known that fact. Seymour Cray did a lot of work in establishing supercomputing. I can remember seeing his supercomputers on the cover of Popular Science back in the 70's. I think he deserves a little more respect than shown here by that remark.

Re:Imagine a Beowulf.. (-1)

Anonymous Coward | about a year and a half ago | (#43580595)

I don't think they've kicked out an original processor design in ages; but they are still (among) those you talk to if you want something a little more tightly coupled, and/or a bit more 'turnkey' than "10,000 of whatever Dell is selling, and some 10GbE switches".

I think he deserves a little more respect than shown here by that remark.

I don't blame you for being upset. That turnkey comment is pretty offensive given what happened last time Seymour Cray turned a key.

Re:Imagine a Beowulf.. (1)

OhSoLaMeow (2536022) | about a year and a half ago | (#43584023)

Still too soon.

Re:Imagine a Beowulf.. (1)

blackicye (760472) | about a year and a half ago | (#43584787)

Maybe it is because Seymour Cray died in 1996 of complications from an automobile accident. http://en.wikipedia.org/wiki/Seymour_Cray [wikipedia.org] I would think that most people with a passing interest in supercomputers would have known that fact. Seymour Cray did a lot of work in establishing supercomputing. I can remember seeing his supercomputers on the cover of Popular Science back in the 70's. I think he deserves a little more respect than shown here by that remark.

I meant no disrespect to Seymour Cray, I just didn't think the company was still all that relevant any more.

Re:Imagine a Beowulf.. (1)

Anonymous Coward | about a year and a half ago | (#43579915)

Google "cray blackwidow". Last custom cray-designed vector processor

Re:Imagine a Beowulf.. (0)

Anonymous Coward | about a year and a half ago | (#43580637)

Cray is still kicking around??

The name is still alive [wikipedia.org]

AMD ?? (0, Funny)

Anonymous Coward | about a year and a half ago | (#43579105)

You want PERF you must go INTEL !!

Intel RUELZ !!

Re:AMD ?? (0)

Anonymous Coward | about a year and a half ago | (#43580973)

Warz teh intel with more than four COREZ? Itz multi-processing Biznitches!

Won't be long now... (1)

Anonymous Coward | about a year and a half ago | (#43579163)

until some wannabe comedian makes a "Does it run Linux?" post. Despite the fact that it's one of the earliest /. memes and has been used over a million times, it will get moderated "+5 Funny" because originality and creativity are lost on this crowd.

Re:Won't be long now... (4, Insightful)

JustOK (667959) | about a year and a half ago | (#43579233)

you must be new here.

Re:Won't be long now... (0)

Anonymous Coward | about a year and a half ago | (#43579767)

Well that is a valid question. So, does it run Linux?

Re:Won't be long now... (2)

Entropius (188861) | about a year and a half ago | (#43580079)

Yes. What else would it run?

Re:Won't be long now... (0)

Anonymous Coward | about a year and a half ago | (#43580615)

100 meters in 9 seconds flat?

Re:Won't be long now... (1)

Entropius (188861) | about a year and a half ago | (#43583155)

It tried to run, but as soon as they fired the starting pistol it just fell over. Someone called it a "flop".

Re:Won't be long now... (0)

Anonymous Coward | about a year and a half ago | (#43581039)

I was kind of expecting OpenIndiana, to be honest.

Re:Won't be long now... (1)

Anonymous Coward | about a year and a half ago | (#43580721)

Do you like apples? TFA says it runs Cray's version of SUSE.

As Kanye Would Say... (-1)

Anonymous Coward | about a year and a half ago | (#43579193)

That Shit Cray.

Biggest and probably dull too look at... (2)

mendax (114116) | about a year and a half ago | (#43579197)

It's great to see a university have a monster like this for research use. You would think old universities are well suited for these kinds of monsters: their computer centers were built at a time when they really were filled with monster machines that your iPad would run circles around today, performance-wise. Those were replaced in the 1990s by servers that would fit into a closet, but the universities still have all this space that can be filled with racks upon racks of supercomputer nodes. (Though I suspect IU may have built a new building for these new and improved monsters.)

These new monsters are nice to contemplate, but they're not as pretty to look at. Computers in the old days were designed to be both functional and attractive to members of the unwashed masses, who could gaze at them through the glass windows, drool, and be hypnotized by the blinking lights and the tape drive reels spinning. And the glass windows were there to let the institution show the machine off as a kind of status symbol. There was no picture of this new beast in the article, but I will bet $0.02 that it's pretty dull looking.

Re:Biggest and probably dull too look at... (1)

mendax (114116) | about a year and a half ago | (#43579205)

Ack.... it's 3:30 in the morning and I can't spell. It's "probably dull TO look at". That ought to be modded down just for that little slip.

Re:Biggest and probably dull too look at... (2)

fuzzyfuzzyfungus (1223518) | about a year and a half ago | (#43579249)

Check this one out... [degiorgi.math.hr]

(More broadly, though, the point is largely valid. Reel-to-reel deserved to die, technologically; but damn did it look 'high tech' churning away in the background. Now that everything fits in standard 72U racks, it's mostly just a game of 'should we go for basic black, or spring for custom powdercoat and a cool cutout design for the doors?')

Of course, seeing as the CM-2 [uic.edu] won 'coolest-looking computer of all time', with the CM-5 playing 'solid, but ever-so-slightly-disappointing sequel', perhaps it's only fair for everyone else to just stop trying.

Re:Biggest and probably dull too look at... (2)

dbIII (701233) | about a year and a half ago | (#43579357)

Not ten feet away from me is a box of nine track reels - it's still not dead yet. It should be, and it would be if people had transcribed their media within a sane timeframe, but it's still in use on occasion. I don't know if those reels in that box will ever be read again since they were a third copy in 1982, but I had to get a few transcribed up to a couple of years ago after the original owners threw out their copies and then found out a decade later that they wanted the data.

Re:Biggest and probably dull too look at... (1)

Charliemopps (1157495) | about a year and a half ago | (#43579475)

About 50 feet from me there's about 100 boxes of microfiche that represent billing records from the 1960s on back. I have no idea why we still keep them, but they're there sure enough.

Re:Biggest and probably dull too look at... (1)

fahrbot-bot (874524) | about a year and a half ago | (#43583597)

Not ten feet away from me is a box of nine track reels...

About 50 feet from me there's about 100 boxes of microfiche...

About a mile away from me is something called a Library, filled with things called "books" and "magazines" - I hear they're like papery blogs. I have no idea why we still keep them, but they're there sure enough.

Re:Biggest and probably dull too look at... (1)

mwvdlee (775178) | about a year and a half ago | (#43579257)

http://newsinfo.iu.edu/pub/libs/images/usr/15356_h.jpg [iu.edu]
Not too shabby looking for a line of racks. Please donate your $0.02 to a charity of your own choice.
Though from the looks of it, I expect it to be mostly calculating ballistic trajectories originating in eastern Europe.

Re:Biggest and probably dull too look at... (2)

mendax (114116) | about a year and a half ago | (#43579377)

Looks rather boring to me. But I'm old school... and having just made my first visit in five years to the Computer History Museum in Mountain View, and seen the beauty of old room-filling computers from the last sixty years on display, I can say with some certainty that the IU machine is dull to look at.

Research Hell (4, Funny)

kurt555gs (309278) | about a year and a half ago | (#43579219)

Can you imagine how many Bitcoins this thing could mine per hour?

Not many with only 676 GPU nodes (1)

Anonymous Coward | about a year and a half ago | (#43579353)

The bitcoin pools are mostly PCs with GPU rigs, and they often number into the tens of thousands, so basically this supercomputer is puny compared to even the GPU bitcoin pools.

Re:Not many with only 676 GPU nodes (2)

flowerp (512865) | about a year and a half ago | (#43579483)

Bitcoin is now dominated by FPGA and ASIC miners (dedicated hardware), most GPU farms have moved on to litecoin.

Re:Not many with only 676 GPU nodes (1)

Anonymous Coward | about a year and a half ago | (#43579495)

RTFS... in what world are 676 Kepler K20s called puny?

In a Bitcoin world (0)

Anonymous Coward | about a year and a half ago | (#43579571)

"RTFS... in what world are 676 Kepler K20s called puny?"
In a world where 100k GPU's can't compete with the Avalon rigs, so why do you think 20k can?

Re:Not many with only 676 GPU nodes (1)

gman003 (1693318) | about a year and a half ago | (#43580925)

The Bitcoin world.

Kepler is good for stuff involving lots of double-precision floating-point, like scientific computing. Physics, chemistry, stuff like that.

AMD has a lead on Bitcoin mining because a) it's integer, not float32 or float64, b) AMD has a shitload of slower cores rather than a smaller number of more efficient, powerful cores, and c) due to some weird coincidences of architecture, AMD designs (both VLIW5 and GCN, IIRC) can run the "main loop" of Bitcoin mining in only one instruction, while Nvidia's Fermi and Kepler designs require three instructions.

The most badass Bitcoin miner would consist of Radeon 7990s. Fortunately, Indiana University seems to be planning to use this for SCIENCE and not Bitcoins. Science is probably a better investment, in the long run.
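For what it's worth, the "one instruction vs. three" point above is about the 32-bit rotate that SHA-256 (Bitcoin's hash) uses constantly: AMD's GPUs could express it as a single bit-align instruction, while Fermi/Kepler had to spell it out as two shifts and an OR. A portable Python sketch of that operation (the function name is mine):

```python
def rotr32(x: int, n: int) -> int:
    """32-bit right rotate, the primitive SHA-256 leans on heavily.

    This spells out the shift/shift/OR form that (per the parent comment)
    NVIDIA's Fermi and Kepler GPUs had to use, versus AMD's single
    bit-align instruction.
    """
    return ((x >> n) | (x << (32 - n))) & 0xFFFFFFFF

# Example: rotating 0b1 right by one wraps the low bit to the top.
print(hex(rotr32(1, 1)))  # 0x80000000
```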

Re:Research Hell (0)

Anonymous Coward | about a year and a half ago | (#43581845)

How do you think they plan on paying for it?

and with all that computing power.. (0)

Anonymous Coward | about a year and a half ago | (#43579297)

they're still unable to figure out why the big ten sucks at bowl games.

some big ten schools are not sports colleges (2)

Joe_Dragon (2206452) | about a year and a half ago | (#43579457)

some big ten schools are not sports colleges.

No, at some you have real classes to take and pass.

Re:some big ten schools are not sports colleges (-1)

Anonymous Coward | about a year and a half ago | (#43579687)

some big ten schools are not sports colleges.

No, at some you have real classes to take and pass.

Must make it hard for them to fulfill their quota of blacks

Re:some big ten schools are not sports colleges (0)

Anonymous Coward | about a year and a half ago | (#43579989)

I went to the University of Minnesota and I can tell you that we're neither.

Re:and with all that computing power.. (1)

anjrober (150253) | about a year and a half ago | (#43580091)

IU has never been a football powerhouse; it's all basketball.
And all in, they had a very good year. I would have loved to see them do better in the tournament, but it was a great regular season.

So they'll be mining a shitload of Bitcoins. (1)

davesag (140186) | about a year and a half ago | (#43579391)

They'll just use it to mine Bitcoins I'm sure.

Quality name? (0)

Anonymous Coward | about a year and a half ago | (#43579417)

Now they have Big Red Two and Big Red too. Twice the fun!

At least we (Purdue University) name our supercomputers something interesting.

Re:Quality name? (1)

slugstone (307678) | about a year and a half ago | (#43579583)

Like what? Carter? Is that interesting?

Re:Quality name? (1)

jtillots (719111) | about a year and a half ago | (#43592051)

I wanted to name it Trident. It's 1) IU-related (our symbol is called the trident), 2) powerful, since it's a weapon, and 3) gum-themed (expanding on Big Red as a chewing gum). Who's with me!

Another Bitcoin Joke (1)

puddingebola (2036796) | about a year and a half ago | (#43579487)

It's already found 2 verified bitcoins and paid for the first month's electric bill.

They should make a virtual supercomputer (0)

Anonymous Coward | about a year and a half ago | (#43579917)

Just offer money (paying even in Bitcoins is fine), and in return people run a node on their computers.

Open-source the node design; with Bitcoin, that drove the design of FPGAs and now ASICs to run the nodes. Someone is sure to design tiny, fast cores capable of just running your nodes. GPU nodes are too general-purpose to be the most efficient solution, and once you make the cores simpler they become cheaper, use less power, and scale far better.

As long as you can keep the compute jobs running (so there's income there), the computing power will scale to use it.

First task: (0)

Anonymous Coward | about a year and a half ago | (#43579521)

Come up with something better than "Big Red III" for the next machine.

Re:First task: (1)

tqk (413719) | about a year and a half ago | (#43581495)

"Hellboy!" (apologies to Ron Perlman & Co.).

I have to question the wisdom of this (2)

Virtucon (127420) | about a year and a half ago | (#43579549)

While it has been in vogue for years for universities to have this capability in-house, I have to question the wisdom of this kind of investment in a few areas.

First, there was recently an article on Slashdot about the Federal Government retiring Roadrunner after less than 5 years because it was too much of a power hog. [digitaltrends.com] I haven't seen anything in the press releases about Big Red that would indicate IU has solved the power obsolescence issue; in five years, we'll probably see Big Red II retired because it isn't power efficient compared to newer technology. IMO in five years IU will be looking to fund Big Red III, so I hope they get their value out of this investment in total cost of ownership (TCO) terms, because it has to be very, very expensive to keep the lights on for this thing.

Second, with utility computing models available in the cloud from AWS, Google, etc. for large-scale experiments, more and more companies are choosing the utility model to run their research rather than buying hardware. I don't need to cite them all here, but there are stories day in and day out of companies and universities leveraging utility-based cloud models for HPC. You have one resource here at IU, when you could lease multiple cloud-based resources with hundreds of thousands of nodes simultaneously rather than relying on one large machine in your data center.

I can imagine there are quite a few experiments IU can do with it, but when I read their press, it's available to IU students and faculty; does that mean they won't let other academic institutions use it? If so, it's a very expensive resource that only one institution can use, and I doubt they can keep it busy 24x7x365 for its useful life with experiments. Maybe I'm wrong, but I just can't see this kind of large-scale investment being feasible over the coming years, because running in a cloud-based model will just be too inexpensive and disposable.

Re:I have to question the wisdom of this (0)

Anonymous Coward | about a year and a half ago | (#43579709)

The cloud idea is always wonderful, but in the end there are just too many risks involved (i.e., data loss, unexpected or continual downtime, IP, etc.).

Investing in cloud solutions only to suddenly experience a single instance of these sorts of dangers can sometimes be all that's required to completely derail an entire company, product, or research program.

Re:I have to question the wisdom of this (1)

Thawk9455 (1037874) | about a year and a half ago | (#43579743)

It might help to know that many universities can't just use cloud services willy-nilly. Only a few people on campus are technically allowed to agree to a license agreement (that doesn't mean others don't, but for something big it becomes important). Those license agreements have to go through significant negotiations to ensure all requirements of the state laws governing the university are met, all NIH or other grant requirements are met, etc. The contract negotiations alone can take a year or more, if the parties can ever agree on terms. This is definitely one area where companies, which can make their own decisions, can move around and try different services much more quickly. Universities and medical organizations have much more stringent data requirements than most businesses.

Re:I have to question the wisdom of this (3, Interesting)

gtall (79522) | about a year and a half ago | (#43579745)

Indiana University is not simply a university; it is a state school system with several regional campuses. Oddly enough, Purdue is Indiana's second state school system with its own regional campuses. They both share a campus at IUPUI (Indianapolis). I'd be very surprised if there is any free time left for this. And if there is, IU would likely just lease it to Purdue.

All in all, it is probably cost effective for them to do this. They are unlikely to have made this decision in a vacuum; they are well aware of the alternatives. (I'm an IU alum).

It's a good idea. (0)

Anonymous Coward | about a year and a half ago | (#43579969)

There is a huge benefit to having a computer that you physically control. For instance, you can install whatever software you want, including experimental operating systems that might crash. Sure, you could VM all the machines, etc., but if your *thing* is developing high performance computing, then running on the real metal is important.

The other aspect is that the money comes out of a different bucket. If you have the big box already installed and sitting there, your little $10k grant can essentially get the computer time for free (assuming the Uni sets it up that way), as opposed to buying by the CPU-Second from a cloud provider. CapEx vs OpEx as it were.

Finally, the administrative overhead in getting an account and access to an on-campus resource is typically trivial in comparison to negotiating a procurement contract with a cloud provider, particularly if your money is coming from grant funds, since the granting agency usually sticks their fingers into the negotiations as well. (Is the provider compliant with the "Drug Free Workplace Act", for instance)

If that "personal touch" weren't important, then we'd all be still using timesharing terminals on a mainframe.

Re:I have to question the wisdom of this (4, Insightful)

riley (36484) | about a year and a half ago | (#43579981)

Cloud computing is not appropriate for all types of research computing. Let's say you want to use Amazon's cloud offering, but you have a genomic and geospatial dataset of 60 TB. While not ubiquitous in research computing, that is not unheard of, especially in the field of bioinformatics. The cost of storage and the cost of transfer will each eat away at whatever grant is funding the research. This is a business decision: does the cost of the computing resource and its operation result in [ more grants / better faculty retention ] than not having it?

The cost-benefit analysis has been done, and while cloud computing has its place, there are additional costs that make it problematic. The cloud is not a panacea.

That said, in five years IU could very well be looking for its next big computer. The average lifespan of a supercomputer is 5-8 years. So, five years is on the early side of looking for the next big thing, but not outrageously so.

Disclaimer -- I run high speed data storage for a university. I've written acceptance test measures for high performance computing resources. I've done the cost-benefit analyses.
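A hedged back-of-envelope for the 60 TB example, in Python. The per-GB prices below are placeholder assumptions for illustration, not any provider's actual rate card:

```python
# Rough cloud-cost estimate for a 60 TB research dataset.
dataset_tb = 60
transfer_usd_per_gb = 0.09       # assumed egress price per GB (illustrative)
storage_usd_per_gb_month = 0.03  # assumed storage price per GB-month (illustrative)
months = 12

gb = dataset_tb * 1000
egress_cost = gb * transfer_usd_per_gb               # one full download
storage_cost = gb * storage_usd_per_gb_month * months  # a year of storage

print(f"one full egress: ${egress_cost:,.0f}")        # $5,400
print(f"{months} months storage: ${storage_cost:,.0f}")  # $21,600
```

Even with made-up prices, the shape of the result is the point: transfer and storage charges recur against the grant, while a local disk array is a one-time capital cost.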

Re:I have to question the wisdom of this (1)

Blaskowicz (634489) | about a year and a half ago | (#43580335)

I don't run HPC on 60 TB datasets or do anything remotely like that, but I know that for me, storing a video game on a local disk is cheaper and higher-performance than signing up with an ISP that gives me four aggregated SDSL links (symmetrical 20 Mbit/s) and renting Amazon storage.

Re:I have to question the wisdom of this (1)

godrik (1287354) | about a year and a half ago | (#43580533)

Well, that's a valid question, and depending on the case the answer can be different. Many applications (especially the ones that will use GPU clusters) need a good interconnect. Cray provides that on its machines. Last time I checked cloud platforms, they did not have a suitable interconnect (10 Gigabit Ethernet has high latency).

Re:I have to question the wisdom of this (1)

iggymanz (596061) | about a year and a half ago | (#43581821)

quit using the cloud word; what are you, a marketing choad?

multiple "cloud based resources" (i.e. just another goddamn bunch of servers on the internet) don't have the high speed network interconnects of a supercomputing cluster

FWIW (2)

jimbrooking (1909170) | about a year and a half ago | (#43579589)

The IU machine at 1 PFLOP would rank around 24th in the world and 11th in the U.S. (http://www.top500.org/list/2012/11/).

Why two different GPUs / graphics cards (1)

angel'o'sphere (80593) | about a year and a half ago | (#43579851)

What is the advantage having two different GPUs in one node? Any idea?

Re:Why two different GPUs / graphics cards (0)

Anonymous Coward | about a year and a half ago | (#43579983)

I use molecular dynamics software (LAMMPS) for my research, and it likes to have one GPU per CPU when possible.

Re:Why two different GPUs / graphics cards (2)

Blaskowicz (634489) | about a year and a half ago | (#43580055)

There aren't two different GPUs in a node; the AMD Interlagos is a Bulldozer Opteron, made of two dies in the package, the same die as in the FX-8150 CPU.
The CPU nodes run newer Opterons, about 10% more efficient, made of two Piledriver dies (similar to the FX-8350). It's weird that two different kinds of CPU are used, but that's probably because the GPU nodes were already built and validated.

Re:Why two different GPUs / graphics cards (1)

angel'o'sphere (80593) | about a year and a half ago | (#43580197)

Well, the summary says there are two different GPUs per node. Or is that just misleading, and they mean a CPU + GPU on a GPU node, where the CPU there is different from the one on the CPU nodes?

Re:Why two different GPUs / graphics cards (1)

Blaskowicz (634489) | about a year and a half ago | (#43580457)

It's implied that a GPU node has a CPU + GPU, because that GPU won't do anything at all on its own; it isn't even able to talk to the network or run an OS (until some future generation of GPU includes CPU cores). The summary said they were "GPU-enabled nodes", too. The reader is then supposed to understand (or already know) that the AMD 16-core chip is a CPU, or infer it somehow.

Alright, I'm only now seeing the last sentence's ambiguity; the trick is that you don't see what's wrong when you already understand all the techno-babble. The submitter did some lazy editing; adding just three letters would fix it:

"while the 676 GPU nodes use one 16-core AMD Interlagos CPU and one NVIDIA Kepler K20."

Re:Why two different GPUs / graphics cards (1)

angel'o'sphere (80593) | about a year and a half ago | (#43595121)

Yeah, after your post, while I was typing, I guessed that.
Thanks for the clarification anyway! (I'm a software and process guy; as with the latest car and aircraft models, I've lost interest, and I don't really have a clue about CPUs right now.)

Bobby Knight powered? (1)

rimcrazy (146022) | about a year and a half ago | (#43580283)

So do they have Bobby Knight in a closet kicking chairs to power this thing?

Clown owned? (0)

Anonymous Coward | about a year and a half ago | (#43580359)

My mind read 'clown owned supercomputer'. Where was my subconscious going with that?

It would be nice... (3, Insightful)

MrLizard (95131) | about a year and a half ago | (#43580717)

...if this got as much attention in the local press as throwing a ball into a basket does.

Re:It would be nice... (0)

Anonymous Coward | about a year and a half ago | (#43583179)

If it made the school millions of dollars in TV rights and merchandizing, it probably would.

NOT (0)

Anonymous Coward | about a year and a half ago | (#43581755)

It's only the biggest supercomputer that's restricted to a single university. Even their own FAQ explains the weasel words: there are several (much) larger university supercomputers out there (Blue Waters, Kraken, Stampede). Just because they won't let anyone else use it doesn't make it the biggest.

http://kb.iu.edu/data/bcqt.html

Who (1)

Richy_T (111409) | about a year and a half ago | (#43585183)

So who was it dedicated to?
