
  • Does Being First Still Matter In America?

    dcblogs writes At the supercomputing conference, SC14, this week, a U.S. Dept. of Energy official said the government has set a goal of 2023 as its delivery date for an exascale system. With that much lead time and increasing international competition, it may be taking a risky path. There was a time when the U.S. didn't settle for second place. President John F. Kennedy delivered his famous "we choose to go to the moon" speech in 1962, and seven years later a man walked on the moon. The U.S. exascale goal is nine years away. China, Europe and Japan all have major exascale efforts, and the government has already dropped the ball on supercomputing at least once. The European forecast of Hurricane Sandy in 2012 was so far ahead of U.S. models in predicting the storm's path that the National Oceanic and Atmospheric Administration was called before Congress to explain how it happened. A U.S. official told Congress that NOAA wasn't keeping up in computational capability, and it's still not keeping up. Cliff Mass, a professor of meteorology at the University of Washington, wrote on his blog last month that the U.S. is "rapidly falling behind leading weather prediction centers around the world" because it has yet to catch up with Europe in computational capability. That criticism followed the recent $128 million purchase of a Cray supercomputer by the U.K.'s Met Office, its meteorological agency.

    233 comments | 3 days ago

  • US DOE Sets Sights On 300 Petaflop Supercomputer

    dcblogs writes U.S. officials Friday announced plans to spend $325 million on two new supercomputers, one of which may eventually be built to support speeds of up to 300 petaflops. The U.S. Department of Energy, the major funder of supercomputers used for scientific research, wants to have the two systems – each with a base speed of 150 petaflops – possibly running by 2017. Going beyond the base speed to reach 300 petaflops will take additional government approvals. If the world stands still, the U.S. may conceivably regain the lead in supercomputing speed from China with these new systems. How adequate this planned investment will look three years from now is a question. Lawmakers weren't reading from the same script as U.S. Energy Secretary Ernest Moniz when it came to assessing the U.S.'s place in the supercomputing world. Moniz said the awards "will ensure the United States retains global leadership in supercomputing." But Rep. Chuck Fleischmann (R-Tenn.) put U.S. leadership in the past tense. "Supercomputing is one of those things that we can step up and lead the world again," he said.

    127 comments | about a week ago

  • Researchers Simulate Monster EF5 Tornado

    New submitter Orp writes: I am a member of a research team that created a supercell thunderstorm simulation that is getting a lot of attention. Presented at the 27th Annual Severe Local Storms Conference in Madison, Wisconsin, Leigh Orf's talk was produced entirely as high-def video and put on YouTube shortly after the presentation. In the simulation, the storm's updraft is so strong that it essentially peels rain-cooled air near the surface upward and back into the updraft, which appears to play a key role in maintaining the tornado. The simulation was based upon the environment that produced the May 24, 2011 outbreak, which included a long-track EF5 tornado near El Reno, Oklahoma (not to be confused with the May 31, 2013 EF5 tornado that killed three storm researchers).

    61 comments | about two weeks ago

  • Interviews: Ask CMI Director Alex King About Rare Earth Mineral Supplies

    The modern electronics industry relies on inputs and supply chains, both material and technological, and none of them are easy to bypass. These include, besides expertise and manufacturing facilities, the actual materials that go into electronic components. Some of them are as common as silicon; rare earth minerals, not so much. One story linked from Slashdot a few years back predicted that then-known supplies would be exhausted by 2017, though such predictions of scarcity are notoriously hard to get right, as people (and prices) adjust to changes in supply. There's no denying that there's been a crunch on rare earths over the last several years, though. The minerals themselves aren't necessarily rare in an absolute sense, but they're expensive to extract. The most economically viable deposits are found in China, and rising prices for exports to the U.S., the EU, and Japan have raised political hackles. At the same time, those rising prices have spurred exploration and reexamination of known deposits off the coast of Japan, in the midwestern U.S., and elsewhere.

    Alex King is director of the Critical Materials Institute, a part of the U.S. Department of Energy's Ames Laboratory. CMI is heavily involved in making rare earth minerals slightly less rare by means of supercomputer analysis; researchers there are approaching the ongoing crunch by looking both for substitute materials for things like gallium, indium, and tantalum, and easier ways of separating out the individual rare earths (a difficult process). One team there is working with "ligands – molecules that attach with a specific rare-earth – that allow metallurgists to extract elements with minimal contamination from surrounding minerals" to simplify the extraction process. We'll be talking with King soon; what questions would you like to see posed? (This 18-minute TED talk from King is worth watching first, as is this Q&A.)

    62 comments | about three weeks ago

  • 16-Petaflop, £97m Cray To Replace IBM At UK Meteorological Office

    Memetic writes: The UK weather forecasting service is replacing its IBM supercomputer with a Cray XC40 containing 17 petabytes of storage and capable of 16 petaflops. This is Cray's biggest contract outside the U.S. With 480,000 CPUs, it should be 13 times faster than the current system. It will weigh 140 tons. The aim is to enable more accurate modeling of the unstable UK climate, with UK-wide forecasts at a resolution of 1.5km run hourly, rather than every three hours, as currently happens. (Here's a similar system from the U.S.) A rough per-core arithmetic check on the headline figures follows this entry.

    125 comments | about three weeks ago
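    A quick back-of-the-envelope check on the figures above (a rough sketch using only the numbers quoted in the summary; the per-core rate is an inference, not a published spec, and "480,000 CPUs" is assumed to mean processor cores):

        # Sanity check of the Met Office Cray figures quoted above (assumed, not official).
        cores = 480_000
        system_flops = 16e15                                             # 16 petaflops
        print(f"{system_flops / cores / 1e9:.1f} gigaflops per core")    # ~33 GF/core, plausible for a modern CPU core

        # If the figure really were 16 teraflops, each core would manage only ~33 megaflops,
        # slower than a 1990s desktop, which is why petaflops is the consistent reading.
        print(f"{16e12 / cores / 1e6:.1f} megaflops per core")

    If the 16-petaflop reading is right, the "13 times faster" claim also implies a predecessor of roughly 1.2 petaflops, again just arithmetic on the quoted numbers.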

  • First Demonstration of Artificial Intelligence On a Quantum Computer

    KentuckyFC writes: Machine learning algorithms use a training dataset to learn how to recognize features in images and use this 'knowledge' to spot the same features in new images. The computational complexity of this task is such that the time required to solve it grows polynomially with the number of images in the training set and the complexity of the "learned" feature. So it's no surprise that quantum computers ought to be able to speed up this process dramatically. Indeed, a group of theoretical physicists last year designed a quantum algorithm that solves this problem in logarithmic rather than polynomial time, a significant improvement.

    Now, a Chinese team has successfully implemented this artificial intelligence algorithm on a working quantum computer, for the first time. The information processor is a standard nuclear magnetic resonance quantum computer capable of handling 4 qubits. The team trained it to recognize the difference between the characters '6' and '9' and then asked it to classify a set of handwritten 6s and 9s accordingly, which it did successfully. The team says this is the first time that this kind of artificial intelligence has ever been demonstrated on a quantum computer and opens the way to the more rapid processing of other big data sets — provided, of course, that physicists can build more powerful quantum computers. (A toy classical sketch of the same classification task follows this entry.)

    98 comments | about a month ago
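    To make the classification task above concrete, here is a minimal classical sketch of the same 6-versus-9 problem using a nearest-centroid classifier on synthetic stand-in data (everything here is illustrative; the quantum algorithm's contribution is performing the underlying distance estimation in logarithmic rather than polynomial time):

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy stand-ins for flattened 8x8 images of handwritten '6' and '9' characters.
        sixes = rng.normal(loc=0.3, scale=0.1, size=(50, 64))
        nines = rng.normal(loc=0.7, scale=0.1, size=(50, 64))

        # "Training" is just averaging each class into a centroid.
        centroid_6 = sixes.mean(axis=0)
        centroid_9 = nines.mean(axis=0)

        def classify(image):
            """Label an image by whichever class centroid it is closer to."""
            d6 = np.linalg.norm(image - centroid_6)
            d9 = np.linalg.norm(image - centroid_9)
            return '6' if d6 < d9 else '9'

        test_image = rng.normal(loc=0.68, scale=0.1, size=64)   # resembles a '9'
        print(classify(test_image))                             # expected output: '9'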

  • Brown Dog: a Search Engine For the Other 99 Percent (of Data)

    aarondubrow writes: We've all experienced the frustration of trying to access information on websites, only to find that the data is trapped in outdated, difficult-to-read file formats and that metadata — the critical data about the data, such as when and how and by whom it was produced — is nonexistent. Led by Kenton McHenry, a team at the National Center for Supercomputing Applications is working to change that. Recipients in 2013 of a $10 million, five-year award from the National Science Foundation, the team is developing software that allows researchers to manage and make sense of vast amounts of digital scientific data that is currently trapped in outdated file formats. The NCSA team recently demonstrated two publicly-available services to make the contents of uncurated data collections accessible.

    23 comments | about a month and a half ago

  • Supercomputing Upgrade Produces High-Resolution Storm Forecasts

    dcblogs writes A supercomputer upgrade is paying off for the U.S. National Weather Service, with new high-resolution models that will offer better insight into severe weather. The National Oceanic and Atmospheric Administration, which runs the weather service, put into production two new IBM supercomputers, each rated at 213 teraflops, running Linux on Intel processors. These systems replaced four-year-old, 74-teraflop systems. More computing power means the models can run more detailed mathematics, increasing the resolution of forecast maps from 8 miles to 2 miles. (A rough sketch of why finer resolution is so compute-hungry follows this entry.)

    77 comments | about 2 months ago
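    A rough illustration of why finer resolution is so compute-hungry (an illustrative scaling argument only, not NOAA's accounting; real models also vary domain size, vertical levels, and forecast length):

        # Cost of refining a forecast model's horizontal grid from 8 miles to 2 miles.
        # Halving the spacing twice gives 4x more points in each horizontal direction,
        # and the time step usually shrinks in proportion (CFL condition), so total
        # work grows roughly as refinement**3 for a fixed domain and forecast length.
        refinement = 8 / 2                  # grid spacing becomes 4x finer
        work_factor = refinement ** 3       # 4 (east-west) * 4 (north-south) * 4 (time steps)
        print(f"~{work_factor:.0f}x more computation per forecast")   # ~64x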

  • Google To Build Quantum Information Processors

    An anonymous reader writes The Google Quantum AI Team has announced that they're bringing in a team from the University of California at Santa Barbara to build quantum information processors within the company. "With an integrated hardware group the Quantum AI team will now be able to implement and test new designs for quantum optimization and inference processors based on recent theoretical insights as well as our learnings from the D-Wave quantum annealing architecture." Google will continue to work with D-Wave, but the UC Santa Barbara group brings its own expertise in superconducting qubit arrays.

    72 comments | about 3 months ago

  • IBM Opens Up Its Watson Supercomputer To Researchers

    An anonymous reader writes IBM has announced the "Watson Discovery Advisor," a cloud-based tool that will let researchers comb through massive troves of data, looking for insights and connections. The company says it's a major expansion in capabilities for the Watson Group, which IBM seeded with a $1 billion investment. "Scientific discovery takes us to a different level as a learning system," said Steve Gold, vice president of the Watson Group. "Watson can provide insights into the information independent of the question. The ability to connect the dots opens up a new world of possibilities."

    28 comments | about 3 months ago

  • Unboxing a Cray XC30 'Magnus' Petaflops Supercomputer

    Bismillah (993337) writes The Pawsey Supercomputing Centre in Australia has started unboxing and installing its new upgraded 'Magnus' supercomputer, which could become the largest such system in the southern hemisphere, with up to one petaflop of performance.

    71 comments | about 4 months ago

  • How a Supercomputer Beat the Scrap Heap and Lived On To Retire In Africa

    New submitter jorge_salazar (3562633) writes Pieces of the decommissioned Ranger supercomputer, 40 racks in all, were shipped to researchers in South Africa, Tanzania, and Botswana to help seed their supercomputing aspirations. They say they'll need supercomputers to solve their growing science problems in astronomy, bioinformatics, climate modeling and more. Ranger's own beginnings were described by the co-founder of Sun Microsystems as a "historic moment in petaflop computing."

    145 comments | about 4 months ago

  • A Peek Inside D-Wave's Quantum Computing Hardware

    JeremyHsu writes: A one-second delay can still seem like an eternity for a quantum computing machine capable of running calculations in mere millionths of a second. That delay represents just one of the challenges D-Wave Systems overcame in building its second-generation quantum computing machine known as D-Wave Two — a system that has been leased to customers such as Google, NASA and Lockheed Martin. D-Wave's rapid-scaling approach to quantum computing has plenty of critics, but the company's experience in building large-scale quantum computing hardware could provide valuable lessons for everyone, regardless of whether the D-Wave machines live up to quantum computing's potential by proving they can outperform classical computers. (D-Wave recently detailed the hardware design changes between its first- and second-generation quantum computing machines in the June 2014 issue of the journal IEEE Transactions on Applied Superconductivity.)

    "We were nervous about going down this path," says Jeremy Hilton, vice president of processor development at D-Wave Systems. "This architecture requires the qubits and the quantum devices to be intermingled with all these big classical objects. The threat you worry about is noise and impact of all this stuff hanging around the qubits. Traditional experiments in quantum computing have qubits in almost perfect isolation. But if you want quantum computing to be scalable, it will have to be immersed in a sea of computing complexity.

    55 comments | about 4 months ago

  • Computing a Cure For HIV

    aarondubrow writes: The tendency of HIV to mutate and resist drugs has made it particularly difficult to eradicate. But in the last decade scientists have begun using a new weapon in the fight against HIV: supercomputers. Using some of the nation's most powerful supercomputers, teams of researchers are pushing the limits of what we know about HIV and how we can treat it. The Huffington Post describes how supercomputers are helping scientists understand and treat the disease.

    89 comments | about 5 months ago

  • NSF Researcher Suspended For Mining Bitcoin

    PvtVoid (1252388) writes "In the semiannual report to Congress by the NSF Office of Inspector General, the organization said it received reports of a researcher who was using NSF-funded supercomputers at two universities to mine Bitcoin. The computationally intensive mining took up about $150,000 worth of NSF-supported computer use at the two universities to generate bitcoins worth about $8,000 to $10,000, according to the report. It did not name the researcher or the universities."

    220 comments | about 5 months ago

  • Electrical Control of Nuclear Spin Qubits: Important Step For Quantum Computing

    Taco Cowboy writes: "Using a spin cascade in a single-molecule magnet, scientists at Karlsruhe Institute of Technology and their French partners have demonstrated that a single nuclear spin can be manipulated in a purely electrical manner, rather than through the use of magnetic fields (abstract). For their experiments, the researchers used a nuclear spin-qubit transistor that consists of a single-molecule magnet connected to three electrodes (source, drain, and gate). The single-molecule magnet is a TbPc2 molecule — a single metal ion of terbium that is enclosed by organic phthalocyanine molecules of carbon, nitrogen, and hydrogen atoms. The gap between the electric field and the spin is bridged by the so-called hyperfine-Stark effect that transforms the electric field into a local magnetic field. This quantum mechanical process can be transferred to all nuclear spin systems and, hence, opens up entirely novel perspectives for integrating quantum effects in nuclear spins into electronic circuits."

    42 comments | about 6 months ago

  • Stanford Bioengineers Develop 'Neurocore' Chips 9,000 Times Faster Than a PC

    kelk1 sends this article from the Stanford News Service: "Stanford bioengineers have developed faster, more energy-efficient microchips based on the human brain – 9,000 times faster and using significantly less power than a typical PC (abstract). Kwabena Boahen and his team have developed Neurogrid, a circuit board consisting of 16 custom-designed 'Neurocore' chips. Together these 16 chips can simulate 1 million neurons and billions of synaptic connections. The team designed these chips with power efficiency in mind. Their strategy was to enable certain synapses to share hardware circuits. ... But much work lies ahead. Each of the current million-neuron Neurogrid circuit boards costs about $40,000. (...) Neurogrid is based on 16 Neurocores, each of which supports 65,536 neurons. Those chips were made using 15-year-old fabrication technologies. By switching to modern manufacturing processes and fabricating the chips in large volumes, he could cut a Neurocore's cost 100-fold – suggesting a million-neuron board for $400 a copy." (The arithmetic behind those figures is sketched after this entry.)

    209 comments | about 7 months ago
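    The arithmetic behind the Neurogrid figures quoted above, using only the summary's own numbers:

        neurocores_per_board = 16
        neurons_per_core = 65_536
        print(neurocores_per_board * neurons_per_core)   # 1,048,576: the "1 million neurons" per board

        board_cost_now = 40_000     # dollars per board today, per the article
        projected_cut = 100         # projected 100-fold reduction from modern fabs and volume production
        print(board_cost_now // projected_cut)           # $400 per million-neuron board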

  • Using Supercomputers To Predict Signs of Black Holes Swallowing Stars

    aarondubrow (1866212) writes "A 'tidal disruption' occurs when a star orbits too close to a black hole and gets sucked in. The phenomenon is accompanied by a bright flare with a unique signature that changes over time. Researchers at the Georgia Institute of Technology are using Stampede and other NSF-supported supercomputers to simulate tidal disruptions in order to better understand the dynamics of the process. Doing so helps astronomers find many more possible candidates of tidal disruptions in sky surveys and will reveal details of how stars and black holes interact."

    31 comments | about 7 months ago

  • Fifty Years Ago IBM 'Bet the Company' On the 360 Series Mainframe

    Hugh Pickens DOT Com (2995471) writes "Those of us of a certain age remember well the breakthrough that the IBM 360 series mainframes represented when the series was unveiled fifty years ago on 7 April 1964. Now Mark Ward reports at BBC that the first System 360 mainframe marked a break with all general-purpose computers that came before, because it was possible to upgrade the processors but still keep using the same code and peripherals from earlier models. "Before System 360 arrived, businesses bought a computer, wrote programs for it and then when it got too old or slow they threw it away and started again from scratch," says Barry Heptonstall. IBM bet the company when it developed the 360 series. At the time IBM had a huge array of conflicting and incompatible lines of computers, as did the computer industry in general, which was still largely a custom or small-scale design and production business, and the problems were becoming obvious: upgrading from one of the smaller IBM series to a larger one took so much effort that customers might as well switch to a competing product from the "BUNCH" (Burroughs, Univac, NCR, CDC and Honeywell). Fred Brooks managed the development of IBM's System/360 family of computers and the OS/360 software support package, and based his software classic "The Mythical Man-Month" on his observation that "adding manpower to a late software project makes it later." The S/360 was also the first computer to use microcode to implement many of its machine instructions, as opposed to having all of its machine instructions hard-wired into its circuitry. Despite their age, mainframes are still in wide use today and are behind many of the big information systems that keep the modern world humming, handling such things as airline reservations, cash machine withdrawals and credit card payments. "We don't see mainframes as legacy technology," says Charlie Ewen. "They are resilient, robust and are very cost-effective for some of the work we do.""

    169 comments | about 8 months ago

  • Mystery MLB Team Moves To Supercomputing For Their Moneyball Analysis

    An anonymous reader writes "A mystery [Major League Baseball] team has made a sizable investment in Cray's latest effort at bringing graph analytics at extreme scale to bat. Nicole Hemsoth writes that what the team is looking for is a "hypothesis machine" that will allow them to integrate multiple, deep data wells and pose several questions against the same data. They are looking for platforms that allow users to look at facets of a given dataset, adding new cuts to see how certain conditions affect the reflection of a hypothesized reality."

    56 comments | about 8 months ago
