Paul Fernhout writes: An article in the Harvard Business Review by William H. Davidow and Michael S. Malone suggests: "The 'Second Economy' (the term used by economist Brian Arthur to describe the portion of the economy where computers transact business only with other computers) is upon us. It is, quite simply, the virtual economy, and one of its main byproducts is the replacement of workers with intelligent machines powered by sophisticated code. ... This is why we will soon be looking at hordes of citizens of zero economic value. Figuring out how to deal with the impacts of this development will be the greatest challenge facing free market economies in this century. ... Ultimately, we need a new, individualized, cultural approach to the meaning of work and the purpose of life. Otherwise, people will find a solution — human beings always do — but it may not be the one for which we began this technological revolution."
This follows the recent Slashdot discussion of "Economists Say Newest AI Technology Destroys More Jobs Than It Creates," citing a NY Times article, and other previous discussions like Humans Need Not Apply. What is most interesting to me about this HBR article is not the article itself so much as the fact that concerns about the economic implications of robotics, AI, and automation are now making it into the Harvard Business Review. These issues have been discussed by alternative economists for decades, such as in the Triple Revolution Memorandum from 1964 — even as those projections have been slow to play out, with automation's initial effect being more to hold down wages and concentrate wealth than to displace most workers. However, we may be reaching the point where these effects are hard to deny, even though they run against mainstream theory, which assumes infinite demand and broad distribution of purchasing power via wages.
As to possible solutions, the HBR article mentions using government planning to create public works like infrastructure investments to help address the issue. There is no mention of expanding the "basic income" of Social Security, currently received only by older people in the U.S., expanding the gift economy as represented by GNU/Linux, or improving local subsistence production using, say, 3D printing and gardening robots like Dewey of "Silent Running." So it seems the mainstream economics profession is starting to accept the emerging reality of this increasingly urgent issue, but is still struggling to think outside an exchange-oriented box for socioeconomic solutions. A few years ago, I collected dozens of possible good and bad solutions related to this issue. Like Davidow and Malone, I'd agree that the particular mix we end up with will be a reflection of our culture. Personally, I feel that if we are heading for a technological "singularity" of some sort, we would be better off improving various aspects of our society first, since our trajectory coming out of any singularity may have a lot to do with our trajectory going into it.
357 comments | 7 hours ago
Esra Erimez writes: Peter Bright doesn't speak a word of Spanish, but with Skype Translator he was able to hold a spoken conversation with a Spanish speaker as if he were in an episode of Star Trek. He would speak English; a moment later, an English-language transcription would appear, along with a Spanish translation. Then a Spanish voice would read that translation.
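The experience described is a pipeline of three familiar components: speech recognition, machine translation, and text-to-speech. Below is a minimal sketch of that flow; every function is a hypothetical stand-in, since the real Skype Translator components are proprietary.

```python
# Minimal sketch of the three-stage pipeline described above:
# speech recognition -> machine translation -> text-to-speech.
# These are illustrative stand-ins, not real Skype Translator APIs.

def recognize_speech(audio):
    # Stage 1: transcribe the speaker's English audio to text.
    return "hello, how are you?"                 # canned result for illustration

def translate_text(text, target="es"):
    # Stage 2: machine-translate the transcription.
    canned = {"hello, how are you?": "hola, ¿cómo estás?"}
    return canned.get(text, "...")

def speak(text, voice="es-ES"):
    # Stage 3: a TTS voice reads the translation aloud.
    print(f"[{voice}] {text}")

def translate_turn(audio):
    transcript = recognize_speech(audio)         # English transcription appears...
    translation = translate_text(transcript)    # ...along with a Spanish translation...
    speak(translation)                           # ...then a Spanish voice reads it.
    return transcript, translation

translate_turn(audio=None)
```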
64 comments | 2 days ago
anguyen8 writes: Deep neural networks (DNNs) trained with deep learning have recently produced mind-blowing results in a variety of pattern-recognition tasks, most notably speech recognition, language translation, and recognizing objects in images, where they now perform at near-human levels. But do they see the same way we do? Nope. Researchers recently found that it is easy to produce images that are completely unrecognizable to humans, but that DNNs classify with near-certainty as everyday objects. For example, DNNs look at TV static and declare with 99.99% confidence that it is a school bus. An evolutionary algorithm produced the synthetic images by generating pictures and selecting for those that a DNN believed to be an object (i.e. "survival of the school-bus-iest"). The resulting computer-generated images look like modern, abstract art. The pictures also help reveal what DNNs learn to care about when recognizing objects (e.g. a school bus is alternating yellow and black lines, but does not need to have a windshield or wheels), shedding light on the inner workings of these DNN black boxes.
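The evolutionary setup is simple to sketch: start from random images, mutate them, and keep the ones a trained network scores highest for a target class. A toy version follows; `confidence_for` is a stand-in for a real DNN's softmax output (the paper used trained ImageNet and MNIST networks and more elaborate image encodings).

```python
import numpy as np

# Toy sketch of the paper's idea: evolve images that a DNN classifies
# with high confidence, with no requirement that they look like anything
# to a human. `confidence_for` is a placeholder for a real trained DNN.

def confidence_for(image, target_class):
    # Substitute a real network's softmax score for `target_class`
    # (e.g. "school bus") here; random scores keep the sketch runnable.
    return float(np.random.rand())

def evolve_fooling_image(target_class, generations=1000,
                         pop_size=20, mutation_rate=0.1, size=(28, 28)):
    population = [np.random.rand(*size) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population,
                        key=lambda im: confidence_for(im, target_class),
                        reverse=True)
        parents = scored[:pop_size // 2]       # "survival of the school-bus-iest"
        children = []
        for p in parents:
            child = p.copy()
            mask = np.random.rand(*size) < mutation_rate
            child[mask] = np.random.rand(mask.sum())   # mutate a few pixels
            children.append(child)
        population = parents + children
    return max(population, key=lambda im: confidence_for(im, target_class))
```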
125 comments | 3 days ago
HughPickens.com writes: Claire Cain Miller notes at the NY Times that economists have long argued that, just as buggy-makers gave way to car factories, technology used to create as many jobs as it destroyed. But now there is deep uncertainty about whether that pattern will continue, as two trends interact. First, artificial intelligence has become vastly more sophisticated in a short time, with machines now able to learn, not just follow programmed instructions, and to respond to human language and movement. At the same time, the American work force has gained skills at a slower rate than in the past — and at a slower rate than in many other countries. Self-driving vehicles are an example of the crosscurrents. Autonomous cars could put truck and taxi drivers out of work — or they could enable drivers to be more productive during the time they used to spend driving, which could earn them more money. But for the happier outcome to come about, the drivers would need the skills to do new types of jobs.
When the University of Chicago asked a panel of leading economists about automation, 76 percent agreed that it had not historically decreased employment. But when asked about the more recent past, they were less sanguine. About 33 percent said technology was a central reason that median wages had been stagnant over the past decade, 20 percent said it was not, and 29 percent were unsure. Perhaps the most worrisome development is how poorly the job market is already functioning for many workers. More than 16 percent of men between the ages of 25 and 54 are not working, up from 5 percent in the late 1960s; 30 percent of women in this age group are not working, up from 25 percent in the late 1990s. For those who are working, wage growth has been weak, while corporate profits have surged. "We're going to enter a world in which there's more wealth and less need to work," says Erik Brynjolfsson. "That should be good news. But if we just put it on autopilot, there's no guarantee this will work out."
658 comments | 3 days ago
An anonymous reader writes: Oren Etzioni has been an artificial intelligence researcher for over 20 years, and he's currently CEO of the Allen Institute for AI. When he heard the dire warnings recently from both Elon Musk and Stephen Hawking, he decided it was time to have an intelligent discussion about AI. He says, "The popular dystopian vision of AI is wrong for one simple reason: it equates intelligence with autonomy. That is, it assumes a smart computer will create its own goals, and have its own will, and will use its faster processing abilities and deep databases to beat humans at their own game. ... To say that AI will start doing what it wants for its own purposes is like saying a calculator will start making its own calculations." Etzioni adds, "If unjustified fears lead us to constrain AI, we could lose out on advances that could greatly benefit humanity — and even save lives. Allowing fear to guide us is not intelligent."
417 comments | about two weeks ago
An anonymous reader sends this excerpt from Quanta Magazine:
"Using the latest deep-learning protocols, computer models consisting of networks of artificial neurons are becoming increasingly adept at image, speech and pattern recognition — core technologies in robotic personal assistants, complex data analysis and self-driving cars. But for all their progress training computers to pick out salient features from other, irrelevant bits of data, researchers have never fully understood why the algorithms or biological learning work.
Now, two physicists have shown that one form of deep learning works exactly like one of the most important and ubiquitous mathematical techniques in physics, a procedure for calculating the large-scale behavior of physical systems such as elementary particles, fluids and the cosmos. The new work, by Pankaj Mehta of Boston University and David Schwab of Northwestern University, demonstrates that a statistical technique called "renormalization," which allows physicists to accurately describe systems without knowing the exact state of all their component parts, also enables artificial neural networks to categorize data as, say, "a cat" regardless of its color, size or posture in a given video.
"They actually wrote down on paper, with exact proofs, something that people only dreamed existed," said Ilya Nemenman, a biophysicist at Emory University.
45 comments | about two weeks ago
Rambo Tribble writes: In a departure from his usual focus on theoretical physics, the estimable Stephen Hawking has posited that the development of artificial intelligence could pose a threat to the existence of the human race. In his words, "The development of full artificial intelligence could spell the end of the human race." Rollo Carpenter, creator of Cleverbot, offered a less dire assessment: "We cannot quite know what will happen if a machine exceeds our own intelligence, so we can't know if we'll be infinitely helped by it, or ignored by it and sidelined, or conceivably destroyed by it." I'm betting on "ignored."
574 comments | about three weeks ago
Nerval's Lobster writes: If you took a cubicle, four wheels, and powerful AI, and brought them all together in unholy matrimony, their offspring might look something like the self-driving future car created by design consultancy IDEO. That's not to say that every car on the road in 2030 will look like a mobile office, but technology could take driving to a place where a car's convenience and onboard software (not to mention smaller size) matter more than, say, speed or handling, especially as urban areas become denser and people start to treat "driving time" as time to get things done or relax while the car handles the majority of driving tasks. Then again, if old science-fiction movies have proven anything, it's that visions of automobile design thirty or fifty years down the road (pun intended) tend to be far, far different from the eventual reality. (Blade Runner, for example, posited that the skies above Los Angeles would swarm with flying cars by 2019.) So it's anyone's guess what you'll be driving a couple of decades from now.
144 comments | about three weeks ago
An anonymous reader writes "Writer and professor of philosophy at the University of California, Berkeley Alva Noe isn't worried that we will soon be under the rule of shiny metal overlords. He says that currently we can't produce "machines that exhibit the agency and awareness of an amoeba." He writes at NPR: "One reason I'm not worried about the possibility that we will soon make machines that are smarter than us, is that we haven't managed to make machines until now that are smart at all. Artificial intelligence isn't synthetic intelligence: It's pseudo-intelligence. This really ought to be obvious. Clocks may keep time, but they don't know what time it is. And strictly speaking, it is we who use them to tell time. But the same is true of Watson, the IBM supercomputer that supposedly played Jeopardy! and dominated the human competition. Watson answered no questions. It participated in no competition. It didn't do anything. All the doing was on our side. We played Jeopordy! with Watson. We used 'it' the way we use clocks.""
455 comments | about a month ago
mrspoonsi tips news of further research into updating the Turing test. As computer scientists have expanded their knowledge about the true domain of artificial intelligence, it has become clear that the Turing test is somewhat lacking. A replacement, the Lovelace test, was proposed in 2001 to draw a clearer line between true AI and an abundance of if-statements. Now, Professor Mark Riedl of Georgia Tech has updated the test further (PDF). He said, "For the test, the artificial agent passes if it develops a creative artifact from a subset of artistic genres deemed to require human-level intelligence and the artifact meets certain creative constraints given by a human evaluator. Creativity is not unique to human intelligence, but it is one of the hallmarks of human intelligence."
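Riedl's description reads almost like a protocol: a human evaluator picks a genre and a set of creative constraints, the agent produces an artifact, and the evaluator judges whether the artifact is a genuine instance of the genre that satisfies every constraint. A minimal harness for that loop is sketched below; all class and function names are ours, not from Riedl's paper, and the toy agent passes only because the stand-in judge is trivial, whereas the real test relies on a human judge and arbitrarily hard constraints.

```python
# Illustrative harness for the Lovelace 2.0 protocol as summarized
# above. The paper defines a test, not an API; everything here is a
# hypothetical stand-in.

class TrivialAgent:
    # Stand-in agent: a real system would generate stories, poems, etc.
    def create(self, genre, constraints):
        return f"a {genre} about {', '.join(constraints)}"

class Judge:
    # Stand-in for the human evaluator's judgment calls.
    def is_valid_instance(self, artifact, genre):
        return genre in artifact
    def satisfies(self, artifact, constraint):
        return constraint in artifact

def lovelace_2_test(agent, genre, constraints, judge):
    artifact = agent.create(genre, constraints)
    # Pass only if the artifact is judged a valid instance of the genre
    # AND satisfies every evaluator-chosen constraint.
    return (judge.is_valid_instance(artifact, genre) and
            all(judge.satisfies(artifact, c) for c in constraints))

print(lovelace_2_test(TrivialAgent(), "story",
                      ["a boy", "a tractor"], Judge()))
```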
68 comments | about a month ago
Rambo Tribble writes: Using machine-learning techniques, Google claims to have produced software that can generate better natural-language descriptions of images. This has ramifications for uses such as improved image search and better descriptions of images for the blind. As the Google people put it, "A picture may be worth a thousand words, but sometimes it's the words that are the most useful ..."
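Google's published approach reportedly pairs a convolutional network that encodes the image as a feature vector with a recurrent network that decodes a caption word by word. The following is a toy, untrained sketch of that encoder-decoder shape; all dimensions, weights, and the vocabulary are illustrative, not Google's.

```python
import numpy as np

# Schematic of a CNN-encoder / RNN-decoder captioning model. Everything
# here is a toy stand-in: a real system uses a trained deep CNN and LSTM.

VOCAB = ["<start>", "a", "dog", "on", "grass", "<end>"]
EMBED, HIDDEN = 8, 8

rng = np.random.default_rng(0)
W_img = rng.normal(size=(HIDDEN, 2048)) * 0.01   # projects CNN features
W_h = rng.normal(size=(HIDDEN, HIDDEN)) * 0.01
W_x = rng.normal(size=(HIDDEN, EMBED)) * 0.01
W_out = rng.normal(size=(len(VOCAB), HIDDEN)) * 0.01
embed = rng.normal(size=(len(VOCAB), EMBED)) * 0.01

def cnn_features(image):
    # Stand-in for a deep CNN's 2048-d image embedding.
    return rng.normal(size=2048)

def caption(image, max_len=10):
    h = np.tanh(W_img @ cnn_features(image))      # seed the RNN with the image
    word = VOCAB.index("<start>")
    words = []
    for _ in range(max_len):
        h = np.tanh(W_h @ h + W_x @ embed[word])  # simple RNN step
        word = int(np.argmax(W_out @ h))          # greedy decoding
        if VOCAB[word] == "<end>":
            break
        words.append(VOCAB[word])
    return " ".join(words)

print(caption(image=None))   # untrained, so output is gibberish
```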
29 comments | about 1 month ago
coondoggie writes: The $50,000 challenge comes from researchers at the Intelligence Advanced Research Projects Activity (IARPA), within the Office of the Director of National Intelligence. The competition, known as Automatic Speech Recognition in Reverberant Environments (ASpIRE), hopes to get industry, universities, and other researchers to build automatic speech recognition technology that can handle a variety of acoustic environments and recording scenarios on natural conversational speech.
62 comments | about 1 month ago
aesoteric writes: Australian researchers have programmed industrial robots to tackle the vast amounts of e-waste thrown out every year. The research shows robots can learn and memorize how various electronic products — such as LCD screens — are designed, enabling those products to be disassembled for recycling faster and faster. The end goal is less than five minutes to dismantle a product.
39 comments | about a month ago
An anonymous reader writes: Researchers working on artificial intelligence at Queen Mary University of London have taught a computer to create magic tricks. The researchers gave a computer program the outline of how a magic jigsaw puzzle and a mind-reading card trick work, as well as the results of experiments into how humans understand magic tricks, and the system created completely new variants on those tricks which can be delivered by a magician.
77 comments | about a month ago
mikejuk writes: The nematode worm Caenorhabditis elegans (C. elegans) is tiny and has only 302 neurons. These have been completely mapped, and one of the founders of the OpenWorm project, Timothy Busbice, has taken the connectome and implemented it as an object-oriented neuron program. The neurons communicate by sending UDP packets across the network. The software works with sensors and effectors provided by a simple LEGO robot, and the sensors are sampled every 100ms. For example, the sonar sensor on the robot is wired up as the worm's nose; if anything comes within 20cm of the "nose," UDP packets are sent to the sensory neurons in the network. The motor neurons are wired up to the left and right motors of the robot. It is claimed that the robot behaved in ways similar to observed C. elegans behavior: stimulation of the nose stopped forward motion, touching the anterior and posterior touch sensors made the robot move forward and back accordingly, and stimulating the food sensor made the robot move forward. The key point is that no programming or learning was involved in creating these behaviors: the connectome of the worm was mapped and implemented as a software system, and the behaviors emerged. Is the robot a C. elegans in a different body, or is it something quite new? Is it alive? These are questions for philosophers, but it does suggest that the ghost in the machine is just the machine. The important question is: does it scale?
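The architecture described is concrete enough to sketch: each neuron is an object, connections carry spikes as UDP packets, and sensors are polled on a 100ms loop. Below is a stripped-down illustration of that pattern; the ports, names, and thresholds are ours, not Busbice's actual code.

```python
import socket, time

# Stripped-down sketch of neurons as objects exchanging UDP packets,
# with the sonar "nose" polled every 100ms, as described above.

HOST = "127.0.0.1"
LEFT_MOTOR, RIGHT_MOTOR = 9001, 9002

class Neuron:
    """One neuron: owns a UDP socket, forwards spikes to its targets."""
    def __init__(self, port, targets):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.sock.bind((HOST, port))
        self.sock.setblocking(False)
        self.targets = targets                   # downstream neuron ports

    def fire(self):
        for port in self.targets:
            self.sock.sendto(b"spike", (HOST, port))

    def spikes_received(self):
        n = 0
        try:
            while True:
                self.sock.recvfrom(64)
                n += 1
        except BlockingIOError:
            return n

def read_sonar_cm():
    return 15.0          # stand-in for the LEGO sonar acting as the nose

nose = Neuron(port=9000, targets=[LEFT_MOTOR, RIGHT_MOTOR])
left = Neuron(port=LEFT_MOTOR, targets=[])

for _ in range(10):                  # control loop (would normally run forever)
    if read_sonar_cm() < 20.0:       # anything within 20cm of the nose...
        nose.fire()                  # ...spikes the downstream neurons
    if left.spikes_received():
        print("left motor neuron stimulated: stop forward motion")
    time.sleep(0.1)                  # sensors sampled every 100ms
```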
200 comments | about a month ago
HughPickens.com writes: IBM has delivered a string of disappointing quarters and recently announced that it would take a multibillion-dollar hit to offload its struggling chip business. But Will Knight writes at MIT Technology Review that Watson may have the answer to IBM's uncertain future. IBM's vast research department was recently reorganized to ramp up efforts related to cognitive computing. The push began with the development of the original Watson, but has expanded to include other areas of software and hardware research aimed at helping machines provide useful insights from huge quantities of often-messy data. "We're betting billions of dollars, and a third of this division now is working on it," John Kelly, director of IBM Research, says of cognitive computing, a term the company uses to refer to artificial intelligence techniques related to Watson. The hope is that the Watson Business Group, a division aimed at making its Jeopardy!-winning cognitive computing application more of a commercial success, will be able to answer more complicated questions in all sorts of industries, including health care, financial investment, and oil discovery, and that it will help IBM build a lucrative new computer-driven consulting business.
But Watson is still a work in progress. Some companies and researchers testing Watson systems have reported difficulties in adapting the technology to work with their data sets. "It's not taking off as quickly as they would like," says Robert Austin. "This is one of those areas where turning demos into real business value depends on the devils in the details. I think there's a bold new world coming, but not as fast as some people think." IBM needs software developers to embrace its vision and build services and apps that use its cognitive computing technology. In May of this year, it announced that seven universities would offer computer science classes in cognitive computing, and last month IBM revealed a list of partners that have developed applications by tapping into application programming interfaces that access versions of Watson running in the cloud. Big Blue said it will invest $1 billion in the Watson division, including $100 million to fund startups developing cognitive apps. "I very much admire the end goal," says Boris Katz, adding that business pressures could encourage IBM's researchers to move more quickly than they would like. "If the management is patient, they will really go far."
67 comments | about a month and a half ago
ashshy writes: Tesla, Google, and many other companies are working on self-driving cars. When these autopilot systems become perfected and ubiquitous, the roads should be safer by orders of magnitude. So why doesn't Tesla CEO Elon Musk expect to reach that milestone until 2023 or so? Because the legal framework that supports American road rules is incredibly complex and actually handled on a state-by-state basis. The Motley Fool explains which authorities Musk and his allies will have to convince before autopilot cars can hit the mainstream, and why the process will take another decade.
320 comments | about 2 months ago
First-time accepted submitter agent elevator writes: In a wide-ranging interview at IEEE Spectrum, Michael I. Jordan skewers a bunch of sacred cows, basically saying that the overeager adoption of big data is likely to result in catastrophes of analysis comparable to a national epidemic of collapsing bridges; that hardware designers creating chips based on the human brain are engaged in a faith-based undertaking likely to prove a fool's errand; and that, despite recent claims to the contrary, we are no further along with computer vision than we were with physics when Isaac Newton sat under his apple tree.
145 comments | about 2 months ago
An anonymous reader writes: The better question may be whether it will ever be ready for the road at all. The car has fewer capabilities than most people seem to be aware of, and the notion that it will be widely available any time soon is a stretch. From the article: "Noting that the Google car might not be able to handle an unmapped traffic light might sound like a cynical game of 'gotcha.' But MIT roboticist John Leonard says it goes to the heart of why the Google car project is so daunting. 'While the probability of a single driver encountering a newly installed traffic light is very low, the probability of at least one driver encountering one on a given day is very high,' Leonard says. The list of these 'rare' events is practically endless, said Leonard, who does not expect a full self-driving car in his lifetime (he's 49)."
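Leonard's "rare events" point is just the arithmetic of large fleets: if each driver's daily chance of meeting a newly installed light is p, the chance that at least one of n drivers meets one is 1 - (1-p)^n, which approaches certainty as n grows. A quick check with purely illustrative numbers:

```python
# Back-of-the-envelope version of Leonard's argument. The per-driver
# probability p and fleet size n are illustrative assumptions.

p = 1e-6          # chance one driver meets a new traffic light today
n = 10_000_000    # drivers on the road

# P(at least one driver encounters one) = 1 - (1 - p)^n
p_any = 1 - (1 - p) ** n
print(f"{p_any:.4f}")   # ~0.99995: near-certain somewhere in the fleet
```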
287 comments | about 2 months ago
MojoKid (1002251) writes: A new interview with Assassin's Creed Unity senior producer Vincent Pontbriand has some gamers seeing red and others crying "told you so," after the developer revealed that the game's 900p resolution and 30 fps target on consoles is a result of weak CPU performance rather than GPU limitations. "Technically we're CPU-bound," Pontbriand said. "The GPUs are really powerful, obviously the graphics look pretty good, but it's the CPU that has to process the AI, the number of NPCs we have on screen, all these systems running in parallel. We were quickly bottlenecked by that and it was a bit frustrating, because we thought that this was going to be a tenfold improvement over everything AI-wise..." This has been read by many as a rather damning referendum on the capabilities of the AMD APU that's under the hood of Sony's and Microsoft's new consoles. To some extent, that's justified; the Jaguar CPU inside both the Sony PS4 and Xbox One is a modest chip with a relatively low clock speed. Both consoles may offer eight CPU threads on paper, but games can't access all that headroom. One thread is reserved for the OS, and a few more cores will be used for processing the 3D pipeline. Between the two, Ubisoft may have had only 4-5 cores for AI and other calculations — scarcely more than last gen, and the Xbox 360 and PS3 CPUs were clocked much faster than the 1.6 / 1.73GHz frequencies of their replacements.
338 comments | about 2 months ago