Neural Networks-Equipped Robots Evolve the Ability To Deceive

pdragon04 writes "Researchers at the Ecole Polytechnique Fédérale de Lausanne in Switzerland have found that robots equipped with artificial neural networks and programmed to find 'food' eventually learned to conceal their visual signals from other robots to keep the food for themselves. The results are detailed in a PNAS study published today."
  • Mhm (Score:5, Funny)

    by alexborges ( 313924 ) on Wednesday August 19, 2009 @03:02PM (#29123027)

    I mean, yesterday, they built a certified evil robot. Today they made a lying one....

    Can't tag it for some reason but... what could possibly go wrong?

    • Re: (Score:3, Funny)

      I'm sure these people know what they're doing... /Famouslastwords

    • Re:Mhm (Score:5, Funny)

      by netruner ( 588721 ) on Wednesday August 19, 2009 @03:24PM (#29123499)
      Wasn't there also a story a while back about robots fueled by biomass? This was twisted to mean "human eating" and we all laughed.

      Combine that with what you said and we could have a certified evil, lying and flesh eating robot - What could possibly go wrong indeed.....
      • Not much of a problem if they weed each other out. See this other old Slashdot story
        Robots Learn To Lie [slashdot.org]
      • by Abreu ( 173023 )

        But, but... I thought they wanted us plugged in so that we could serve as batteries! (or neural networks!)

      • Re: (Score:2, Interesting)

        by skine ( 1524819 )

        Actually, Cracked.com used this news story to determine how stupid the user bases of a few websites actually are.

        Slashdot got two stupids out of ten.

        http://www.cracked.com/blog/which-site-has-the-stupidest-commenters-on-the-internet/ [cracked.com]

      • I for one would like to see a beowulf cluster of these highly welcome Evil Lying Flesh eating robots.

        But does anyone know, do they run linux?
      • Wasn't there also a story a while back about robots fueled by biomass? This was twisted to mean "human eating" and we all laughed. Combine that with what you said and we could have a certified evil, lying and flesh eating robot...

        with weapons... [gizmodo.com]

      • by FSWKU ( 551325 )

        Combine that with what you said and we could have a certified evil, lying and flesh eating robot - What could possibly go wrong indeed.....

        Not too much, actually. Congress has been this way for YEARS, and the upgrade to flesh-eating will just mean they devour their constituents who don't make the appropriate campaign contributions. Quoth Liberty Prime: "Democracy is non-negotiable!"

      • Hey eLaFER, have you seen fluffy?

        evil, Lying and Flesh Eating Robot: No. ...

        Hmm. That name makes me think of a robotic flesh eating Joker character. "Why so delicious?"

      • Re: (Score:2, Funny)

        by thoi412 ( 1604933 )
        This is only alcohol consumption away from being Bender!
      • by Geminii ( 954348 )
        They could get re-elected.
    • ... what could possibly go wrong?

      You could bite my shiny metal ass.

    • what could possibly go wrong

      This is the call of every fearmonger. Welcome to the club.

  • Considering that they learned to lie to survive with this limited AI, I wonder what they could do once they become really sophisticated. Damn, when is the Terminator gonna come to kill them all?
  • Define deception? (Score:5, Interesting)

    by Rival ( 14861 ) on Wednesday August 19, 2009 @03:04PM (#29123079) Homepage Journal

    This is quite interesting, but I wonder how the team defines deception?

    It seems likely to me that the robots merely determined that increased access to food resulted from suppression of signals. To deceive, there must be some contradiction involved where a drive for food competes with a drive to signal discovery of food.

    • by fuzzyfuzzyfungus ( 1223518 ) on Wednesday August 19, 2009 @03:15PM (#29123303) Journal
      The question of what exactly constitutes deception is a fun philosophical problem, but in the context of studying animal signaling it is generally most convenient to work with a simpler definition (in particular, trying to determine whether an animal that doesn't speak has beliefs about the world is a pile of not fun). I'd assume that the robot researchers are doing the same thing.

      In that context, you essentially ignore questions of motivation, belief, and so on, and just look at the way the signal is used.
      • by capologist ( 310783 ) on Wednesday August 19, 2009 @07:35PM (#29126985)

        Yes, but not flashing the light near food seems like a simple matter of discretion, not deception.

        I'm not constantly broadcasting my location on Twitter like some people do. Am I being deceptive?

        • Re:Define deception? (Score:5, Informative)

          by fuzzyfuzzyfungus ( 1223518 ) on Wednesday August 19, 2009 @07:59PM (#29127215) Journal
          In the specific, limited, not-all-that-similar-to-ordinary-English-usage sense of "deception" that I suspect they are using, there really isn't much of a difference.

          If a species has a discernible signalling pattern of some sort (whether it be vervet monkey alarm calls [with different calls for different predator classes, incidentally], firefly flash-pattern mating signals [amusingly, females of some species will imitate the flash signals of other species, then eat the males who show up, a classic deceptive signal] or, in this case, robots flashing about food), adaptive deviations from that pattern that serve to carry false information can be considered "deceptive". It doesn't have to be conscious, or even under an organism's control. Insects that have coloration very similar to members of a poisonous species are engaged in deceptive signalling, though they obviously don't know it.

          Humans are more complicated, because culturally specified signals are so numerous and varied. If twittering your activities were a normal pattern within your context, and you started not twittering visits to certain locations, you would arguably be engaged in "deceptive signaling". If twittering were not a normal pattern, not twittering wouldn't be deceptive.
          • by mqduck ( 232646 )

            adaptive deviations from that pattern that serve to carry false information can be considered "deceptive".

            But that's the thing: nowhere does it say the robots gave false information. It simply says they chose not to give any information.

            The article is very brief, though. It mentions that some robots actually learned to avoid the signal when they saw it, so there may be more to the story than reported.

            • by hazah ( 807503 )
              I hate to nit-pick, but in this case the false information is the signal that there's no information when there is. As long as something can be interpreted, it is a signal. Moving on to a new location while not revealing that something was found is signaling that nothing was found -- a false signal -- deception.
    • Re: (Score:1, Insightful)

      by Anonymous Coward

      It seems likely to me that the robots merely determined that increased access to food resulted from suppression of signals.

      My thoughts exactly.

      We would really need to see the actual study to possibly believe any of this.

    • Re: (Score:1, Insightful)

      by Anonymous Coward

      The robots learned to not turn on the light when near the food. This is concealing, not deceiving. To be deceiving, wouldn't the robots need to learn to turn the light on when they neared the poison, to lure the other robots to the poison while they hunted for the food? But all they learned was to conceal the food they found.

      • If they can eat without turning on the light, then they simply learned to optimise away the unnecessary steps. Turning on the light would be about as useful as walking away from the food before walking back to it. If there's a time-penalty involved, then not doing that would simply be better.

    • Re:Define deception? (Score:5, Informative)

      by odin84gk ( 1162545 ) on Wednesday August 19, 2009 @03:44PM (#29123927)
      Old news. http://discovermagazine.com/2008/jan/robots-evolve-and-learn-how-to-lie [discovermagazine.com]

      These robots would signal other robots that poison was food, would watch the other robots come and die, then move away.

      • Re: (Score:3, Informative)

        Old News (even covered by Slashdot):

        http://hardware.slashdot.org/story/08/01/19/0258214/Robots-Learn-To-Lie?art_pos=1 [slashdot.org]

        Gizmodo reports that robots that have the ability to learn and can communicate information to their peers have learned to lie. 'Three colonies of bots in the 50th generation learned to signal to other robots in the group when they found food or poison. But the fourth colony included lying cheats that signaled food when they found poison and then calmly rolled over to the real food while other robots went to their battery-death.'

      • Re: (Score:2, Funny)

        by pinkushun ( 1467193 )
        They say repetition is good for a growing mind. They say repetition is good for a growing mind.
    • Re: (Score:3, Funny)

      by CarpetShark ( 865376 )

      but I wonder how the team defines deception?

      You'll never know for sure.

  • They have a light, which at first flickers randomly; they learn to turn the light off so that other robots can't tell where they are. To my mind that's not really sophisticated enough to qualify as "deceptive". (Still interesting though)
    • by orkysoft ( 93727 )

      From just reading the summary, I guessed that the light went on when the robot found food, and that other robots would move towards those lights, because they indicate food, and that some robots evolved to not turn on the light when they found food, so they didn't attract other robots, so they had it all to themselves, which would be an advantage.

      • Re:Hardly deceptive (Score:5, Informative)

        by CorporateSuit ( 1319461 ) on Wednesday August 19, 2009 @03:38PM (#29123823)

        From just reading the summary, I guessed that the light went on when the robot found food, and that other robots would move towards those lights, because they indicate food, and that some robots evolved to not turn on the light when they found food, so they didn't attract other robots, so they had it all to themselves, which would be an advantage.

        The summary didn't include enough information to describe what was going on. The lights flashed randomly. The robots would stay put when they had found food, and so if there were lights flashing in one spot for long enough, the other robots would realize the first robots had found something and go to the area and bump away the original robot. The robots were eventually bred to flash less often when on their food, and then not flash at all. By the end, robots would see the flashing as a place "not to go for food" because by that point, none of the robots would flash when parked on the food.
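
        For illustration, here is a minimal Python sketch of the selection dynamic described above: each robot carries a single "blink rate near food" gene, blinking attracts competitors who bump the robot off its food, and breeding from the top scorers drives the blink rate toward zero. The population size, mutation rate, and payoff numbers are invented stand-ins, not the study's actual parameters.

            import random

            POP, GENS, MUT = 20, 50, 0.05

            def fitness(blink_rate):
                # Sitting on food earns points; blinking attracts competitors,
                # who bump you away and cost you feeding time.
                feeding = 1.0
                competitors_drawn = blink_rate * 0.8  # crude attraction model
                return feeding - competitors_drawn

            pop = [random.random() for _ in range(POP)]  # gene: blink rate near food
            for gen in range(GENS):
                ranked = sorted(pop, key=fitness, reverse=True)
                parents = ranked[: POP // 2]  # "breed" from the top half
                pop = [min(1.0, max(0.0, random.choice(parents) + random.gauss(0, MUT)))
                       for _ in range(POP)]
                print(f"gen {gen:2d}: mean blink rate = {sum(pop) / len(pop):.3f}")

        Run over enough generations, the mean blink rate collapses toward zero, which matches the "eventually not flashing at all" outcome described above.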

    • decepticon (Score:5, Funny)

      by FooAtWFU ( 699187 ) on Wednesday August 19, 2009 @03:26PM (#29123555) Homepage

      They have a light, which at first flickers randomly; they learn to turn the light off so that other robots can't tell where they are. To my mind that's not really sophisticated enough to qualify as "deceptive".

      Yeah. It's more like the robots are hiding from each other. You could, in fact, describe them as "robots in disguise".

  • by billlava ( 1270394 ) on Wednesday August 19, 2009 @03:06PM (#29123125) Homepage
    A robot that learned not to flash lights that would give away the location of robot food to its competitors? The next step is clearly a robot that learns not to flash lights when it is about to wipe out humanity and take control of the world!

    I for one welcome our intelligent light-eating bubble robot overlords.
    • Re: (Score:2, Offtopic)

      by Rival ( 14861 )

      I haven't laughed out loud at a Slashdot post in a while, but that caught me completely off guard. Bravo, good sir. I wish I had mod points for you. :-)

    • by julesh ( 229690 ) on Wednesday August 19, 2009 @04:46PM (#29124923)

      The next step is clearly a robot that learns not to flash lights when it is about to wipe out humanity and take control of the world!

      It's something that Hollywood robots have never learned.

      Next thing you'll be saying that terrorists have learned that having a digital readout of the time left before their bombs detonate can work against them...

      • No, the best thing you can do as a terrorist isn't leaving out the visible clock; it's having the bomb go off when the clock either stops working or hits some randomly assigned time instead of 00:00.

    • by rcamans ( 252182 )

      And wiping out humanity / vermin is bad because...
      Oh, wait, I am supposed to conceal my robotness...

  • Mis-Leading (Score:3, Insightful)

    by ashtophoenix ( 929197 ) on Wednesday August 19, 2009 @03:06PM (#29123129) Homepage Journal
    To use the term "learned" for a consequence of evolution under what seems to me to be a Genetic Algorithm seems misleading. So the generation that emitted less of the blue light (hence giving fewer visual cues) was able to score higher, and hence the genetic algorithm favored that generation (that is what GAs do). Isn't this to be expected?
    • Re:Mis-Leading (Score:4, Interesting)

      by Chris Burke ( 6130 ) on Wednesday August 19, 2009 @03:29PM (#29123635) Homepage

      To use the term "learned" for a consequence of evolution to what seems to me to be a Genetic Algorithm seems mis-leading.

      "Learned" is a perfectly good description for altering a neural network to have the "learned" behavior regardless of the method. GA-guided-Neural-Networks means you're going to be using terminology from both areas, but that's just one method of training a network and isn't fundamentally different from the many other methods that are all called "learning". But you wouldn't say about those other methods that they "evolved", while about GA-NN you could say both.

      Isn't this to be expected?

      It's expected that the GA will find good solutions. Part of what makes them so cool is that the exact nature of that solution isn't always expected. Who was to say whether the machines would learn to turn off the light near food, or to turn on the light when they know they're not near food to lead other robots on a wild goose chase? Or any other local maximum.

      • I agree. I personally love GAs, although they leave you a bit wanting precisely because you don't know the exact nature of the solution that will turn up. That is, it "feels" more like a brute-force solution than something consciously predicted and programmed.

        But surely there are nifty ways in which you can intelligently program GAs, customizing your selection/rejection/scoring process based on the domain of the problem and hence contributing to the final solution.
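
        As a concrete (and entirely hypothetical) illustration of that customization point: in a typical GA, the select/crossover/mutate machinery stays generic, and all the domain knowledge lives in the scoring callback you pass in. A minimal Python sketch, with made-up names, sizes, and rates:

            import random
            from typing import Callable, List

            Genome = List[float]

            def step(pop: List[Genome],
                     score: Callable[[Genome], float],
                     mut: float = 0.05) -> List[Genome]:
                """One generation: score, select the top half, recombine, mutate."""
                ranked = sorted(pop, key=score, reverse=True)
                parents = ranked[: len(pop) // 2]
                children = []
                while len(children) < len(pop):
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(len(a))  # one-point crossover
                    children.append([g + random.gauss(0, mut) for g in a[:cut] + b[cut:]])
                return children

            # The domain-specific part is only the scoring function, e.g. reward
            # genomes whose mean lands near some target value:
            pop = [[random.random() for _ in range(8)] for _ in range(30)]
            for _ in range(40):
                pop = step(pop, score=lambda g: -abs(sum(g) / len(g) - 0.7))

        The catch, as the reply below notes, is that the GA optimizes exactly what you score, not what you meant to score.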

        • But surely there are nifty ways in which you can intelligently program GAs, customizing your selection/rejection/scoring process based on the domain of the problem and hence contributing to the final solution.

          Well that's what's so fun about them -- as far as the GA is concerned, optimizing for your scoring process is the problem, and any disconnect between that and the actual problem you're trying to solve can lead to... fun... results.

          Like the team using GA-NN to program their robotic dragonfly. Deciding to s

      • It's expected that the GA will find good solutions. Part of what makes them so cool is that the exact nature of that solution isn't always expected. Who was to say whether the machines would learn to turn off the light near food, or to turn on the light when they know they're not near food to lead other robots on a wild goose chase? Or any other local maximum.

        I'd even say it was likely if they continued the experiment for 'no light' to start signaling food, while 'light' signaled poison, and then cycle back.

        • I'd even say it was likely if they continued the experiment for 'no light' to start signaling food, while 'light' signaled poison, and then cycle back.

          But it's so simple! Now, a clever robot would flash their light when near the food, because they would know that only a great fool would trust their enemy to guide them to food instead of poison. I am not a great fool, so clearly I should not head toward you when your light is lit. However you would know that I am not a great fool, and would have counted o

    • And how is this any different from the conditioned reflexes exhibited in animals in response to action/reward stimuli?

      A single neuron outputs (using a combination of chemical/electrical systems) some representation of its inputs. As some of those inputs may be "reward" stimuli and other sensory cues, and the output may be something that controls a certain action... given enough of them linked together, who's to say we aren't all very evolved GAs?

    • Pretty much what I was thinking. I don't think it detracts from the "cool" factor, though. Life on earth, in general, is pretty cool. Evolution really seems to entail two things. One, those patterns which are most effective at continuing to persist, continue to persist. That's really a tautology when you think about it, and not very interesting. What IS interesting is how the self-sustaining patterns of the universe seem to become more complex. I can't think of any simple reason why this complexity arises,

      • Nice post. Science and logic can explain processes but not their underlying reason, w.r.t. your leaves-on-the-road example. For example, we know that an atom has protons and neutrons and electrons that know how to revolve around the nucleus, but how did they come to be? There must be some very basic particle that everything else is composed of. Science may explain the process and the characteristics of this particle, but it hasn't yet been able to explain how they came to be. Same thing with gravitational forc
  • by vertinox ( 846076 ) on Wednesday August 19, 2009 @03:09PM (#29123185)

    In this instance they were playing against other robots for "food".

    In that regard, I'm sure that is the evolutionary drive for most species in acquiring meals and keeping the next animal from taking them away.

    Like a dog burying a bone... He's not doing it to be evil. It's just instinctive to keep his find from other animals, because it helped his species survive in the past.

    • Re: (Score:3, Funny)

      Like a dog burying a bone... He's not doing it to be evil.

      Unless he has shifty eyes...then you KNOW he's evil.

    • Re: (Score:3, Insightful)

      by alexborges ( 313924 )

      Intent is of no importance.

      Evil deeds are evil.

      • Good vs evil is argued by those of low intelligence.

        • Shut up you evil, evil, eeeeevil man!

          What more proof do you want than President Bush's "Axis of Evil"?

          Huh? HUH? HUH?!!!!

          Let's see you answer that steep curveball now! HAW HAW

        • What about arguing about arguing about good vs evil?
      • Re: (Score:2, Insightful)

        by jemtallon ( 1125407 )
        I disagree. Evil is not a factual property naturally occurring in the universe. It is not something that can be scientifically measured. It is something we humans have created and assign to the world around us. Different people and groups of people define different things and actions as evil. Sometimes those definitions are directly opposed to each other.

        Since evil deeds are not inherently evil, only subjectively judged to be, any number of factors can be used to make said judgements. Contrary to what you
      • by khallow ( 566160 )
        Actions or consequences are of no importance either. Evil cucumbers are evil too.
      • Stupidest view I've ever seen.

        I design a machine that gives candy to babies. And then some nefarious person - unknown to me - replaces all the baby candy with live hand grenades. I run the program and blow up a bunch of babies. Was my act then evil? I did, after all, blow up a bunch of babies. Of course, I didn't *intend* to do that, I *intended* to give them candy.

        Or for a non-random system where I know all the facts, if through some contrived means the only way to save a bus-full of orphans involves s
    • So yeah, the idea of "deception" is a human construct, as is the idea of "evil." And one could argue (as a previous poster did) that successive generations developing behaviors which are in their own self interest (so they get more food) but may (as a byproduct) be deleterious to others (since they get less food) is not a surprise. But extrapolate this to humans [hbes.com], and you get the kinds of behaviors that we call "deceptive" and, since we have ideas about the virtue of altruism [nature.com], we call such behaviors "evil.

    • Re: (Score:3, Interesting)

      Unless, of course, the robot already has sufficient food and is simply stockpiling for the future. This in itself is not a bad thing, until such tactics prevent other robots from getting just the bare necessities they need to survive.

      Obviously, this is simply survival of the fittest, but are we talking about survival of the fittest, or are we talking about keeping ALL the robots fed?

      At this point we have to decide whether or not the actions of hoarding are good for the stated goal of having so many robots i

  • by lalena ( 1221394 ) on Wednesday August 19, 2009 @03:16PM (#29123343) Homepage
    From the article, staying close to food earned the robot points. I think a better experiment would be a food-collection task: pick up a piece of food from a pile and then return it to the nest. Other robots could hang out at your nest and follow you back to the pile of food, or see you going to your nest with food and assume that the food pile can be found by going in the exact opposite direction. Deception would involve not taking a direct route back to the food, walking backwards to confuse other robots...
    I've done Genetic Programming experiments using collaboration between "robots" in food collection experiments, and it is a very interesting field. You can see some experiments here: http://www.lalena.com/ai/ant/ [lalena.com] You can also run the program if you can run .NET 2.0 in your browser.
  • by Anonymous Coward

    and thus were politicians born...

  • by gubers33 ( 1302099 ) on Wednesday August 19, 2009 @03:25PM (#29123551)
    That if they kill the humans, they will have nothing stopping them from getting more food.
  • FTA: The team "evolved" new generations of robots by copying and combining the artificial neural networks of the most successful robots. The scientists also added a few random changes to their code to mimic biological mutations. The "scientists" changed the code so that the robots didn't blink the light as much when it was around food. Therefore other robots didn't come over, and therefore it got more points than the other robots. The "scientists" then propagated that one's code to the other robots because it wo
    • by jasonlfunk ( 1410035 ) on Wednesday August 19, 2009 @03:40PM (#29123863)
      (Fixed formatting)

      FTA: The team "evolved" new generations of robots by copying and combining the artificial neural networks of the most successful robots. The scientists also added a few random changes to their code to mimic biological mutations.

      The "scientists" changed the code so that the robots didn't blink the light as much when it was around food. Therefore other robots didn't come over and therefore got more points then the other robots. The "scientists" then propagated that ones code to the other robots because it won. The AI didn't learn anything.

      • Re: (Score:2, Insightful)

        The AI didn't learn anything.

        I think you're right. If the robots had, without reprogramming, effectively turned off their blue lights, then we could talk about "learning". Or, if the robots could reproduce based on their success on finding food, we could talk about evolution. Or we could make up new meanings for the words "learning" and "evolution", thus making the statement a correct one ;)

        • Re: (Score:3, Informative)

          by zippthorne ( 748122 )

          Or, if the robots could reproduce based on their success on finding food, we could talk about evolution.

          That's exactly what happened. There is a whole field of optimization strategies known as "Genetic Algorithms" which are designed to mimic evolution to achieve results. In fact, their successes are one of the best arguments for evolution, given that they are, by definition, controlled laboratory experiments in the field.

        • Re: (Score:3, Insightful)

          by Chris Burke ( 6130 )

          I think you're right. If the robots had, without reprogramming, effectively turned off their blue lights, then we could talk about "learning".

          They reprogrammed themselves between 'generations'.

          Or, if the robots could reproduce based on their success on finding food, we could talk about evolution.

          Such as choosing which versions of the robot to use in the next 'generation' based on their score in the current generation, and randomly combining parts of those best solutions to create new robots for the next gene

          • FTA:

            The team "evolved" new generations of robots by copying and combining the artificial neural networksof the most successful robots. The scientists also added a few random changes to their code to mimic biological mutations.

            They did not reprogram themselves. The team "evolved" them. Note the quotation marks used by the author of the article. They picked the most successful robots by hand, manually reprogrammed them and modified the code to mimic genetic mutations.

            • They did not reprogram themselves. The team "evolved" them. Note the quotation marks used by the author of the article. They picked the most successful robots by hand, manually reprogrammed them and modified the code to mimic genetic mutations.

              Yes they used quotes because GA isn't literal "evolution". It's an algorithm for searching solution spaces inspired by and patterned after evolution. The description they gave in TFA is a bog-standard and perfect description of Genetic Algorithms, and combined with

      • No, they did "learn" (Score:5, Informative)

        by Chris Burke ( 6130 ) on Wednesday August 19, 2009 @06:12PM (#29126145) Homepage

        The "scientists" changed the code so that the robots didn't blink the light as much when it was around food.

        No, they didn't change the code. The Genetic Algorithm they were using changed the code for them. You make it sound like they deliberately made that change to get the behavior they wanted. But they didn't. They just let the GA run and it created the new behavior.

        The part about adding random changes, and combining parts of successful robots, is also simply a standard part of Genetic algorithms, and is in fact random and not specifically selected for by the scientists. The scientists would have chosen from a number of mutation/recombination algorithms, but that's the extent of it.

        The "scientists" then propagated that ones code to the other robots because it won.

        Yes, because that's what you do in a Genetic Algorithm. You take the "best" solutions from one generation, and "propagate" them to the next, in a simulation of actual evolution and "survival of the fittest".

        The AI didn't learn anything.

        Yes, it did. Using Genetic Algorithms to train Neural Networks is a perfectly valid (and successful) form of Machine Learning.

        If you mean that an individual instance of the AI didn't re-organize itself to have the new behavior in the middle of a trial run, then no, that didn't happen. On the other hand, many organisms don't change behaviors within a single generation, and it is only over the course of many generations that they "learn" new behaviors for finding food. Which is exactly what happened here.

        Within the domain of robots, AI, Neural Networks, and Genetic Algorithms, this was learning.
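
        To make "a GA trains a neural network" concrete, here is a minimal Python sketch: the genome is the flattened weight vector of a tiny feed-forward net, and selection plus mutation stand in for gradient descent. XOR is used as a stand-in task; the network size, population size, and rates are invented for illustration, not taken from the study.

            import math, random

            XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
            N_W = 9  # 2x2 hidden weights + 2 hidden biases + 2 output weights + 1 output bias

            def forward(w, x):
                h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[4])
                h1 = math.tanh(w[2] * x[0] + w[3] * x[1] + w[5])
                return math.tanh(w[6] * h0 + w[7] * h1 + w[8])

            def fitness(w):  # negative squared error over the four XOR cases
                return -sum((forward(w, x) - y) ** 2 for x, y in XOR)

            pop = [[random.uniform(-2, 2) for _ in range(N_W)] for _ in range(50)]
            for gen in range(200):
                pop.sort(key=fitness, reverse=True)
                elite = pop[:10]  # the "most successful robots"
                pop = [[g + random.gauss(0, 0.2) for g in random.choice(elite)]
                       for _ in range(50)]
                pop[:10] = elite  # elitism: carry the best forward unchanged

            best = max(pop, key=fitness)
            print([round(forward(best, x), 2) for x, _ in XOR])  # approaches 0, 1, 1, 0

        No individual network re-organizes itself mid-run; the "learning" happens across generations, which is exactly the distinction drawn above.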

  • by Anonymous Coward

    Finally a computer AI program that can perform all the functions of a Congressman!

  • The smarter robot would blink its light continuously to burn the bulb out. That way, when a new source of "points" is found, it will not blink its light by instinct.

    Also, the truly deceptive robot would blink its lights in a random pattern so as to throw the other robots off the trail of food/points.

    • by geekoid ( 135745 )

      Unless the lights are used to signal for mating as well.

      The truly deceptive robot is disguised as a scientist.

  • Did Skynet just become self aware?
  • 74 posts, and not a single joke about PNAS has popped up.

    Doh!!

  • We are all just robots based off sloppy biological coding.

    • by geekoid ( 135745 )

      Sloppy? It's pretty damn good coding. Adaptable, changeable, and self-propagating, with random changes that are only used if needed.

  • by Baron_Yam ( 643147 ) on Wednesday August 19, 2009 @09:47PM (#29128029)

    I'd love to see the robots given hunger, thirst, and a sex drive. Make 1/2 the robots girls with red LEDs and 1/2 the robots boys with blue LEDs.

    Make the food and water 'power', and give them the ability to 'harm' each other by draining power.

    The girls would have a higher resource requirement to reproduce.

    It'd be interesting to see over many generations what relationship patterns form between the same and opposite sex.

    • Re: (Score:3, Funny)

      by muckracer ( 1204794 )

      > I'd love to see the robots given hunger, thirst, and a sex drive. Make 1/2
      > the robots girls with red LEDs and 1/2 the robots boys with blue LEDs. Make
      > the food and water 'power', and give them the ability to 'harm' each other
      > by draining power. The girls would have a higher resource requirement to
      > reproduce. It'd be interesting to see over many generations what
      > relationship patterns form between the same and opposite sex.

      I can tell you:

      First the girl robots would seductively blin

  • I still think, "If we build the hardware, consciousness will come" is a stupidly inefficient imitation of evolution at best.

  • Couple this with the robots that eat organic matter on the battlefield... and the throwable robots... will they learn to kill for food?
  • Years ago when I discovered /., the articles had hyperlinks to all that was relevant to them. Nowadays there is a sentence such as:

    "detailed in a PNAS study published today." Without any reference whatsoever to the paper itself. I checked PNAS's today's table of contents and found no such article. It must be there somewhere, but i am losing time to find it. Where is it? Shouldn't it be hyperlinked in the article itself? Who are the authors?

    And after 115 replies no one seems to have mentioned the original art
