
Stealthy Dopant-Level Hardware Trojans

Soulskill posted about a year ago | from the getting-in-before-the-rush dept.

Security 166

DoctorBit writes "A team of researchers funded in part by the NSF has just published a paper in which they demonstrate a way to introduce hardware Trojans into a chip by altering only the dopant masks of a few of the chip's transistors. From the paper: 'Instead of adding additional circuitry to the target design, we insert our hardware Trojans by changing the dopant polarity of existing transistors. Since the modified circuit appears legitimate on all wiring layers (including all metal and polysilicon), our family of Trojans is resistant to most detection techniques, including fine-grain optical inspection and checking against "golden chips."' In a test of their technique against Intel's Ivy Bridge Random Number Generator (RNG) the researchers found that by setting selected flip-flop outputs to zero or one, 'Our Trojan is capable of reducing the security of the produced random number from 128 bits to n bits, where n can be chosen.' They conclude that 'Since the Trojan RNG has an entropy of n bits and [the original circuitry] uses a very good digital post-processing, namely AES, the Trojan easily passes the NIST random number test suite if n is chosen sufficiently high by the attacker. We tested the Trojan for n = 32 with the NIST random number test suite and it passed for all tests. The higher the value n that the attacker chooses, the harder it will be for an evaluator to detect that the random numbers have been compromised.'"
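The effect the paper describes can be sketched in a few lines of toy Python. This is not Intel's circuit: SHA-256 stands in for the AES post-processing, and the point is only that strong whitening hides a tiny state space from statistical tests.

```python
import hashlib
import secrets

N_BITS = 32  # attacker-chosen entropy n, matching the paper's n = 32 test

def trojaned_rng(seed: int, counter: int) -> bytes:
    """One 128-bit output word whose only real entropy is the n-bit seed.

    SHA-256 stands in for the AES post-processing the real RNG uses: the
    whitening makes the output look statistically perfect even though the
    underlying state space holds just 2**N_BITS values.
    """
    data = seed.to_bytes(8, "big") + counter.to_bytes(8, "big")
    return hashlib.sha256(data).digest()[:16]

seed = secrets.randbits(N_BITS)
stream = b"".join(trojaned_rng(seed, i) for i in range(1000))

# The bit bias is indistinguishable from a true RNG's...
ones = sum(bin(b).count("1") for b in stream)
print(ones / (len(stream) * 8))  # ~0.5

# ...but an attacker brute-forces 2**32 seeds instead of 2**128 states.
```

A simple frequency check like the one above is exactly the kind of test the doped chip passes; the weakness only shows up if you already know the reduced state space to search.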


I wonder (-1)

kakaburra (2508064) | about a year ago | (#44839667)

I wonder if that can be called a trojan

Re:I wonder (2, Informative)

Anonymous Coward | about a year ago | (#44839741)

Yes. A device that contains something concealed and malevolent? That's a hardware trojan right there.

Re:I wonder (2)

liquidpele (663430) | about a year ago | (#44840529)

I dunno, seems more like a method of sabotage than a trojan.

Re:I wonder (3, Insightful)

daem0n1x (748565) | about a year ago | (#44841121)

Sabotage would be to make something stop working. The mentioned chips will work just fine, but their RNGs will be predictable. Only the ones who caused it know and will take advantage of it. Looks like a trojan to me.

Re:I wonder (1)

Shavano (2541114) | about a year ago | (#44839747)

You brought it inside the walls because it was advertised as just a big wooden horse, but it has the enemy inside. Yep.

Re:I wonder (2)

Anonymous Coward | about a year ago | (#44839751)

What else would you call physical access to your dopant masks? /sarcasm

Repeat after me: physical access to <insert item here> allows for a much greater security risk.

Re:I wonder (4, Funny)

Beardo the Bearded (321478) | about a year ago | (#44840689)

Sure, it's obscure, except all our chips are being made in a country that is actively in an electroni


Re:I wonder (3, Interesting)

GameboyRMH (1153867) | about a year ago | (#44839825)

I wonder if they also considered that the NIST random number test suite might also be compromised by the NSA...

Re:I wonder (1)

trigeek (662294) | about a year ago | (#44841049)

I've considered this as well (I will be using the NIST random number test suite in the near future). However, what can they accomplish with this? I see two approaches they could have taken: 1. Flag a non-random generator as "random". However, just because I use the NIST test suite does not mean that I don't use any other test suites, that would presumably catch this. This seems high-risk from the NSA's point of view - just one publicly available test that proves NIST is gamed shows their hand. 2. Flag something that is random as "non-random". This gets truly random generators disqualified. However, if my TRNG was disqualified, I would look into why, and that would likely reveal the NSA's hand as well. Are there scenarios that I am missing?

Re:I wonder (0)

Anonymous Coward | about a year ago | (#44841139)

Yes you are missing a big piece. Given suitably secure block cipher, its output shouldn't be able to be distinguished from random data. In fact the Intel RDRAND instruction uses AES to distill the entropy sources. An AES encrypted block stream encrypting a simple counter would pass most random number test suites.

Re:I wonder (1)

trigeek (662294) | about a year ago | (#44841323)

Yes, I know this. However, this would not require them to compromise the NIST random number test suite - no reasonable test suite would be able to detect this sort of scenario anyway.
So, back to the original question: Is the NIST Random number test suite compromised? What could they gain by doing this?

Fascinating... (1, Insightful)

CajunArson (465943) | about a year ago | (#44839681)

So all the NSA needs to do is kidnap your chip, microscopically re-dope it, and shove it back in your computer without you noticing!

Phew... I'm glad there are absolutely no other simpler ways for the NSA to spy on us other than re-doping chips! I'll just superglue mine into the socket so I know I'm safe.

Re:Fascinating... (1)

Anonymous Coward | about a year ago | (#44839759)

Silicon is just politics by other means. So presume both the Chinese and the West are trying to flood supply channels with compromised/counterfeit silicon in hopes of it finding its way into the other side's hardware.

Re:Fascinating... (0)

Anonymous Coward | about a year ago | (#44839831)

Some chips are packaged at a different location than the chip fab. This is very likely to happen for the fabless companies that use contract manufacturing. Some agencies could swap them at the packaging plant or in transit.

FUDscinating... (1)

Shavano (2541114) | about a year ago | (#44840457)

Are tinfoil hats on special this week? It's not very likely to happen to anybody who isn't a very big target, because to make such a modification they have to completely understand your chip design, know how you're going to use it, and judge that compromising YOUR chip design is sufficiently valuable to reap rewards.

If you consider a very widely used device, there's a greater likelihood of it being compromised, and it would more likely be done with the cooperation of the chip designers than otherwise, in which case it is probably visible in the regular metal masks, etc., because the only people who have access to the design are complicit. When is the last time you took equipment you bought apart, decapped the chips, imaged them with high-resolution 3D x-rays, or lapped them down layer by layer to examine whether they had hidden features? Hell, most users never see their BOARDS.

Re:Fascinating... (1)

h4rr4r (612664) | about a year ago | (#44839951)

Why would they bother with that, when they can have someone working at the fab do it?

Re:Fascinating... (5, Insightful)

Anonymous Coward | about a year ago | (#44839983)

NSA? Probably not. The Chinese chip fab that has been known to have a third shift and has full access to masks and such? Certainly.

The NSA isn't the only agency wanting to know everything a person does.

Re:Fascinating... (1)

the_B0fh (208483) | about a year ago | (#44840003)

Why? So many other avenues of attack. Don't bring out the silliest arguments and expect us to debate it from the extremely silly point of view.

Re:Fascinating... (0)

Anonymous Coward | about a year ago | (#44840587)


Did you not notice the 6 inch deep pool of sarcasm that you were standing in?

Re:Fascinating... (3, Interesting)

omnichad (1198475) | about a year ago | (#44840025)

All they need to do? It's already been done at the fab! Why else would this be coming out now? These researchers have been under a gag order for years and only now got bold enough to stand up to the NSA.

Opinions above are exaggerated for entertainment purposes only

Re:Fascinating... (1)

Joce640k (829181) | about a year ago | (#44840463)

So all the NSA needs to do is kidnap your chip, microscopically re-dope it, and shove it back in your computer without you noticing!

They could have a batch of compromised chips and replace the one in your computer.

Would you ever know? I really doubt it.

Can an entire agency... (2, Insightful)

Overzeetop (214511) | about a year ago | (#44839685)

Can an entire three-letter-agency get a corporate hard-on? 'Cause if they can, this gave our favorite one the biggest boner in the known universe.

Re:Can an entire agency... (0)

Anonymous Coward | about a year ago | (#44839723)

Why? They probably already have backdoors in the architecture.

Re:Can an entire agency... (0)

Anonymous Coward | about a year ago | (#44839931)

what about the backdoors into the place behind those backdoors? the motherfuckers have hacked out the other side of the matrix...

Re:Can an entire agency... (0)

Anonymous Coward | about a year ago | (#44839907)

Can an entire three-letter-agency get a corporate hard-on? 'Cause if they can, this gave our favorite one the biggest boner in the known universe.

Or, this may be old news to them.

(Gotta love how people just sit back and assume that advanced classified operations manipulating hardware somehow don't exist and haven't been going on for decades because we're not supposed to know about them...)

Re:Can an entire agency... (2)

interkin3tic (1469267) | about a year ago | (#44840187)

How likely is it that the NSA or whoever already uses this? It seems to me that with many science fields, the agencies are more than happy to sit back and let someone else spend time and money to develop the tech, then they steal it, copy it, or as a last resort, buy it with taxpayer money. But then obviously, we wouldn't know if they ARE actually coming up with innovation, since they'd obviously keep it secret.

In general though, it seems like the best and brightest scientists have strong disincentives to work in secret government labs. Working and publishing your results openly gets you known for your accomplishments and helps advance technology, and the private sector pays more if that doesn't interest you. What can the NSA or CIA offer you besides uncertainty about whether they're going to kill you and make it look like a suicide after they're done with you?

Re:Can an entire agency... (2)

AmiMoJo (196126) | about a year ago | (#44840843)

China is developing its own x86 compatible CPUs, so perhaps they know something we don't.

If the NSA/CIA wants you I'm not sure you can say no.

Re:Can an entire agency... (1)

tilante (2547392) | about a year ago | (#44841105)

Except that we know for sure that the NSA has made breakthroughs in the past, putting them years ahead of academia in cryptanalysis. They knew about differential cryptanalysis before it was officially discovered. Bruce Schneier points out that according to documents leaked by Snowden, the NSA's "research and development" budget for cryptanalysis is more than is being spent on cryptanalysis research by all of academia combined.

So what can they offer? A larger budget for your research than you would ever get in a university setting, plus no "publish or perish" pressures, no having to spend time teaching classes, and working with other people who are also on the cutting edge of cryptologic research. The NSA is also known to have their own chip fabrication facilities, so they can create custom hardware - which isn't something you're generally going to get to work with on a university budget.

Re:Can an entire agency... (2)

93 Escort Wagon (326346) | about a year ago | (#44840181)

Can an entire three-letter-agency get a corporate hard-on? 'Cause if they can, this gave our favorite one the biggest boner in the known universe.

On the contrary... more likely, either the NSA or the Chinese (or both!) will read this and say "Crap! They figured it out!"

If it's the NSA, we'll see some new laws passed soon giving them broad new secret vetoing power over publishing in scientific journals.

Re:Can an entire agency... (1)

JanneM (7445) | about a year ago | (#44840365)

If it's the NSA, we'll see some new laws passed soon giving them broad new secret vetoing power over publishing in scientific journals.

How would you know they don't have that already?

Get Your Tinfoil Hats (4, Informative)

stewsters (1406737) | about a year ago | (#44839689)

I would guess that an intelligence agency figured this out a few years ago. One that can plant moles at Intel. That's why they also want to remove rdrand from Linux.
http://linux.slashdot.org/story/13/09/10/1311247/linus-responds-to-rdrand-petition-with-scorn [slashdot.org]

Proxy whistleblowing? (Re:Get Your Tinfoil Hats) (4, Interesting)

Anonymous Coward | about a year ago | (#44840037)

If I were a disgruntled member of the intelligence industrial complex and knew that this was actually being done by a government agency, and I did not relish the thought of a Russian sabbatical, couldn't I surface the news by telling researcher friends of mine how to do it?

Re:Get Your Tinfoil Hats (1)

Anonymous Coward | about a year ago | (#44840275)

Geez - are you a functional illiterate or did you not even read the thread that you linked?

Even if rdrand provides 0 entropy, it doesn't make the entropy pool any worse. Removing rdrand is dumb and it can be turned off anyway by setting a single flag.

Re:Get Your Tinfoil Hats (1)

thoromyr (673646) | about a year ago | (#44840755)

This is a real problem stemming from an incomplete understanding of entropy and how it is used. The question is not "does rdrand provide X entropy" but "does rdrand provide at least the entropy it is being credited for?"

If a process in linux asks for a random number the current pool is evaluated. Each input to the pool provides (theoretically) some X entropy and is credited with having provided some Y entropy where (presumably) X >= Y. If the *credited* entropy is enough then a number is returned, otherwise it depends on whether or not the blocking or non-blocking call was used.

So if rdrand is *credited* with providing X bits of entropy, but in fact provides 0 bits, and the "lie" causes credited entropy to cross the threshold then you will get a number generated from insufficient entropy.

Now, I haven't looked at the kernel or read up on this to see what the case is but the consideration is "does rdrand provide at least the entropy it is being credited for?"

If rdrand is used as a source of entropy but is *never* credited then it could only possibly hurt if there was some magic that allowed it to *reduce* the entropy pool by its inclusion. That seems more than a little far fetched.

If rdrand is used as a source of entropy and is credited for at least 1 bit then the inclusion is harmful if it has been compromised to the extent that it is credited.
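The crediting logic above can be sketched as a toy model. This is an illustration of the concept, not the actual Linux pool code; the class and threshold are invented for the example.

```python
import hashlib

class ToyPool:
    """Toy model of entropy crediting -- not the actual Linux pool code."""

    def __init__(self, threshold_bits: int = 128):
        self.pool = hashlib.sha256()
        self.credited = 0
        self.threshold = threshold_bits

    def mix(self, data: bytes, credited_bits: int) -> None:
        # Mixing more data never reduces the actual entropy already present,
        # but the *credit* is whatever the source claims for itself.
        self.pool.update(data)
        self.credited += credited_bits

    def read(self) -> bytes:
        if self.credited < self.threshold:
            raise BlockingIOError("not enough credited entropy")
        return self.pool.digest()

pool = ToyPool()
# A compromised source contributes a constant but claims 128 bits of credit:
pool.mix(b"\x00" * 16, credited_bits=128)
print(pool.read().hex())  # succeeds, yet the output has zero real entropy
```

The model shows the parent's point: mixing an uncredited source can only help, while over-crediting a lying source lets the pool hand out numbers generated from insufficient entropy.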

Multiple Entropy? (0)

Anonymous Coward | about a year ago | (#44839695)

Several different methods of entropy should be employed? Heck what about random generator devices?

Re:Multiple Entropy? (2)

fuzzyfuzzyfungus (1223518) | about a year ago | (#44839795)

"Heck what about random generator devices?"

The whole point of TFA is about a technique for (mostly undetectably) modifying a good hardware RNG and turning it into a really lousy one.

Getting your entropy from multiple places probably helps (if they don't know what 6 RNGs you chose it's harder to dope them all, and even if they do, they still have to slog through the entropy from multiple crippled sources rather than only a single one (and, while it is possible to cripple the RNG entirely, that will show up on tests, so plausible real-world implementations would still provide some entropy, just less than advertised).
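The multiple-sources argument is easiest to see with XOR combining. A minimal sketch, not a hardened implementation:

```python
import os

def xor_combine(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length entropy sources.

    If either source is uniform and independent of the other, the result
    is uniform -- a doped source can't cancel out a good one whose output
    it doesn't know.
    """
    assert len(a) == len(b)
    return bytes(x ^ y for x, y in zip(a, b))

crippled = b"\x00" * 32   # worst case: a source with zero entropy
good = os.urandom(32)     # any one honest, independent source
print(xor_combine(crippled, good).hex())  # still 32 bytes of full entropy
```

The independence caveat matters: XORing a source with itself (or with something an attacker can predict from it) yields nothing, which is why the post suggests diversifying across RNGs the attacker can't dope all at once.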

Re:Multiple Entropy? (0)

Anonymous Coward | about a year ago | (#44841281)

Why use an RNG? There's enough noise in the world around us to generate "noise" without an RNG. Point a directional (but not parabolic) microphone at an interstate (or similarly sized road in your country) at a distance of 1 mile (or a few km if you're not from the US). Inspect the output signal. You'll never run out of what are essentially random audio samples. To increase the randomness, use an omnidirectional mic, so it picks up nearby noises as well. You can generate fixed tones near the mic to reduce the randomness a bit, but that's mostly ineffective because, hey, what happens if a bird flies past? What if a squirrel starts digging at the mic to see if it's edible? You need to ruggedize the hardware a bit, but it will provide more than enough randomness that can't be messed with easily, and certainly not remotely.

tl;dr: You only need quantized noise, and there's plenty in that sunny place called "outside".

Dopant? (0)

Anonymous Coward | about a year ago | (#44839703)

Then you'd better count up your sins!

I don't get it, sorry. (0)

Joining Yet Again (2992179) | about a year ago | (#44839735)

If you modify a chip, you can make it behave differently?

What's the news here please?

Re:I don't get it, sorry. (0)

Anonymous Coward | about a year ago | (#44839773)

Did you read the part about it being undetectable by normal inspection?

Re:I don't get it, sorry. (2)

Joining Yet Again (2992179) | about a year ago | (#44839877)

1. Changing the dopant in a transistor is undetectable by visual inspection - clearly;

2. Randomness isn't the same as unpredictability.

I skimmed through the paper thinking that the innovation was that they'd actually been able to modify an Intel chip. But they appear to be saying little more than that you can manufacture a chip "wrongly" (after a LOT of waffle - you'd never get away with this writing math papers!).

Re:I don't get it, sorry. (0)

Anonymous Coward | about a year ago | (#44839793)

They're modifying the chip to influence the random number generator, but more specifically it is modified in a way that cannot be detected very easily if at all. The important part is the not being detected. It's easy to modify a chip and make it behave differently, it's another thing to modify a chip and have it go unnoticed even under close scrutiny.

Re:I don't get it, sorry. (0)

Anonymous Coward | about a year ago | (#44839821)

Wow, I know most tech geeks like to pretend that the achievements of others are tiny insignificant things, and that they could have got the same result in 5 seconds if you'd only asked them, but this is taking it to a whole new level.

Re:I don't get it, sorry. (1)

Joining Yet Again (2992179) | about a year ago | (#44839901)

The "discoveries" in this paper are:

1) A chemical change is undetectable by visual inspection;

2) Reducing the number of bits used for randomisation may be undetectable.

That's not worth a multi-page paper, is it?

Re:I don't get it, sorry. (0)

Anonymous Coward | about a year ago | (#44840069)

Don't you have TPS reports to write today, sonny?

Re:I don't get it, sorry. (3, Insightful)

Hizonner (38491) | about a year ago | (#44840129)

Yes, yes it is.

In security, you're trying to change the behavior of corporate drones, idiots, and people who are invested in the status quo. People use these papers as ammunition for that.

The drones will call your attack "theoretical" and "impractical" unless you spell out exactly how to do it, step by step. If they hadn't detailed exactly how to do it, the attitude would basically have been that nobody could possibly figure out the impossible complexity of weakening a REAL RNG. I mean, look at the self tests! Nobody could get around that! In fact, even people who weren't complete idiots might have guessed, at first glance, that the self tests would be hard to defeat, or that you couldn't do this hack without screwing up the chip.

Even with a detailed paper, they will probably be ignored until somebody actually does it in the field. If you wrote a one-pager that said "Warning! Somebody could alter the behavior of gates by tweaking the dopants", they would 1000 percent ignore it.

As for the verbose background information, it's standard in the field (although they went a bit heavy on it). It has zero cost, and readers in the field who don't need it simply skip it. So I don't know why you're getting so upset about it.

Please don't trash people's work in fields you don't even slightly understand.

Re:I don't get it, sorry. (2)

kermidge (2221646) | about a year ago | (#44840641)

This is not my field by a long stretch. After reading the PDF this morning, what I got from the paper was a method to undetectably make relatively easy changes to various transistors, such that those changes offer an entry point for external reading and possibly manipulation, to potentially useful effect, within real-world manufacturing methods. Do this, pwn chips. Profit.

What these guys have done strikes me as impressive - and wonderfully, elegantly sneaky. I know there are some design and fab people here - what say you?

optical inspection? (0)

nten (709128) | about a year ago | (#44839763)

There are easy numeric methods for determining how random data is. Optical inspection would be unnecessary to discover this modification. You might even get away with generating a few megabytes of data, zipping it, and then comparing the resulting compression ratio to a known good chip.
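A minimal version of that compression check is below. One caveat: cipher-whitened output is incompressible, so a check like this only catches crude RNG failures, not the AES-whitened Trojan in TFA.

```python
import os
import zlib

def compress_ratio(data: bytes) -> float:
    """Compressed size over original size; ~1.0 for incompressible data."""
    return len(zlib.compress(data, 9)) / len(data)

structured = bytes(i % 256 for i in range(1 << 20))  # an obviously bad "RNG"
random_ish = os.urandom(1 << 20)                     # looks fine to zlib

print(compress_ratio(structured))  # far below 1.0
print(compress_ratio(random_ish))  # ~1.0
```

Any RNG whose output survives this test merely looks random to a general-purpose compressor; it says nothing about how large the hidden state space actually is.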

Re:optical inspection? (4, Insightful)

Anonymous Coward | about a year ago | (#44839893)

There are easy numeric methods for determining how random data is.

Actually, no. Technically speaking, there is no such thing as random data, only a random process. You can certainly test how random a data stream seems, but if the data source is a black box, you never really know.

From TFS:

Since the Trojan RNG has an entropy of n bits and [the original circuitry] uses a very good digital post-processing, namely AES, the Trojan easily passes the NIST random number test suite if n is chosen sufficiently high by the attacker. We tested the Trojan for n = 32 with the NIST random number test suite and it passed for all tests.

What if your black box is just feeding you encrypted bits of pi? You would never know, but the black box's maker could trivially reproduce your "random" numbers.

Re:optical inspection? (1)

the_B0fh (208483) | about a year ago | (#44840027)

Oh, you mean like RSA tokens and the seed files? :P

seems random (2)

nten (709128) | about a year ago | (#44840611)

The NIST 800-22 test has bit length parameters. The article doesn't indicate if it passed the 128 bit NIST test after they reduced the entropy to 32 bits, only that it passed *some* NIST test. From another poster it seems the standard NIST parameters used for the NIST test may not be sufficient to test that the prng exhibits the level of entropy that people are relying on it to exhibit. The lavarnd folks pass a billion bit NIST test, so it is possible to run longer versions of the test. If the reduced entropy source is still passing a higher entropy test, we have a problem with our testing method.

Your other (very valid) point is that just because data is random, doesn't mean you are secure. The data stream has to be both random and unknown to your attacker, which PI would not be. In this case they do not have a way to set the seed, or all inputs to the prng, only to limit the prng's bit length, so the attacker will not know the random sequence or even its statistics. It simply makes a brute force attack much less time consuming.

It still concerns me that a 32 bit prng might have passed a 128 bit 800-22 test. Does anyone know more about that aspect of it?

Re:seems random (1)

thoromyr (673646) | about a year ago | (#44840863)

It would have to be based on a statistical analysis, which means it isn't a proof; it's demonstrated to a confidence level. How confident do you need to be?

Secondly, properly evaluating a greater number of bits of entropy is going to require a larger sample, and I expect this grows exponentially. How much time do you have to reach your confidence?

The testing would be balancing those two questions, but in no case could an absolute answer be found.

But, from the horse's mouth:

The subject of statistical testing and its relation to cryptanalysis is also discussed, and some recommended statistical tests are provided. These tests may be useful as a first step in determining whether or not a generator is suitable for a particular cryptographic application. However, no set of statistical tests can absolutely certify a generator as appropriate for usage in a particular application, i.e., statistical testing cannot serve as a substitute for cryptanalysis. The design and cryptanalysis of generators is outside the scope of this paper.

Random Number Generation [nist.gov]

In other words, NIST says their recommended tests are statistics based and insufficient.

Re:optical inspection? (2)

moteyalpha (1228680) | about a year ago | (#44840771)

As a person who has worked in semiconductors since the first SSI 7400, I can say for certain that many things have been done and there are some really talented people who can do things that -almost- defy reason. I know that engineers put their own little signatures in ASICs and that some engineers are far more competent than can be understood by most. I have seen many circuits that were situationally controlled or externally controlled by means that would not be obvious without an understanding of the physics, electromagnetic conditions, and software. It can even be done at the layout level. Early CMOS was notoriously susceptible to EM induction. I have seen a board that used an unconnected trace to an input pin as an RC circuit.
The greatest problem that I see in this type of behavior is that it assumes perfect security and there is no such thing. If you put a means to invade or disable systems in all products, you are hurting every individual and business. If you also create a system where people cannot verify your identity as a secret police without committing a crime, you have created a back door in the social engineering realm. If I am party to a security request, I then know what documents, methods and verifications are being used and thus it can be used as a spoof attack on anybody else with little chance of discovery.
I would not be the least bit surprised if it was discovered that IBM, INTEL, Motorola, and others were subjected to this same security theater. The problem with hardware is that once the flaw becomes exposed and if it is bad enough, the entire system must be replaced. It is rational to have different circuitry for military applications, but when it creeps into consumer and business products it is wrong in many ways and though the intent may be for the military to do what it thinks will solve -their- problem, without oversight it becomes paradoxical and if they destroy the means to do business and make profit through their tampering, then it is full circle and the funds and efforts that support the government and military are damaged.
The problem is in oversight; defence must be limited in its scope of action. Isn't this what all the fuss is about with Syria and Iraq? The conventional military action is assumed to have overstepped the boundaries of what is considered socially acceptable, and this NSA condition is no different. It is a failure in leadership and oversight that offends the sensibilities. Nazi Germany had a very effective military and it would have been a non-issue if they had been guided by people with empathy and reason.
Say what!? Optical inspection at 14 nanometers? Did I miss a memo or something?

Re:optical inspection? (1)

Anonymous Coward | about a year ago | (#44839909)

We tested the Trojan for n = 32 with the NIST random number test suite and it passed for all tests.

While your assessment is true, the scope needed to identify the difference between 32 bits of entropy and 128 bits is inconvenient. Also, each bit of entropy added doubles the time to confirm (just as each bit doubles the time to break) so my main take from this article is that RNG testers do not do enough tests to confirm half the level of chaos that people are attempting to use.

Re:optical inspection? (2, Interesting)

Anonymous Coward | about a year ago | (#44840781)

You can still generate an arbitrary amount of entropy with a compromised RNG if you know it's compromised. Say you have a ridiculously compromised RNG with only 1 bit of entropy per 32-bit output word: such an RNG could trivially fail statistical tests if it used simple combinatorial logic to mix the nth output with the (n-1)th, or it could be almost undetectable if it used complex combinatorial logic, such as the AES method used in Intel's RDRAND. In either case, each word will contain some entropy, even if it is much less than stated "on the box".

Let's say it outputs a 32-bit word (the RDRAND32 instruction does), and each word is supposed to contain 32 bits of entropy (I dunno), but only contains 8 bits of entropy. If I mix 4 words of output to produce one 32-bit output, I have reliably produced 32 bits of entropy.
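That mixing step can be sketched as follows. SHA-256 is a stand-in for whatever chain-ciphering is actually used, and the 8-bits-per-word figure is the hypothetical above, not a measured property of any real chip.

```python
import hashlib

def distill(words: list[int], bits_per_word: int = 8) -> int:
    """Fold several low-entropy 32-bit RNG words into one 32-bit output.

    If each input word really carries bits_per_word bits of entropy, then
    len(words) * bits_per_word bits enter the hash, and the 32-bit digest
    prefix approaches full entropy once that total reaches 32.
    """
    h = hashlib.sha256()
    for w in words:
        h.update(w.to_bytes(4, "big"))
    return int.from_bytes(h.digest()[:4], "big")

# Four words at ~8 honest bits each -> one word at ~32 bits:
print(hex(distill([0x1234, 0x5678, 0x9ABC, 0xDEF0])))
```

The cost is throughput: you burn four RNG words per output word, which is the trade-off the parent is describing.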

The danger here is that a software implementation takes the manufacturer's word on the entropy content of the output, since we can't distinguish between genuine entropy and the output of a strong cipher with a hidden state (as is the case in RDRAND), rather than mixing the RNG output down to a smaller number of bits (for example by chain-ciphering N consecutive words of RNG output together to form one word of output).

One potential mitigation to most of these compromised RNG scares is to have the user initialise an S-box or cipher key manually (flip coins, roll dice), and feed all RNG output through a strong cipher in feedback mode. The predictability of the RNG is no longer usable for cryptanalysis, as the output of the cipher is not predictable without breaking the cipher and discovering the key. The key can't be discovered by cryptanalysis, because it's only ever used to cipher "random" (though partially compromised) input, and cryptanalysis of users of the RNG is thwarted because there is no longer any identifiable correspondence between the RNG output and the random values used. Even if the key for the random post-processing is known, the correspondence between random-system output and RNG output is non-trivial, and there is no way to know the internal state of the cipher's feedback register, as it is constantly accumulating partial entropy from the RNG, which is never revealed.
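A sketch of that mitigation, with HMAC-SHA256 standing in for the user-keyed strong cipher in feedback mode. The class name and construction are illustrative, not a vetted design.

```python
import hashlib
import hmac
import os

USER_KEY = os.urandom(32)  # stands in for a key from coin flips / dice rolls

class WhitenedRNG:
    """Pass untrusted RNG words through a keyed PRF in feedback mode.

    Even if the hardware RNG is partially predictable, predicting this
    output requires breaking the PRF or learning both the user's key and
    the hidden feedback state.
    """

    def __init__(self, key: bytes):
        self.key = key
        self.state = b"\x00" * 32  # hidden feedback register

    def next(self, untrusted_word: bytes) -> bytes:
        data = self.state + untrusted_word
        out = hmac.new(self.key, b"out" + data, hashlib.sha256).digest()
        # Advance the feedback register with a domain-separated PRF call,
        # so the revealed output never exposes the internal state.
        self.state = hmac.new(self.key, b"st" + data, hashlib.sha256).digest()
        return out

rng = WhitenedRNG(USER_KEY)
print(rng.next(os.urandom(16)).hex())  # usable even if the input is doped
```

The feedback register accumulates whatever partial entropy the hardware does deliver, which is the property the comment relies on.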

Most of this doesn't apply to fake RNGs (PRNGs) which have been compromised to generate no entropy after initialisation, as eventually sufficient state will percolate through the cipher to regenerate the seed value and a sliding-window attack will recover the offset. Unfortunately a PRNG can be designed to be statistically indistinguishable from an RNG for computationally impractically long runs of output (2**96 bits or longer) if the internal state of the PRNG can't be obtained (many existing block ciphers fulfill this requirement).

The described attack weakens the entropy of the RNG rather than reducing it to an initial constant, and so, while less than ideal, it would not compromise a prudently designed random number system.



Still detectable (1)

gr8_phk (621180) | about a year ago | (#44839813)

This should still be detectable. It just requires more time. One could also reduce the time by looking at the combined output of an entire batch of chips. If they all have the same mask, they will all produce the same reduced set of random numbers. So one additional meta-test of data from a lot could show they have been compromised.
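That batch-level meta-test can be sketched with a toy chip model. The chip function and entropy parameters here are hypothetical; the point is just that chips sharing a tiny state space produce colliding outputs at rates an honest 128-bit RNG never would.

```python
import hashlib
import secrets

def chip_output(state: int) -> bytes:
    """First RNG word from a toy chip whose entire hidden state is `state`."""
    return hashlib.sha256(state.to_bytes(16, "big")).digest()[:16]

def batch_collisions(entropy_bits: int, n_chips: int = 1000) -> int:
    """Sample one output from each of n_chips independent chips and count
    collisions. Doped chips draw from a small state space, so they repeat."""
    outputs = {chip_output(secrets.randbits(entropy_bits))
               for _ in range(n_chips)}
    return n_chips - len(outputs)

print(batch_collisions(8))    # many collisions: only 256 possible states
print(batch_collisions(128))  # effectively zero
```

In practice the doped Ivy Bridge RNG reseeds with n fresh bits per use, so the required sample sizes are far larger than this toy suggests, but the principle of testing across a lot rather than a single chip stands.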

Re:Still detectable (1)

VortexCortex (1117377) | about a year ago | (#44840139)

Tell me, what hardware will you test the chips via?

You are now aware that the infamous Ken Thompson compiler/microcode hack was well known to the government before he pontificated on it in his ACM Turing Award lecture and paper. [bell-labs.com]

I first read of the possibility of such a Trojan horse in an Air Force critique (4) of the security of an early implementation of Multics.

Which was published in the very apt year of 1984, I might add...

Tell me, indeed, how exactly would you select the chips that do not already have such a modification for comparison? Oh, it would take more time indeed, but far more than you realize. Get out your oscilloscope and soldering iron; you're going to be creating a reference implementation on a breadboard the size of Texas.

Re:Still detectable (0)

Anonymous Coward | about a year ago | (#44840865)

This should still be detectable. It just requires more time.

Since the output of the RNG feeds an AES cipher in feedback mode, you would just need a lot more time. AES has a 128-bit block size, so you expect a collision on average once every 2**64 blocks of output; if the RNG is gimped to output only 32 bits of entropy per word, you need 2**32 collisions.

So yeah, you ONLY have to collect 2**96 output words to detect the lack of entropy. Oh, and you don't get the full word of output, only 32 bits of it, so add another factor of 2**96 to your complexity: you have to collect 2**192 output words.

Got RAM?

It's a small risk (1)

John Burton (2974729) | about a year ago | (#44839827)

Well yeah it's worth being aware of the possibility. But frankly there are very much bigger risks to worry about first

BTW... (1)

CajunArson (465943) | about a year ago | (#44839843)

Since the Ivy-bridge random number generator is supposedly "unauditable" how are these researchers able to prove anything about re-doping a black box design? Shouldn't they just look at it and spot the massive array of transistors that spells out "NSA BACKDOOR UNIT" instead of having to worry about all this subterfuge?

Re:BTW... (1)

h4rr4r (612664) | about a year ago | (#44839981)

What do you mean unauditable?
Do you mean inconvenient to audit? It might take a long time but there are methods to check how good the random number generator is.

Re:BTW... (2)

ssam (2723487) | about a year ago | (#44840493)

No, there aren't. The digits of pi have no pattern other than being the digits of pi, so they will pass randomness tests. A good pseudo-random number generator will pass randomness tests, but can be easily reproduced if you know the starting seed. Also, putting a simple sequence (1, 2, 3, 4...) through an encryption algorithm will give you output that passes randomness tests.
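The last point is easy to demonstrate. A rough sketch, using SHA-256 as a stand-in for the encryption step (the construction and names are illustrative, not anyone's real design):

```python
import hashlib

# Hash a trivially predictable sequence (1, 2, 3, ...); SHA-256 stands in
# for the encryption algorithm the parent comment describes.
stream = b"".join(
    hashlib.sha256(i.to_bytes(8, "big")).digest() for i in range(1, 1001)
)

# NIST-style monobit check: the fraction of 1 bits should be close to 1/2.
ones = sum(bin(byte).count("1") for byte in stream)
fraction = ones / (len(stream) * 8)
print(fraction)  # close to 0.5, despite the input having zero real entropy
```

Statistical tests measure how the output *looks*, not how much entropy went in; that is the whole point of the parent's objection.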

Re:BTW... (0)

Anonymous Coward | about a year ago | (#44841025)

Do you mean inconvenient to audit? It might take a long time but there are methods to check how good the random number generator is.

Sure, if you can break AES (in KFB, so there is no key to discover).

Or if you have a few million bucks, you can cut the silicon and wirebond around the AES-KFB filter stuck between the RNG and the output to see if your one (now destroyed) device was functioning correctly.

The point of the article is that, unlike compromised metallisation, this type of modification can't be identified through non-destructive testing. Some state concerned with security could, at reasonable cost, audit a golden-sample CPU, then send their CPUs for X-ray imaging, check that they match the golden sample, and stick them back into their machines. Such a test would be ineffectual: an X-ray micrograph will not reveal the modification, as it would if a similar modification were made in the more traditional way, by modifying a metal layer of the chip.

Re:BTW... (0)

Anonymous Coward | about a year ago | (#44840093)

Your sig is naïve, ignorant, and logically flawed.

But your irrational bias is quite effectively displayed. Good job.

Re:BTW... (0)

CajunArson (465943) | about a year ago | (#44840579)

No, my sig is basically saying that the Patron Saint of Global Warming's actions belie his public propaganda.

You see, it's not that *I* don't believe in global warming, it's that Al Gore really doesn't believe in it either*.

* Oh, he believes in it as a profit-opportunity, but despite his rhetoric he doesn't think the apocalypse is upon us.

Re:BTW... (1)

mattpalmer1086 (707360) | about a year ago | (#44841253)

A belief in GW is entirely compatible with having a beach front house. The problem is that it is slow moving but inexorable.

Personally, I'm with the vast, vast majority of scientists who claim it's real and extremely dangerous. From what I've seen of the human race, we won't do anything until we get badly burned.

I guess everyone will know for sure one way or the other in a few decades. I just hope we can live with it.

Re:BTW... (1)

mattpalmer1086 (707360) | about a year ago | (#44840523)

I thought we already covered this in the linux rdrand story. It's called unauditable because it whitens the raw entropy output using encryption on chip, making even quite non-random source data appear to be random. It is not called unauditable because it's a black box design. The paper states that the design is very well known.

The attack described in this paper is to modify both the entropy source output "c" and the post-processing encryption key "K", undetectably setting a fraction of them to constant bit values. This weakens the effective random number generation to some chosen n bits of entropy, instead of 128 bits. But because the AES encryption post-processing stage does a very good job of making its output appear random, it will still pass random number tests.

If we had access to the raw entropy source, we could see that it was not providing nearly enough entropy to the encryption post-processing stage.
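A toy simulation of the attack described above: SHA-256 stands in for the AES post-processing stage, and n is set to 16 so the demonstration runs quickly (the paper tests n = 32). All names and the fixed key value are hypothetical.

```python
import hashlib
import secrets

N_BITS = 16  # attacker-chosen entropy n; the paper's example uses n = 32

def trojan_source():
    # Only the low n bits vary; the remaining "entropy" bits are silently
    # fixed to constants by the dopant modification.
    return secrets.randbits(N_BITS).to_bytes(16, "big")

def post_process(block, key=b"\x00" * 16):
    # Stand-in for the AES post-processing stage, with its key "K" also
    # forced to a constant by the trojan.
    return hashlib.sha256(key + block).digest()[:16]

sample = post_process(trojan_source())

# The whitened output looks perfectly random, but an attacker who knows the
# trojan can enumerate all 2**n internal states and recover any output:
candidates = {post_process(v.to_bytes(16, "big")) for v in range(2**N_BITS)}
print(sample in candidates)  # True: the output space has only 2**n values
```

This is why access to the raw source matters: the post-processed stream reveals nothing, while the raw stream would immediately show only n bits varying.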

Re:BTW... (2)

IamTheRealMike (537420) | about a year ago | (#44840971)

I looked at the paper from CRI, they apparently did do testing on the raw (pre-whitening) entropy source on test chips that give direct access to it. Unfortunately the goal of that audit was to build confidence in the general design, the NSA wasn't an issue when that was done.

What I take away from this is - the good news is, the RDRAND circuitry has an open, well documented design which is apparently robust. Thus, if we can obtain confidence that it's not backdoored by the NSA, it's a great feature to have. Note to people talking about China, etc, Intel run all their own fabs. The chance of a technique as complicated as crypto backdoors using dopant trojans being inserted into the manufacturing process inside Intel-controlled fabs is close to zero. If it's done, it's done with the knowledge and co-operation of management.

The question is how can the world build such confidence? The standard way would be to decap some randomly chosen chips and analyze with an SEM, but I have no idea if that's feasible for something as complicated as a modern Intel core. Presumably Intel themselves can do it for debugging purposes, but whether it can be done in the absence of lots of proprietary information is unclear. Also, the output of RDRAND could presumably be patched using microcode updates, so just because the chips ship without a backdoor doesn't mean one couldn't be introduced later through a firmware/BIOS update.

Re:BTW... (1)

mixed_signal (976261) | about a year ago | (#44840525)

This shouldn't be a question of auditing the quality of the number generator; the research shows you might be fooled. Whatever the actual end design is, production tests are constructed to verify the chip is manufactured to match the design. There are some posts further down discussing production test.

It's not the NSA you should worry about... (0)

Anonymous Coward | about a year ago | (#44839911)

It's the Chinese Government. Obviously this has been happening for some time...

Re:It's not the NSA you should worry about... (1)

the_B0fh (208483) | about a year ago | (#44840115)

Why? Is one necessarily better or worse than the other? Because the Bible said so? Or something else said so?

Re: It's not the NSA you should worry about... (0)

Anonymous Coward | about a year ago | (#44841301)

The Chinese government has large labor camps that they regularly put dissidents in. Make enough noise about this within China and you'll find yourself a new resident

The US government has Gitmo, which holds a handful of prisoners, with loud, active groups openly protesting its existence.

Don't be dumb and ramble on about equivalence.

Software only (1)

return 42 (459012) | about a year ago | (#44840049)

I wonder if it's possible for an attacker to mess with microcode in such a way as to trojan things like random number generation, without having any other effects that would be more easily noticed. It doesn't seem likely.

Of course, true RNG depends on things like timing keystrokes, mouse clicks, network packets, etc. The LSBs of such times probably aren't used for anything else, and could thus be tampered with more easily.

It's pretty hard to get reliable crypto when your adversaries are the SIGINT arms of some of the most powerful nations in history; they're not constrained by law, ethics, or budget; and the one in your own nation can coerce cooperation and silence. Bad deal, all around.

Edward Snowden should be canonized.

Not a problem for linux (1)

Okian Warrior (537106) | about a year ago | (#44840079)

Linux uses the Ivy Bridge random number generator in the kernel, along with other sources of randomness.

That makes it OK, because as everyone knows, mixing the other sources with a predictable string makes the output even more random!

Didn't Linus completely settle [slashdot.org] this issue?

Re:Not a problem for linux (1)

gweihir (88907) | about a year ago | (#44841119)

Also notice that this attack does not make RdRand unusable. It still gives you some bits of entropy per output value, just a lot less than expected. However, if you expect nothing or very little, the output is good even in the compromised version. And for various reasons, RdRand has a lot less entropy than 1 bit per output bit anyway (theoretically as low as 1 bit per 512 bits), so hashing its output together at scale is necessary in any case (I bet many people overlooked that little gem...).
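Hashing a suspect source together with independent ones can be sketched as follows. A minimal illustration, assuming SHA-256 as the mixing function; the constant string standing in for a fully backdoored RdRand and the function name are hypothetical.

```python
import hashlib
import os
import time

def mixed_random(n_bytes):
    """Hash several independent sources together; a compromised source can
    drag the total entropy down only to what the honest sources supply."""
    h = hashlib.sha256()
    h.update(os.urandom(32))                             # kernel entropy pool
    h.update(time.perf_counter_ns().to_bytes(8, "big"))  # timing jitter
    h.update(b"\x41" * 32)  # stand-in for a fully backdoored RdRand output
    return h.digest()[:n_bytes]

out = mixed_random(16)
```

Even with one source contributing nothing (or worse, attacker-known values), the mix is no weaker than the remaining sources, which is the argument for treating RdRand as one input among many rather than the sole source.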

Will not past verification - Scan. (2, Informative)

RichMan (8097) | about a year ago | (#44840121)

These parts would not pass the standard verification process and would be rejected from being assembled into devices.
Standard testing of ICs for functional faults includes a scan process. Per the design specification the part was supposed to be built to, a number of scan vectors are passed through the devices. These scan vectors check as much of the device as possible; the goal is to check every flop and every logic path between flops. The tests are designed to detect manufacturing errors and can find single faults in devices.
Typical faults are stuck-at-1 or stuck-at-0, as well as shorts, and scan would easily expose modifications of this sort, especially at a scale that radically changes behavior.

Re:Will not past verification - Scan. (3, Insightful)

return 42 (459012) | about a year ago | (#44840467)


"Hello, Intel. Under the terms of this national security letter, you must change your verification software to ignore certain errors. The engineers who carry out this order must not reveal anything about this. Anyone who does will be subject to a term of incarceration not exceeding..."

Tell me why this would not happen.

Re:Will not past verification - Scan. (1)

ssam (2723487) | about a year ago | (#44840527)

So intel runs a scan to check that the random number generator gives the correct output?

well that settles it.

Re:Will not past verification - Scan. (1)

RichMan (8097) | about a year ago | (#44840989)

1) Computer-generated "random numbers" of the type this covers are fully state-to-state defined; they are not random in any way. To make them random you need to seed the initial state and then reduce the output.
2) The automated scan check is bit-by-bit on the logic; it does not care that 64 bits make a random number. It looks at the logic cone input for every single bit independently and verifies the functionality. This is done to make sure all the logic works.

I doubt it is undetectable (1)

cold fjord (826450) | about a year ago | (#44840179)

I doubt that an altered chip would pass BIST [siliconfareast.com] testing.

Re:I doubt it is undetectable (0)

Anonymous Coward | about a year ago | (#44840569)

Not if you alter the transistor(s) so that the BIST result always passes.

Re:I doubt it is undetectable (1)

cold fjord (826450) | about a year ago | (#44840711)

That wouldn't work out so well.

Re:I doubt it is undetectable (1)

Anonymous Coward | about a year ago | (#44840919)


BIST only tests functional blocks, it doesn't test every gate.

How can you test the functionality of a part designed to be non-deterministic?

The exact problem with Intel's RDRAND implementation is that the internal state is a black-box and can't be interrogated, so there is no way to verify that the input to the feedback cipher is not deterministic or constant.

Re:I doubt it is undetectable (1)

cold fjord (826450) | about a year ago | (#44841331)

So you're thinking Intel has no way to test a major functional part of their chip to know if it's good? I doubt it.

production test would catch this (1)

mixed_signal (976261) | about a year ago | (#44840207)

Digital ICs are treated production with scan tests guaranteed to cover around 95 to 99% of possible faults.

Re: production test would catch this (1)

mixed_signal (976261) | about a year ago | (#44840233)

Should have said 'tested' not treated... Using swipe on a tablet...

Re:production test would catch this (1)

gl4ss (559668) | about a year ago | (#44840367)

well obviously the production test would be skipped if the manufacturer did this...

Re:production test would catch this (1)

mixed_signal (976261) | about a year ago | (#44840497)

They probably wouldn't just skip scan testing altogether; too many bad chips would go through, and the customer would see a high(er) rate of bad chips being received.

The manufacturer could alter the test to match their circuit level change, though. This is easy enough to do.

This attack will succeed if the end customer relies on the manufacturer to verify the chip electrically and performs only an optical inspection. The end customer has to run the full electrical tests as well: optical measurements can verify the masks are correct, and electrical tests (scan and otherwise) verify the design behavior. This is why there are 'trusted foundries' in the U.S. ...

Optical verification at chip level is quite difficult, and often destructive. There would have to be a sampling scheme in place to hope to catch every die site on the reticle... (A reticle is an array of IC die that is stepped across the wafer for lithographic exposure of resist layers for patterning the material on the wafer.)

On the topic of Trojans (0)

Anonymous Coward | about a year ago | (#44840269)

OT, I know... but...
I always wondered why people use condoms named after them...?

accidental misdoping even more troubling (3, Interesting)

hormiga (600498) | about a year ago | (#44840293)

Given Hanlon's razor, an accidental, rather than malicious, error in doping would be even more likely. If the chip were inadvertently doped incorrectly, it would pass visual inspections and even software tests without awareness of the defect. How many defective dice, not merely with RNGs but also with other circuits, are already in service due to inspection failures?

Although this paper shows how insidious a threat from a well-funded adversary might be, even more it shows the need for more comprehensive inspection mechanisms to discover misdoping which might go undetected by existing standard procedures.

BTW, the paper includes a well written and readable introduction to the context of the problem. Good job.

Re:accidental misdoping even more troubling (1)

BoRegardless (721219) | about a year ago | (#44840621)

For us uninformed, please define doping.

Re:accidental misdoping even more troubling (3, Informative)

hormiga (600498) | about a year ago | (#44841031)

In semiconductor manufacturing, doping is the introduction of slight amounts of impurities into a semiconducting material, to create a condition of surplus or deficit electrons. Donors such as arsenic and phosphorus add electrons, creating n-type semiconductors, while acceptors such as boron and aluminum cause a deficit of electrons, making a p-type semiconductor. The terms surplus and deficit are relative to a state where all of the atomic orbitals are filled and the semiconductor has almost no conductivity. Thus, doping gives an otherwise barely conductive semiconductor useful, controllable conductivity.

Doping is commonly done by exposing the wafer of semiconducting material at high temperatures to a gas containing the dopant. The dopant diffuses into the surface of the wafer. A mask covers the wafer so that the diffusion only takes place where the wafer is uncovered. Note that the mask has microscopic detail, the quantities of dopants employed are low, and the chemicals used are nasty.

The circuit is created by the arrangement of the doped materials. For example, a p-type region adjacent to an n-type region makes a diode, while three adjacent regions in series make a bipolar transistor. The circuit is wired together using layers of metal (such as aluminum) deposited onto the surface and etched away in a pattern, done similarly to the way printed circuit boards are made.

Re:accidental misdoping even more troubling (1)

floodo1 (246910) | about a year ago | (#44840651)

hard and fast rules are always wrong.

Re:accidental misdoping even more troubling (1)

CaptBubba (696284) | about a year ago | (#44840909)

A misdoping would light up the equipment alarms and the in-line and end-of-line electrical tests (both on the chips themselves and on special test regions in the scribe lines between the chips). Doping is performed relatively early in the manufacturing process, and Intel et al. know just how big a risk a misdoping is, so they test for it extensively in-line. If you only catch it at the end of the line, you potentially have hundreds of millions of dollars' worth of product to scrap: in the 20 days or so it takes for the first wafers to hit test and fail, you have equipment churning out 150-400+ wafers per hour of faulty product, 24/7.

Re:accidental misdoping even more troubling (2)

hormiga (600498) | about a year ago | (#44841261)

I would agree almost all the time. An error in doping, not being selective, would likely be obvious, because it would affect the other components on the same layer.

However, there is a small amount of boutique production which is done almost by hand, and is more subject to errors. The chips are usually less complex, and given the right kind of circuit (such as the RNG from the paper), errors are more likely to slip through, especially if the circuit were confined, by itself, to layers not used in the interface electronics. This kind of specialty chip is sometimes used in obscure military and security devices. These are not chips you will find in mass-produced electronics.

The term, by hand, may be misleading. In fact, custom chip making is so well automated that a foundry can spit out dissimilar batches one after another, given instructions in electronic form. I've seen students design and make small batches of their own chips using commercial services. Here's the rub: all of the testing for a boutique chip must be defined for that chip, and if the designer/customer fails to specify the design or test correctly, a bad batch might emerge.

I've seen so many mistakes in my career, almost nothing surprises me now, although I'm sometimes amazed how long it takes to find them.

We need open source RNGs (1)

Burz (138833) | about a year ago | (#44840555)

Then we can buy them from fabs that we trust, and they will have to more explicitly compete on the issue of trust.

There is also some possibility that buyers could inspect the manufacturing processes.

Anomalies in other computational functions are less of a concern, IMHO, because any environment with a mix of CPUs and chipsets should reveal tainted chips at least occasionally. Random number generation is an exception here.

Re:We need open source RNGs (0)

Anonymous Coward | about a year ago | (#44841201)

If you want a cheaply trustable RNG, you don't want one from a fab, because you don't want a complex IC device. You simply need to amplify the output of a resistor to saturation, then bias it so its probability of a positive output is close to 0.5 and it has a fairly flat spectrum; plug it into your soundcard and feed the output into a cipher for conditioning.

You can make it even simpler, take a microphone, jam it inside your case, and do something like:

arecord | openssl aes-256-cbc -k myrandomkey

But not exactly that, because that prepends 'Salted_' and a salt, so you need to look up how to get it to remove this wrapping.

Random generation is not hard. You can flip coins, get a couple of 16-sided dice, or disembowel chickens over a number board. Even if you fuck it up a bit, using a strong cipher to filter your randomness will absolve your sins.
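Conditioning manual rolls down to a key might look like this. A sketch only, using SHA-256 rather than a full cipher, with placeholder roll values:

```python
import hashlib

# Hypothetical manual entropy: ~100 rolls of a 16-sided die (~4 bits each,
# somewhat less if your throws are biased).  Replace with real rolls.
rolls = [7, 0, 12, 3, 15, 9] * 17

# "Using a strong cipher to filter your randomness": here a hash condenses
# the biased, human-collected input into a uniform-looking 256-bit key.
key = hashlib.sha256(bytes(rolls)).digest()
```

As long as enough total entropy went into `rolls`, the conditioned `key` is uniform for practical purposes even if individual rolls were sloppy, which is the parent's "absolve your sins" point.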

CRC32 To the Rescue!! (0)

Anonymous Coward | about a year ago | (#44841051)

Who had the bright idea to protect this with CRC32? For those who didn't RTFA: the BIST (built-in self-test) verifies the output by checking a CRC32 result for a predefined input. This allows a feasible attack of about 2^31 work to find appropriate constants to set. Considering they've got the AES hardware right there, they should have used AES and compared 256 bits of output. Attacking hardware should not be this easy.
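A scaled-down illustration of how little protection a 32-bit checksum offers: by the birthday bound, a CRC32 collision between two random inputs (an easier problem than matching one fixed checksum, which is the ~2^31 attack above) falls out after only about 2**16 trials.

```python
import os
import zlib

# Draw random 8-byte inputs until two distinct ones share a CRC32.
# Expected work is roughly sqrt(2**32) ~ 2**16 trials.
seen = {}
while True:
    data = os.urandom(8)
    c = zlib.crc32(data)
    if c in seen and seen[c] != data:
        a, b = seen[c], data
        break
    seen[c] = data

print(zlib.crc32(a) == zlib.crc32(b) and a != b)  # True
```

With a 256-bit comparison (AES- or hash-based), neither collisions nor targeted preimages would be remotely feasible.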

Limited scope (2)

gweihir (88907) | about a year ago | (#44841077)

This can only be used for attacks on things that can be compromised in a way such that they no longer need to perform their original function perfectly. A CPRNG is an ideal target, as it does not need to produce both good _and_ bad numbers after the attack; it is sufficient if it produces bad numbers that look good. The AES whitener in the CPRNG this was demonstrated on makes this very easy, and while it looks convenient, it may have been put there exactly to make compromised versions of this CPRNG hard to detect. On the other hand, if you attacked, say, a hash function or a block cipher in this way, it would start producing wrong outputs, potentially for a large number of cases; not only would it fail at its original function, the failure would also be pretty obvious.

Still, this is a significant attack, and it underlines why a single source of entropy should never be fully trusted, and why CPRNGs should always be open software mixing multiple entropy sources.
