The Linux-Proof Processor That Nobody Wants

Bruce Perens writes "Clover Trail, Intel's newly announced 'Linux proof' processor, is already a dead end for technical and business reasons. Clover Trail is said to include power-management that will make the Atom run longer under Windows. It had better, since Atom currently provides about 1/4 of the power efficiency of the ARM processors that run iOS and Android devices. The details of Clover Trail's power management won't be disclosed to Linux developers. Power management isn't magic, though — there is no great secret about shutting down hardware that isn't being used. Other CPU manufacturers, and Intel itself, will provide similar power management to Linux on later chips. Why has Atom lagged so far behind ARM? Simply because ARM requires fewer transistors to do the same job. Atom and most of Intel's line are based on the ia32 architecture. ia32 dates back to the 1970s and is the last bastion of CISC, Complex Instruction Set Computing. ARM and all later architectures are based on RISC, Reduced Instruction Set Computing, which provides very simple instructions that run fast. RISC chips allow the language compilers to perform complex tasks by combining instructions, rather than by selecting a single complex instruction that's 'perfect' for the task. As it happens, compilers are more likely to get optimal performance with a number of RISC instructions than with a few big instructions that are over-generalized or don't do exactly what the compiler requires. RISC instructions are much more likely to run in a single processor cycle than complex ones. So, ARM ends up being several times more efficient than Intel."
This discussion has been archived. No new comments can be posted.


  • by wvmarle ( 1070040 ) on Sunday September 16, 2012 @10:36AM (#41352413)

    Nice advertisement for RISC architecture.

    Sure it has advantages, but obviously it's not all that great. After all, Apple ditched the RISC-type PowerPC for CISC-type Intel chips a while back, and they don't seem to be in any hurry to move back. It seems no one can beat the price/performance of the CISC-based x86 chips...

    • by Anonymous Coward on Sunday September 16, 2012 @10:48AM (#41352491)

      Like I posted elsewhere, Intel hasn't made real CISC processors for years, and I don't think anyone has.
      Modern Intel processors are just RISC with a decoder to the old CISC instruction set.
      RISC beats CISC in the price/performance trade-off, but backwards compatibility keeps the interface the same.

      • Re: (Score:3, Interesting)

        by vovick ( 1397387 )

        The question is, how much can the hardware optimize the decoded RISC microcode? Or does the optimization not matter much at this point?

        • First, a piece of terminology: The Intel term for what you call "decoded RISC microcode" is "uop". The "u" is meant to be a mu, but it's usually pronounced "u". It's short for micro-operation.

          So there are essentially two kinds of optimisation available:

          1. How the uops are scheduled. The CPU has a lot more freedom here than a typical RISC processor because the CPU did the code generation, rather than the compiler.

          2. If the instruction doesn't need a functional unit, don't generate any uops for it. The common case i

      • by Dogtanian ( 588974 ) on Sunday September 16, 2012 @11:35AM (#41352881) Homepage

        Like I posted elsewhere, Intel hasn't made real CISC processors for years, and I don't think anyone has. Modern Intel processors are just RISC with a decoder to the old CISC instruction set.

        Exactly. Intel has been doing this ever since the Pentium Pro and Pentium II came out in the 1990s. Anyone who knows much at all about x86 CPUs is aware of this, and Perens certainly will be. That's why I'm surprised that the article misleadingly states:

        So, we start with the fact that Atom isn't really the right architecture for portable devices (*) with limited power budgets. Intel has tried to address this by building a hidden core within the chip that actually runs RISC instructions, while providing the CISC instruction set that ia32 programs like Microsoft Windows expect.

        The "hidden core" bit is, of course, correct, but the way it's stated here implies that this is (a) something new and (b) something that Intel have done to mitigate performance issues on such devices, when in fact it's the way that all Intel's "x86" processors have been designed for the past 15 years!

        Perhaps I'm misinterpreting or misunderstanding the article, and he's saying that- unlike previous CPUs- the new Atom chips have their "internal" RISC instruction set directly accessible to the outside world. But I don't think that's what was meant.

        (*) This is in the context of having explained why IA32 is a legacy architecture not suited to portable devices and presented Atom as an example of this.

        • by im_thatoneguy ( 819432 ) on Sunday September 16, 2012 @03:32PM (#41355105)

          It also ignores the fact that in flops per watt Intel still dominates ARM.

          It's like comparing a moped to a bus and saying "see look how much more fuel efficient the moped is!"

          True... but then fill a bus with people and suddenly the mpg per person goes through the roof for the bus. You could get 300mpg per person from a bus. Good luck getting that with a moped.

          And just as plug-in hybrids now compete with even mopeds on single-occupancy MPG, you can also see x86 chips (RISC at their core) out-competing ARM on raw watts. The next generation of Intel chips is going to be not only substantially faster but also at parity on watts.

          Simply stripping down technology inevitably will come back to bite you in the ass. I think the domination of ARM in the mobile space is about to evaporate within the next year on every conceivable metric.

          • My bicycle is significantly more efficient getting me to the train station than the bus is.

            I walk because it costs 150 yen or so to park the bike. That's still more efficient. I don't live close to a bus stop. Lots of people near me don't live close to a bus stop.

            More than half the people going into the station at any particular time of the morning have not come in on a bus. And most buses at this station are about half-full, not operating at maximum efficiency.

            The plain and simple fact is that we are not a

        • by reiisi ( 1211052 )

          I think you are confusing Intel with AMD in the '90s.

          Sure, Intel (and Motorola) were using RISC tech in their CISC designs from back in the mid-'80s. Bits and pieces of the tech. Not full (almost-)RISC cores running CISC instructions by emulation circuitry (contrary to the propaganda), but cherry-picked RISC techniques. (8 GP registers do not a RISC make.)

          AMD's 64 bit CPU was the first real x86 CISC-on-RISC. (And Intel had to go cap-in-hand to AMD for that, in the end.)

    • Re: (Score:3, Informative)

      by stripes ( 3681 )

      Apple ditched the RISC-type PowerPC for CISC-type Intel chips a while back, and they don't seem to be in any hurry to move back

      FYI, all of Apple's iOS devices have ARM CPUs, which are RISC CPUs. So I'm not so sure your "don't seem to be in any hurry to move back" bit is all that accurate. In fact looking at Apple's major successful product lines we have:

      1. Apple I/Apple ][ on a 6502 (largely classed as CISC)
      2. Mac on 680x0 (CISC) then PPC (RISC), then x86 (CISC) and x86_64 (also CISC)
      3. iPod on ARM (RISC), I'
      • iPhone and iPad are not known as powerful devices; computing power lags far behind a typical desktop at double the price. Form factor (and the touch screens) add a lot of cost.

        So far RISC is only found in low-power applications (when it comes to consumer devices at least).

        • by stripes ( 3681 )

          So far RISC is only found in low-power applications (when it comes to consumer devices at least).

          Plus printers (or at least the last time I checked), and game consoles (the original Xbox was the only console in the last 2-3 generations not to use a RISC CPU). Many of IBM's mainframes are RISCs these days. In fact I think the desktop market is the only place you can randomly pick a product and have a near certainty that it is a CISC CPU. Servers are a mixed bag. Network infrastructure is a mixed bag. E

          • by LostMyBeaver ( 1226054 ) on Sunday September 16, 2012 @01:14PM (#41353693)
            Consoles choose RISC vs. CISC for a much simpler reason. The performance isn't really that important. It's typically an issue of endianness.

            It has become quite simple in modern times to make a CPU-emulating JIT (meaning treating the binary instruction set of one CPU as source code and recompiling it for the host platform). What is extremely expensive execution-wise is data-model conversion on loads and stores. Unless Intel starts making load and store instructions that can function in big-endian mode (we can only dream), data loading in an emulator/JIT will always be a huge execution burden.

            The result is that while an x86 can run rings around any of the console processors, a perfect one-to-one JIT can't be developed to make big-endian code run on a little-endian CPU.
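
            To make that load/store cost concrete, here is a minimal sketch of my own (not the poster's; load_be32 and guest_mem are hypothetical names, the byte-swap builtin is a real GCC/Clang one) showing what JIT-generated host code effectively has to do for every 32-bit load of big-endian guest data, since x86 has no big-endian load instruction:

            ```c
            #include <stdint.h>
            #include <stdio.h>
            #include <string.h>

            /* One guest load = one ordinary fetch + one byte swap on the host. */
            static uint32_t load_be32(const uint8_t *guest_mem, uint32_t addr)
            {
                uint32_t raw;
                memcpy(&raw, guest_mem + addr, sizeof raw); /* little-endian host fetch */
                return __builtin_bswap32(raw);              /* the extra work per load  */
            }

            int main(void)
            {
                uint8_t guest_mem[4] = { 0x12, 0x34, 0x56, 0x78 }; /* big-endian 0x12345678 */
                printf("0x%08x\n", (unsigned)load_be32(guest_mem, 0)); /* prints 0x12345678 */
                return 0;
            }
            ```

            For a little-endian guest the swap disappears entirely, which is the one-to-one mapping being described above.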

            As an example of this, if you look at emulators for systems that use a little-endian ARM, performance of the JIT is perfect. In fact, the JIT can sometimes even make performance better. But if you look at a modern 3.4GHz quad-core Core i7, it still struggles to emulate the Wii, which is an insanely low-performance machine.

            So, don't read into RISC vs. CISC here. It's really an issue of blocking emulators in most cases.
    • The summary is outright incorrect. First, RISC instructions complete in one cycle. If you have multi-cycle instructions, you're not RISC. Second, x86 processors are internally RISCy and x86 is decomposed into multiple micro-ops. Third, RISC may mean fewer gates for the same tasks, but it also means that some operations get broken up into multiple instructions.

      ARM doesn't scale as far as x86, so in the high end you need more cores and some tasks are less parallelizable than others. ARM should be recognized as the current

      • The single-cycle rule is bogus. Plenty of ARM instructions (branches, multiply, load/store multiple) take more than 1 cycle, and plenty of x86 instructions only take 1.
      • by stripes ( 3681 ) on Sunday September 16, 2012 @11:34AM (#41352873) Homepage Journal

        First, RISC instructions complete in one cycle. If you have multi-cycle instructions, you're not RISC

        LOAD and STORE aren't single-cycle instructions on any RISC I know of. Lots of RISC designs also have multi-cycle floating-point instructions. A lot of second- or third-generation RISCs added a MULTIPLY instruction, and those were multi-cycle.

        There are not a lot of hard and fast rules about what makes things RISCy, mostly just "they tend to do this" and "tend not to do that". Like "tend to have very simple addressing modes" (most have register+constant displacement -- but the AMD29k had an adder before you could get the register data out, so R[n+C1]+C2, which is more complex than the norm). Also "no more than two source registers and one destination register per instruction" (I think the PPC breaks this) -- oh, and "no condition register", but the PPC breaks that.

        Second, x86 processors are internally RISCy and x86 is decomposed into multiple micro-ops.

        Yeah, Intel invented microcode again, or a new marketing term for it. It doesn't make the x86 any more of a RISC than the VAX was, though. (For anyone too young to remember, the VAX was the poster child for big, fast CISC before the x86 became the big deal it is today.)
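
        For anyone wondering what "decomposed into micro-ops" cashes out to, a hedged illustration of mine (not from the thread): x86 lets the compiler emit a single instruction that both reads memory and does arithmetic, e.g. add eax, DWORD PTR [rdi], and the front end splits that one instruction back into a load uop plus an ALU uop before execution. A C fragment that can compile to such an instruction:

        ```c
        /* Illustration only: depending on compiler and flags this may come out as a
         * separate load followed by a register add, or as the single memory-operand
         * add mentioned above -- but in the latter case the hardware still executes
         * it as a load uop plus an add uop. */
        int add_from_memory(const int *p, int x)
        {
            return x + *p;
        }
        ```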

    • by UnknowingFool ( 672806 ) on Sunday September 16, 2012 @11:14AM (#41352695)

      I would argue the problem for Apple wasn't about performance but about updates, mobile, and logistics. PowerPC originally held promise as a collaboration between Motorola, IBM, and Apple. IBM got much out of it, as their current line of servers and workstations runs on it. Apple's needs were different from IBM's. Apple needed new processors every year or so to keep up with Moore's law. Apple needed more power-efficient mobile processors. Also, Apple needed a stable supply of the processors.

      Despite ordering millions of chips a year, Apple was never going to be a big customer for Motorola or IBM. Apple's chips would be highly customized parts that none of the other customers needed or wanted, and Apple needed updates every year. So neither Motorola nor IBM could dedicate huge resources to a small order of chips when they could make millions more for other customers. PowerPC might have eventually come up with a mobile G5 that could rival Intel, but it would have taken many years and lots of R&D. IBM and Motorola didn't want to invest that kind of effort (again, for one customer). So every year Apple would order the chips they thought they needed. If they were short, they would order more. Now Motorola and IBM, like most manufacturers (including Apple), do not like carrying excess inventory. So they were never able to keep up with Apple's orders, as their other customers had steadier and larger chip orders.

      So what was Apple to do? Intel represented the best option. Intel's mobile x86 chips were more power efficient than PowerPC versions. Intel would keep up the yearly updates of their chips. If Apple increased their orders from Intel, Intel could handle it because if Apple wasn't ordering a custom part, they were ordering more of a stock part. There are some cases where Apple has Intel design custom chips for them, mostly on the lower power side; however, Intel still can sell these to their other customers.

      As a side note, for a contrast with the IBM-Apple relationship, look at the relationship between MS and IBM for the Xbox 360 Xenon chip [wikipedia.org]. This was a custom design by IBM for MS, but the basic chip design hasn't changed in seven years. As such, chip manufacturing has been able to move the chip to smaller lithographies (90nm --> 45nm in 2008), both increasing yield and lowering cost.

    • by gweihir ( 88907 )

      Unless you have energy constraints, that is. Then RISC architecture rules. Given that most computers today are smartphones (and most run Linux, some run iOS), and many other CPUs are in data centers where energy consumption also matters very much, I think discounting RISC this way does not reflect reality. Sure, enough people do run full-sized computers with wired network and power-grid access at home, and these will remain enough to keep that model alive, but RISC won the battle for supremacy a while ago

  • oversimplified (Score:5, Insightful)

    by kenorland ( 2691677 ) on Sunday September 16, 2012 @10:39AM (#41352425)

    ia32 dates back to the 1970s and is the last bastion of CISC,

    The x86 instruction set is pretty awful and Atom is a pretty lousy processor. But that's probably not due to RISC vs. CISC. IA32 today is little more than an encoding for a sequence of RISC instructions, and the decoder takes up very little silicon. If there really were large intrinsic performance differences, companies like Apple wouldn't have switched to x86 and RISC would have won in the desktop and workstation markets, both of which are performance sensitive.

    I'd like to see a well-founded analysis of the differences between Atom and ARM, but superficial statements like "CISC is bad" don't cut it.

    • Re: (Score:3, Interesting)

      by Anonymous Coward

      What really kills x86's performance/power ratio is that it has to maintain compatibility with ancient implementations. When x86 was designed, things like caches and page tables didn't exist; they got tacked on later. Today's x86 CPUs are forced to use optimizations such as caches (because it's the only way to get any performance) while still maintaining the illusion that they don't, as far as software is concerned. For example, x86 has to implement memory snooping on page tables to automatically invalidate

      • Re:oversimplified (Score:5, Interesting)

        by kenorland ( 2691677 ) on Sunday September 16, 2012 @02:05PM (#41354165)

        As it turns out, it's much easier for software to do that job

        As it turns out, that's false. Optimizations are highly dependent on the specific hardware and data, and it's hard for compilers or programmers to know what to do. Modern processors are as fast as they are because they split optimization in a good way between compilers and the CPU. Traditional CISC processors got that wrong, as well as hardcore traditional RISC processors; the last gasp of the latter was the IA64, which proved pretty conclusively that neither programmers nor compilers can do the job by themselves.

      • Re: (Score:3, Informative)

        by Anonymous Coward

        For example, x86 has to implement memory snooping on page tables to automatically invalidate TLBs when the page table entry is modified by software, because there is no architectural requirement that software invalidate TLBs (and in fact no instructions to individually invalidate TLB entries, IIRC). Similarly, x86 requires data and instruction cache coherency, so there has to be a bunch of logic snooping on one cache and invalidating the other.

        Err... Not quite:

        • x86 TLBs aren't coherent with main memory; you need to do an explicit invalidate every time you change a PTE.
        • The instruction to invalidate individual TLB entries is called invlpg, and was introduced with the 486. Admittedly, it's quite slow, so it doesn't get used much, but it is there.
        • x86 has only very limited I-D cache coherence. You need to issue a serialising instruction whenever you modify anything which might have been cached in the I-cache (see the sketch just after this list).
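
        To illustrate that last bullet with a concrete sketch of mine (not the AC's; the GCC/Clang builtin is real, the function around it is hypothetical): code that generates machine code at runtime writes the bytes through the D-cache and then has to tell the CPU about it. On x86 the builtin emits no cache-maintenance instructions, precisely because of the limited coherence described above (the architecture only asks for a serialising step), whereas on ARM it expands to real cache-maintenance and barrier operations:

        ```c
        #include <stddef.h>
        #include <stdint.h>
        #include <string.h>

        /* Copy freshly generated machine code into an executable buffer and make
         * it safe to jump to afterwards. */
        void publish_generated_code(uint8_t *buf, const uint8_t *code, size_t len)
        {
            memcpy(buf, code, len);                     /* the writes go through the D-cache   */
            __builtin___clear_cache((char *)buf,        /* sync the I-cache with those writes: */
                                    (char *)buf + len); /* near no-op on x86, real work on ARM */
        }
        ```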

        Basically, there's nothing in the x86 arch

    • Re:oversimplified (Score:4, Insightful)

      by stripes ( 3681 ) on Sunday September 16, 2012 @11:13AM (#41352687) Homepage Journal

      I'd say the x86 being the dominant CPU on the desktop has given Intel the R&D budget to overcome the disadvantages of being a 1970s instruction set. Anything they lose by not being able to wipe the slate clean (complex addressing modes in the critical data path, and complex instruction decoders, for example), they get to offset by pouring tons of R&D into either finding a way to "do the inefficient, efficiently", or finding another area they can make fast enough to offset the slowness they can't fix.

      The x86 is inelegant, and nothing will ever fix that, but if you want to bang some numbers around, well, the inelegance isn't slowing it down this decade.

      P.S.:

      IA32 today is little more than an encoding for a sequence of RISC instructions

      That was true of many CPUs over the years, even when RISC was new. In fact, even before RISC existed as a concept. One of the "RISC sucks, it'll never take off" complaints was "if I wanted to write microcode I would have gotten onto the VAX design team". While the instruction set matters, it isn't the only thing. RISCs have very, very simple addressing modes (sometimes no addressing modes), which means they can get some of the advantages of out-of-order execution without any hardware OoO support. When they do get hardware OoO support, nothing has to fuse results back together, and so on. There are tons of things like that, but pretty much all of them can be combated with enough cleverness and die area (but since die area tends to contribute to power usage, it'll be interesting to see if power efficiency is forever out of x86's reach, or if that too will eventually fall -- Intel seems to be doing a nice job chipping away at it).

    • by Bert64 ( 520050 )

      Performance hasn't got a lot to do with it... Backwards compatibility is what matters, closely followed by price and availability.
      While they were being actively developed and promoted, RISC architectures were beating x86 quite heavily on performance. However, Intel had economies of scale on their side: they were able to sell millions of x86 chips and therefore outspend the RISC designers quite heavily.

      Intel tried to move on from x86 too, with IA64... They failed, largely because of a lack of backwards compat

      • While they were being actively developed and promoted, RISC architectures were beating x86 quite heavily on performance

        At times, a high end RISC chip would beat a similarly priced high end x86 chip, but performance advantages were modest and didn't last long.

        Intel tried to move on from x86 too, with IA64... They failed, largely because of a lack of backwards compatibility...

        Backwards compatibility at the instruction set level matters little to people who need high performance. If IA64 had worked well, peop

    • Re:oversimplified (Score:5, Insightful)

      by lkcl ( 517947 ) <lkcl@lkcl.net> on Sunday September 16, 2012 @11:34AM (#41352871) Homepage

      I'd like to see a well-founded analysis of the differences between Atom and ARM, but superficial statements like "CISC is bad" don't cut it.

      I've covered this a couple of times on Slashdot: simply put, it's down to the differences in execution speed vs. the storage size of those instructions. Slightly interfering with that is of course the size of the L1 and L2 caches, but that's another story.

      In essence: the x86 instruction set is *extremely* efficiently memory-packed. It was designed when memory was at a premium. Each new revision added extra "escape codes" which kept the compactness but increased the complexity. By contrast, RISC instructions consume quite a lot more memory as they waste quite a few bits. In some cases *double* the amount of memory is required to store the instructions for a given program [hence where the L1 and L2 cache problem starts to come into play, but leaving that aside for now...]

      So what that means is that *regardless* of the fact that CISC instructions are translated into RISC ones, the main part of the CPU has to run at a *much* faster clock rate than an equivalent RISC processor, just to keep up with the decode rate. We've seen this clearly in an "empirically observable" way in the demo by ARM last year, of a 500MHz dual-core ARM Cortex-A9 clearly keeping up with a 1.6GHz Intel Atom in side-by-side running of a web browser, which you can find on YouTube.

      Now, as we well know, power consumption is a square law of the clock rate. So in a rough comparison, in the same geometry (e.g. 45nm), that 1.6GHz CPU is going to have roughly TEN times the power consumption of that dual-core ARM Cortex-A9, e.g. that 500MHz dual-core Cortex-A9 is going to be about 0.5 watts (roughly true) and the 1.6GHz Intel Atom is going to be about 5 watts (roughly true).
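
      (To spell out the arithmetic behind that "TEN times" figure: the sketch below just applies the poster's own square-law assumption to the clock ratio. Real power draw also depends on voltage, microarchitecture and core count, so treat it as the rough model it is, not a measurement.)

      ```c
      #include <stdio.h>

      int main(void)
      {
          /* The parent's rough model: same process geometry, power ~ f^2. */
          double f_atom = 1.6;  /* GHz, the Atom in the demo                */
          double f_a9   = 0.5;  /* GHz, the dual-core Cortex-A9 in the demo */
          double ratio  = (f_atom / f_a9) * (f_atom / f_a9);
          printf("square-law power ratio: %.1fx\n", ratio);  /* prints ~10.2x */
          return 0;
      }
      ```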

      What that means is that x86 is basically onto a losing game... period. The only way to "win" is for Intel and AMD to have access to geometries that are at least 2x better than anything else available in the world. Each new geometry that comes out is not going to *stay* 2x better for very long. When everyone has access to 45nm, Intel and AMD have to have access to 22nm or better... *at the same time*. Not "in 6-12 months' time", but *at the same time*. When everyone else has access to 28nm, Intel and AMD have to have access to 14nm or better.

      Intel know this, and AMD don't. It's why Intel will sell their fab R&D plant when hell freezes over. AMD have a slight advantage in that they've added in parallel execution which *just* keeps them in the game, i.e. their CPUs have always run at a clock rate that's *lower* than an Intel CPU's, forcing them to publish "equivalent clock rate" numbers in order to not appear to be behind Intel. This trick - of doing more at a lower speed - will keep them in the game for a while.

      But if Intel and AMD don't come out with a RISC-based (or VLIW or other parallel-instruction) processor soon, they'll pay the price. Intel bought up the company that did the x86-to-DEC-Alpha JIT assembly translation stuff (back in the 1990s), so I know that they have the technology to keep things "x86-like".

  • The details of Clover Trail's power management won't be disclosed to Linux developers.

    So sign up as a Windows developer, get the info, and use it to improve Linux.

  • by UnknowingFool ( 672806 ) on Sunday September 16, 2012 @10:47AM (#41352485)

    Some here were immediately crying anti-trust and not understanding why Intel won't support Linux for Clover Trail. It's not an easy answer, but power efficiency has been Intel's weakness against ARM. If consumers had a choice between ARM-based Android and Intel-based Android, the Intel one might be slightly more powerful in computing but comes at the cost of battery life. For how most consumers use tablets, the increase in computing power isn't worth the decrease in battery life. For geeks it's worth it, but general consumers don't see the value. Now if the tablet used a desktop OS like Windows or Linux, then the advantages are more apparent; however, the numbers favor Windows, as there are more likely to be desktop Windows users with an Intel tablet than desktop Linux users with one. As a short-term strategy, it makes sense.

    Long term, I would say Intel isn't paying attention. Considering how MS has treated past partners, Intel is being short-sighted if they want to bet their mobile computing hopes on MS. Also, have they seen Windows 8? Intel-based tablets might appeal to businesses, but Win 8 is a consumer OS, so businesses aren't going to buy it; and consumers aren't going to buy it either. Intel may have bet on the wrong horse.

  • by guidryp ( 702488 ) on Sunday September 16, 2012 @10:48AM (#41352495)

    "ARM ends up being several times more efficient than Intel"

    Wow. Someone suffered a flashback to the ancient CISC vs RISC wars.

    This is really totally out to lunch. Seek out some analysis from actual CPU designers on the topic. What I read generally pegs the x86 CISC overhead at maybe 10%, not several times.

    While I do feel it is annoying that Intel is pushing an Anti-Linux platform, it doesn't make sense to trot out ancient CISC/RISC myths to attack it.

    Intel Chips have lagged because they were targeting much different performance envelopes. But now the performance envelopes are converging and so are the power envelopes.

    Medfield has already been demonstrated at a competitive power envelope in smartphones.

    http://www.anandtech.com/show/5770/lava-xolo-x900-review-the-first-intel-medfield-phone/6 [anandtech.com]

    Again we see reasonable numbers for the X900, but nothing stellar. The good news is that the whole "x86 can't be power efficient" argument appears to be completely debunked with the release of a single device.

    • If it were possible to give a +6, then your post would deserve one...

      One other thing about the pro-ARM propaganda on this site practically every day: how come the exact same people throwing a hissy fit over Clover Trail never make a peep when ARM bends over backwards to cooperate with companies like Nokia & Apple, whose ARM chips don't work with Linux in the slightest? By comparison, making a few tweaks to turn on Clover Trail's power-saving features will be trivial compared to trying to get Linux running

      • by Truekaiser ( 724672 ) on Sunday September 16, 2012 @11:51AM (#41353021)

        ARM does not make their own chips. They design the instruction sets and the silicon photomasks (look up how chips are made), but other companies make the actual physical silicon product. Those companies can pick and choose which parts of the CPU they want to use and which instruction sets they want in it.

        To use food as an analogy, Intel is every store or restaurant where you can buy food pre-made and ready to eat. ARM would be like someone selling you a recipe: it's up to you to make it, and to decide what you put into it.

        So it's not ARM's fault that Linux isn't supported on the Nokia and Apple variants of the ARMv7 instruction set; that's down to those respective companies. If you had enough money and access to rent or own a CPU fab, you too could make your own version of an ARM chip and have it supported only on Haiku OS, for example.

    • Thanks for posting that. The article felt like nothing but a hit piece against all things Intel and AMD, just because they're not officially supporting one processor on Linux at the time of release. Intel is very good at releasing Linux drivers for their GPUs etc. compared to others. I think they figure that not many Linux folks will be falling over themselves buying Windows 8 touch tablets and running Ubuntu on them. The Slashdot consensus seems to be that Windows 8 tablets suck and will be a massive failure,

      • The 1 percent Linux hobbyist market has miraculously changed into the 50 percent Android market in the last two years. Chip makers should care about that.
    • Intel in the '90s was performance at any power cost. Then in the last 10 years, it was performance within a limited power envelope, aimed at laptops and desktops. The power they were aiming at was much higher than smartphone levels, so although they got more "power efficient", you do very different things when aiming at 1W than when aiming at 10W or 100W. If you can waste 5W and get 20% more performance, that's a great thing to do. But not for phones.

      I think what you're seeing is Atom was a kludge. If Intel

  • x86 to blame? (Score:5, Insightful)

    by leromarinvit ( 1462031 ) on Sunday September 16, 2012 @10:49AM (#41352499)

    Is it really true that x86 is necessarily (substantially) less efficient than ARM? x86 instruction decoding has been a tiny part of the chip area for many years now. While it's probably relatively more on smaller processors like Atom, it's still small. The rest of the architecture is already RISC. Atom might still be a bad architecture, but I don't think it's fair to say x86 always causes that.

    Also, there is exactly one x86 Android phone that I know of, and while its power efficiency isn't stellar, the difference is nowhere near 4x. From the benchmarks I've seen, it seems to be right in the middle of the pack. I'd really like to see the source for that claim.

    • I don't understand why people put so much weight on instruction-level compatibility. As if compiler technology does not exist. Heck, even today compilers can translate efficiently from one instruction-set to the other (see e.g. virtual machines, emulators, etc).

      Granted, there will always be some parts of the code (the "innermost loops") that need to be handcrafted to be as efficient as possible, but I don't believe this is so important that you would base your whole roadmap on it as a semiconductor design house.

  • just send me the hardware.
  • by leathered ( 780018 ) on Sunday September 16, 2012 @11:03AM (#41352601)

    ...and the reason is not efficiency or performance. Intel enjoys huge (50%+) margins on x86 CPUs that simply will not be tolerated by the tablet or mobile device vendors. Contrast this with the pennies that ARM and their fab partners make on each unit sold. Even Intel's excellent process tech can't save them cost-wise when you can get a complete ARM SoC with integrated GPU for $7. [rhombus-tech.net]

  • by fermion ( 181285 ) on Sunday September 16, 2012 @11:25AM (#41352795) Homepage Journal
    Most '70s-era microprocessors had around 50 opcodes and a few registers. It was possible to memorize them all and decompile from hex in your head. I never had the mental acuity to do so, but many of my friends in high school could. By the 1980s, there was a lot of big iron that used RISC, but as I recall these had more opcodes than, say, a 6502, and I know that RISC does not just mean a reduced instruction count; it is a simplified instruction set. Right now I think we have a lot of hybrid chips on the market. The war between CISC and RISC has come to a place where both are used as needed. In the x86 space, legacy is an issue. MS has not done what Apple does, which is to support a machine for 3-5 years and then develop something that meets current demands. The common person would not even see a RISC processor until Apple switched to the PowerPC, which brought the conflict between CISC and RISC to the public. It is interesting to have this conversation now because this is exactly what was said back then: RISC is more efficient, so the chip can run at about half the clock rate and still be as fast as the CISC chip.

    So this OS-specific chip is nothing new, and *nix exclusion is not new. Many microcomputers could not run *nix because they did not have a PMMU. The AT&T computer ran a 68K processor with a custom PMMU. Over the past 10 years there have been MS Windows-only printers and cameras which offloaded work to the computer to make the peripheral cheaper.

    Which is to say that there are clearly benefits to both RISC and CISC. MS built an empire on CISC, and clearly intends to continue to do so, only moving to RISC on a limited basis for high-end, highly efficient devices. For the tablet for the rest of us, if they can ship MS Windows 8 on a $400 device that runs just like a laptop, they will do so. If efficiency were the only issue, then we would be running Apple-type hardware, which, I guess, on the tablet we are. But while 50 million tablets are sold, MS wants the other 100 million laptop users who do not have a tablet yet, because the tablets on offer do not run MS Windows.

  • In other words, Intel says they failed at hiding their power consumption details from the API (instruction set).

  • by Bert64 ( 520050 ) <bert@[ ]shdot.fi ... m ['sla' in gap]> on Sunday September 16, 2012 @11:32AM (#41352853) Homepage

    The only advantages x86 has over ARM are performance and the ability to run closed source x86-only binaries...

    Performance is generally less important than power consumption in an embedded device, and this CPU is clearly designed for lower power use so it may not be much faster than comparable ARM designs...

    And when it comes to x86-only binaries, there is very little Linux software which is x86-only and even less for Android... Conversely there are a lot of closed-source Android applications which are ARM-only... So at best you have a Linux device which offers no advantages over ARM; at worst you have an Android device which cannot run large numbers of Android apps while costing more, being slower and having inferior battery life.

    Windows, on the other hand, does have huge numbers of apps which are tied to x86, which for some users may outweigh any other downsides. Then again, most Windows apps are not designed for a touchscreen interface and might not be very usable on tablets, and any new apps designed for such devices might well be ported to ARM too.

    • by Dwedit ( 232252 )

      You want a good x86-only Linux program? Wine. There's a good one for you.

    • by smash ( 1351 )

      The only advantages x86 has over ARM are performance and the ability to run closed source x86-only binaries... Performance is generally less important than power consumption in an embedded device,

      Two things: with the advent of smartphones that play 3D games, CPU performance is becoming more important. Also, that ability to run closed-source x86 binaries is huge.

      In terms of performance per watt, Intel is doing pretty well. Phones and tablets are becoming less about absolute minimum consumption, and mor

  • Reality check (Score:4, Interesting)

    by shutdown -p now ( 807394 ) on Sunday September 16, 2012 @11:34AM (#41352869) Journal

    If nobody wants it and it's a dead end for technical and business reasons, then how come there is a slew of x86 Win8 devices announced by different manufacturers - including guys such as Samsung, who don't have any problems earning boatloads of money on Android today?

    Heck, it's even funnier than that - what about Android devices already running Medfield?

  • by smash ( 1351 )

    No wireless. Less space than a nomad. Lame.

    I predict clover trail will be a roaring success.

  • > It had better, since Atom currently provides about 1/4 of the power efficiency of the
    > ARM processors that run iOS and Android devices.

    Don't bet on it. The ARM design in itself is more efficient for sure, but Intel are frankly well ahead of anyone else in actual manufacture.

    If they decide to build these with their FinFETs and the latest node they have, then the gap between Intel Atoms and ARMs made at Samsung, TSMC or anyone else won't be so noticeable, unless, that is, the Atoms actually pull ah

  • "The details of Clover Trail's power management won't be disclosed to Linux developers." ...Perhaps this is because Microsoft is helping to fund development of the Intel solution behind the scenes? Perhaps they have worked out an agreement of some sort to prevent Linux from finding its way onto the chip.

    I would like to know why any information would be withheld from Linux developers--the only reason I could imagine for doing so would be to help Microsoft stage a lead on use of the chip. I can think of no go

  • by PCM2 ( 4486 ) on Sunday September 16, 2012 @02:15PM (#41354265) Homepage

    This is from an Intel rep:

    There is no fundamental barrier to supporting Linux on Clover Trail since it utilizes Intel architecture cores, we are simply focusing our current efforts for this Clover Trail product on Windows 8. Our Medfield products support Android-based smartphones and tablets on the market today, and we may evaluate supporting Linux-based OSes on other tablet products in the future.

    Just quoting, believe what you want.

  • by clevershark ( 130296 ) on Sunday September 16, 2012 @04:53PM (#41355907) Homepage

    ...is that this will fail miserably and cost enough that other manufacturers will think twice before accepting bribes from Microsoft for making something that actively shuts out non-Windows OS's.

  • by DaneM ( 810927 ) on Sunday September 16, 2012 @05:36PM (#41356239)

    I'm just waiting for the day when I can get an ARM-based mid-high-end PC and expect it to run all the applications and games I currently expect from an x86_64 CPU. It's becoming apparent (to me, at least) that ARM is a much better kind of CPU than x86 derivatives, so naturally, I want one--so long as it doesn't put me in the same boat as Mac users were in 10 years ago.
