
September 05, 2016

Intel introduces Kaby Lake, the 7th generation of Core processors

In August, Intel officially introduced the first few of its 7th generation Core processors, codenamed "Kaby Lake." The launch comes at a time when the news about PCs generally isn't good, when Microsoft is having a very hard time convincing users to switch to Windows 10, and when it's becoming increasingly difficult for vertical market hardware manufacturers to keep up with Intel's rapid-fire release of new generations of high-end processors.

7th generation Kaby Lake also arrives at a time when 4th generation "Haswell" processors are still considered quite up-to-date in the mobile and industrial PC arena, when 5th generation "Broadwell" makes customers wonder how it's better than Haswell, and when 6th generation "Skylake" leaves them befuddled because, well, what happened to Broadwell? And the rather expensive (US$281 to US$393) new 7th generation chips also come at a time when customers balk at paying more than a hundred bucks for a tablet or more than two or three hundred for a basic laptop.

So what is Intel thinking here? That they simply must follow "Moore's Law," which predicts that the number of transistors that fit on a given piece of chip real estate doubles roughly every two years (often misquoted as every 18 months)? Or that, like Disney, catering to a small clientele to whom price is not an issue is the profitable way to go? It's hard to say, especially since the generation game really hasn't been about meaningful increases in performance for a good while now.

That's certainly not to say that the new chips aren't better. They are. Intel loves to point out how many times faster new generations are per watt than older ones. And that's really getting closer to why all of this is happening. It's mostly about mobile. See, back in the day everyone knew that you just got an hour and a half max from a notebook before the battery ran out, and that was grudgingly accepted. But then came Steve Jobs with the iPad that ran 10 hours on a charge. And somehow that's what people came to expect.

On desktops, performance per watt hardly matters. You plug the PC in and it runs. Compared to heating and air conditioning, toasters, ovens, TVs and a houseful of light bulbs, whether a chip in a desktop runs at 17 watts or 35 watts or 85 watts hardly matters. But in mobile devices it does. Because Steve Jobs also decreed that they needed to be as slim as possible, so big, heavy batteries were out. It all became a matter of getting decent performance and long battery life. And that's one of the prime motivations behind all those new generations of Core processors.

Now combine that battery saver imperative with a quest to abide by Moore's "law" (which really was just a prediction) and — bingo — generation after generation of processors, each a bit more efficient and a bit quicker than the last.

How was it done? By coming up with a combination of all sorts of clever new power-saving techniques and by continuously shrinking the size of the transistors that are the basic building blocks of a microprocessor. To provide an idea of just how small things are getting inside a microprocessor, consider this:

A human hair is on average about 100 micrometers thick, a tenth of a millimeter or about 4/1000th of an inch. The 8080 processor that started the PC revolution in the mid-1970s with early microcomputers like the MITS Altair was based on 6 micrometer lithography, or "process technology." Process technology is generally defined as "half the distance between identical features in an array." So the smallest distance between two transistors in an 8080 was 12 micrometers, or about an eighth of the thickness of a human hair.

Over the decades since then, process technology has been miniaturized again and again and again. Whereas that old 8080 chip (which cost three or four bucks at the time) was built on a 6 micrometer process, which is 6,000 nanometers, the 7th generation of Intel Core processors uses 14 nanometer process technology. Down from 6,000 to 14. So whereas the old 8080 had about 6,000 transistors total, with 14 nanometer process technology Intel can now fit over a billion transistors onto the same amount of chip real estate. And since the die of your average 7th generation Core processor is larger than that of the little old 8080, it's probably more like five billion transistors or more. The head spins just thinking about it.
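The scaling arithmetic above is easy to check on the back of an envelope. Transistor density scales roughly with the inverse square of the feature size (a simplification that ignores cell design, interconnect, and how vendors define a "node," but good enough for a sanity check):

```python
# Back-of-envelope check of the process-shrink arithmetic.
# Density scales roughly with the inverse square of feature size.

old_node_nm = 6000       # Intel 8080, 6 micrometer process
new_node_nm = 14         # 7th generation Core, 14 nanometer process
old_transistors = 6_000  # approximate 8080 transistor count

linear_shrink = old_node_nm / new_node_nm  # shrink per dimension
density_gain = linear_shrink ** 2          # shrink applies in both x and y
same_area_transistors = old_transistors * density_gain

print(f"Linear shrink:   {linear_shrink:,.0f}x")
print(f"Density gain:    {density_gain:,.0f}x")
print(f"Transistors in the same die area: {same_area_transistors:,.0f}")
```

The density gain works out to roughly 180,000x, which turns the 8080's ~6,000 transistors into a bit over a billion in the same die area — matching the figure in the text.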

The upshot of it all is that the hugely larger number of logic gates on a chip offers vastly greater computing performance, which you'd think would require vastly more power. But thanks to the hugely smaller size of all those transistors, that's actually not the case. Between the tiny size and all those logic gates available to run ultra-sophisticated power-saving operations, the chips are both more powerful and use less energy.

Now that said, there appears to be a law of diminishing returns. It's a bit like video games where early on each new generation had much better graphics, but now things are leveling off. The visual difference between mediocre and acceptable is huge, the difference between very good and terrific much smaller, and the difference between super-terrific and insane smaller yet. Same with processor technologies.

As a result, the performance and efficiency increases we've seen in the benchmark testing we do here in the RuggedPCReview lab have been getting smaller and smaller. By and large, 5th generation Broadwell offered little more than 4th generation Haswell. And 6th generation Skylake didn't offer all that much over Broadwell. The last really meaningful step we've seen was when 4th generation Haswell essentially allowed switching mobile systems from standard voltage to ultra-low voltage versions of chips for much better battery life (or a much smaller battery) at roughly the same performance. Yes, each new generation has tweaks and new or improved features here and there but, honestly, unless you really, really need those features, larger real-world gains are to be had via faster storage or a leaner OS.

So there. As of late Summer 2016, there are six 7th generation Kaby Lake Core processors, all mobile chips. Three are part of the hyper-efficient "Y" line with a thermal design power (TDP) of just 4.5 watts, and three belong to the merely super-efficient "U" line with TDPs of 15 watts. The primary difference between the two lines is that the "Y" chips run at a very low default clock speed, but can perform at a much higher "turbo" clock speed as long as things don't get too hot, whereas the "U" chips have a higher default clock speed with less additional "turbo" headroom. Think of it like the difference between a car with a small, very efficient motor that can also reach very high performance with a big turbo, versus a vehicle with a larger, more powerful motor with just a bit of extra turbo kick.
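That Y-versus-U trade-off can be sketched as a toy model. The clock speeds below are made-up round numbers, not actual Kaby Lake specifications; the point is the shape of the behavior — a Y chip flies in short bursts but falls back hard under sustained load, while a U chip holds a higher clock throughout:

```python
# Toy illustration of the "Y" vs "U" trade-off. Clock and wattage
# figures are hypothetical round numbers, not real Kaby Lake specs.

def sustained_clock(base_ghz, turbo_ghz, tdp_watts, load_watts):
    """Return the clock a chip can hold: full turbo while the workload
    fits inside the thermal budget, base clock once it doesn't."""
    return turbo_ghz if load_watts <= tdp_watts else base_ghz

# Hypothetical "Y" chip: low base, high turbo, tiny 4.5 W budget.
# Hypothetical "U" chip: higher base, modest turbo headroom, 15 W budget.
for name, base, turbo, tdp in [("Y", 1.0, 3.0, 4.5), ("U", 2.5, 3.1, 15.0)]:
    light = sustained_clock(base, turbo, tdp, load_watts=4.0)   # short burst
    heavy = sustained_clock(base, turbo, tdp, load_watts=12.0)  # sustained load
    print(f"{name}-series: {light} GHz in bursts, {heavy} GHz under heavy load")
```

In this sketch both chips feel similarly fast on bursty workloads (web browsing, office work), but under a sustained 12-watt load the Y chip drops to its 1.0 GHz base while the U chip keeps running near full speed — which is exactly why the Y parts go into fanless tablets and the U parts into laptops.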

In general, Intel has been using what they call a "tick-tock" system, where generations alternate between "tick" (yet smaller process technology, but the same microprocessor architecture) and "tock" (new microprocessor architecture). By that model, the 7th generation should have switched from 14nm to 10nm process technology, but it didn't and stayed at 14nm. Apparently it gets more and more difficult to shrink things beyond a certain level, and so Intel instead optimized the physical construction of those hyper-tiny transistors. That, they say, allows things to run a bit cooler and draw a bit less power, resulting in, according to Intel, a 12-19% performance gain, mostly from running the chips at a higher clock speed.
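Mapped onto recent Core generations, the cadence described above looks like this (process nodes as Intel names them; Kaby Lake is where the tick-tock pattern breaks):

```python
# Recent Intel Core generations mapped onto the tick-tock cadence.
# Kaby Lake breaks the pattern: a third 14nm generation, optimized
# rather than shrunk.

generations = [
    ("2nd gen", "Sandy Bridge", 32, "tock (new architecture)"),
    ("3rd gen", "Ivy Bridge",   22, "tick (process shrink)"),
    ("4th gen", "Haswell",      22, "tock (new architecture)"),
    ("5th gen", "Broadwell",    14, "tick (process shrink)"),
    ("6th gen", "Skylake",      14, "tock (new architecture)"),
    ("7th gen", "Kaby Lake",    14, "optimization (refined 14nm)"),
]

for gen, name, node_nm, phase in generations:
    print(f"{gen:8} {name:13} {node_nm:>3} nm  {phase}")
```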

The architectures of both the cores and the graphics haven't really changed. But there are some additions that may be welcomed by certain users. For example, Kaby Lake has much better 4K video capability now, mostly in the hardware encoding/decoding areas. And a new implementation of Speed Shift lets the CPU control turbo frequency instead of the operating system, which means the chip can ramp up to turbo speeds much faster. We'll know more once we get to compare Kaby Lake performance and efficiency with that of the predecessor processor generations.

There's some disturbing news as well. Apparently, some discussions and agreements between Intel and Microsoft resulted in Kaby Lake not really supporting anything before Windows 10. We don't know if that means older versions of Windows simply would not run, or just that they wouldn't run well. Given that so far (early Sept. 2016), Windows 10 only has 23% of the desktop OS share, any restriction on using older versions of Windows on new chips seems both ham-fisted and heavy-handed.

For a detailed tech discussion of all things Kaby Lake, check here.

Posted by conradb212 at 08:16 PM