Intel yesterday announced its intention to acquire chipmaker Altera in an all-cash deal. This is not a business news publication, and this story is not about mergers and acquisitions. 

It’s about changing the Internet of Things from a fairy story into a market.

Let’s take a moment to clear the cobwebs from around the Internet of Things, and reiterate what we mean by “Things.” 

Ubiquitous Communication

The IoT ideal is that everyday appliances and devices in the home or office will be capable of communicating with a managing server. The obvious reason is so those devices can share what they’re sensing and the results of what they’re doing, with a server that can be accessed through the web.

The second reason isn’t explained very much, maybe because folks think it’s too much “inside baseball.” Ideally, the functions and performance of everyday devices can be improved, even on a day-to-day basis, if they can receive their instructions from the server. 

If you’ve ever upgraded the firmware on your cable modem, you get the general idea.
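That two-way exchange — readings flowing up, improved instructions flowing down — can be sketched in a few lines of Python. Everything here is hypothetical: the “server” is just an in-memory dict, and the field names are invented for illustration, not any real IoT protocol:

```python
# Hypothetical sketch of the IoT exchange described above: a device
# reports what it senses upstream, and pulls improved operating
# instructions back down, the way a cable modem pulls new firmware.
# The "server" is simulated with a plain dict.

server = {
    "readings": [],                         # telemetry the device pushes up
    "instructions": {"sample_rate_hz": 1},  # behavior the server pushes down
}

def device_report(reading):
    """Phase 1: share what the device is sensing with the server."""
    server["readings"].append(reading)

def device_sync():
    """Phase 2: fetch the latest instructions from the server."""
    return dict(server["instructions"])

# The device runs for a while on its factory settings...
device_report({"temp_c": 21.5})
config = device_sync()
print(config["sample_rate_hz"])   # 1

# ...then the manufacturer improves its behavior, server-side only.
server["instructions"]["sample_rate_hz"] = 10
config = device_sync()
print(config["sample_rate_hz"])   # 10
```

The point of the sketch is the second half: the device's behavior improved without anyone touching the device itself.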

ARM Wrestling

Up to now, the functions of everyday devices (security systems, automotive emission controls, pacemakers) have been improved through manufacturing: new capabilities are implemented at the factory. In an IoT situation, servers can wirelessly transmit to devices any function that can be improved through logic alone.

Of course, all this assumes that the “things” in question are open to suggestion. 

Programmable logic chips in low-cost devices have historically been very different from the CPUs that run PCs. An application-specific integrated circuit (ASIC) is, simply enough, a chip designed to perform one job when embedded in a device. 

Its program, by design, is not software, though it may take the form of firmware. ARM, whose processor designs sit inside most of the world's embedded chips (ARM licenses those designs rather than manufacturing chips itself), is Intel's arch-nemesis.

Because ARM is so prominent in the world of embedded systems (e.g., door access systems, pipeline monitoring sensors, portable healthcare devices), the devices that make up those systems have come to be the “Things” in the definition of the IoT.

Most people who imagine a genuinely feasible IoT picture a market full of devices that are all full-time residents of ARM’s playground.

The Intel Advantage

But here’s ARM’s Achilles’ heel: although ARM-based chips are inexpensive to manufacture, they are expensive to design. You don’t just “program” an ARM chip. You engineer it.

Here is where Intel believes it can gain an edge. Developers for Intel-based systems are accustomed to programming more than engineering. A programmable, low-cost logic chip would not only give Intel a wider inroad into the IoT arena, but would then enable it to redefine “Things” to resemble something closer to Intel’s native territory: all-purpose appliances with multiple functions. 

Home energy and security and irrigation control. 

Automotive emission and brakes and traction control.

Conceivably, it would mean fewer “Things” and more functions.

No, It’s Not Golf

Yet Intel’s Achilles’ heel has been something more like the absence of a foot. You don’t reprogram Intel chips at all.

You cannot deliver new functions to the microcode of an Intel CPU; if you could, the little beasts would have been hacked a decade ago.

Intel started applying itself to the potentially impossible task of resolving this little issue back in 2010, when it began the curious practice of pairing its low-power, low-cost Atom CPUs (originally created to power “netbook” PCs, which fell out of fashion in the tablet era) with a type of chip called an FPGA, made by Altera.

A field-programmable gate array, unlike anything Intel typically makes, is intended to be reprogrammed. 

Maybe you can’t change the microcode instructions on an Atom chip, but if you could pair an Atom with a second chip that was reprogrammable, you wouldn’t have to.

“The thing an FPGA gives you is the ability to program, so you can adjust the software that you’re using, and the acceleration model,” explained Intel CEO Brian Krzanich during a conference call with analysts yesterday.

As an example, Krzanich cited facial recognition algorithms: a technology that would appear in more security systems today if the algorithms weren’t still so far from perfect. In recent years, Intel has implemented such algorithms in its CPUs’ microcode, but once shipped, they could not be changed.

Attaching an FPGA gives Intel a way to move the application-specific code off of the CPU and onto an inexpensive part.

Custom Fitted Solutions

With that low-cost part being customizable even though the higher-cost part is not, Intel can move into ARM territory with chip packages custom-fitted to specific industries, including security.

Since 2010, Intel has continued to take baby steps in this direction, including acquiring the ARM-based Axxia network chip business last year.

This put Intel in the curious position of being one of ARM’s leading customers, at least until it could find a way to produce chips of its own that could replace ARM’s.

You wouldn’t think acquiring Altera, a major ARM partner, would be the way to do that. But as it turns out, Altera has been doing with ARM cores what Intel started doing with Atom chips five years ago: pairing them with FPGAs.

Intel’s plan for Altera moving forward is to keep it operational as a division of the company, continuing to produce FPGAs that can be paired not only with Atom processors for IoT devices, but soon with Xeon processors in cloud data centers.

As soon as the second half of next year, Krzanich told analysts, Intel could be producing Atom + FPGA packages geared for particular industries: separate chips paired together on a single substrate.

The Other IP Network

The long-term goal for Intel and Altera, however, is a bit of a question mark: welding the FPGA and the CPU onto the same die. 

Krzanich argued there would be cost savings in doing this, although analysts were skeptical, suspecting that integration could actually introduce new manufacturing costs.

Where would these cost savings come from? It took a few questions yesterday before Krzanich revealed the answer. When it came, it hit everyone like a ton of bricks (monolithic bricks paired with customized bricks that, together, weigh about a ton).

The CEO essentially gave analysts one of the keys to making this whole IoT idea work in the first place. 

You see, one of the biggest expenses involved in any chip-making process is the royalties the manufacturer pays to rights holders for the use of algorithms, such as video decoding and facial recognition. Usually someone owns the software or firmware that makes a part application-specific in the first place.

(Now you know why so many entrepreneurs are truly excited about the potential of IoT.)

If an application-specific device were programmable out of the box, then it would not actually need to ship with the intellectual property for which royalties would be owed. Instead, you’d switch on that device, and the codecs or other IP would be downloaded from a server. Intel could save billions.
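That “download the IP at switch-on” model can be sketched in Python. To be clear, everything below is invented for illustration: the catalog, the class, and the toy function stand in for what would really be a signed, license-checked FPGA bitstream delivered over the network:

```python
# Hypothetical sketch: a device ships "programmable out of the box,"
# with no royalty-bearing codec burned in at the factory. At first
# boot it downloads its functions from the manufacturer's server,
# which is simulated here as a plain dict of named functions.

SERVER_CATALOG = {
    # A toy stand-in for licensed IP such as a video codec:
    # average an (R, G, B) pixel down to one grayscale value.
    "grayscale": lambda pixel: sum(pixel) // 3,
}

class Device:
    def __init__(self):
        self.functions = {}          # empty out of the box

    def first_boot(self, server):
        # Install every function the server currently offers.
        self.functions.update(server)

    def run(self, name, *args):
        if name not in self.functions:
            raise RuntimeError(f"{name} is not installed yet")
        return self.functions[name](*args)

d = Device()
d.first_boot(SERVER_CATALOG)
print(d.run("grayscale", (90, 120, 150)))   # 120
```

The commercial point is in the constructor: nothing royalty-bearing exists on the device until the server delivers it.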

Of course, those costs wouldn’t disappear entirely from the economy. They’d get passed on to someone, and I’ll give you one guess as to whom.

Still, yesterday’s Altera acquisition announcement does dramatically alter the evolutionary picture for an Internet of Things, to become less about “Things” and more about “functions.” 

By making it about functions, IoT services could be accelerated, and Intel could save some cash. You might not, but that’s a bridge we can avoid crossing once we get to it.

Title image by Christopher Michel, licensed under Creative Commons Attribution 2.0 Generic.

Simpler Media Group, 2015