
What is an FPGA and what does it do?

11/03/2019, hardwarebee

Let’s take a few steps back… in the beginning, there was the transistor. This was a form of electronic amplifier or switch that, unlike the prevailing vacuum tubes of the early days, could be made small. In the late 1950s, a bunch of companies, including Texas Instruments under the leadership of Jack Kilby, started figuring out how to put more than one transistor on the same piece of silicon. The integrated circuit, aka “the chip,” was born (and Kilby was awarded the Nobel Prize in Physics for this work).

 

So, for many decades, the idea was simple: you designed a circuit made of transistors. Individual transistors. That circuit was handed to a bunch of people who knew how to “draw” those transistors out of the various layers that form an IC: doped silicon, silicon oxide, conductive polysilicon, metal. This was originally a pretty labor-intensive manual process… the microprocessor that pretty much started the home computer revolution, the MOS Technology 6502, was hand laid out in rubylith… a buddy of mine worked on that. The chip worked the very first time they powered it up.

 

Over the years, computers started helping out with this process, but engineers were still doing much of the IC layout work by hand. Chips were getting bigger and more complex to design, so gradually, circuitry moved, at least some of the time, to being designed in a high-level language; not gates, not transistors, but “code” (M-Language, VHDL, Verilog). And layout could be done, at least in part, via “floor planning” software and silicon compilers. That’s how most chips are done today.

 

Along the way, the notion of making every chip a full custom job started to get questioned. What if, rather than designing every chip from scratch, you made an array of transistors or even general-purpose logic elements, and just changed the metal layers: the “wires” used in every chip to hook circuits up to one another. This was done, and the result was dubbed a gate array, or (more often in Europe) an uncommitted logic array (ULA). This is faster and cheaper to develop than a full-blown custom chip, since most of the chip is identical for every device made… from the manufacturing point of view, it’s much more like a ROM than a classic IC. And given the fixed transistors, even fairly early floor planning software could do the routing for these devices. Not as cost-effective, low-power, or high-performance as a full custom chip, but good enough for many applications.

 

So, in the next step, what if, rather than making those logic block connections with wires, you made them with an array of switches… like a RAM or a flash memory device? If you think about it, every memory chip is just a big array of transistor switches, each set to “0” or “1”. When used as a memory, that array is just read back. But there are other uses… when you make those same memory cells light-sensitive, you get a camera sensor. Now make them control the routing of signals throughout a logic array, and you have a Field-Programmable Gate Array, or FPGA. An FPGA will either have that block of flash on-chip, or it’ll bootstrap from a flash or ROM device and load up a RAM array. The key here is that the chip itself is now completely the same for every use, no customization at all. It’s a fully programmable chip… with enough resources, you can make it do Ethernet, WiFi, put a CPU in there, etc. Same idea as any other chip: you’re still probably programming it using chip design tools like VHDL or Verilog and a chip-specific floor planning/routing tool. There are even companies that can take your FPGA design as-is and turn it directly into a low-cost gate array.
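The “memory cells as logic” idea is easy to sketch in software. Here’s a minimal Python model (purely illustrative, not any vendor’s actual architecture) of a 4-input lookup table, the basic logic element in most FPGAs: 16 configuration bits act as the “program,” and any 4-input Boolean function is just a different bit pattern loaded into that tiny memory.

```python
# Toy model of an FPGA lookup table (LUT): a small memory whose
# contents define an arbitrary Boolean function of its inputs.
# Real FPGAs load thousands of these from a configuration bitstream
# at power-up; this sketch just shows the principle.

class LUT4:
    """A 4-input LUT: 16 configuration bits, one per input combination."""

    def __init__(self, config_bits):
        assert 0 <= config_bits < 2**16
        self.config = config_bits  # the "memory" that programs the logic

    def eval(self, a, b, c, d):
        # The four inputs form an address into the configuration memory;
        # the bit stored at that address is the output.
        addr = (d << 3) | (c << 2) | (b << 1) | a
        return (self.config >> addr) & 1


# Configure the same "hardware" as two different circuits:
# a 4-input AND -> only address 0b1111 reads back 1.
and4 = LUT4(1 << 0b1111)

# XOR of inputs a and b (c and d ignored) -> set every address where a != b.
xor_ab = LUT4(sum(1 << ((d << 3) | (c << 2) | (b << 1) | a)
                  for a in (0, 1) for b in (0, 1)
                  for c in (0, 1) for d in (0, 1) if a != b))

print(and4.eval(1, 1, 1, 1))    # -> 1
print(and4.eval(1, 0, 1, 1))    # -> 0
print(xor_ab.eval(1, 0, 0, 0))  # -> 1
```

Loading a different `config_bits` value “rewires” the logic without touching the silicon at all, which is exactly the trick that lets an FPGA take on new functions from nothing more than a data update.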

 

But there’s another weird thing happening in FPGAs… particularly as they get to finer and finer chip processes, they’re replacing more and more gate arrays and full custom chips. It’s the combination of economies of scale and the realities of chip manufacture. FPGA companies are very aggressive on process… so while you can do much better building your circuit in a 65nm* full custom chip or gate array than in a 65nm FPGA, it might be a different story comparing a 22nm or 16nm FPGA against a 65nm gate array. The bottom line: does the part meet my goals at the best price? At that point, the FPGA has an extra advantage: you can download hardware fixes from the internet. I did my first FPGA design that could do this back in the late 90s. This was a “datacasting” modem, part of a set-top box, and the FPGA code was actually part of the modem’s device driver… every time the driver was updated, the hardware could change.

 

* When chip folks speak of “65nm” or “22nm” (or the old guys talk about “microns”), they’re talking about “L-eff”… the effective line size in the chip, the smallest feature you can have. State of the art is somewhere around 7 nanometers… that’s in full production at Samsung and TSMC. Smaller L-eff generally means lower power, higher speed, and lower cost for the same design.

 

___________________________________

This is a guest post by Dave Haynie