Engineers at Stanford have come up with a new process to build computers that use carbon nanotubes instead of silicon, taking a significant step forward in the quest for ever-smaller and faster computers.

I talked with Stanford PhD student Max Shulaker, who works with Professors Subhasish Mitra and Philip Wong in the electrical engineering department and is the lead author of a new paper about the carbon nanotube computer in the journal Nature.

So what is a carbon nanotube?
A carbon nanotube is pretty much exactly what it sounds like: it’s a nanotube of carbon. If you imagine a sheet of carbon atoms just a single atom thick and you roll it up into a tube, then you’ve got yourself a carbon nanotube. They are very, very thin. Their diameter is nominally 1.2 nanometers. To put that into perspective, you could line up tens of thousands of these carbon nanotubes side by side and they’d still fit within the width of a single human hair.
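The "tens of thousands per hair" comparison is easy to sanity-check. A quick back-of-the-envelope sketch, using the 1.2-nanometer diameter from the interview and an assumed hair width of roughly 75 microns (human hair actually varies, roughly 50 to 100 microns):

```python
# Rough check of how many nanotubes fit across a human hair.
NANOTUBE_DIAMETER_M = 1.2e-9  # from the interview: 1.2 nanometers
HAIR_WIDTH_M = 75e-6          # assumed typical hair width (~75 microns)

tubes_per_hair = HAIR_WIDTH_M / NANOTUBE_DIAMETER_M
print(f"{tubes_per_hair:,.0f} nanotubes side by side")  # about 62,500
```

With a thinner 50-micron hair you still get over 40,000 tubes, so "tens of thousands" holds up.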

Stanford grad student Max Shulaker holds a wafer covered with carbon nanotubes. (Photo: Norbert von der Groeben)

Why do we need an alternative to silicon chips?
Silicon has really been the workhorse of Silicon Valley, and of computing generally, for decades. And it’s done a phenomenal job, but it’s being pushed to its limits. Year by year computational power keeps improving, because the transistors that make up computers keep getting smaller and smaller.

But it reaches a point — and we’re kind of reaching that point now — where it’s very challenging to keep making the transistors smaller and smaller to keep getting better and better performance. There are still tricks to be played with silicon to keep improving the performance, but now researchers are starting to look for alternatives, or at least supplements and complements to silicon going forward.

So this is potentially a way to keep Moore’s Law going.
Yeah, so Moore’s Law essentially states: every 18 months, the number of transistors in an integrated circuit basically doubles. With that you get increased computational power, and that’s why your cell phones are way more powerful than the huge computers of a decade ago, when I was growing up. Carbon nanotubes are an exciting emerging technology that could help extend Moore’s Law and keep improving computational power out several more technology nodes into the future.
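The growth Shulaker describes compounds quickly. A minimal sketch of the doubling rule as he states it (one doubling every 18 months; the ten-year horizon is just an illustration):

```python
# Moore's Law as stated in the interview: transistor counts
# double every 18 months (1.5 years).
DOUBLING_PERIOD_YEARS = 1.5

def growth_factor(years: float) -> float:
    """Factor by which transistor counts grow after `years`."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

# Ten years is 10 / 1.5, or roughly 6.7 doublings,
# which works out to about a 100x increase.
print(round(growth_factor(10)))
```

That hundredfold jump per decade is why a phone today can outclass a room-sized machine from ten years earlier.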

Why haven’t we been able to switch to using carbon nanotubes before?
Carbon nanotubes have been around for 15 years or so. There was a lot of excitement initially, and it kind of tapered down because it was very challenging to actually build anything out of carbon nanotubes.

Carbon nanotubes are actually grown by mother nature, and there are inherent imperfections in them. These imperfections have really stopped any sort of large-scale technology from being made from carbon nanotubes. At Stanford, the group led by Professors Mitra and Wong has, over the years, developed techniques to overcome these inherent imperfections. By piecing everything together, we were able to show that carbon nanotubes are in fact a “manufacturable” technology with this demonstration of the computer.

In the video below, Shulaker explains how they create carbon nanotube circuits. (SACHA the handshaking robot is not the computer they just unveiled; the one in the paper is named Cedric.)

So explain what you all did.
The first part was coming up with ways of overcoming these inherent imperfections in a scalable manner. If you’re building a circuit with billions of transistors, you can’t go one by one and make sure each transistor is okay and fine-tune it to make sure it works.

And with that foundation in place, we wanted to demonstrate that you can use it to actually build useful circuits. So we built a basic computer fabricated completely out of carbon nanotube transistors. This computer can do things a normal computer can do. It runs a very basic operating system which can actually do multitasking, for instance.

Should we be lining up at Best Buy?
What we built is not a computer that you would get at Best Buy. The reason is, this computer is still small, and it’s limited by the fact that we do all the fabrication here in our academic fabrication facility at Stanford. In terms of academic fabs, it’s marvelous. But it can’t compare to an industrial fab, either in cleanliness or in its ability to fabricate circuits robustly and at very small feature sizes.

Shulaker holds a "wafer" full of carbon nanotubes. (Photo: Norbert von der Groeben)
Thousands of nanotubes would fit within the width of a human hair. (Photo: Norbert von der Groeben)

So what we did is a demonstration showing that you actually can manufacture these circuits. Now, if this is something industry pursues, that’s where you can start building much larger systems.

When will we start seeing this technology?
That’s the ultimate goal, for this technology to be useful and to be used one day. Typically I think from these sorts of demonstrations, you’re looking at maybe a decade, two decades for this technology to start being commercially viable.

Should we start calling it “Carbon Valley?”
It would be naive of me to think Silicon Valley will one day be called Carbon Valley, at least not overnight, because there’s so much time and money and decades of work invested in silicon. And silicon will not just disappear. It’s a great technology, and it’s done amazing things.

What’s important is everything we did, the fabrication, the design, everything, is compatible with silicon. So you can have a silicon chip and you can then build this carbon nanotube circuit right next to it — or right on top of it. And because it’s compatible with silicon, it means integrating it into the current process is very doable, and that also makes it much more commercially viable. So maybe it will be Silicon Valley, and then Silicon Carbon Valley and then who knows from there?



Molly Samuel

Molly Samuel joined KQED as an intern in 2007, and since then has worked here as a reporter, producer, director and blogger. Before becoming KQED Science’s Multimedia Producer, she was a producer for Climate Watch. Molly has also reported for NPR, KALW and High Country News, and has produced audio stories for The Encyclopedia of Life and the Oakland Museum of California. She was a fellow with the Middlebury Fellowships in Environmental Journalism and a journalist-in-residence at the National Evolutionary Synthesis Center. Molly has a degree in Ancient Greek from Oberlin College and is a co-founder of the record label True Panther Sounds.
