
Prescient Processing: A Q&A with Intel Futurist Brian Johnson, and Why We Shouldn't Fear the Future

Johnson channels science fiction and people's worries to help chart the future of microprocessors and the technologies that use them


Much of Intel's success as a microprocessor-maker over the past four decades has come from the company's ability to anticipate the future of technology. Since company co-founder Gordon Moore's famous 1965 observation, later codified as Moore's Law, that the number of transistors that can be placed on an integrated circuit doubles at a steady pace (roughly every two years), Intel's microprocessors have grown steadily smaller, faster and cheaper, helping to give birth to personal computing and mobile devices that once existed only in the realm of science fiction.

So it comes as no surprise that science fiction serves as a key inspiration for the man whose job it is to envisage Intel's future and, to a large degree, the future of computing itself.

Brian David Johnson is hardly the world's first futurist, a vocation of prognosticating scientists and social scientists dating back to the likes of Jules Verne and H. G. Wells. But he is the first to hold that title at Intel. Far from just imagining a future whose fate depends largely on the actions of others, Johnson has the resources at his disposal to transform his future-casting into reality.

For example, Johnson worked directly with Intel's software, hardware and silicon architects on the company's Atom-based system-on-a-chip (SoC) designs for processors used in next-generation compact and mobile devices. The company's software and hardware engineers likewise have consulted the research Johnson presented in his 2010 book Screen Future: The Future of Entertainment, Computing and the Devices We Love (Intel Press) to, as he puts it, "help envision a world of multiple devices and form factors that are all connected together." Johnson is currently working on the design for Intel's CPU circa 2019.

Last month Johnson was in Manhattan at the pop-culture convention New York Comic Con to promote Intel's Tomorrow Project, which engages the public in discussions about the future of computing as well as its impact on society. As part of the Tomorrow Project, Intel also publishes annual science fiction anthologies featuring short stories that Johnson, himself a sci-fi writer who has worked at Intel for the past decade, says emphasize the "science" side of the genre and are intended to convey the message that humanity ultimately still controls its own destiny.

Soon after the convention, Scientific American spoke with Johnson about future-casting microprocessors, what scares people most about technology, how we can learn about the future from the past and what it takes to become a futurist—nature, nurture or a little of both?

[An edited transcript of the interview follows.]


How can science fiction influence real-world research and development?
There's a great symbiotic history between science fiction and science fact: fiction informs fact. I go out and do a lot of lectures on AI [artificial intelligence] and robotics, and I talk about inspiration and how we can use science fiction to play around with these ideas. Every time, people come up to me, pull me aside and say, "You do know the reason I got into robotics was C-3PO, right?" I've become a confessor to some people. I just take their hand and say, "You are not alone. It's okay."

And it's true, science fiction inspires people to imagine what they could do. It captures their imagination, which is incredibly important for developing better technology. The idea is, I'm going to write this story based on research from these artificial intelligence and robotics guys so they have a better image of what they can do with that technology.

Which science fiction authors have inspired you the most?
So there's what inspired me as a kid: the Asimov, the Bradbury, the Heinlein. That forms the core of science fiction. As I got a little older and a little more sophisticated, it was people like Philip K. Dick, J. G. Ballard, and even more recently people like Vernor Vinge and Cory Doctorow and Charlie Stross, those types of guys. Now most of the stuff I'm inspired by is the near future that is very much based on science fact.

How does the past help inform your decisions now and your thinking about the future?
For me, it's all about models. Everything I do is models based on computer science, social science, statistics, economics—that all goes together. What I tell a lot of people is that 80 percent of what I do is actually history, because that's where some of the best models are. Not that you can do a copy-and-repeat, but you can look at what happened and where we went. A lot of economists, for example, will tell you that you can't say, "This is the way that it happened in the past, so this is the way it will happen in the future." But you can say, "This is the way people thought things were going to go." So I do a lot of reading about history. It's not that I want to say, "That guy got it wrong." I don't care about that. Ultimately, it's my job, because I'm an incredibly pragmatic futurist, to come up with a vision we can build.

How does your future-casting role at Intel fit in with what the company is doing as a maker of microprocessors?
I sit in front of the company's development road map. For the folks that do the chip manufacturing and chip design, it's my job to literally get out in front of that. So I work with a lot of the chip designers in Israel and elsewhere. And every year they remind me that I need to be thinking about, for example, 2020. I need to get out and inform Intel's movement toward that year. My day job is to create the models that feed the chip designers. I create models of what the experience will be like, so what it will feel like to use a computer in 2020. Intel is an engineering company, so I turn that into requirements and capabilities for our chips. I'm working on 2019 right now.

So you're usually looking nine or 10 years ahead?
About 10 years is the cadence I try to keep to. It varies. A lot of what I do also is called "backcasting." I'll work with the people who are designing ultrabooks [extremely thin, light laptops], and they'll ask, "What should we do for 2015?" And I'll say, I've got this body of data, let's look at what the future of ultrabooks looks like by starting at 2020 and working back five years, instead of starting at 2011 and looking ahead a few years.

How do you ensure that the ideas you have for Intel's future are compatible with the directions that hardware-makers (Apple, Dell, etcetera), who use Intel chips in their PCs and mobile devices, want to go with their products?
The first step in my process is social science. We have ethnographers and anthropologists studying people first and foremost. So all of the future-casting work I do starts with a rich understanding of humans, who are going to use the technology after all. Then we get into the computer science. Then I do the statistical modeling. Then I start developing models about what the future is going to look like. Then I hit the road.

A huge part of our work is getting out and talking not just to our customers but the broader ecosystem of government, the military and universities. I ask them, "Where do you see things going? And what will it be like for a person to experience this future?" It is such an important part of my work to get their input.

Can you give us an example of some research with interesting implications for the future of technology?
I've been doing a lot of work with a synthetic biologist named Andrew Hessel of the Pink Army Cooperative, the folks doing cancer research. He's studying the design of viruses as well as DNA. Think of the DNA as the software and an organism, a bacterium or virus, as the hardware. You stick the software in and it actually becomes a computational device. Consider this: you take a GPS app and put it into your cell phone, and your cell phone becomes a GPS. But what's really awesome about synthetic biology is that you go to sleep with one organism and when you wake up in the morning there are two, and then there are four. They become self-replicating computational devices. I'm just starting to look into that.

What are some of the most important issues that you're talking to people about now when you're out on the road?
There are three main themes: one is called the secret life of data, the second is the ghost of computing and the third is the future of fear.

Those sound like book titles. How can data have a secret life?
The secret life of data is thinking about what it will be like to live in a world of big data. Consumers already know about big data. They already know about cloud computing, for example. What will that feel like when we're creating so much data about ourselves through sensors and other technology that data begins to take on a life of its own? It's already starting to happen, and it's only going to get bigger. You have algorithms talking to algorithms, machines talking to machines. What does it feel like to be in that world, number one, and number two, how do we make sure that when that data comes back to us it's meaningful? It's not just synthesizing massive amounts of financial data and spitting out a credit rating for me. We've moved beyond that.

What do you mean when you talk about the "ghost of computing"?
Look at the microprocessor: it keeps getting smaller and smaller and smaller—it's crazy how small it gets. If it keeps getting smaller, what happens when that unit of compute gets so small that it disappears? We've been talking about that world for a while, but as you get out 10 or 15 years we're getting closer and closer to it. What happens when computing is in the walls or in a table? So that's one side of it: what does the world look like when we're surrounded by intelligence?

There's another ghost of computing that doesn't look like this invisible specter that's all around us. It looks more like the ghost of [Jacob] Marley, dragging the chains behind him leading to all the cash boxes. We're dragging computer legacy systems behind us. I could go online and book a flight with Orbitz, but Orbitz still needs to talk to the Sabre Global Distribution System, the old system that came out of the mainframes that all of the airlines use. Orbitz still needs to speak with an antiquated piece of software. We can't forget that. New technologies and older technologies aren't mutually exclusive. They're going to have to work together to some extent, and we at Intel need to recognize that.

And the future of fear?
The reason I like talking about fear is that it's a human experience. We know that security is important, and it's only going to get more important. So as we look 10 to 15 years out, what I want to do is to think, what do we really need to be afraid of? I'm on sort of a personal campaign against fear. When we talk about what it means to live in a safe and secure world, there's a lot of misinformation and a lack of information out there. Because of that, people are creating bogeymen. We're creating these irrational things, and that's very dangerous, especially when we're making decisions, whether it's hardware design or something else. We need to take a fact-based approach to what we should be afraid of and what we shouldn't be afraid of. And the stuff that we shouldn't be afraid of, we need to push that aside. The stuff we should be afraid of, we really need to dig into.

What's frustrating is that talking about this fear is not usually a technology question, it's a cultural conversation. When I'm out teaching or lecturing, 50 percent of the questions I'm asked have to do with fear, something that someone is worried about. Let's find out what people are afraid of and attack it. I'm an incredibly optimistic person. The problem with fear is that fear sells. It even has policy implications. I want to pull people away from the fear because otherwise people will gravitate toward it. Very few innovations have come out of being fearful.

What are people afraid of? Technology in general or something more specific?
Well, there are some specific fears such as identity theft and online banking. That interests me, but I want to go deeper into the stack. People think about security and privacy as if it is a thing, an element. We have carbon, we have sodium, we have security. That's not true. Security is a social construct. So you have to ask people, when they talk about security, what are they actually talking about? For example, security and privacy in the United States look very different than they do in the E.U. or in China. What does it mean to be secure? What is the DNA of security and privacy?

Has anything come up during your discussions on the road that changed your way of thinking about what scares people?
There was one guy about six months ago; he's actually one of the reasons we started this research. It was at an event in San Francisco. We had been talking about futurism and this guy stood up and started talking about fear. He had two daughters and he felt that mobile phones and the Internet were stealing his daughters away from him. His daughters were 13 and 14, right at the cusp, texting all the time. "The technology is destroying my daughters' ability to talk to humans," he said. And he started to get really upset. So upset that security was starting to move in. So I started thinking, and the first thing I said was, "Stop. You are worried about your daughters because you love them. That fear, that worry, where it's coming from: good, do that. If more people did that we'd be a better society. I'm not discounting your fear, man. That means you're a good dad." So he took a breath, and everyone relaxed.

Smart phones have only been around for a few years, so we're still trying to find out what's socially and culturally acceptable. I asked him, "Are you a family that watches TV while you eat dinner or never watches TV during dinner?" He said that they never watched TV during dinner. So I said, "That's a decision that you make as a family about technology. As technology progresses, you have to remember that you are in control. You want to make sure that your daughters are safe and healthy and smart and ready for the world. That is where it's coming from, and let's also talk about how we can design technology to improve that." Because the fear was so real and visceral, it just hit me. This guy was scared and that just stopped me in my tracks.

How does a future futurist spend his time as a kid?
Growing up, my dad was a radar-tracking engineer and my mom was an [information technology] specialist. My pop used to come home with electrical schematics of the radar and tell me the story of how it worked. A few weeks later he would come home with an actual piece of the radar and say, "Take it apart." And then he would actually show me how to take it apart. I think about when this happened and I realize that it was around the time I was learning to read. I was learning to read schematics at the same time I was learning to read, so I grew up immersed in technology.

How does one become a futurist? Can you go to school and get a degree in futurism?
No, but you can go to the school where futurism was first taught: Alvin Toffler taught it at the New School [for Social Research] here in New York City.

The New School is known for social research. What does a future futurist study there?
That's the lovely thing about the New School when I went, which was the late '80s, early '90s. You could take whatever you wanted. I studied a lot of computer science, but when I went to the New School it was this great mix where I could study sociology, I could study economics, I could study film and I could go down to [New York University] and take classes. As a futurist I need the technical chops to understand what we're talking about. But I also need the research chops to be able to go out and pull this all together and then have the ability to express it. It's not enough to have a vision of the future, you also have to be able to express it. That's one of the things I took away from the New School.

How did you become Intel's futurist?
I had been using future-casting (a combination of computer science and social science and spotting trends) as part of my work on Intel projects looking five or 10 years out, such as the design for system-on-a-chip (SoC) processors, the new type of chip we're putting together that consolidates more processes in less space. Future-casting helped us ask ourselves hard questions about the future of technology and figure out what to build. So [Intel chief technology officer] Justin Rattner said to me, "We think you should be Intel's futurist." And I said, "No way." That's a huge responsibility, especially for a place like Intel.

At the time Justin wanted me to get out there and start talking to people about the future. We had such discussions internally, but we hadn't been talking about it with others outside the company. The next week [June 30, 2010,] we released the book Screen Future, which was this work about technology in 2015. I sat down and talked to the press. Almost everyone said, "So you're Intel's futurist." At that point I realized that I already had the job.

What is the greatest misconception that people have about the future?
So many people think the future is something that is set. They say, "You're a futurist, make a prediction." The future is much more complicated than that. The future is completely in motion; it isn't this fixed point out there that we're all sort of running for and can't do anything about. The fact of the matter is, the future is made every day by the actions of people. Because of that, people need to be active participants in that future. Literally, the future is in our hands. The biggest way you can affect the future is to talk about it with your family, your friends, your government.


Larry Greenemeier is the associate editor of technology for Scientific American, covering a variety of tech-related topics, including biotech, computers, military tech, nanotech and robots.
