A few weeks ago, I woke up uncharacteristically early in the morning in Brooklyn, picked up my car, and drove up the Hudson River to the small Westchester County community of Yorktown Heights. There, amid rolling hills and old farmhouses, is the Thomas J. Watson Research Center, the headquarters of IBM Research designed by Eero Saarinen in the 1960s during the Jet Age.
Deep inside this building, past endless hallways and security doors guarded by iris scanners, the company’s scientists are hard at work developing what IBM’s director of research, Dario Gil, told me is “the next branch of computing”: quantum computers.
I was at the Watson Center to preview IBM’s updated technical roadmap for achieving practical quantum computing at scale. It involved a lot of talk about “qubit counts,” “quantum coherence,” “error mitigation,” “software orchestration,” and other topics you’d need to be an electrical engineer with a background in computer science and a familiarity with quantum mechanics to follow entirely.
I’m none of those things, but I’ve watched the quantum computing space long enough to know that the work being done here by IBM’s researchers, along with that of competitors at companies like Google and Microsoft and countless startups around the world, stands to drive the next big leap in computing. And since computing is a “horizontal, all-encompassing technology,” as Gil told me, that leap will have major implications for everything from cybersecurity to artificial intelligence to designing better batteries.
Provided, of course, that they can actually make these things work.
Enter the quantum realm
The best way to understand a quantum computer, short of setting aside several years for graduate school at MIT or Caltech, is to compare it to the kind of machine I’m typing this article on: a classical computer.
My MacBook Air runs on an M1 chip, which packs 16 billion transistors. Each of those transistors can represent either the “1” or the “0” of binary information at a given moment: a bit. The sheer number of transistors is what gives the machine its computing power.
Sixteen billion transistors packed onto a 120.5 mm² chip is a lot: TRADIC, the first transistorized computer, had fewer than 800. The semiconductor industry’s ability to cram ever more transistors onto a chip, a trend forecast by Intel co-founder Gordon Moore in the law that bears his name, is what made possible the exponential growth of computing power, which in turn made possible just about everything else.
But there are things conventional computers can’t do and never will be able to do, no matter how many transistors are squeezed onto a square of silicon in a Taiwanese semiconductor foundry (or “fab,” in industry jargon). And that’s where the unique and downright weird properties of quantum computers come in.
Instead of bits, quantum computers process information using qubits, which can represent “0” and “1” simultaneously. How do they do this? You’re pushing the limits of my expertise here, but essentially qubits exploit the quantum mechanical phenomenon known as “superposition,” whereby the properties of certain subatomic particles aren’t defined until they’re measured. Think of Schrödinger’s cat, both dead and alive until you open its box.
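If you like to see ideas in code, here is a toy sketch of that picture, not how real quantum hardware works, just the textbook math: a qubit’s state is a pair of complex amplitudes, and measuring it returns 0 or 1 with probabilities given by the squared magnitudes of those amplitudes (the Born rule). The helper name `measure` is my own invention for illustration.

```python
import random

# Toy model: a qubit in equal superposition of |0> and |1>,
# the "Schrödinger's cat" state. The squared magnitudes of the
# two amplitudes must sum to 1.
amp0, amp1 = 2 ** -0.5, 2 ** -0.5

def measure(a0, a1):
    """Collapse the superposition: return 0 or 1 per the Born rule."""
    return 0 if random.random() < abs(a0) ** 2 else 1

# Measure the same prepared state many times: roughly half the
# outcomes are 0 and half are 1.
counts = {0: 0, 1: 0}
for _ in range(10_000):
    counts[measure(amp0, amp1)] += 1
```

Until the measurement happens, the state is genuinely both amplitudes at once; the randomness only appears when you “open the box.”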
A single qubit is cute, but things get really exciting when you start adding more. Classical computing power grows linearly with each added transistor, but a quantum computer’s power grows exponentially with each new reliable qubit. That’s because of another quantum mechanical property called “entanglement,” whereby the individual probabilities of each qubit can be affected by the other qubits in the system.
All of this means that the upper limit of the power of a workable quantum computer far exceeds what would be possible in classical computing.
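A back-of-the-envelope way to see why that upper limit is so high: a classical register of n bits is in exactly one of its possible states at a time, while describing an n-qubit register takes 2^n complex amplitudes, doubling with every qubit added. (The helper name `amplitudes_needed` is mine, for illustration.)

```python
# A classical register of n bits sits in exactly one of 2**n states.
# An n-qubit register's full description is a vector of 2**n complex
# amplitudes, so the bookkeeping doubles with every qubit you add.
def amplitudes_needed(n_qubits: int) -> int:
    return 2 ** n_qubits

# Each added qubit doubles the state space:
growth = [amplitudes_needed(n) for n in range(1, 6)]  # [2, 4, 8, 16, 32]

# At the scale of IBM's 127-qubit Eagle processor, a full classical
# description would need 2**127 amplitudes, far more numbers than any
# conventional machine could ever store.
eagle_scale = amplitudes_needed(127)
```

That doubling is the whole argument in miniature: simulating even a modest quantum computer classically blows up exponentially, which is why a working quantum machine could reach where classical ones can’t.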
Thus, quantum computers could theoretically solve problems that a classical computer, no matter how powerful, never could. What kind of problems? How about the fundamental nature of material reality, which, after all, ultimately runs on quantum mechanics, not classical mechanics? (Sorry, Newton.) “Quantum computers simulate problems that we find in nature and in chemistry,” said Jay Gambetta, vice president of quantum computing at IBM.
Quantum computers could simulate the properties of a theoretical battery to help design one far more efficient and powerful than current versions. They could unravel complex logistical problems, discover optimal delivery routes, or improve forecasts for climate science.
On the security side, quantum computers could break widely used cryptography methods, potentially rendering everything from emails to financial data to national secrets insecure. That’s why the race for quantum supremacy is also an international competition, one the Chinese government is pouring billions into. Those concerns prompted the White House earlier this month to issue a new memorandum aimed at ensuring national leadership in quantum computing and preparing the country for quantum-enabled cybersecurity threats.
Beyond the security concerns, the potential financial upside is significant. Companies are already offering early quantum computing services through the cloud to clients like Exxon Mobil and the Spanish bank BBVA. While the global quantum computing market was worth less than $500 million in 2020, International Data Corporation projects it will reach $8.6 billion in revenue by 2027, with more than $16 billion in investments along the way.
But none of this will be possible unless researchers can do the hard engineering work of turning a quantum computer from what is still largely a scientific experiment into a reliable industry.
The cold room
Inside the Watson building, Jerry Chow, who directs IBM’s experimental quantum computing center, opened a 9-foot glass cube to show me something that looked like a gold chandelier: IBM’s Quantum System One. Much of the chandelier is essentially a high-tech refrigerator, with coils carrying superfluids that can cool the hardware to a hundredth of a degree Celsius above absolute zero: colder, Chow told me, than outer space.
Refrigeration is essential to running IBM’s quantum computers, and it also shows why it’s such an engineering challenge. While quantum computers are potentially much more powerful than their classical counterparts, they are also much, much more finicky.
Remember what I said about the quantum properties of superposition and entanglement? While qubits can do things a mere bit could never dream of, the slightest variation in temperature, noise, or radiation can cause them to lose those properties through something called decoherence.
This sophisticated refrigeration is designed to prevent the system’s qubits from decohering before the computer has completed its calculations. The very first superconducting qubits lost their coherence in less than a nanosecond, whereas today IBM’s most advanced quantum computers can maintain coherence for up to 400 microseconds. (Each second contains 1 million microseconds.)
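To put those figures side by side, a quick unit conversion (a back-of-the-envelope calculation using only the numbers quoted above) shows just how far coherence times have come:

```python
# Coherence times from the figures above, expressed in seconds.
early_qubit_s = 1e-9    # first superconducting qubits: under a nanosecond
ibm_today_s = 400e-6    # IBM's best today: up to 400 microseconds

# How many times longer today's qubits stay coherent.
improvement = ibm_today_s / early_qubit_s  # roughly 400,000x
```

A 400,000-fold improvement in two decades, and it still buys less than half a millisecond of computation before the qubits fall apart.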
The challenge facing IBM and other companies is to design quantum computers that are less error-prone while “scaling systems beyond thousands or even tens of thousands of qubits, up to perhaps millions of them,” Chow said.
It could take years. Last year, IBM introduced Eagle, a 127-qubit processor, and under its new technical roadmap it aims to unveil a 433-qubit processor called Osprey later this year, and a computer with more than 4,000 qubits by 2025. By then, quantum computing may move beyond the experimental phase, IBM CEO Arvind Krishna told reporters at a press briefing earlier this month.
Many experts are skeptical that IBM or any of its competitors will ever get there, raising the possibility that the engineering problems posed by quantum computers are simply too hard for the systems to ever be truly reliable. “What’s happened over the last decade is that there’s been a huge number of claims about the most immediate things you can do with a quantum computer, like solving all these machine learning problems,” University of Texas quantum computing expert Scott Aaronson told me last year. “But those claims are about 90 percent bullshit.” To deliver on the promise, “you’re going to need breakthrough development.”
In an increasingly digital world, future progress will depend on our ability to wring ever more out of the computers we create. And that will depend on the work of researchers like Chow and his colleagues, toiling in windowless labs to achieve breakthroughs on some of computer engineering’s toughest problems, and, along the way, trying to build the future.
A version of this story originally appeared in the Future Perfect newsletter. Sign up here to subscribe!