Quantum computers are more energy efficient


I was tweeting about something I want to be true, but my colleagues disagreed. So, here is one theoretical model for scaling a quantum computer into a quantum system, and the math behind energy usage, processing power, and cost. It shows that today’s quantum computers are 10x more energy efficient than classical supercomputers, ceteris paribus.

Downtown Chicago facing West

Me: “A good reminder that today’s exabyte computers (quintillion FLOPS) require around 20 MW, while a quantum computer will likely use much less”

They: “How many fault-tolerant flops can you get for 20MW with a quantum computer?”

Me: “Using the qFlex RQS simulation model (Villalonga et al., 2019) and a superconducting QPU (e.g., Google Bristlecone, IBM, Rigetti), I can envision 19.1 peak ExaFLOPS @ ~$10B and ~25MW. Trapped ion and adiabatic math is different. Significant restrictions apply. Assumes 10:1 P/FT qubits.”

Let’s start with the basics.

The Morikami Japanese Gardens, Delray Beach Florida

Quantum computers are self-contained, independent systems bundled with environmental controls. They are small, fitting into a few data center racks. Superconducting qubit systems are accessible via the cloud (e.g., Alibaba, D-Wave, Google, IBM and Rigetti).

Quantum computers have a peak power rating based largely on required cooling: 25kW peak power*. The chip needs to be kept very cold, while the processing itself is very efficient (<25 watts).

Today’s quantum computers make mistakes. Their operations sometimes return the wrong value. They lose focus (decohere) quickly and can forget their information. They need preventative maintenance and tuning, and take a while to debug and repair. This is called being noisy, or NISQ, which stands for Noisy Intermediate-Scale Quantum.

So, we need error correction, which has not yet been implemented. I see two options: 1) run a program multiple times, store data multiple times, and check your answers, or 2) use poka-yoke ポカヨケ to eliminate inadvertent errors as part of the standard model (detect and correct automatically, eliminating the possibility of error). Let’s assume we need ten physical quantum units to get one error-corrected, fault-tolerant unit. This is a 10:1 physical-to-fault-tolerant ratio. (more research needed)
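For concreteness, the assumed overhead works out like this. A minimal Python sketch; the 10:1 ratio and the function name are my illustration, not an established error-correction result:

```python
# Back-of-envelope sketch of the assumed 10:1 physical-to-fault-tolerant
# qubit overhead. The ratio is this post's assumption, not a measured result.

PHYSICAL_TO_FT_RATIO = 10  # assumed: 10 physical qubits per fault-tolerant qubit

def fault_tolerant_qubits(physical_qubits: int,
                          ratio: int = PHYSICAL_TO_FT_RATIO) -> int:
    """Fault-tolerant (logical) qubits available from a physical qubit count."""
    return physical_qubits // ratio

print(fault_tolerant_qubits(49))  # a 49-qubit chip yields 4 fault-tolerant qubits
```

The same 10:1 factor is what divides the fleet’s raw ExaFLOPS later in the math.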

We need to maintain extreme environmental controls on superconducting quantum systems (e.g., very cold, quiet and peaceful). Ready spares and trained systems engineers are not easy to find, so let’s also plan to have an A/B system. One system is ‘production = A’ while the other is ‘backup = B.’ The B system is made ready to become ‘A’ but does no work.

This A/B system is very inefficient. It costs us 50.0% of our capability. We can probably do better.

How powerful is one quantum computer, in floating point operations per second (FLOPS)* or ExaFLOPS (10¹⁸ or 1 quintillion FLOPS)?

A simplistic random quantum sampling (RQS) simulation with 49 superconducting qubits (e.g., IBM Q or Bristlecone QPU), and a depth of 40 qubit operations (before it decoheres) gives a peak of 0.381 ExaFLOPS* for a QPU power cost of ~ 15kW. (I use 25kW in this analysis). So, one QPU gives us 0.381 ExaFLOPS in peak performance and a total system power cost of 25kW.
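As a quick sanity check on those per-QPU numbers (the 0.381 ExaFLOPS figure and the 25kW budget are the assumptions stated above, not measurements):

```python
# Per-QPU efficiency under the stated assumptions: 0.381 ExaFLOPS-equivalent
# peak performance at a 25 kW total system power budget (cooling dominated).
peak_flops = 0.381e18      # ExaFLOPS-equivalent from the RQS simulation estimate
system_power_w = 25_000    # 25 kW peak, mostly the dilution refrigerator

flops_per_watt = peak_flops / system_power_w
print(f"{flops_per_watt:.3e} FLOPS per watt")  # ~1.5e13 FLOPS/W per QPU
```

Those per-watt numbers get diluted later by the A/B redundancy and the error-correction overhead, which is where the system-level 10x figure comes from.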

My pricing estimate is $10M per system (all in, including support, maintenance, spares, system software, onsite systems engineers, networking, and installation) for three years. It does not include the facility, utility bills, facility staff, nor business / scientific applications.

We have seen* the $15M price point per system, and know a significant portion of that is for cooling (dilution refrigerator)…but discounts should be available in a highly visible, competitive, volume purchasing scenario.

How will it run? We install two complete quantum computers (A & B) into 500 partitioned (redundant) data center facilities. Each facility keeps one system live, available and in production at all times, while the other is turned on and standing by, unless it is being maintained, repaired, or upgraded. This gives us 500 quantum computers in production (out of 1,000), and creates at least 500 quantum IT jobs.

Long Beach Island, NJ at the Loveladies fishing pier, August 2019

So, how does the math work?

We pay for 1,000 quantum computers @ $10M apiece (over 3 years), each with a peak electrical power rating of 25kW. That is a total cost of $10B and a total electrical peak power of 25MW.

We get 500 computers (the A systems) of peak performance at 0.381 ExaFLOPS each, then assume 90% goes toward error correction and fault tolerance, leaving a total of 19.1 ExaFLOPS. (500 × 0.381 / 10 ≈ 19.1)

OK, so to answer my colleague’s question, we get 19.1 ExaFLOPS of performance for a peak power rating of 25MW and $10B (over 3 years). This is a performance/peak-power ratio of 0.764 ExaFLOPS/MW.

How does this compare to the latest ExaFLOPS-scale supercomputer, El Capitan @ LLNL, announced in August 2019 by the Department of Energy? That system costs $600M, provides 1.5 ExaFLOPS, and requires ~20MW. This is a performance/peak-power ratio of 0.075 ExaFLOPS/MW.
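The whole back-of-envelope comparison fits in a short Python sketch, using only the assumptions stated above (1,000 systems, half in production, 10:1 error-correction overhead, 25kW and $10M per system, and the quoted El Capitan figures):

```python
# Reproduces the back-of-envelope comparison using this post's assumptions.
SYSTEMS_PURCHASED = 1_000
SYSTEMS_IN_PRODUCTION = 500        # A/B redundancy: half the fleet does work
PEAK_EXAFLOPS_PER_QPU = 0.381      # from the RQS simulation estimate
EC_OVERHEAD = 10                   # 10 physical units per fault-tolerant unit
POWER_PER_SYSTEM_KW = 25
COST_PER_SYSTEM_USD = 10e6

quantum_exaflops = SYSTEMS_IN_PRODUCTION * PEAK_EXAFLOPS_PER_QPU / EC_OVERHEAD
quantum_power_mw = SYSTEMS_PURCHASED * POWER_PER_SYSTEM_KW / 1_000
quantum_cost_usd = SYSTEMS_PURCHASED * COST_PER_SYSTEM_USD

# El Capitan figures quoted in the text
classical_exaflops, classical_power_mw = 1.5, 20

q_ratio = quantum_exaflops / quantum_power_mw   # ExaFLOPS per MW
c_ratio = classical_exaflops / classical_power_mw

print(f"Quantum:   {quantum_exaflops:.2f} EFLOPS, {quantum_power_mw:.0f} MW, "
      f"${quantum_cost_usd / 1e9:.0f}B, {q_ratio:.3f} EFLOPS/MW")
print(f"Classical: {classical_exaflops} EFLOPS, {classical_power_mw} MW, "
      f"{c_ratio:.3f} EFLOPS/MW")
print(f"Advantage: {q_ratio / c_ratio:.1f}x FLOPS per watt")
```

Change any one assumption (the 10:1 overhead, the A/B split, the per-QPU estimate) and the 10x headline moves with it, which is the point of writing it down this way.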

So, we gain 10x more FLOPS per watt!

View from Barnegat Lighthouse, August 2019 (working on the model)

Significant restrictions apply. Jeffrey, what does that mean?

  1. Quantum computer and classical computer performance should continue to increase, and we are not certain which will improve faster.
  2. We would see different results by trading lower speed for greater depth (coherence times) from using trapped ion qubits instead of superconducting qubits.
  3. We need a quantum wide area network (WAN) and protocol stack based on teleportation or at least quantum transport which would fully interconnect 500 long-distance quantum facilities, as well as centralized data storage and users.
  4. We need a full stack of OS and systems software that works today to be recreated for quantum systems, including schedulers, security, DBMS, OS, compilers, runtimes, management platforms, diagnostic systems, automation, time keeper, job control languages, load balancers, session managers, monitoring and metering, etc.
  5. We need a full set of hardware capabilities used today, including peripherals, to catch up for quantum systems. This includes networking (system control plane, local area network (LAN)), output devices, hubs, routers, visualization devices, and diagnostic systems.
  6. We need service providers with capabilities we take for granted today, including ample spare parts, repair services, systems maintenance, onsite systems engineers, and even full IT outsourcing. This requires a trained and enabled quantum workforce in those locations.
  7. All of the quantum computers will need access to the same data, with some time delay, if we want them to operate as a cohesive quantum system (and not independent quantum computers).
  8. We need access to random access memory and system cache in quantum systems. This is tricky for quantum computers because the data being processed is complex (a + bi) and may be in superposition and entangled. It carries greater complexity than real (and binary) numbers like 0/1. Unlike classical RAM, the data is persistent (we don’t erase or destroy data).
  9. Local disk (DASD) and remote storage (SAN/NAS/Tape) need to be developed for quantum systems. We may be able to use current technology if we are only storing ‘collapsed’ real, binary numbers.
  10. Database management systems (DBMS) likely need to be invented, or modified, to be used with quantum computing. Data storage indexing schemes, new types of data types and APIs are needed too.
  11. We must improve our programming to address and use the large number space of a fully networked QPU in flight (e.g., 64 qubits would contain 2⁶⁴ values), and to exploit parallelism. We can use, but cannot write or even see, all of those values (ask Schrödinger’s cat) without collapsing them back to real numbers. As an example, if we are analyzing financial markets* we could either slice up the job to run pieces on all 500 facilities and then assemble an answer, or run it in total on all 500 computers and get a deeper understanding of the answer.
  12. I understand that certain inputs required to build a superconducting qubit quantum computer may be in limited supply. It may not be a valid assumption that we can support enough cooling capacity for 1,000 new superconducting quantum computers.
  13. Having this much system power at one’s fingertips could have unintended consequences. Let’s not build SkyNet please :)
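As a rough illustration of point 11 above (the claim that 64 qubits span 2⁶⁴ values): storing one complex amplitude per basis state on a classical machine, at 16 bytes apiece, would need hundreds of exabytes. A quick check:

```python
# Rough illustration of point 11: a 64-qubit register in superposition spans
# 2**64 basis states. Holding one complex amplitude per state classically
# (complex128: two 8-byte floats, 16 bytes) would take roughly:
n_qubits = 64
amplitudes = 2 ** n_qubits
bytes_needed = amplitudes * 16
exabytes = bytes_needed / 1e18
print(f"{amplitudes} amplitudes -> about {exabytes:.0f} exabytes of classical RAM")
```

That is why the text says we can use, but never write down, the full in-flight state.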
UC Santa Barbara Campus (August 2019) during the NIST-PQC 2nd Standardization Conference

What applications would you want to run on 500 quantum computers?


Jeffrey Cohen is the President of US Advanced Computing Infrastructure, Inc., an Illinois corporation. Here is our website: www.chicagoquantum.com. Twitter: @Chicago_Quantum




Jeffrey Cohen, President, US Advanced Computing Infrastructure, Inc., d.b.a. Chicago Quantum (SM).