Quantum computing has taken another fundamental step. The biggest obstacle to exploiting the peculiarities of subatomic particles, which exponentially increase processing possibilities, lies in the errors generated when manipulating and measuring qubits (the minimum quantum unit of information). Any interaction with them degrades them and nullifies the advantage. “This limit can be exceeded thanks to error correction formulas, but these techniques require a significant increase in the number of qubits,” explains Alberto Casas, a research professor at the Institute of Theoretical Physics (CSIC-UAM), in The Quantum Revolution (Ediciones B, 2022). This limitation has just been overcome by the scientist Hartmut Neven and more than a hundred of his colleagues at Google Quantum AI, who present, in a paper published in Nature, “a demonstration of quantum computing in which the error decreases as the size of the system increases, allowing failure rates low enough to run useful quantum algorithms”. It opens the door to robust quantum computing without depending on the development of nearly impossible technologies. “It is a milestone in our journey to build a useful quantum computer, a necessary step that any mature computing technology has had to go through,” says Neven.
If a current supercomputer can perform a staggering number of operations with bits (IBM’s Summit is capable of processing more than 200,000 trillion calculations per second), a quantum one can go far beyond. This power is based on superposition, a peculiarity of subatomic particles that allows them to be in two states, or in any combination of both. A bit (the smallest unit in classical computing) can only take a binary value: 0 or 1. The qubit, on the other hand, can be in either of these two states or in both at the same time. In this way, two bits can store only one of four values at a time, while two qubits can hold all four at once, and ten qubits can represent 1,024 simultaneous states, so the calculation capacity expands exponentially with each added qubit.
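The exponential growth described above can be made concrete with a short sketch (illustrative only, not from the article): the state of an n-qubit register is described by 2^n complex amplitudes, which is why each added qubit doubles the number of basis states held in superposition.

```python
import math

def state_space_size(n_qubits: int) -> int:
    """Number of simultaneous basis states an n-qubit register spans."""
    return 2 ** n_qubits

def uniform_amplitude(n_qubits: int) -> float:
    """Amplitude of each basis state in an equal superposition
    (a Hadamard gate applied to every qubit)."""
    return 1 / math.sqrt(state_space_size(n_qubits))

for n in (1, 2, 10):
    print(f"{n} qubit(s): {state_space_size(n)} simultaneous states")
# 1 qubit(s): 2 simultaneous states
# 2 qubit(s): 4 simultaneous states
# 10 qubit(s): 1024 simultaneous states
```

The same doubling is what makes classical simulation of large quantum systems intractable: 100 qubits would already require 2^100 amplitudes.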
However, when trying to extract the stored information, the system suffers a phenomenon known as decoherence: the degradation of these quantum superpositions until they collapse into classical states. This effect is caused by any interaction with the environment: temperature, electromagnetism, vibrations… Any interference generates noise and reduces to microseconds the time during which the superpositions that multiply the computing capacity can be maintained.
One way around these limitations is to build computers isolated to unprecedented degrees, cooled to near absolute zero (-273 degrees Celsius), and to gradually expand their capacity. IBM’s Osprey processor has reached 433 qubits, and the company plans to surpass 4,000 by 2025 with its Kookaburra processor. “Since 1990, attempts have been made to organize larger and larger sets of physical qubits into logical ones in order to achieve lower error rates. But until now the opposite was the case, because more qubits mean more gates, and more operations that can throw an error,” explains Neven.
Error correction is the only known way to make useful and large-scale quantum computers.
Julian Kelly, Researcher on the Google Quantum AI Team
In this way, the technological race to build increasingly capable computers, devices that offer longer coherence times and a net improvement over classical methods, grows ever more complex and requires a complementary path. “The most important technology for the future of quantum computing is error correction; it is the only known way to make useful and large-scale quantum computers,” says Julian Kelly, a researcher on the Google team.
And this is the breakthrough presented this Wednesday: “a surface-code logical qubit can reduce error rates as system size increases,” meaning that robust quantum computing power can be increased without relying on machines that push the limits of available technology.
The surface-code logical qubit is a set of physical qubits grouped and controlled in such a way that, once entangled (acting on one particle instantly affects the other, even if they are separated by great distances), they act as stabilizers of the system, guarding against imperfections in states, materials or measurements.
More work is needed to achieve the logical error rates required for an effective computation, but this research demonstrates the fundamental requirement for further development.
Hartmut Neven, Google researcher and lead author of the work published in ‘Nature’
As the lead researcher explains, “the ensemble has to be controlled by means of so-called measure qubits, which detect errors in a clever, indirect way, so as not to destroy the quantum superposition, and act accordingly.” “We can’t just measure where errors occur. If we identify not only where they occur, but also which data qubits had errors and of what type, we can decode and recover the quantum information,” Kelly adds.
“Surface code is,” the researchers explain, “a highly fault-tolerant and robust type of quantum computing.” Today’s systems throw errors at a rate of roughly one in a thousand. This may seem small, but practical applications of quantum computing need to reduce it much further, down to one in a million, Neven points out. This is the path taken by Google and, according to the scientist, it “demonstrates that error correction works and informs us of everything we need to know about this system.”
For the demonstration published in Nature, based on the third generation of Google’s Sycamore processor, Hartmut Neven’s team created a superconducting quantum processor with 72 qubits and tested it with two surface codes, one larger than the other. The larger one (49 physical qubits) yielded a lower failure rate (2.914% logical error per cycle) than the smaller one (3.028% over 17 physical qubits). “More work is needed to achieve the logical error rates required for an effective computation, but this research demonstrates the fundamental requirement for future development,” says Neven.
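The significance of those two figures can be sketched numerically. In the surface-code literature, the logical error per cycle is expected to shrink by a constant factor, often written Λ, each time the code distance grows by two; applying that standard model to the article’s two data points is our illustration, not a claim from the paper.

```python
# Logical error rates per cycle reported in the article:
eps_d3 = 0.03028   # smaller code, 17 physical qubits (distance 3)
eps_d5 = 0.02914   # larger code, 49 physical qubits (distance 5)

# Suppression factor per distance step (assumed model: eps halves
# geometrically by Lambda for every +2 in code distance).
Lambda = eps_d3 / eps_d5
print(f"Lambda ≈ {Lambda:.3f}")  # barely above 1: errors shrink, but slowly

def projected_error(d):
    """Project the logical error at odd distance d >= 3 under the
    model eps_d = eps_3 / Lambda**((d - 3) / 2). Hypothetical
    extrapolation, not a measured value."""
    return eps_d3 / Lambda ** ((d - 3) / 2)
```

Because Λ is only slightly above 1 here, the improvement per added distance is marginal; reaching the one-in-a-million rates the article mentions requires driving physical error rates well below the threshold so that Λ grows substantially.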
Google’s line of research is based on the premise put forth by physicist Richard Feynman, in 1981, when he stated: “Nature is quantum, damn it, so if you want to simulate it, it better be a quantum simulation.” In this way, Feynman limited the possibilities of conventional computing to unravel the quantum world and urged simulating this second reality to achieve it.
From that proposal to compute with quantum physics, numerous applications have emerged, as the Google researchers recall, including factorization (key in cryptography), machine learning and quantum chemistry. But these still require thousands of operations to overcome the still-high error rate. The scientists from the North American multinational believe they have opened the door so that “error correction can exponentially suppress the rates of operational failures in a quantum processor.”
Kelly admits that it is a necessary and critical result, but not sufficient. “The results still don’t show performance scaling to the level needed to build an error-free machine. But it’s really a key scientific milestone, because it shows that error correction finally works and provides us with key learnings as we move toward our next milestone.”
Nor does it halt the race to build computers with more than 100,000 qubits, projects on which, in addition to Google, companies such as IBM, Intel, Microsoft and Rigetti are working. Error correction is complementary. “We’re tackling what we think is most difficult first, which is basically taking quantum information and protecting it from the environment. We are fundamentally trying to use quantum error correction to preserve coherence. The key challenge is to demonstrate that this error correction works at scale, so that quantum information can be taken and protected from the environment,” explains Julian Kelly.