The part about novel qubits is a very common sentiment from referees these days and I disagree with it strongly.

A thread: https://twitter.com/KikeSolanoPhys/status/1315334679185240066
The first reason is that it is unrealistic to expect a newly developed qubit, often made by a team of one or two students, to perform at the level of a qubit that has been optimized for over a decade by many academic groups and now even large industrial efforts.
The current champion of superconducting qubits is the "transmon" qubit. When we first made and measured one, it had a lifetime 10x worse than our charge qubits. It took us 2 years to figure out why and perform a 2 qubit gate, and (the whole field) >10yrs to optimize it.
And as @JoshKoomz mentioned, even there we are still a long way behind ion traps on single-qubit performance, though I think SC qubits are at least as promising an approach overall.
It is important to publish earlier results, so that we can actually figure out how to make new ideas better and so that even the qubits that never become widely adopted inform all others.
But there is another reason we should be wary of requiring a novel qubit to satisfy a list of requirements, such as being able to do a high-fidelity two-qubit gate. We may not want it to!
For example, a novel qubit that could store and retrieve a quantum bit with negligible loss of coherence after a long time, but could not manipulate it, would be very valuable as a quantum memory. In fact, it would be even better if symmetry prevented it from doing so.
This is not meant to single out @KikeSolanoPhys, who is expressing a very common viewpoint, but to say that I believe we should be very cautious as a community about imposing arbitrary metrics on new ideas, since we are still learning what we want from quantum devices.