The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Today's Physics News
With the most interesting fact you'll hear today (or, perhaps, this year).
This year's Nobel Prize in Physics was awarded a few days ago to three scientists (Alain Aspect, John F. Clauser and Anton Zeilinger) for their work on "quantum entanglement" - what Albert Einstein called "spooky action at a distance" - whereby measurement of the state of one particle can instantaneously change the measured state of another particle, even if they are separated by light-years.
At my 50th (gulp!) college reunion this year, I went to a lecture, one of the many organized by the reunion staff for interested alumni, by Prof. Stephen Girvin, the Eugene Higgins Professor of Physics at Yale. The subject was quantum computing, something I had been trying, with absolutely no success, to understand for a couple of years.
The lecture was outstanding; though I would hardly say that I came away understanding exactly how quantum computers work, I did at least get a glimpse of the underlying principles, which was considerably more than I had had before.
I found it reassuring to read that in a recent interview, one of the prize-winners, John Clauser, said, "I confess even to this day that I still don't understand quantum mechanics, and I'm not even sure I really know how to use it all that well…"
There was, though, a moment of truly glorious illumination. Girvin was making the following rather interesting point: Most people understand that theoreticians in physics (and the other natural sciences) have no idea of, and give no thought to, any possible practical applications of their work; they're trying to understand how the universe is put together and how it works, simply for the sake of understanding how the universe is put together and how it works.
Girvin's point, though, was that even the engineers - the ones whose job it is to translate theory into actual things - can't possibly foresee the practical consequences of their work, either. He illustrated this point with the example of the transistor.
When John Bardeen, Walter Brattain, and William Shockley invented the first transistor in 1947:

[Photo: the first transistor, built at Bell Labs in 1947]

(shown above, for which they were awarded the Nobel Prize in 1956), they could not possibly, in their wildest dreams, have imagined how ubiquitous and fundamentally transformative their device would become. "How many transistors," Girvin asked the audience, "do you think were produced around the globe in 2020?"
Pause. And then he flashed the answer on the screen: 20 trillion per second.
There was a very audible gasp from the 100 or so people in the audience; I don't think I've ever heard anything quite like it before. It seems impossibly large; it can't possibly be true.
But it is. While I thought it highly unlikely that the Eugene Higgins Professor of Physics at Yale would put forth a number like that without having checked it pretty carefully, I did a little double-checking myself and it checks out. Consider: The most recent iPhone contains around 15 billion transistors. Apple sold around 220 million iPhones in 2021. That means around 100 billion transistors were produced each second just for iPhones. Add the hundreds of millions of other phones, plus all the refrigerators, air conditioners, PCs, automobiles, televisions, etc. etc. etc., every one of which contains many millions or billions of transistors, and you can pretty easily see where the 20 trillion/second figure comes from.
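For anyone who wants to check the arithmetic themselves, here is a quick back-of-the-envelope sketch in Python, using only the iPhone figures quoted above (the seconds-in-a-year value is the only thing I've added):

```python
# Rough check of the "100 billion per second, just for iPhones" estimate.
transistors_per_iphone = 15e9          # ~15 billion transistors in the latest iPhone
iphones_sold_2021 = 220e6              # ~220 million iPhones sold in 2021
seconds_per_year = 365 * 24 * 60 * 60  # ~31.5 million seconds

rate = transistors_per_iphone * iphones_sold_2021 / seconds_per_year
print(f"~{rate:.1e} transistors per second for iPhones alone")  # ~1.0e+11, i.e. ~100 billion
```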
I'm not sure I know what it means, but something tells me it means something important.
I would start with asking "Do they work?". Have they actually created a functional, usable quantum computer, or is it still purely in the realm of theory?
If it's still in the realm of pure theory, then the correct answer to "How does it work?" is "It doesn't."
"If it’s still in the realm of pure theory, then the correct answer to “How does it work?” is “It doesn’t”)."
I believe the standard answer that works across scientific fields, time and space is: "it depends."
Actually, it turns out that they have gotten it to work - in very special conditions (exceedingly low temperatures, for instance) and on very particular problems - and Girvin gave a number of examples of successful applications of quantum computing to actual problems.
It's complicated. I doubt any quantum general-purpose computer will ever be practical, which is likely what you're thinking of, but there are quantum computers that do work. They are still prohibitively expensive, extremely specialized, and niche, so it's not practical yet.
There are quantum computers. You, yourself, can use one owned by IBM. It is not very "powerful," but it is real, not just theory. Larger quantum computers, on the order of 1,000 qubits, also exist.
Thank you for sharing this. The folks in the lab next door were working on theories for quantum computing and tried, unsuccessfully, to explain how they understood it to be. At some point in each conversation I knew I had lost it because the "smile and nod politely" part of the brain kicked in and I ended with the Leonard Nimoy catch all: "fascinating..."
After many hours of experimentation and calculation regarding cobalt-tungsten nanoparticles and their crystal structures, I was still interested in what they were doing and even a little jealous, given the possibilities they always kept talking about. But I have always been an "I need to know how *this* works (preferably right NOW)" person instead of a theorist.
Besides, I've been to gatherings of theorists and they spend WAY too much time talking amongst each other after the shows and presentations are done. Not like say conventions of ACS or MRS folks. #passthebottleletsdance 😀
Reading your post I had the same reaction to the transistor count. That's huge; but then, if I have billions of transistors in my pocket, like a third of humanity does, maybe it isn't so huge after all.
It's always a great time to mention Shockley.
https://nextbillionseconds.com/wp-content/uploads/2021/05/shockley_playboy_1980.pdf
I remember the early days of transistor radio kits. You bought the kit and put the radio together. There were choices to be made about the number of transistors you wanted—among various single-digit alternatives.
Zenith? Heath/Zenith?
I miss those stores.
I forget which physicist [I used his textbook back in the '60s] said after a class, "If you understood what I just said, you weren't paying attention."
FEYNMAN, maybe.
Feynman: "Do not keep saying to yourself, if you can possibly avoid it, 'But how can it be like that? ' because you will get 'down the drain', into a blind alley from which nobody has yet escaped. Nobody knows how it can be like that.”"
Interesting that you list phones, appliances, etc., without considering the mundane PC, where the CPU contains approximately 4 billion transistors, the GPU another 50 billion or so [all approximations], and each gigabyte of memory [DRAM] has at least 1 billion transistors - which in my humble machine totals 32 billion.
So right now I am sitting in front of almost 100 billion transistors, not counting my phone and appliances.
1 GB of memory is 8 Gb + error correction = 10B bits; I no longer know enough about current memory to know how many transistors per bit of DRAM.
My PC memory is not ECC, but these days it's 1 transistor + 1 capacitor per bit. But you are correct that I need to multiply by 8 to convert bytes to bits, which leads to my having about 250 billion transistors.
I do think the iPhone number is 100 billion per second (I did it with a calculator, but rough math below):
(15 * (4*55) * 10^6 * 10^9) / ((15*4) * 60 * 24 * 365)
(Cancel the 15s and the 4s, and cancel the 60 against the 55 since they're close enough. Also, 24*365 = 8760, which is about 10^4.)
10^15 / 10^4 = 10^11 = 100 billion
Was gonna say the same. Back of the envelope 30M seconds in a year divided into 220M phones is 7 phones per second, and 7*15 is 100.
Aargh. My mistake - thanks for the correction. I've fixed it in the OP
When I played with electronics in the 60s, each transistor was a tiny little can with wires attached. Now they are just junctions on chips.
Integrated circuits; then LSIC, large-scale integrated circuits; then VLSIC, very large scale. Then they abandoned that description because they'd be up to 40 V's at this point.
IF the universe came from one whatever, that whatever would have had some nature, right? And anything with a nature has that nature throughout its entirety. Action at a distance seems to argue for just that. This is the thought of Wolfgang Smith, I think.
The way I think of it is, the universe has a finite number of, essentially, random number generators. So it has to reuse them.
When two particles get entangled, they've been assigned the same random number generator. So, though what you actually measure is random for both of them, it's unavoidably related.
It would be interesting to do an experiment that performed measurements on a large number of particles to see if the number of random number generators is countably finite...
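Just to make the analogy concrete, it amounts to something like this toy Python sketch, where two "particles" share a seed. (This is a classical shared-generator picture, which, as other comments note, is not what quantum mechanics actually says; it's only meant to illustrate the analogy.)

```python
import random

def entangled_pair(seed):
    """Two 'particles' that were handed the same random number generator."""
    return random.Random(seed), random.Random(seed)

a, b = entangled_pair(seed=2022)

# Each side's result looks random on its own...
outcome_a = "up" if a.random() < 0.5 else "down"
outcome_b = "up" if b.random() < 0.5 else "down"

# ...but because they share the generator, the two results are always related.
print(outcome_a, outcome_b)  # the two outcomes always match
```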
“whereby measurement of the state of one particle can instantaneously change the measured state of another particle, even if they are separated by light-years.”
It’s a bit weirder than that. You’re familiar with the concept in fiction of “retconning”, right? It means to go back and change what had already been in a work of fiction, to conform to some choice you’re making today.
When you quantum entangle two particles, and then measure one, the other gets retconned to have a state consistent with that measurement. It doesn’t change, it retroactively had that state to begin with. Like I said, weird.
It can't be used for communication, because you can't assign the state of the measurement; you get a random result. All you can do is decide WHAT measurement you're going to do. And if you do the same measurement on both particles, the results will be related; if you do different measurements, unrelated.
So, no way to use it for signaling.
Quantum computers, essentially, work by setting up an artificial quantum network that duplicates the properties of the system you’re trying to simulate, and then seeing what it does. Where the artificial network is set up to be easy to measure. It’s really just a form of analog computing, applied to a domain where numerical calculations are incredibly difficult.
In the classic example, Sherlock Holmes dies falling off a cliff, but his body isn't found. Then there's too much demand for new Sherlock Holmes stories for Doyle to leave him dead. So, retroactively, he had survived and gone into hiding.
Having his body found at the bottom of Reichenbach Falls would have broken the entanglement, and made the retcon impossible. It was only possible because his death was being assumed; it had not yet been measured.
So, quantum entanglement is retconning.
So, quantum entanglement is retconning.
It's a nice analogy. Alternatively, it's "1984" where because we're now at war with Eastasia we had always been at war with Eastasia 🙂
I think it's almost impossible for people to think about it without them imagining, contrary to theory, that there really are hidden variables.
I also achieved one of the great conceptual breakthroughs in the understanding of QM when I realised that Warner Bros' Looney Tunes/Merrie Melodies cartoons show what happens when Planck's constant is very very large.
There could really be hidden variables. There just can't be local hidden variables. Non-local hidden variables are just fine.
Yes - I should have been more precise
There is just no evidence for those. Or any theory of how they would work. Bell's inequality has now been tested experimentally over great astronomical distances. So it is very unclear what such a theory might be.
But what if you limited the state of each particle in a two-particle entanglement? Say that, no matter what, the particles are only able to express a one or a zero. And if one of the two particles is set to one, the other, across the table or across the cosmos, is zero. If you change one, the other always flips to its opposite. I know it might seem old-fashioned at first to just use 1's and 0's again, but couldn't you then use entanglement for communication?
My wonderment has always extended to the possibility of quantum entanglement functioning over space and time. For this flight of fancy I blame Ziggy and the show Quantum Leap. Percentage prognostications in future or past?
'Oh Boy' 🙂
Nope. Took me a while to understand it myself.
Say you are entangling the spin of an electron with the spin of another electron. They're now "opposite", and spin is binary.
You can measure spin along three orthogonal directions, and you'll always get a binary result. It's like light polarization that way.
So, if you measure electron A along the up/down direction and get "up," then measure electron B along the same direction, you'll get "down." But whether you were going to get up or down when you measured A is random; you can't control that, so the measurement you get on B is random too.
The only reason you can tell that they're correlated is that you can compare the random outcomes for pairs, and see that they're opposite. But it communicates no information, because there was no way to insist that particle A come out "up." Knowing that two random outcomes are related doesn't keep them from being random.
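If it helps, here's a little Monte Carlo sketch of that point, using the standard textbook prediction for a spin-singlet pair (the angles and sample size are just illustrative). Measured along the same axis the two results are always opposite, yet B's results on their own are a fair coin flip, so nothing gets communicated:

```python
import math, random

def singlet_stats(theta, n=100_000):
    """Simulate A and B measuring a spin-singlet pair along axes separated by angle theta."""
    opposite = 0
    b_up = 0
    for _ in range(n):
        a = random.choice([+1, -1])                        # A's outcome: a fair coin flip
        anti = random.random() < math.cos(theta / 2) ** 2  # textbook probability of opposite results
        b = -a if anti else a
        opposite += (b == -a)
        b_up += (b == +1)
    return opposite / n, b_up / n

print(singlet_stats(0.0))          # ~ (1.0, 0.5): same axis -> always opposite, B still 50/50
print(singlet_stats(math.pi / 2))  # ~ (0.5, 0.5): perpendicular axes -> outcomes unrelated
```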
Those are interpretations, not endorsed by the Nobel folks. In the textbook interpretations, measuring one particle does not affect another.
A lot of Post's other comments are wrong also. Lots of people understand quantum mechanics. The inventors of the transistor did foresee a lot of practical applications. I think one of them even started a company to make commercial transistors.
I suspect that Post elided the difference between people understanding QM and understanding *how* QM can be the way it is. As you say, lots of people understand QM. I don't know if anyone has ever understood the "how?" (see the Feynman quote above).
Roger,
1) Give us the exact "textbook quotes."
That measurement of electron 1 determines the state of electron 2 is NOT an interpretation. That is an experimentally measured fact for which Aspect et al got the Nobel Prize.
2) The entangled state is a single quantum system. The measurement of one of its elements determines the state of the entire system.
You cannot use the elements again at a distance because you have disentangled the components.
3) Interestingly, had Clauser's professor not died, he could not have gotten the prize as the prize can only be given to 3 persons.
Just read the Nobel citation. It does not endorse the goofy ideas that Post said. The prize is for experimental work.
Your claim was "measuring one particle does not affect another."
You seem to imply that the two electrons are individual quantum systems, when entanglement means that they are a single system.
Quantum mechanics can treat two particles as one system. So can classical mechanics.
If you look closely at that 1947 transistor, you will see that it not only amplifies current but proves that black people are intellectually inferior.
I believe it's only 1/3 of the transistor which does that, proving you only have one leg to stand on.
20 trillion per second?!?
That's insane.
How many machines are being used?
Even if there were 100 million machines (can that be true?!?), that would still be 200,000 per second made by each machine.
Yeah, they're photographically reproduced in the etching process, so they're produced in parallel. Not sequentially. So that's like being shocked at the number of pixels.
But 20 trillion per second?!?
An Intel 7 processor has about 100M transistors in it, and they're all made at the same time, in parallel, via photographic etching; You shine extreme UV or electrons through a very complex high resolution mask, to reproduce features on the chip. It's like being impressed with the total number of pixels being printed in a magazine run.
and multiple processors are made on a single wafer.
Current desktop- and laptop-class CPUs generally have an order of magnitude or two more transistors than that. Apple's M1 Ultra reportedly has 114 billion transistors across its two dice; the A16 system-on-chip used in the iPhone 14 Pro has 16 billion transistors; and AMD's last-generation desktop chips had up to 10 billion.
And those are borderline "commodity" products.
Photolithography is all kinds of cool. Have you heard the tale of the team that started using Shrinky Dinks to shrink the mask?
What you say is true, but there are so many process steps that it takes months to get a wafer through the fab. When chip makers want prototypes of a new chip, they send a "rocket lot" through the fab which skips to the head of each queue and still takes a month.
I texted this story to someone who immediately responded that human neurons are being produced at nearly 1 THz-neuron: 8G human * 1%/year * 100G neuron/head. To say nothing of insects.
So multiply the number of transistors by the clock rate and that gives you the 20 trillion
The part that struck me was the picture of the first transistor and 'Oh, so that is why the symbol for transistor is that!'
I thought it looked like a still.
That's planned obsolescence for you.
Thanks. Now help me understand string theory.
Strings vibrate. There are lots of dimensions but we can't see them. You are in a maze of twisty little passages all alike.
On the general topic of 'amazing physics numbers', here's a factoid from the webcomic xkcd. Take a crack at answering before scrolling down:
Q: How close would an H-bomb explosion have to be, in order to deliver as much energy to your retina as would a supernova exploding at the distance of our sun (93M miles away)?
.
.
.
.
.
.
.
.
A: You can't get that close. If your eyeball were touching the H-bomb when it detonated, your retina would still only receive one-billionth the energy which it would receive from a supernova explosion 93 million miles away.
How much energy is a supernova putting out? If you could somehow block all the normal radiation, you'd be killed by the neutrinos.
To anybody who knows any particle physics at all, that's just mind blowing.
Yes, and digging a bomb shelter wouldn't keep you alive one second longer no matter how deep you dug. In fact, if the supernova happened at your midnight (effectively turning the entire earth into your blast shield), you'd *still* be out of luck. That's because a supernova produces a LOT of neutrinos - so many in fact that even the minuscule fraction of them which would be emitted in exactly your direction from 93,000,000 miles away, *times* the even more[1] minuscule fraction of *those* which ended up interacting with your body, would still dispatch you handily.[2]
[1] In case you doubt this part: Your typical neutrino is really, *really* interaction-averse. You thought that kid who sat next to you in 3rd grade was shy? That's nothing compared to a neutrino. Your typical neutrino would on average pass through *over a light-year* of lead before interacting with just one lead atom. So in our supernova scenario, while almost all of the neutrinos would pass through all of earth, the ones which encountered your 20cm-thick sleeping body would be *extremely* unlikely to interact with it. This extremely small likelihood would alas be overwhelmed by the sheer number of neutrinos involved, hence the dismal outcome.
[2] Maybe I shouldn't have left this to a footnote, but - our sun will not be going supernova, ever. Not to worry about that particular scenario:-)
That just reminded me of things I learned a long time ago... it's too small, right? It'll dwindle out and go white dwarf?
Right, too low of mass to go supernova. It's going to instead eventually become a red giant, and eventually a white dwarf.
But we've got a few billion years left before that happens. Mind you, the Sun IS gradually getting brighter, and we're going to have to do something about that soon.
I mean, we assume that, but such an experiment has never been done where one particle is light-years away. (We don't have the capacity to measure individual particles from that far away, and none of our instruments on our various probes have attained a distance of light-years yet even if they had the capacity to run such an experiment and transmit the data.) So I don't know that we can say that for sure.
It could end up something like Newton trying to predict what would happen when something attained a speed of twice the speed of light; he couldn't know that his laws would break at those speeds. If quantum entanglement breaks at interstellar distances, we wouldn't currently know it.
It’s also important to acknowledge that those who actually design devices which take advantage of quantum physics or other esoteric scientific principles do not necessarily understand how they work - we simply design based upon their measurable characteristics and performance.
Take, for example, the tunnel diode. It is a simple device through which electrical current appears to flow.
However, scientists can show that, classically, electrons shouldn't be able to cross the semiconductor barrier within the device.
Rather, in a spooky quantum sense, zillions simply cease to exist on one side of the junction, and suddenly begin to exist on the other side.
I had no idea exactly how or why this could possibly happen, but that didn't stop me from designing a novel circuit to take advantage of this very property, which mysteriously provided a negative resistance.
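For the curious, the current-voltage behavior looks roughly like this Python sketch, which uses a common empirical approximation for a tunnel diode; the parameter values here are illustrative, not from any particular device. The interesting part is the stretch where the current *falls* as the voltage rises - the negative resistance I mean:

```python
import math

def tunnel_diode_current(v, ip=1e-3, vp=0.1, i_s=1e-9, vt=0.026):
    """Approximate tunnel-diode current: a tunneling hump plus the ordinary diode term."""
    tunneling = ip * (v / vp) * math.exp(1 - v / vp)  # peaks at v = vp, then falls off
    thermal = i_s * (math.exp(v / vt) - 1)            # normal exponential diode current
    return tunneling + thermal

for v in (0.05, 0.10, 0.20, 0.30, 0.45):
    print(f"V = {v:.2f} V  ->  I = {tunnel_diode_current(v) * 1e3:.2f} mA")
# Current climbs to a peak near 0.10 V, then drops toward a "valley" -
# that falling region is the negative differential resistance.
```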
By the way, when I first submitted the concept in a paper for publication, it was rejected.
The editor said he didn’t want to publish even well-thought-out speculation about something. Come back when you have something concrete to write about, he said.
It’s too bad that law reviews don’t try adopting something like this policy.
Instead of publishing so many wild legal theories and sometimes almost stream-of-consciousness thinking, perhaps they should begin to seek (if not demand) something practical before they publish.