Welcome, everyone. Last talk of the workshop. We're very happy to end with Lucas, who is bringing us statistical mechanics with LDPC codes. Okay. Yeah. So, my pleasure to be here. I'll probably tell you a story that's a bit more whimsical than many of the other speakers'. My talk will essentially be about trying to connect properties of error-correcting codes with phases of matter, the physics of phases both in their ground states and at finite temperature, and to explore this interface. Okay, let me begin by acknowledging my collaborators: my former student Yifan, who's now a postdoc at Maryland, my current students Jing Kong and Chow, and my colleague Rahul, along with our funding. Okay. So let me first start off at a high level. Error-correcting codes protect logical bits of information by encoding them in some redundant way among physical bits. The simplest example of this is the classical repetition code, which corresponds to just repeating a message many times. In this context, if we repeat the message N times, we'll say N is the number of physical bits. K will denote the number of logical bits, which in this case is one, because we only have two different messages. And the code distance is the number of bit flips to get from one message to the other, which is N for this code. So this is the simplest realization of a classical error-correcting code. And now let's also talk about a very simple phase of matter. Thinking about a many-body quantum system, albeit a very simple one, we can consider the one-dimensional Ising model. This Hamiltonian has two ground states, all spins up and all spins down, or if we like, all zeros and all ones. And those are, of course, precisely the codewords of the repetition code. Each of these ferromagnetic interactions would be called a parity check in the jargon of this talk.
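As a minimal sketch of the repetition code described above: one logical bit is spread over N physical bits, and a majority vote corrects any error of weight below N/2 (the function names here are illustrative, not from the talk).

```python
import numpy as np

def encode_repetition(bit: int, n: int) -> np.ndarray:
    """Encode one logical bit into n physical bits by repetition: [n, k=1, d=n]."""
    return np.full(n, bit, dtype=int)

def decode_repetition(word: np.ndarray) -> int:
    """Majority vote: corrects any pattern of fewer than n/2 bit flips."""
    return int(2 * word.sum() > len(word))

n = 7
codeword = encode_repetition(1, n)
noisy = codeword.copy()
noisy[[0, 3, 5]] ^= 1            # flip 3 of 7 bits: still below d/2
assert decode_repetition(noisy) == 1
```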
And if we view this repetition code as a many-body system, it's a simple system, because all of the parity checks commute with each other, so this is an exactly solvable Hamiltonian. And all of the parity checks also commute with, in this case, the product of all Pauli Xs. From the coding perspective, this product over all Pauli Xs represents the logical operation: it takes us from the all-zero state to the all-one state. It changes the message. From a physics perspective, this would simply be the Z2 symmetry of the model. Okay. And to say some of these same things from a physics angle, we would say that the ground states of this Hamiltonian H0 spontaneously break the Z2 symmetry corresponding to the logical operator, and the ground states of this 1D Ising model are in a ferromagnetic phase. So now that we have this ferromagnet, we can ask a number of questions about the robustness, the stability, of this phase to various kinds of perturbations. One we might ask, from a stat-mech perspective, is whether this phase persists to finite temperature. And as we probably all know, it does not in one dimension. At any finite temperature, we will not find our Ising model in a ferromagnetic phase; it will be in a disordered, paramagnetic phase. And why is that? Well, imagine creating an error somewhere on one side of the chain. This is a very low-energy state. It could be a very large error, right? But there's only one flipped parity check, only one antiferromagnetic bond. And that bond can freely diffuse along the chain under the dynamics. From an error-correcting perspective, if we're trying to locally decode this error, we don't know which way we should grow the domain wall, so to speak. Do we flip this spin or that one?
And so this system is disordered at any finite temperature, and it's lousy at local self-correction. Now, of course, from a purely classical information-theoretic perspective, this is a very undesirable thing. We would like memory that is self-correcting. If I'm storing information in this kind of ancient object, I would like to be able to unplug it from any power outlet, plug it back in at a later date, and recover the message that I had stored. Okay. So how do we get thermally stable memory for classical information? Well—sorry, was there a comment? Okay—if we want to make this repetition code stable to thermal fluctuations, we can just interpret the repetition codewords as the ground states of a two-dimensional Ising model instead. The difference between the 1D and the 2D Ising model is not in the choice of code; in some respect, you could argue it's the same code. What we've done is add redundant parity checks to the Hamiltonian. So now, if we consider all of the bonds on this 2D square lattice, we know that many of these parity checks are redundant, because the product of the four of them around a plaquette always gives plus one. But we're doing enough checking this time that, I claim, we can actually locally correct the code at finite temperature. Why is that? Again, let's look at the same question of how an error cluster would grow. And we notice that this error cluster will essentially flip all of the parity checks along its boundary. Okay. And then there's a nice argument due to Peierls: as this error cluster gets larger and larger, it picks up a domain wall of length L. And we can ask how many such domain walls there could be, say passing through a specific point, and you can bound that by 3 to the L—roughly, the number of ways the domain wall can turn at each step.
But of course, the energy penalty of that domain wall is proportional to its length as well. Okay. So at low enough temperature, or large enough beta, it's very unlikely in a thermal ensemble to find a large cluster of errors. And in order to induce a logical error, we have to have a percolating error cluster that stretches through the whole system. So that tells us that we're not going to find these dangerous error clusters, and this system can self-correct. Okay. And from the physics perspective, this kind of self-correction under a simple local decoder is just associated with the stability of the ferromagnet at finite temperature. And maybe just one more fact about this model: under certain decoders at least, say a Gibbs-sampler decoder, the decodability transition is precisely the thermodynamic transition. Okay. That's one notion of stability for this simple classical code. Another one we might ask about is the ground-state stability itself. So now let's take the Hamiltonian H0 for the 1D Ising model, and ask: if I add some perturbation epsilon V to the Hamiltonian, is the ground state still a ferromagnet? Do I still have two approximately degenerate ground states or not? For general V, the answer is no. And the perturbation that breaks the ferromagnet is essentially a longitudinal field. In the error-correcting-code context, this makes sense, because this perturbation is essentially checking which message we've stored, and it picks out one of them as a lower-energy state. Okay. So this will turn one of our codewords into a false vacuum, a highly excited state. And for epsilon of order one over N, we've changed the degeneracy. We would say that this is an unstable phase of matter. But notice that this perturbation does not commute with the symmetry.
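The Peierls argument above can be turned into a one-line bound. As a toy sketch (assuming, as a convention, a wall of length L costs energy 2L and is counted by at most 3^L configurations through a fixed point), the total thermal weight of long walls is a geometric series that converges at low temperature:

```python
import math

def peierls_tail(beta: float, L0: int) -> float:
    """Upper bound on the thermal weight of domain walls of length >= L0
    through a fixed point: sum_{L >= L0} 3^L e^{-2*beta*L}, a geometric
    series with ratio r = 3 e^{-2 beta}. Converges only for beta > ln(3)/2."""
    r = 3 * math.exp(-2 * beta)
    assert r < 1, "bound only converges at low enough temperature"
    return r ** L0 / (1 - r)

# Large error clusters are exponentially suppressed at low temperature:
assert peierls_tail(2.0, 10) < peierls_tail(2.0, 5) < 1e-3
```

The crude threshold beta > ln(3)/2 is where this particular bound stops working, not the exact critical point of the 2D Ising model.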
It tells apart the two messages we were trying to store, and this is why you might naively think that this perturbation is indeed very dangerous. On the other hand, if we pick perturbations that commute with the symmetry—the product of all Xs—such as single-site Pauli Xs instead, now we might actually expect that this phase is robust. And indeed, this is just the transverse-field Ising model, and it's a ferromagnet up to the quantum critical point at epsilon equals one. Okay. So that seems nice enough. It seems like maybe if the perturbation commutes with the symmetry, the phase is stable, but that's not true, because it turns out that long-range symmetric perturbations also destabilize the Ising phase. This is not an original observation—it's an old story—but there's a nice recent paper on it. Okay. So hopefully this is a friendly introduction, and nothing here is too unfamiliar just yet. But I'd like to recap what we've learned. Sometimes the ferromagnet is stable only at zero temperature and doesn't persist to any finite temperature. Sometimes the ferromagnet does persist to finite temperature. Some kinds of perturbations destabilize the ferromagnet and others don't. And the question that I—well, sorry. Do you want to remind us why long-range symmetric perturbations are a problem? Sure. The high-level picture: what's a long-range symmetric perturbation? It could be Z1 ZN. And the problem is that this perturbation locally looks much more like the longitudinal field. As you said, it looks locally as if it's breaking the symmetry. Yeah, you could imagine something where, over most of the chain, it prefers one codeword in some sense, and then this destabilizes the phase. Okay. Right.
So basically, the question that we'd like to ask is: are these stories that we've told about the Ising model just illustrations of more general physics of error-correcting codes? Can we learn something about the phases of matter that might be defined by other kinds of codes, classical codes or quantum codes? Okay. So this is the motivation for the rest of the story. First, I want to talk a little bit about the statistical mechanics of classical codes. We'll see there are many parallels with the later story for quantum codes, but we'll move beyond the Ising model one step at a time. To generalize to other codes, it's helpful to first reinterpret the Ising model in a more abstract language. The repetition code is an example of what's called a low-density parity-check (LDPC) code. So rather than working with Pauli Z, which has eigenvalues plus or minus one, we work with bit strings, where each bit x can take the value zero or one. And we think of our Hamiltonian as essentially the Hamming weight of the product of a parity-check matrix H and the bit string x. Okay. Now, what this parity-check matrix is doing is just asking, do the first bit and the second bit agree or not? One plus one and zero plus zero are both zero, and zero plus one is one, mod 2. In this more abstract language, there are a number of nice things to notice. Firstly, our codewords are just the right null vectors of this matrix. For this repetition code, we have one nontrivial right null vector, which is the all-ones string. Zero is always a codeword; that's a linear-algebra fact. The dimension of this null space is just the number of logical bits that we're storing. And the distance of the code is basically the smallest nonzero Hamming weight of one of those codewords. Okay. So we have this more abstract mathematical framework.
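The abstract framework above is easy to make concrete. A minimal sketch (with a hand-rolled GF(2) rank routine, since NumPy's rank is over the reals): the energy is the Hamming weight of Hx mod 2, and K is the dimension of the right null space.

```python
import numpy as np

def energy(H: np.ndarray, x: np.ndarray) -> int:
    """Number of violated parity checks: the Hamming weight of H x (mod 2)."""
    return int(((H @ x) % 2).sum())

def gf2_rank(M: np.ndarray) -> int:
    """Rank over GF(2) by Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    for col in range(M.shape[1]):
        pivots = np.nonzero(M[rank:, col])[0]
        if len(pivots) == 0:
            continue
        M[[rank, rank + pivots[0]]] = M[[rank + pivots[0], rank]]
        for r in np.nonzero(M[:, col])[0]:
            if r != rank:
                M[r] ^= M[rank]
        rank += 1
        if rank == M.shape[0]:
            break
    return rank

# Parity-check matrix of the length-4 repetition code (open chain):
H = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]])
k = H.shape[1] - gf2_rank(H)            # dimension of the right null space
assert k == 1                           # one logical bit: 0000 and 1111
assert energy(H, np.array([1, 1, 1, 1])) == 0   # codeword
assert energy(H, np.array([1, 0, 0, 0])) == 1   # one domain wall
```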
Let's now ask—from a coding perspective, the repetition code is a kind of bad code. It's inefficient; it costs a lot of resources to store one message. Can we find better codes? And from a physics perspective, will these better codes be more interesting as phases of matter? Okay. So what's an example of an interesting code that we could look at? If we started with the 1D Ising model, maybe one of the most different things we could consider is a random code. And a random LDPC code—or, slightly more specifically, a random (Delta_b, Delta_c) code—will correspond to choosing this parity-check matrix H randomly, subject to the constraint that each parity check, each row of H, has exactly Delta_c ones in it, and each bit is involved in Delta_b checks. So the LDPC property amounts to the fact that we have sparse rows and sparse columns of this parity-check matrix H. Okay. And when you construct one of these random codes, you can show that they live on an expander graph, or in more colloquial jargon, they live in infinite dimensions. And they also turn out to be asymptotically good codes: with high probability, the code distance, as well as the number of logical bits of information stored, are both proportional to N, the number of physical bits. Okay. So it's very different from the Ising model. And one other technical feature that I'm just going to state for you and not attempt to derive is that these codes, with high probability, for most parameters, have what's called linear confinement. What this means is: if I start at one of the codewords—so I have energy zero, depending on your normalization; I'm in the ground state—and I start flipping a small number of bits, creating an error, I can ask how much energy is associated with this new configuration.
And if I have linear confinement, this energy will grow at least linearly in the number of bits that I've flipped, up until some energy barrier. And for these random codes, this barrier is of order N. So you can flip a finite fraction of bits, and you always pay a constant energy per bit that you've flipped. Yeah? How is your Hamming distance—it looks here like it's a continuous thing? You're very good at drawing in this space. Yeah—I'm trying to take a high-dimensional space and draw a cartoon in one dimension. But think of it as: I start at this codeword, and as I flip bits, the energy goes up linearly. I borrowed this figure from Yifan. Anyway, just read these wavy curves as lines—maybe Adobe Illustrator just makes everything look wavy. Effectively, we have a linear barrier, and this distance is of order alpha N. One other feature that I'll note and come back to very shortly is that for these random codes, we not only have codewords as deep energy wells, but we also have a lot of fake codewords. You can, for example, have a single flipped check: a very low-energy state that is very far from all of the codewords, and it itself sits behind a very deep barrier, which follows from the linear confinement property as well. Yeah? Given only the matrix H, how did you define the energy? Oh, yes—the energy is essentially the number of parity checks violated in each configuration. It's formally defined here. Yeah? Is this linear confinement the same thing as the expander property of the graph? No, it's a stronger property. There are codes that live on expanders that don't have linear confinement. Okay. All right. So to translate back to physics for a moment: from the physics perspective, we have a Hamiltonian H0.
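A toy sketch of how such a random (Delta_b, Delta_c) parity-check matrix can be sampled, via a configuration-model matching of edge "sockets" (this is an illustrative construction, not the talk's; repeated edges cancel mod 2, so a careful version would resample until the matrix is exactly biregular):

```python
import numpy as np

def random_ldpc_check_matrix(n: int, delta_b: int, delta_c: int,
                             seed: int = 0) -> np.ndarray:
    """Configuration-model sketch of a random (delta_b, delta_c) LDPC code:
    each of the n bits gets delta_b edge sockets, each of the
    m = n*delta_b/delta_c checks gets delta_c sockets, and the sockets
    are matched by a random permutation."""
    assert (n * delta_b) % delta_c == 0
    m = n * delta_b // delta_c
    rng = np.random.default_rng(seed)
    bit_sockets = np.repeat(np.arange(n), delta_b)
    check_sockets = np.repeat(np.arange(m), delta_c)
    rng.shuffle(bit_sockets)
    H = np.zeros((m, n), dtype=int)
    for check, bit in zip(check_sockets, bit_sockets):
        H[check, bit] ^= 1        # XOR: double edges cancel mod 2
    return H

H = random_ldpc_check_matrix(n=12, delta_b=3, delta_c=4)
assert H.shape == (9, 12)         # m = 12 * 3 / 4 = 9 checks
```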
It's a sum over all of the parity checks in the system of the product of Pauli Zs on each parity check. This is a few-body Hamiltonian, and it's frustration-free: the ground states are simultaneous plus-one eigenstates of all of these operators. This is a very appealing property to a mathematical physicist. This Hamiltonian has a symmetry group generated by products of Pauli Xs on the logical codewords of the classical code; there's some nice work from Tibor and Vedika last year about these symmetries, gauging, and so on. And okay, now I've introduced enough about these classical LDPC codes. Let's start asking about their stability as phases. We'll start by talking about thermal stability; the ground-state stability will come with an asterisk a bit later in the talk. For thermal stability, we might expect that these are actually not stable thermally, because of the following argument. With high probability, these random codes have no redundant checks, and therefore, in one line, I can calculate for you their partition function. Basically, all possible patterns of flipped parity checks are achievable, and for every pattern of flipped parity checks there are two to the K different bit strings with that syndrome, one for each logical sector. So the partition function matches that of the 1D Ising model up to some rescaling, and clearly there's a completely featureless thermodynamic phase diagram, as measured by the textbook free energy. Okay. Sorry—by no redundant checks, do you mean that the parity-check matrix has no left null vectors? Yes: if we have M parity checks, K is N minus M. Yeah. Okay. Right. So thermodynamically, this looks like the 1D Ising model. But dynamically, we have the following feature.
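The one-line partition-function calculation above can be checked by brute force on a tiny code. A sketch, assuming H has full row rank (no redundant checks), so every syndrome is hit by exactly 2^K bit strings and Z factorizes over checks:

```python
import itertools
import numpy as np

# Claim: with no redundant checks, Z = sum_x e^{-beta * |H x|}
#                                     = 2^K * (1 + e^{-beta})^M.
H = np.array([[1, 1, 0],
              [0, 1, 1]])          # length-3 repetition code: M=2, K=1
M, N = H.shape
beta = 0.7

Z_brute = sum(np.exp(-beta * ((H @ np.array(x)) % 2).sum())
              for x in itertools.product([0, 1], repeat=N))
Z_formula = 2 ** (N - M) * (1 + np.exp(-beta)) ** M
assert abs(Z_brute - Z_formula) < 1e-12
```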
So the memory time under any local dynamics can be bounded below, essentially by the Markov-chain bottleneck theorem. It's bounded by essentially the inverse probability of finding the system at the top of the energy barrier in the landscape, and that probability can be bounded by the number of states in the system times a Boltzmann factor from the linear-confinement energy barrier at the top. So going back to this picture: we ask what the minimal possible energy at the bottleneck is—that's this factor—and this is the maximum number of states that can be in the bottleneck, which is the number of states in the system. And clearly, when beta is sufficiently large, this memory time diverges exponentially. Okay. So these classical codes are thermally self-correcting. And it seems like there's a paradox here, so what's the resolution? Well, remember that the energy landscape of these codes, in addition to having deep wells around every codeword, also has all of these fake minima, where a very small number of parity checks are violated. These look locally like codewords, because they are local minima, but they're far from any actual codeword. The thermodynamics is essentially sampling over this whole landscape, and the density of these fake codewords is precisely what makes the partition function simple. But error correction is really a local property. If we're interested in the dynamics of this code under local self-correction, we're really just interested in what the landscape looks like near a codeword; we don't care about the rest of the energy landscape. Yeah? So is the memory-time divergence saying that if you start near a codeword, then even as time goes to infinity, it will never sample the full Gibbs ensemble? Yes—it's like the 2D Ising model. If you start near this codeword, you have to wait for a time that diverges in the thermodynamic limit to find yourself near any other codeword.
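The bottleneck bound above is a cartoon, but a simple one. A sketch under stated assumptions (an extensive barrier of order N from linear confinement, at most 2^N states at the bottleneck; the function name and the unit barrier-per-bit are illustrative, not the talk's notation):

```python
import math

def memory_time_lower_bound(beta: float, n: int,
                            barrier_per_bit: float = 1.0) -> float:
    """Bottleneck-theorem cartoon: tau >~ e^{beta * barrier} / (# states),
    with barrier ~ barrier_per_bit * n and at most 2^n bottleneck states.
    Diverges with n once beta > ln(2) / barrier_per_bit."""
    return math.exp(beta * barrier_per_bit * n) / 2 ** n

# At beta = 1 > ln 2, the bound grows exponentially with system size:
assert memory_time_lower_bound(1.0, 200) > memory_time_lower_bound(1.0, 100) > 1
```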
So you just kind of rattle around. There's some finite density of errors, but they form disconnected clusters that are very easy to correct. Okay. Yeah? What does that mean in physics? So the physical consequence of this is basically that the ergodicity-breaking transition in the dynamics is completely invisible in the free energy. For many spin glasses, like the SK model, you can detect a transition in the free energy to a spin-glass phase. It turns out that model also has a property like this, where there's a range of temperatures in which ergodicity is broken in a way you can't detect. But this is a really extreme and exactly solvable model where there's just no thermodynamic transition at all, yet there is this ergodicity breaking. Unlike the SK model, this one has a solution that fits on one slide. So that's kind of cool. Yeah? From the error-correction point of view—maybe just to rephrase and see if my understanding is correct—is this saying that if I use projection to prepare, say, a logical-zero initial state... Yes. ...then that is not easy to do, but this is still a single-shot code, right? That's effectively what it would mean. Yeah—I mean, okay, for classical codes, the state preparation is fairly simple. Even if it's done imperfectly, you would just end up somewhere in this well. And then you just locally Gibbs sample, and you'll stay in that well for a very long time. Oh yeah. Okay. So classically, we see this intriguing separation between the thermodynamic and the dynamic properties of these classical codes. And we could now ask about the same classical code, but subject to quantum dynamics. What do I mean by that? Let's take the Hamiltonian H0, and I'll add to it some completely arbitrary few-body perturbation. So now I can have Pauli X as well as Pauli Z terms.
For technical reasons, at the very end, I add just a tiny amount of disorder to the system; that's just to prove something in five minutes. And let's ask what dynamics with this Hamiltonian looks like, and in particular, what the typical eigenstates of this Hamiltonian look like. Okay. So again, we have this picture of the energy landscape of H0, with very tall energy barriers between codewords—or, if we like, fake codewords, depending on the code. And if you solve the Schrodinger equation—maybe not term by term, but nevertheless—you can prove that any low-energy eigenstate has exponentially small weight in this blue, high-energy region that connects the codewords. Okay. So basically, all of the weight in the eigenstate is trapped in the red regions, very close to codewords or these fake minima. And if I just focus on two of these wells for a moment, we might ask: can an eigenstate of the Hamiltonian really delocalize itself between these two wells? Given that it's so suppressed at the top of the barrier, you can show that you need an almost perfect resonance between an energy level in this well and one in that well; otherwise, the eigenstates will just stay localized. And you can think of this with a two-by-two matrix cartoon. I have two energy levels, E1 and E2, associated with two of the wells, and there's some exponentially suppressed tunneling between them. Okay. And basically, if E1 minus E2 is large compared to the tunneling, then the wave functions just don't hybridize. Okay. So this seems promising as a candidate model for genuine many-body eigenstate localization. All we need to do is check what the probability is of finding one of these resonances between E1 and E2.
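The two-by-two cartoon above can be written down directly. A minimal sketch: diagonalize [[E1, t], [t, E2]] and look at how much each eigenvector concentrates on a single well; off resonance the states stay localized, and only a near-perfect resonance hybridizes them.

```python
import numpy as np

def well_weights(E1: float, E2: float, t: float) -> np.ndarray:
    """Two-well toy model: diagonal energies E1, E2 and an exponentially
    small tunneling matrix element t. Returns the larger |component|^2 of
    each eigenvector, i.e. how strongly each eigenstate localizes."""
    H = np.array([[E1, t], [t, E2]])
    _, vecs = np.linalg.eigh(H)
    return np.max(vecs ** 2, axis=0)

# Off resonance (|E1 - E2| >> t): eigenstates localize in one well...
assert np.all(well_weights(0.0, 1.0, 1e-6) > 0.999)
# ...but on a near-perfect resonance they hybridize 50/50:
assert np.all(np.isclose(well_weights(0.0, 0.0, 1e-6), 0.5))
```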
And, okay, there are 4 to the N possible pairs of energy levels to look at, and we know how suppressed the tunneling between wells is. And the role of the randomness that we added to the Hamiltonian is basically to show that, with probability one in the thermodynamic limit, when you draw from that random ensemble of Hamiltonians, you find no resonance. Okay. And without any resonances, every single low-energy many-body eigenstate will be localized in a single well. Okay. And this represents the first rigorous proof that many-body eigenstates can be localized. So this is another, I think, very cool property of these classical codes. All right. So that's my classical story. Now let's turn to the phases of quantum codes. First, what do I mean by a quantum LDPC code? Well, I essentially take two classical LDPC codes, HX and HZ, and if they obey this compatibility condition, then this is a quantum LDPC code. Physically, what does this mean? Well, if I write the code as a Hamiltonian, HX essentially tells me that there are products of Pauli Xs in the Hamiltonian, and for each HZ check, I have a product of Pauli Zs. The compatibility condition tells me that any pair of these terms mutually commute. So again, this is a solvable many-body system, and we know its spectrum. Logical operators: a logical Z operator, for example, is something that commutes with all of the X checks but is not itself written as a simple product of the Z checks. And there will be K logical Zs and K logical Xs, and so this Hamiltonian H0 will then have a 2-to-the-K-fold degenerate ground space. So this is our starting point. Again, to give a simple example, a simple phase defined in terms of one of these codes would be the 2D surface code, or toric code; I think I'm going to use these words interchangeably.
Um, yeah, we've seen this in a few other talks, but just briefly: we lay out our qubits, the black dots, on this 2D grid, and then the red squares give us X checks and the blue ones Z checks. We have one logical qubit, and the logical operators look like these strings. We can again ask the same questions that we asked for classical codes: is this a stable phase? Yeah? You said the 1D Ising model becomes the 2D surface code—going back to the previous slide, you combine two classical codes, and they become something much higher-dimensional? Okay, so there are two senses in which you can read that sentence. The colloquial way is that the 1D Ising model is the simplest error-correcting code with a clear connection to a physical phase of matter, and that's sort of true for this code too: this is kind of the simplest quantum code that has an interesting interpretation in many-body physics. You can also read it literally: there's an elegant prescription for converting classical codes into quantum codes, called the hypergraph product, and the surface code is basically the hypergraph product of the 1D Ising model with itself. So what you're doing with these expander-graph codes is a hypergraph product to make the quantum code? That's one way of making quantum codes—taking hypergraph products of good classical codes. It's a construction I'll talk a little bit about at the end. But the thing you said on the last slide is not the hypergraph product? Right—here I'm just defining what a quantum code is.
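The hypergraph product mentioned above is short enough to sketch. Assuming the standard block form (HX = [H1 ⊗ I | I ⊗ H2^T], HZ = [I ⊗ H2 | H1^T ⊗ I]), the CSS compatibility condition holds because both blocks contribute H1 ⊗ H2^T, which cancel mod 2; feeding in the repetition-code check matrix gives a surface-code-like quantum code.

```python
import numpy as np

def hypergraph_product(H1: np.ndarray, H2: np.ndarray):
    """Hypergraph product of two classical parity-check matrices:
    returns (HX, HZ) for a CSS quantum code with HX @ HZ.T = 0 (mod 2)."""
    m1, n1 = H1.shape
    m2, n2 = H2.shape
    HX = np.hstack([np.kron(H1, np.eye(n2, dtype=int)),
                    np.kron(np.eye(m1, dtype=int), H2.T)])
    HZ = np.hstack([np.kron(np.eye(n1, dtype=int), H2),
                    np.kron(H1.T, np.eye(m2, dtype=int))])
    return HX % 2, HZ % 2

# 1D Ising / repetition code on an open chain of 3 bits:
H_rep = np.array([[1, 1, 0],
                  [0, 1, 1]])
HX, HZ = hypergraph_product(H_rep, H_rep)
assert np.all((HX @ HZ.T) % 2 == 0)     # CSS compatibility condition
```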
And on this slide, let's study a simple example of a quantum code and ask physics questions about it; we'll return to more abstract codes in a little bit. So this construction is different from the one on the previous slide? No, it fits into that construction: you take two 1D Ising models and make a surface code this way. HX and HZ are not themselves simple 1D Ising models—they're a bit more complicated—but you can get them through this hypergraph product construction, and they are closely related to 1D Ising models. We can talk more later; it's a simple enough construction, but I don't want to get too sidetracked. Okay. So now we can ask the same questions as before. This time, to switch things up, I'll talk first about the stability of the ground state and then about stability at finite temperature. So let's ask: if we perturb H0 by an arbitrary perturbation, are we in the same phase of matter? We saw that for the classical Ising model this would not work. And actually, for any classical code, the answer is no. The phase is always unstable, because you can always add a longitudinal field to pick out one of the codewords as the preferred ground state. But let's ask if the quantum code might be more stable. And we should actually be very optimistic that quantum codes may be stable phases. The reason is basically that—let's just think about first-order perturbation theory for the moment. In first-order perturbation theory, the ground-state degeneracy of H0 would only be split if one of the terms in the perturbation had a nonzero expectation value in our ground states, or a nonzero matrix element between two ground states. But what are the operators that don't have vanishing matrix elements? They're checks or logicals. And the logicals are big, right? They're of order square root of N, which is the code distance of this quantum code.
So we're definitely not going to get a logical at first order in perturbation theory with a few-body V. We might get a stabilizer, a parity check C, but the Hamiltonian already wants all the stabilizers to be plus one. So there's no way we're going to split the degeneracy of this ground space at first order in perturbation theory. Unlike the classical codes, that's a very promising sign. And if we get a little more optimistic, we might conjecture that you really have to go all the way to order square root of N—the code distance—in perturbation theory to build up, order by order, an operator that can actually split this degeneracy. This is certainly a hand-wavy argument, but it can be made precise, and the statement turns out to be correct. It was shown in 2010, in some very beautiful work, that for any local perturbation V, you can explicitly show that the ground-state degeneracy of the surface code is robust. Well, it's approximately robust, in the sense that it's only split by an exponentially small splitting; moreover, the gap to the excited states is maintained. And finally, there's a quasi-local unitary that rotates the ground states of H0 to those of the perturbed Hamiltonian. Therefore, we say that this is an absolutely stable phase of matter, under all local perturbations. Okay. You might have thought, based on that quick argument, that if I give you a good quantum code—where good here means that the distance grows with N—that's enough to have a stable phase. But this is actually wrong. So let's illustrate why with a silly example. We're going to take the surface code, or toric code, checks that we had before, but now the Hamiltonian will be made of products of adjacent checks—so it's kind of like the Ising-model story again. This model turns out to have four ground states.
It has two ground states associated with the logicals of the surface code, and two extras, which are the logical states of the surface code dressed by an error pattern that flips every single parity check in the system. Because here, every single parity check is minus one, but the product of any two of them is plus one. Okay. So we have four ground states of this Hamiltonian, and now we'll play the same trick that we did for the Ising model: we'll just add a small perturbation that prefers the actual ground states as opposed to the ones with errors. And this perturbation, when epsilon is of order one over N, will again split the degeneracy—it reduces the ground-state degeneracy from four to two—making sure that this is not a stable phase. Okay. So then the question is: what feature of the original toric code is really responsible for its stability as a phase of matter? And the answer, in 2010, was said to be local topological order. Let me not explain the details of exactly why, but at a high level: in the surface code, all of the stabilizers, or products thereof, form loops of Pauli Xs or Zs, and these loops can be built out of products of the checks on the plaquettes inside. And the key point is that we need a kind of local version of the Knill-Laflamme condition—that's local topological order—which basically says that there's no operator contained in one of these small regions that distinguishes between any two of our ground states. So this Hamiltonian fails that test, because a single parity check of the surface code distinguishes whether we're in the all-plus or all-minus sector. Okay. All right. Okay, I'm confused, because if the distance is high, then it should be that the codewords cannot be distinguished locally.
So, okay, maybe I've been slightly sloppy with this example. If you want to be pedantic, you should probably add one extra check here to pick out one of these two sectors as the right ground state. But it's still the same problem. Let's say I pick out this sector as the right ground states by adding a single check, like this one in the corner, to this Hamiltonian. Now I add, maybe up to a minus sign, this exact same perturbation, and I pick out these as the preferred ground states. Okay. And the problem is that if I look out here, there is an operator that distinguishes the local ground states of this Hamiltonian, because locally, these and these both look like ground states. That's the local topological order condition. If you don't add that extra check, then it's actually a low distance code. But with that extra check, it's a high distance code, but not stable. Yes, that's the more precise version. Okay. Alright, so we need both a quantum code as well as this local topological order condition, in finite dimensions, to have stability of a phase. That's the story so far. And now we can play the same game that we did classically. Let's ask about more exotic codes, like these quantum LDPC codes. Typically, when people say quantum LDPC codes, what they really are referring to are codes where K is proportional to N. These are called constant rate codes. And also, as of a few years ago, we have codes where the distance is also of order N, which is very exciting. These constructions are very complicated. I don't understand all the details myself, so I'm certainly not going to try to explain them to you. But these codes exist. And unsurprisingly, just like their classical counterparts, these are fundamentally infinite dimensional beasts. They live on expander graphs.
And so maybe it's not obvious to what extent our usual notions of phase of matter even make sense. But in the same way that the Ising model in one D is just one example of a classical low density parity check code, the surface code is just one example of a quantum LDPC code, and we can ask whether the notions of stability that we have for this model are just illustrations of a more general result. And this brings me to some new results of ours. What we showed is that if you take a quantum code Hamiltonian with check soundness, which I'll define on the next slide, you can perturb it by anything local, with a reasonably small prefactor, and the ground state degeneracy of H naught and H will be the same up to this tiny splitting. The gap will not close, and there will be a quasi-adiabatic, quasi-K-local unitary that connects the two ground state subspaces. So this hits all the boxes that we had for the stability of the toric code, but now in infinite dimensions. Indeed, many quantum LDPC codes define absolutely stable phases under all perturbations. Let me acknowledge some related work to our story as well. First, there was a group last May that studied essentially sub-expanding codes, but nevertheless not finite dimensional codes, and they proved a somewhat similar result to this. And then there's a work done in tandem with ours that proved a very similar result, although with slightly different conditions on the applicability of the theorem. So the shocking thing to me, the fun thing about this result, is that it breaks the textbooks on statistical mechanics. These are absolutely stable phases of matter that have constant entropy density at all points in the phase. So the third law of thermodynamics is just wrong. Okay.
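As a toy numerical illustration of this kind of exponentially small splitting (this is my own sketch, not from the talk), one can exactly diagonalize the simplest commuting-check Hamiltonian, the one D Ising chain, perturbed by a transverse field. The two ferromagnetic ground states are only connected at order N in perturbation theory, so their splitting should decay rapidly with system size:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def site_op(op, i, n):
    """Operator `op` acting on site i of an n-site chain, identity elsewhere."""
    return reduce(np.kron, [op if j == i else I2 for j in range(n)])

def ground_splitting(n, eps):
    """E_1 - E_0 for the open Ising chain with a transverse-field perturbation.

    At eps = 0 the two ferromagnetic ground states are exactly degenerate;
    the field only connects them by flipping all n spins, i.e. at order n
    in perturbation theory, so the splitting shrinks roughly like eps**n.
    """
    H = -sum(site_op(Z, i, n) @ site_op(Z, i + 1, n) for i in range(n - 1))
    H -= eps * sum(site_op(X, i, n) for i in range(n))
    evals = np.linalg.eigvalsh(H)
    return evals[1] - evals[0]

gaps = [ground_splitting(n, 0.1) for n in (3, 4, 5)]
print(gaps)  # splitting shrinks rapidly as the chain gets longer
```

For this code the distance is N, so the rapid decay of the splitting with chain length is exactly the "order-distance" perturbation theory picture in miniature.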
So that's exciting. I guess you haven't turned on the temperature? I haven't turned on the temperature, sorry. Yes, that comes a little bit later. So far, we're just talking about the ground states. But we know thermodynamically that this entropy won't go away at finite temperature, so there's still a problem. In the classical example you had before, you didn't really have a thermodynamic phase? Right, and the reason for that was because you could turn on perturbations that pick out a special code word, and that returns you to zero entropy at zero temperature. For the quantum ones, there's no mechanism to break this constant entropy density. The degeneracy is just absolutely robust. So it will be a thermodynamic transition? It's a true thermodynamic phase with constant entropy density. If you wanted to transition from, say, the surface code to the LDPC code, there would certainly have to be something. But these codes can also have no thermal transition, so you can be in a trivial phase the whole time. It's just that the entropy density doesn't go to zero. Okay. So what is check soundness? What is this property of codes that makes them into stable phases? For a code, the Hamiltonian H naught can be written as a sum of commuting stabilizers; let me call them C J's here. And we define check soundness, which you can think of as a rebranding of local topological order using words from the coding literature. Check soundness is basically the property that if there's a stabilizer S, namely some product of these checks, and this stabilizer S acts on a small number of sites, M, then it can be written as a product of a small number of the generators. Okay. So that's check soundness. It basically says that there are no low-weight stabilizers that can only be built as big products of the terms in the Hamiltonian.
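In symbols, the condition just described might be written as follows (my paraphrase, with c and a as constants of the bound):

```latex
% Check soundness (paraphrase): a stabilizer supported on few sites
% never requires a huge product of checks.
S \;=\; \prod_{j \in A} C_j , \qquad |\operatorname{supp}(S)| \le M
\quad \Longrightarrow \quad
S \;=\; \prod_{j \in A'} C_j \ \text{ for some } A' \text{ with } |A'| \le c\, M^{a} .
```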
An example of a code which is not check sound is the one D Ising model, because Z_1 Z_N is a stabilizer, but it's the product of every single check in the Hamiltonian. Another example: the surface code, in the usual representation, has one redundant X check and one redundant Z check. And if you don't do that, then it wouldn't satisfy this at all, right? So these can be redundant; maybe "generators" is the wrong word. That's what I'm saying: if you remove the redundant ones from the surface code, it would now violate this check soundness condition, because you can only get it as the product of all the others, right? Okay, yes, I see what you're saying. For the toric code, if you remove one check but you still have the two logicals, then that would not be check sound. But the surface codes or the toric codes in their usual presentation are check sound, and it's the same argument as local topological order. The stabilizers are loops, and they're the products of checks inside the loop. Since you can't have loops with small perimeter enclosing a large area, you have check soundness. And just one comment: the number of terms in the Hamiltonian should scale, for general codes, slower than quadratically in the size. Popular quantum LDPC codes like the hypergraph products and balanced products have this parameter a equal to one, so they have the best possible check soundness, and therefore they form stable phases. Okay. I'm not going to tell you how you prove this result, it's a very technical thing, but one cool feature of this result, even if you're not interested in LDPC codes: our theorem still proves that the surface code is stable to spatially non-local perturbations, which, as far as I know, is still a new result. So that's cool. All right.
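The Ising counterexample can be checked concretely. In the sketch below (my own illustration, not from the talk), each Z-type check is a GF(2) vector over the sites, products of checks are XORs, and a brute-force search finds the fewest checks whose product equals the weight-2 stabilizer Z_1 Z_N. It always takes all N - 1 checks, so no bound of the form c·M^a can hold as N grows:

```python
import numpy as np
from itertools import combinations

def ising_checks(n):
    """Z-type checks Z_i Z_{i+1} of the open 1D Ising chain, as GF(2) rows."""
    C = np.zeros((n - 1, n), dtype=int)
    for i in range(n - 1):
        C[i, i] = C[i, i + 1] = 1
    return C

def min_generators(C, target):
    """Fewest rows of C whose GF(2) sum (XOR) equals `target`, by brute force."""
    for k in range(1, len(C) + 1):
        for rows in combinations(range(len(C)), k):
            if np.array_equal(C[list(rows)].sum(axis=0) % 2, target):
                return k
    return None

results = {}
for n in (4, 6, 8):
    target = np.zeros(n, dtype=int)
    target[0] = target[-1] = 1        # the stabilizer Z_1 Z_n: support M = 2
    results[n] = min_generators(ising_checks(n), target)
print(results)  # {4: 3, 6: 5, 8: 7} -- always n - 1 checks for a weight-2 stabilizer
```

Since the chain's checks are linearly independent, the decomposition is unique, which is why the answer is exactly n - 1 every time: a weight-2 stabilizer needing extensively many generators is precisely the failure of check soundness.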
Now, let's finally close the loop and talk about the stability of quantum codes at finite temperature. This is the last missing piece of the story. Oh, actually, sorry, before I do that, a couple of generalizations of this result. If you take a classical code Hamiltonian, so only products of Pauli Z's in H naught, this code, if it's check sound, is again stable to symmetric perturbations, with no locality constraints on the perturbations. And the Ising model, which is not check sound, as I mentioned, is stable to local perturbations. But this last result comes with a new twist, because one thing I didn't stress is the locality of this quasi-adiabatic U that rotates you from one phase to the other. We now have exponential and volume bounds on the terms in the Hamiltonian that generates this U. And actually, in some nice recent work with Carolyn Jang, we have some very nice applications of these kinds of locality bounds, where volume-tailed bounds are very important. So this is a nice technical result which comes out of the proof. Okay. All right, now on to finite temperature. Let's return to this question of when quantum codes can self-correct, and how this plays into the stability of phases at finite temperature, et cetera. Again, let's start with the surface code as a kind of warm-up. The surface code in two dimensions is not a self-correcting memory, and the phase is also non-existent at finite temperature. In the quantum context, we would say that we don't have topological order at finite temperature in this model. And why not? Here's again a nice analogy with the one D Ising model. If I create an error that runs along part of a logical operator, there's only a single violated check at the end, and that single violated check just diffuses through the system.
There's no energetic barrier. So at any finite temperature, these dangerous single-check excitations proliferate and there's no stability. Okay. More generally, we know that other two-dimensional systems also will not be self-correcting, at least for two-dimensional local quantum codes. Great. So what's the simplest thing that does work? The simplest one is the four D surface code, or four D toric code. This has a finite temperature transition to topological order, and this is detectable in the partition function because this quantum code has many redundant parity checks. And below a critical temperature, the redundant parity checks force the typical low energy states to be topologically ordered. And then, I like this meme: we don't live in four dimensions, we live in three dimensions. That seems important. So maybe this whole memory story is kind of useless. But if we don't live in four dimensions anyway, then why stop at four? Why not just ask questions about the stability of codes at finite temperature in infinite dimensions? Okay. These good quantum LDPC codes have the quantum generalization of this linear confinement condition. In the quantum setting, this is just a statement about what are called reduced errors. So, up to stabilizer equivalence, many of these good quantum LDPC codes have the same kind of energy landscape that we saw for classical codes. And this is possible even for codes which provably have no redundant checks, like certain hypergraph product codes. And these codes are thermodynamically featureless. I guess I didn't show that explicitly on the slide, but you can trust me on that. For what it's worth, I believe that many, perhaps all, of the good code constructions, at least that I'm aware of, also have this property, but it's not proven, as far as I know. Okay.
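For reference, a rough paraphrase of the linear confinement condition being invoked here (my notation: E is a reduced error, i.e. already minimized over multiplication by stabilizers, sigma(E) is its syndrome, and c is a constant):

```latex
% Linear confinement (paraphrase): below the relevant weight cutoff,
% the number of violated checks grows linearly with the reduced error size.
|\sigma(E)| \;\ge\; c\,|E| .
```

This is the quantum analogue of the classical energy-landscape statement: growing an error costs energy proportional to its size, so large errors are energetically confined.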
It's the same story effectively as in classical stat mech. So we think about decoding this quantum code at finite temperature. There will be local errors that start nucleating, and the dangerous thing is forming a non-decodable cluster of size comparable to the code distance. That could then lead us to make a logical error with our local decoder. But the point is that, just like for the classical codes, when you have this linear confinement property, it guarantees that the probability of finding one of these dangerous clusters is exponentially suppressed with the code distance. And so the memory time of this code will scale with the code distance. Okay. So, kind of like in the classical story, these quantum LDPC codes break the analogy between finite temperature self-correction and thermodynamically detected transitions to topological order. In some more recent work, the authors give this a nice name: the topological quantum spin glass. "Topological" is coming from the fact that we probably have topological order in all low energy states, though that's an open conjecture. But certainly, the probability of finding one of these clusters, which starts to trivialize the system, is very low. And, like the classical codes, these have a kind of glassy breakdown of ergodicity that's invisible to thermodynamics. Okay. So that's basically what I wanted to tell you. If you're going to take one thing away from this talk, it's that the robustness of error correcting codes to perturbations, the ability to do local decoding, is closely related to the robustness of an analogous phase of quantum matter. And that analogy was previously understood for Ising models, ferromagnets, surface codes, et cetera.
But now we understand how this story generalizes to more general LDPC codes, and they give us interesting new phases of matter with quantum glass behavior and with robust violations of the third law of thermodynamics. And these phases are, again, absolutely stable against all perturbations. Maybe, to go back and think about error correction with these results: I don't know whether it's true, but it might be interesting to ask whether check soundness is an appealing property purely from the coding perspective in its own right. Perhaps this is a valuable property to look for when building codes. Anyhow, that's all I wanted to say. Thank you. Any questions? Is there some kind of general relation between the density of redundancies and the thermodynamic properties? I mean, I see the extreme limit with no redundancies at all, where you express everything in terms of the gas of checks. But once you start putting in redundancies? Okay, so you know for sure that you need a finite density of redundancies to have anything visible in thermodynamics, but I don't know whether you need a critical density. My guess would be that as long as it's non-zero, there could be a transition. I'm not sure. I guess you didn't really talk about fault tolerance, but that's another type of stability. Can you comment on the relation? So regarding the dynamics at finite temperature, I would say that if you have a finite temperature transition in the dynamics, so below some finite temperature you have a long-lived memory, then you know that for fault tolerance there's a non-zero threshold. Right, but we also know that you don't need any of this for fault tolerance, right? That's right.
So yeah, I don't know whether there's a... maybe if you're asking whether there's some condition on the code that's weaker than linear confinement but that leads to having a non-zero threshold, I'm not sure if there's a property of the code, beyond high distance, that's sufficient for that. So the minimum distance of a low density parity check code tells you something about the threshold for optimal decoding, whereas the partition functions that you're computing here tell you something about a threshold for sub-optimal decoding. So the fact that you said that good codes don't imply stability, does that mean that these two thresholds are far from each other in these cases? That would be my expectation, yeah. Although I think most of the good codes that we know do fit into the framework that I've described, as far as I can tell. We didn't check, for every single family of good codes, whether they behave exactly like this, although I suspect that they do. But your analysis used linear confinement. A stronger condition would be local testability, maybe. So if you have local testability, how useful would that be? Okay, yeah, so classically, if you have a locally testable code, what that likely does is restore the thermodynamic transition. So there would be a thermodynamic transition to spin glass order below some temperature. This is my guess. But there may still be this kind of intermediate window where you have ergodicity breaking invisible to thermodynamics. And I would assume the same thing holds if we find a quantum locally testable code. So the linear confinement property, is that basically the same as the single shot property, or is there some distinction between them? It's not the same thing.
For example, the four D toric code doesn't have this, because it's four dimensional, but it's still single shot decodable and a thermal memory. So this is a much stronger condition. Effectively, it makes the proof of memory rather short. If you don't have linear confinement, that doesn't rule out having self-correction, but then you need to think very carefully about the entropics versus energy barriers for growing errors, and it becomes a much more complicated problem. It seems like it has to be solved on a case by case basis. One last question: if you break the linear confinement weakly, this condition which gives you this rigorous result, is it still possible? Like, you mean one minus epsilon here or something? We haven't thought about it. I suspect that as soon as it's one minus epsilon, you may need to worry, in principle, about this entropic versus energetic question. I would be a little surprised if you got around that. Alright, let's thank Andy again, and I'll turn it over to Nima.