Comparing resilience of brain preservation with digital data preservation

PCmorphy72
Posts: 38
Joined: Sun May 26, 2019 12:39 pm

Comparing resilience of brain preservation with digital data preservation

Post by PCmorphy72 »

When we talk about long-term preservation, it is notable how similar the challenges are between biological preservation (brains, patients, biological specimens) and digital data preservation (scientific data, archives, individual memories). Both must survive not only technical hurdles, but also environmental and geopolitical risks. These risks cannot be understood solely in terms of earthquakes, floods, or blackouts: they are shaped by the broader trajectory of technological progress and social priorities.

The decades from the post–World War II surge of innovation to the 1970s were driven by massive public investment and accompanied by widespread optimism and a moral drive toward scientific progress; since then, progress has been slower and more uneven, shaped by social choices and political climates. The narrative of “exponential growth” has often been more apparent on paper — through metrics like Moore’s law — than in genuine breakthroughs such as the discovery of the Higgs boson or high-temperature superconductors (HTS). Too often, resources have been diverted toward military competition and geopolitical dominance rather than long-term scientific exploration without immediate utility. This historical context matters, because resilience in preservation — whether of brains or of data — depends not only on engineering but on the willingness of societies to sustain infrastructures through periods of stagnation, conflict, or shifting priorities.

Environmental risks
Brain preservation facilities could face earthquakes, flooding, fires, and infrastructure collapse from various causes. Salem (where many of SBP’s brains will likely remain) is located near the Cascadia Subduction Zone, one of the most dangerous seismic areas in the world. By contrast, Alcor deliberately moved to Arizona to minimize seismic and climatic risks, while CI in Michigan faces only moderate seismic exposure.

Digital archives like Zenodo are hosted at CERN in Switzerland, a relatively low-risk seismic region, but their data centers are not underground bunkers. Their resilience comes from redundancy and distributed backups rather than location alone.

Systemic risks
For brains: continuity of preservation protocols is critical. Today SBP primarily uses chemical preservation at –20 °C, which depends on stable refrigeration and power supply. Only a portion of brains remain cryopreserved in liquid nitrogen, where risks involve continuity of LN2 deliveries and secure containment. In both cases, systemic fragility arises from prolonged outages, supply-chain disruptions, or governance failures.

For data: the main threats are cyberattacks, prolonged blackouts, or even global conflicts. Redundancy helps, but no system is immune to systemic collapse.

Pragmatic resilience and the “exponential growth” narrative
Jordan Sparks once wrote:
If things go really badly, we just shut down all services and the company itself goes into a sort of hibernation mode. The building would just sit there with no activity. Volunteers would top off the LN2 every few weeks for the handful of patients who are cryopreserved. But I seriously doubt it will be centuries. If I survive another 50 years, then we're only looking at less than another 50 years after that before revival will likely be possible. Remember that technological progress will probably continue to be exponential.

This statement captures both pragmatism and optimism: the idea of a “hibernation mode” as a survival strategy, and the reliance on exponential technological progress as a guarantee of eventual revival. Yet here lies a critical issue: such information should not be scattered across Reddit comments or forum threads. It would be reassuring if SBP made public the protective and mitigation criteria it has adopted, to strengthen both member confidence and the credibility of the project. Transparency is not a luxury; it is a prerequisite for trust.

The importance of addressing the unforeseen
The word “unforeseen” should also include risks that go beyond familiar environmental or systemic threats. These risks cannot be reduced to earthquakes, floods, or blackouts; they encompass broader challenges such as:
  • Technological stagnation: History shows that progress can stall. After the post–World War II surge of innovation, the decades since the 1970s have often seen slower, more uneven advances. Moore’s law gave the illusion of exponential progress, but genuine breakthroughs in energy, medicine, and fundamental science have been less frequent.
  • Geopolitical shifts: The last fifty years illustrate how fragile the balance can be between peace, dignity, and freedom on one side, and expansionist ideologies on the other. Indeed, human rights and the centrality of rationality in law and culture are increasingly being challenged today.
  • Social inertia: Future generations may not accelerate progress; they could amplify stagnation through complacency, distraction, or misplaced priorities.
  • Global pandemics: As demonstrated by COVID‑19, biological risks can disrupt economies, slow scientific progress, and strain social cohesion.
  • Extreme scenarios: Nuclear attacks or similarly catastrophic events. These may seem remote, but resilience planning must include them if credibility is to be maintained.
In this sense, resilience requires planning for scenarios where exponential growth does not materialize, or where it is interrupted by geopolitical crises and social regression.

Constructive proposals to SBP
  • Explicitly address unforeseen risks, including systemic outages, supply-chain disruptions, environmental events, geopolitical instability, and even extreme scenarios such as nuclear attacks.
  • Clarify contingency strategies such as “hibernation mode,” but in formal documents rather than scattered forum posts.
  • Acknowledge the uncertainty of technological trajectories, and explain how resilience is ensured even if exponential progress slows or halts.
  • Reassure members whose brains already are, or will be, preserved by committing that their preservation will benefit from future scientific and technological advances. (For example, if future studies demonstrate that lowering storage temperatures below –20 °C can be combined with methods that prevent cracking or avoid instability of the aldehyde-based perfusion compound, then SBP would progressively lower storage temperatures, even if this requires purchasing more powerful refrigeration units. This would extend durability beyond the ~100 years currently anticipated for uploading, while minimizing long-term damage from aldehyde fixation.)
jordansparks
Site Admin
Posts: 292
Joined: Thu Aug 27, 2015 3:59 pm

Re: Comparing resilience of brain preservation with digital data preservation

Post by jordansparks »

The Cascadia earthquake is a non-issue. Risk = Severity x Frequency. With a frequency of once every 400 years, the risk is therefore extremely low. If we had that earthquake right now, I estimate tens of thousands of deaths along the Oregon Coast from the tsunami. But here in the valley, 50 miles away, the damage would be more nuanced. The main considerations would be liquefaction amplifying the waves and also unreinforced masonry buildings. Our facility is built on a rock bluff sixty feet above the floodplain where there cannot be liquefaction. Our facility is also built from steel with massive cross bracing as well as heavily reinforced concrete. Everything is engineered to easily withstand the G forces. So no. It's simply not an issue.

The refrigeration is not required. It probably prevents slight damage when used constantly for 100 years, but that's very different than claiming that a week without electricity would cause damage. It just wouldn't. Alcor's containers are very tall. It sort of makes sense for them to be somewhere with zero earthquake risk, especially since they started in LA. But that's not an issue for our small plastic containers or for our short dewars. Even in the very worst scenarios, we can still get liquid nitrogen. There might be a delay of a couple of days and it might be more expensive, but it will be available. We could go for about a month and a half before any of the cryopreserved patients would be at risk.

This is already covered in https://www.sparksbrain.org/riskManagement.html, although that page could use a refresh.

I wouldn't normally argue this point, but it might be fun. I disagree that progress has stagnated. It's been pretty consistently exponential. We had a few golden decades in the US between about 1950 and 1970 because we were the only country that hadn't been bombed in WWII, but that was a bit of an illusion. It was localized and temporary. Yes, war could interrupt that exponential growth, but it hasn't happened yet. We will need a LOT more exponential growth to get to a high enough technological level to be able to scan all the molecules of a brain or to use any nanobots. I'm estimating another 100 years of exponential growth, but it's so hard to predict.
PCmorphy72
Posts: 38
Joined: Sun May 26, 2019 12:39 pm

Re: Comparing resilience of brain preservation with digital data preservation

Post by PCmorphy72 »

Jordan, thanks for pointing to the Risk Management page: I agree it’s useful, and as you noted it could use a refresh.

My quake example wasn’t only about “infrastructure collapse”. It was also about mechanical stress transmission. With the older ASC protocol, vitrified tissue is fragile and susceptible to micro‑cracks under vibration. Even with the current formalin protocol (less fragile), earthquakes can still cause tipping or displacement of the small plastic containers (oscillations of the liquid are indeed a much more subtle issue, but still not a “non-issue”: taking even subtle risks seriously should be a fundamental principle of resilience, which is the focus of my thread).

Two practical points that seem under‑addressed:
  • Seismic damping vs tipping: Your reply focused on tipping risk for tall dewars (as Alcor had to consider during relocation). But short dewars without adequate seismic damping can still transmit vibrational energy through the liquid. A fragile object immersed in fluid during shaking isn’t protected like a crystal chandelier packed in polystyrene; the fluid itself carries forces to the tissue.
  • Container retention: In Figure 11 of Charles Platt’s article, the storage vault arrangement of small plastic containers doesn’t appear fully secured. Although strong earthquakes are rare, such an event could displace containers or cause them to fall (regardless of whether the tissue is vulnerable to vibration).
For context, Alcor’s move to Arizona involved road transport of very tall dewars, where the main concern was tipping and logistics: truck vibrations were a non‑trivial, largely unmitigated factor. By analogy, a seismic scenario shares the same class of subtle vibrational risks, not just the obvious tipping hazard.

Earthquakes were only one example in my post. I also mentioned other scenarios, such as the possibility that uploading may require storage well beyond the ~100-year estimate, which I also agree with.

As a further example, I noticed that the Risk Management page already covers vandalism quite well, but sometimes risks come not from malice but from simple negligence (an employee forgetting to perform a check, or mishandling routine maintenance). This “human factor” is equally important, because it can cause damage just as surely as an earthquake or vandalism. I learned this the hard way in 2023, when my main PC, through a small oversight during a move, ended up in the wrong place: buried in a box in the cellar, lost among a jumble of boxes, instead of safely upstairs where I thought it was. Shortly afterwards I left for a year, 1,300 km away, unable to intervene. During that same year, a flood struck: an event comparable to, and in magnitude even exceeding, Italy’s flood of 1966, with parallels found only in medieval chronicles. The overlap of these circumstances — the misplacement, my absence, and the extraordinary flood — shows how resilience must account for chains of unlikely events. Of course, my personal negligence was far greater than anything one would expect from professionally qualified SBP staff, but the principle remains: even highly improbable negligence can have serious consequences when combined with external hazards. That is the essence of resilience: “very unlikely” does not mean “non‑issue”, a principle otherwise known as Murphy's law.

So, even if you decide not to adopt the “hibernation mode” idea I suggested in this thread, perhaps the Risk Management page could at least clarify the flooding section: instead of just “Flooding: Essentially impossible in our facility” it could add the explanation you gave here in the forum: “[because] Our facility is built on a rock bluff sixty feet above the floodplain”. That kind of detail strengthens confidence and shows that management goes beyond listing hazards.

My aim wasn’t to push a single hazard, but to invite further transparency, e.g. through public protocols showing how SBP manages a broader range of risks in practice.

I’m not criticizing your work; I’m advocating for clarity. If future updates addressed seismic damping for dewars, container retention under shaking, and longer storage horizons, that would strengthen confidence that “management” truly covers both obvious and subtle risks (e.g., in the design of the new “huge freezer”).
jordansparks
Site Admin
Posts: 292
Joined: Thu Aug 27, 2015 3:59 pm

Re: Comparing resilience of brain preservation with digital data preservation

Post by jordansparks »

Let's start with the brains that are in liquid nitrogen. Those are the ones I've been thinking about for the longest. They will be fine. They are all individually wrapped in padding. But I have not spent as many years thinking about storage in fixative, and it seems that I overlooked some details. I was sort of imagining that the liquid would pad them just like it does inside our skulls, but that's not quite right. They will slosh. The solution is to add foam padding in the liquid, which we will be doing shortly. We need to research which kinds of foam will be completely inert in formalin. If there was an earthquake tomorrow, it would not be catastrophic, but the brains might get "bruised".

As for securing the containers, we could also do better there. We will work on padding between the containers. The patients are all in a smaller refrigerator where they cannot fall off the shelves. The ones in the big walk-in are all research cases, so those are not important. The way they are currently sitting, some of the research cases would indeed fall off the shelves, although the containers would not crack. We've been meaning to get around to that, but it's not very high priority.

For the most part, fixed tissue is resistant to many physical risks including those that might result from employee mismanagement. Flooding is literally impossible when on top of a hill. This location was very intentionally chosen to avoid flooding. I'll keep working on that page to further clarify.
jordansparks
Site Admin
Posts: 292
Joined: Thu Aug 27, 2015 3:59 pm

Re: Comparing resilience of brain preservation with digital data preservation

Post by jordansparks »

After thinking about it for a few more hours, I'm less concerned about the brains stored in fixative. The density of the brain would be very similar to the density of the liquid, so it would be well protected. But we will still look into improvements of course.
PCmorphy72
Posts: 38
Joined: Sun May 26, 2019 12:39 pm

Re: Comparing resilience of brain preservation with digital data preservation

Post by PCmorphy72 »

I really appreciate that you took the time and care to protect the vitrified brains with padding: as far as I know, neither Alcor nor CI have ever documented such measures. What I’d like to ask, though, is whether your concern was focused mainly on shocks, impacts, and sudden jolts, or if you have also considered the issue of vibrations.

Vibrations are not just another form of “impact”; they are technically different from the other aspects of an earthquake: while shocks and jolts are short, high-energy single impulses, vibrations are prolonged oscillations at low or medium frequency, which transmit energy in a very different way and can build up resonance in the dewars. This is why shock-absorbing padding or foam is not necessarily effective against vibrations: it is often not the same as damping or anti-vibration material, and mitigation can require damping systems or anti-vibration mounts specifically tuned to reduce oscillatory energy.

I think this may be relevant from the perspective of the fragile vitrified brains, which are susceptible to micro-cracking through such vibrations (and, under significant thermal gradients or high-energy shocks, potentially even to macro-cracking). If you think it is relevant, one might also imagine scenarios such as a nearby explosion (an accidental one, a planned demolition, or even a terrorist act), which actually would involve both types of mechanical stress: the initial shock wave behaves like a sudden impact, while the subsequent pressure waves can induce vibrations in the dewars.

I make no secret of a certain inclination toward cryopreservation. I would actually like SBP to maintain, in the long term, the relevant know-how, technology, and equipment needed to implement an updated version of the ASC protocol, subject to a specific hypothesis: “if future studies demonstrate that ASC can be combined with methods that prevent cracking or avoid further damage”.

Note that in this hypothesis I’ve simply recycled the wording of the final proposal opening this thread, replacing “lowering storage temperatures below –20 °C” with “ASC” and “instability of the aldehyde-based perfusion compound” with “further damage”.

I will now go slightly off‑topic by talking about adaptability (though it might be considered complementary to resilience).

The validity of cryopreservation has shifted over the years, as your own words illustrate:
  • 2019
    Q: “… why even bother with all the expense of cryopreservation? Why not just chemically fix the brain?”
    R: “No, high quality chemical fixation alone is not good enough. Damage over time is very significant due to molecular motion. There is no sharp deadline on how fast fixation must be followed by cryopreservation, but it's in the range of hours, not years.”
  • 2023
    In favor of simple preservation in formalin you wrote: “The arguments that I've heard against long term fixation are that damage would happen by just sitting there over the course of decades. But I have not seen evidence for this yet.” (Although you also noted: “… lipids. These are actually the molecules that I worry about because they are not locked in place by the fixative crosslinks. Instead, they are trapped in a web of proteins, but they very well might still migrate. The brain has a lot of lipids in it.”)
    By contrast, when concluding about cryopreservation, you wrote: “Cooling adds nothing except possibly better preservation over very long periods of time. But I'm increasingly skeptical that it even does that at all. […] The downside is that each case would need to be ramped through cryoprotectant, which itself could be damaging. It's known that osmotic pressure can cause damage, it might be complex to come up with a protocol that could be used safely and reliably. Complexity has a certain risk associated with it. Because we can always transition a patient from fridge to freezer, and from freezer to LN2, …”
On this last point, my question is: would such a transition ever provide a “sharp” benefit even “in the range of years”?

I suppose I will have to wait decades for an update of your pages on that…

Anyway, you must have good pragmatic adaptability to be able to revise your assessment of cryopreservation’s validity while remaining grounded in scientific evidence.

This question of validity also came up in the “glutaraldehyde” topic, which seems to have disappeared for almost three years now — a circumstance I do not particularly mind, since formaldehyde preserves molecular information in a far more inferable form; but unfortunately, that topic seems to be heading in the opposite direction from the path favoring cryopreservation:
  • 2022: “But if we instead use glutaraldehyde, then it's plausible that the preservation over time could be every bit as good as with liquid nitrogen. This is speculation. We will need to show evidence for this, but I think it's very much worth exploring. Liquid nitrogen does still have some advantages: It's guaranteed to lock all molecules in place, […] So cryopreservation is always going to remain the gold standard in cases where perfusion and immediate cooling is possible.”
  • 2023: “The Brain Preservation group advocates for very aggressive fixation with glutaraldehyde. I would tend to agree that we probably want something stronger than formalin for our brains, but I'm unclear if the glutaraldehyde would distort the tissue. I'm guessing not since glutaraldehyde is always used when taking electron micrographs. We're looking into those issues as well.”
Now, as an analogy “by adaptability”, I might say “I fear I will have to wait months for an update of your pages on that…”

By the way, just out of curiosity (I've gone off-topic for too long…), how is your facility geographically protected against fires? (I was a bit worried by the news from just a few months ago, as well as those of 2020.)
jordansparks
Site Admin
Posts: 292
Joined: Thu Aug 27, 2015 3:59 pm

Re: Comparing resilience of brain preservation with digital data preservation

Post by jordansparks »

Alcor and CI both use padding for storage in liquid nitrogen, just like we do. Yes, I've always been willing to shift my position when presented with new evidence. In this case, the evidence has come largely from Andy reviewing huge numbers of scientific papers and pointing out to me that my previous assumptions about liquid nitrogen being protective weren't actually backed up by any evidence as I had assumed.

Would changing the temperature ever provide a sharp benefit in the range of years? There seems to be diminishing returns. In other words, refrigerator is great, freezer is marginally better, liquid nitrogen probably does nothing.

As for glutaraldehyde, Andy was again instrumental in that view shift. But it's very nuanced. We're coming back around to that, and glutaraldehyde will probably be part of the long term protocol. The reason we moved away from it slightly was that it can form a barrier that slows formaldehyde from penetrating quickly. So there are many situations where it probably shouldn't be part of the initial perfusion. Again, this came from a large number of published scientific papers. Finally, I've come to appreciate how difficult perfusion is after many many cases. That means that there's no such thing as good perfusion, so cryo cases have stopped making sense to me because of that. Poor perfusion means ice.

Our facility is nearly impervious to fire. It uses metal cladding, no wood, no combustibles nearby, fully sprinklered, and then the concrete vault inside is inherently fireproof.
PCmorphy72
Posts: 38
Joined: Sun May 26, 2019 12:39 pm

Re: Comparing resilience of brain preservation with digital data preservation

Post by PCmorphy72 »

Ah, ok, you meant those “mummifying” bands Alcor used to show in their videos years ago. I had always thought they were mainly meant to protect the fragile vitrified tissue during placement in the dewars, together with the fixing straps. As far as I understand, those bands are about 20 cm wide and create a padding layer of no more than 3 cm. Personally, I’ve always assumed that the “sleeping bag style” padding used by KrioRus (probably copying CI) might be more effective. For isolated heads, Alcor seems to use more targeted padding, but again the purpose is to prevent jolts during insertion into the dewars and, more generally, to stabilize the body during handling and movement. Perhaps for brains you were thinking of a different thickness or a more refined distribution of padding, since there is no natural cranial structure (even if vitrified) to protect the tissue. But if you consider cracking to be “inevitable” (I’m not so sure…), then vibrations and thermal gradients become almost irrelevant (but not for me…) — which is one of the reasons you prefer the simplicity of preservation in fixative, trusting in limited degradation over ~100 years, rather than risking the complications of cryopreservation (“in fact probably slightly lower in quality due to inevitable cracking”, as I read on https://sparksbrain.org/services.html ).

Changing topic, but not entirely: when I asked “would such a transition ever provide a sharp benefit even in the range of years?” I wasn’t so much asking whether the transition from freezer to LN2 is “diminishing returns”, but whether it is feasible after a decade or more. To recycle the question with some changes in wording and scope: would a transition from formalin to glutaraldehyde still be possible after years, without particular damage? You yourself mentioned that glutaraldehyde is not ideal for initial perfusion because of poor capillary penetration, and I had assumed a hybrid perfusion (formalin + glutaraldehyde) “in the range of hours”. So why not “in the range of years”? Glutaraldehyde has become more interesting to me lately: you convinced me of the molecular inferability of formalin fixation with the image https://sparksbrain.org/images/crosslinks400.jpg . I would be surprised if a similar image of glutaraldehyde fixation were equally convincing. In the meantime, I asked for a comparison with the paper Andy McKenzy linked in his thread.
jordansparks
Site Admin
Posts: 292
Joined: Thu Aug 27, 2015 3:59 pm

Re: Comparing resilience of brain preservation with digital data preservation

Post by jordansparks »

I thought Alcor was hearing cracks in all of their cases when they were using the crackphone. I could be misremembering. With such a large block of tissue, I would not be surprised, but I also don't think it would cause very much damage. A crack is generally one of the least significant kinds of damage. Well we have already used glutaraldehyde in most cases, and I think that will continue to be true. So the scenario of conversion to glutaraldehyde in 10 years is moot, but yes, it would be possible if the initial preservation was only formaldehyde. Yes, I love glutaraldehyde as well. They used to use it to make leather, so I like to think of it as leatherizing the tissue.
PCmorphy72
Posts: 38
Joined: Sun May 26, 2019 12:39 pm

Re: Comparing resilience of brain preservation with digital data preservation

Post by PCmorphy72 »

You are referring to this 2003 news: “Despite multiple large acoustic fracturing events recorded during cooling, the brain remains a cohesive whole with no grossly apparent fracturing or freezing damage. The consequences of fracturing seem to remain microscopic as long as tissue remains at cryogenic temperature.” ( https://www.cryonicsarchive.org/library ... racturing/ )

They thought to solve the problem with intermediate temperature storage (a method you also experimented with, right?): “Fracturing is a universal problem in cryonics patients stored at −196 °C. Long term inspection of vitrified patients has revealed extensive cracking damage, motivating the development of intermediate temperature storage systems.” ( https://www.cryonicsarchive.org/library ... avoidance/ )

This is because they — and perhaps you as well — did not go in the direction (too expensive…) of methods that could theoretically eliminate thermal gradients completely in the cooling phase down to LN2. Such approaches would include ultra-controlled cooling curves (extremely slow and uniform temperature descent), intermediate conductive fluids or gases to distribute heat more evenly, thermal buffer materials (phase change gels or interfaces to absorb and redistribute stress), or even multi-zone active cooling systems that regulate temperature from both outside and inside simultaneously. In principle, these strategies could minimize or eliminate gradients without changing the final storage temperature, but they are technically complex and economically prohibitive for whole-body cryopreservation.

In any case, if you move from “Don't forget about cracking, which can include areas of pulverization. […] Incidentally, cracking might be worse if you use aldehyde prior to cryopreservation.” (May 13, 2025) to “A crack is generally one of the least significant kinds of damage” (Dec 09, 2025), this may create some confusion.

That is why I worked out a set of percentages, which I carefully calibrated over a whole day with the help of AI (in the end I rounded them a bit, since they are obviously very approximate). The estimates take into account the maximum theoretical inferability that could be achieved with future technology.

Thanks to this exercise I also got closer to understanding, for example, the weight of glutaraldehyde compared to formalin, and realized that cracking, in a sense, is the “last” item on the list.

Framework for comparing LTM inferability loss across damage types

Warm ischemia (baseline scenario, without reperfusion or resuscitation interventions)
  • 1 min: 0–0.2% (functional collapse; negligible deletion; minimal autolysis; early destabilization of molecular states; initial ionic imbalance; overall structure intact)
  • 3 min: 0.1–0.5% (early ionic and metabolic drift; phosphorylation and receptor instability; early scaffold destabilization; autolysis begins but remains weak; morphology intact)
  • 5 min: 0.3–1% (initial blurring of maintenance states; autolysis active at full metabolic rate with early lysosomal leakage; early membrane degradation; EM fully intact; fine-scale synaptic constraints begin to loosen; inferential loss dominated by molecular drift)
  • 10 min: 0.5–2% (early no reflow; autolytic enzymes accelerate; cytoskeletal destabilization progressing toward early fragmentation; regional vulnerability increases; redox imbalance increases molecular ambiguity; early loss of nanoscale fidelity)
  • 20 min: 1–3% (mitochondrial failure; proteostatic stress; autolysis contributes to microdomain blurring with increasing spatial heterogeneity; early fragmentation of fine-scale molecular constraints; deletion remains minimal; early region-specific constraint weakening; early degradation of synaptic molecular architecture)
  • 30 min: 1–4% (topology preserved; moderate blurring of molecular states; early synaptic misregistration)
  • 1 hour: 2–6% (accumulated redox damage; autolysis continues; progressive loss of membrane integrity; constraint weakening in selective regions)
  • 2 hours: 3–8% (microdomain disorganization; autolysis widens compatible molecular states; EM still “good”)
  • 3 hours: 4–10% (consistent with Jordan’s ~2% structural loss → ~4–10% LTM loss)
  • 6 hours: 6–15% (moderate drift of maintenance states; cumulative autolysis; early synaptic misalignment; fine-scale molecular constraints largely lost; remaining structure increasingly coarse-grained; morphology still readable)
  • 9 hours: 10–20% (heterogeneous degradation; regional constraint collapse; molecular specificity eroded)
  • 18 hours: 20–35% (Jordan’s ~15% structural loss + long-duration autolysis; many fine-scale patterns blurred; topology preserved)
  • 24 hours: 30–40% (Extensive molecular destruction; most fine-scale constraints erased; only large-scale geometry and some robust structural motifs remain; atomic-scale inferability severely compromised)
Note: Warm non-ischemic loss is dominated by autolysis, which proceeds at full metabolic rate at 37 °C.
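
For intermediate timepoints, here is a minimal Python sketch of how the table above can be interpolated (purely illustrative: the values are just the midpoints of the ranges I listed, and linear interpolation between timepoints is an assumption, not a biological model).
```python
# Illustrative only: midpoints of the ranges listed above, indexed by minutes
# of warm ischemia. Linear interpolation between timepoints is an assumption.
WARM_ISCHEMIA_MIDPOINTS = [
    (1, 0.1), (3, 0.3), (5, 0.65), (10, 1.25), (20, 2.0), (30, 2.5),
    (60, 4.0), (120, 5.5), (180, 7.0), (360, 10.5), (540, 15.0),
    (1080, 27.5), (1440, 35.0),
]

def warm_ischemia_loss(minutes):
    """Interpolated midpoint estimate of LTM inferability loss (%)."""
    points = WARM_ISCHEMIA_MIDPOINTS
    if minutes <= points[0][0]:
        return points[0][1]
    if minutes >= points[-1][0]:
        return points[-1][1]
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        if t0 <= minutes <= t1:
            return v0 + (v1 - v0) * (minutes - t0) / (t1 - t0)

print(warm_ischemia_loss(90))  # ~4.75% for 90 minutes, between the 1 h and 2 h rows
```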

Systemic collapse component (scenario-based inferential loss)
  • Mild pre-arrest hypoxia (10–30 min): +0.3–0.8% (early destabilization of phosphorylation states; mild receptor trafficking instability; reduced proteostatic buffering; early Ca2+ dysregulation; mitochondrial depolarization in vulnerable regions such as CA1; increased susceptibility of microdomains to subsequent ischemic blurring; no structural deletion but clear weakening of molecular stability)
  • Moderate hypoxia / respiratory failure (30–90 min): +1–2.5% (microdomain drift; phosphorylation noise; redox imbalance; early synaptic instability; proteostatic weakening; partial collapse of metabolic reserves; reduced ability to buffer ROS and Ca2+ spikes; global metabolic constraint structure begins to loosen; ischemic vulnerability significantly amplified before arrest)
  • Sepsis / systemic inflammation (hours): +2.5–5% (cytokine-induced synaptic dysfunction; microglial activation; BBB permeability changes; metabolic exhaustion; severe redox imbalance; widespread weakening of metabolic constraint structure; increased entropy of compatible molecular states; brain enters ischemia in a globally destabilized condition)
  • Hemodynamic instability/shock (30–120 min): +3–6% (loss of homeostatic buffering; fluctuating perfusion with repeated near-no-flow episodes; mitochondrial collapse in vulnerable regions; accelerated blurring; early constraint collapse; CA1 and association cortices become highly vulnerable; ischemic mechanisms D/B/C will rise faster once arrest occurs)
  • Hemorrhagic shock (20–90 min): +2–4% (global hypoperfusion with partial flow; reduced oxygen delivery; early metabolic collapse; redox stress; microdomain instability; moderate constraint weakening; less inflammatory than sepsis but more destabilizing than mild hypoxia)
  • Multi-organ failure (hours): +4–8% (collapse of maintenance states; severe metabolic exhaustion; systemic acidosis and hypercapnia; widespread constraint collapse; high entropy of compatible states; synaptic and microdomain stability severely compromised; brain enters ischemia in a profoundly weakened state)
  • Prolonged agonal period (hours): +5–10% (severe metabolic drift; repeated near-arrest episodes; fluctuating hypoxia; cumulative ROS damage; synaptic misregistration; global loss of metabolic constraint structure; brain enters ischemia with major pre-existing instability; amplifies B(t) and C(t) dramatically)
  • Temperature modulation (hypothermia/hyperthermia): −1–+2% (hypothermia reduces systemic-collapse vulnerability by slowing metabolic drift and autolytic predisposition; hyperthermia increases vulnerability by accelerating proteostatic failure, redox imbalance, and microdomain instability; approximate effect: −0.10–0.15% per °C below 37 °C, +0.15–0.25% per °C above 37 °C; applied as an additive modifier)
  • CPR modulation (effective/ineffective): −1–+3% (effective early CPR = within 2 minutes and continued for 5–10 minutes until perfusion is established; high‑quality compressions with minimal interruptions restore partial perfusion and delay metabolic collapse; ineffective or delayed CPR = after 4–6 minutes, poor perfusion, intermittent no‑flow; increases vulnerability due to repeated ionic destabilization; applied as an additive modifier)
Note: These percentages quantify the scenario-based systemic inferential loss that occurs before the onset of warm ischemia (i.e., before cardiac arrest). This component does not include ischemic mechanisms themselves (deletion, blurring, constraint collapse), nor autolysis, nor structural degradation. Instead, it captures the degree to which the brain enters the ischemic interval in a pre-weakened state, with reduced metabolic reserves, impaired buffering capacity, and destabilized molecular constraints. Systemic collapse does not directly erase synapses or distort ultrastructure, but it amplifies the susceptibility of neural tissue to subsequent ischemic mechanisms. A small quasi-multiplicative effect is also present: systemic collapse slightly increases the rate at which intrinsic ischemic mechanisms accumulate, typically contributing an additional 2–8% of the intrinsic ischemic loss. This effect remains minor, and the overall formulation is practically additive. Temperature and CPR act as modulators that can either mitigate or exacerbate this vulnerability. All intrinsic ischemic processes remain modeled separately in the warm-ischemia list.
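
To make the combination rule in this note explicit, here is a minimal Python sketch of how I intend it (illustrative only: the function name, the default 5% coupling, and the example midpoints are assumptions drawn from the ranges above).
```python
# Illustrative only: additive combination plus the small quasi-multiplicative
# coupling described in the note (systemic collapse adds 2-8% of the intrinsic
# ischemic loss). Function name and default values are my own assumptions.
def total_pre_fixation_loss(ischemic_pct, systemic_pct, coupling=0.05):
    """Combine intrinsic warm-ischemia loss with the pre-arrest systemic component."""
    return systemic_pct + ischemic_pct * (1.0 + coupling)

# Example: ~1 hour of warm ischemia (midpoint ~4%) preceded by moderate
# pre-arrest hypoxia (midpoint ~1.75%), with mid-range coupling of 5%:
print(total_pre_fixation_loss(4.0, 1.75))  # 5.95 -> ~6% estimated LTM inferability loss
```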

Pre-perfusion cold ischemia (incremental loss beyond warm ischemia)
  • 5 min: +0.3–0.5% (ischemia still global; cooling just started; tissue mostly above 20 °C; metabolism slowed but still active at roughly 50–70%; autolysis slowed 5–10× but clearly present; drift reduced but not negligible; early ionic imbalance and mitochondrial depolarization from warm remain unresolved; warm-phase molecular instability begins to freeze in place as temperature falls; deeper regions such as CA1 and hypothalamus remain warmer than cortex)
  • 10 min: +0.6–0.9% (ischemia still dominant; brain trending toward 20–25 °C; metabolism around 30–40%; autolysis slowed about 10× but persists; protease activity minimal but present; early cytoskeletal strain from warm persists; receptor conformational drift from warm becomes progressively stabilized; cooling penetration remains heterogeneous, with deeper regions significantly warmer than superficial layers)
  • 20 min: +1–1.5% (ischemia unresolved; temperature approaching 10–20 °C; metabolism slowed to roughly 15–25% corresponding to a 4–6× reduction; autolytic processes slowed 10–12× but not fully arrested; micro-no-reflow becomes increasingly relevant; residual mitochondrial dysfunction from warm persists without repair; early synaptic misregistration from warm becomes fixed; spatial heterogeneity increases as deeper regions cool more slowly than cortex; early degradation of synaptic molecular architecture continues at a reduced rate)
  • 60 min: +2–3% (ischemia now concentrated in microdomains; deep hypothermia 0–10 °C; metabolism slowed 15–20×; autolysis nearly halted; additional loss dominated by residual ischemia and structural stress; drift effectively zero; nanoscale ambiguity from warm becomes entrenched; deeper regions still slightly warmer than cortex; fine-scale constraints preserved relative to warm but ischemic stress accumulates in vulnerable territories)
Note: Non‑ischemic loss in this phase is dominated by the temperature‑dependent slowdown of autolysis. Q10 scaling reduces metabolic degradation by ~15–20× between 37 °C and 0–5 °C. Drift and proteostatic failure are nearly halted at deep hypothermia. Cooling is non‑uniform, with deeper regions cooling more slowly than superficial layers. This phase shows the highest operational variability due to differences in external cooling efficiency (ice packs, nasopharyngeal irrigation, head ice bath). Values are conservative and represent marginal additions beyond warm ischemia.
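
As a side check on the Q10 figure in this note, a tiny Python sketch (the Q10 value of 2.3 is an assumed textbook-style constant, chosen only to show that it lands in the ~15–20× range quoted above).
```python
# Illustrative only: a standard Q10 temperature-scaling rule of thumb.
# The Q10 value of 2.3 is my own assumption.
def metabolic_rate_factor(temp_c, q10=2.3, reference_c=37.0):
    """Fraction of the 37 degC metabolic/autolytic rate remaining at temp_c."""
    return q10 ** ((temp_c - reference_c) / 10.0)

slowdown_at_2c = 1.0 / metabolic_rate_factor(2.0)
print(round(slowdown_at_2c, 1))  # roughly 18-19x slower than at 37 degC
```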

On-perfusion cold ischemia (incremental loss beyond pre-perfusion)
  • 5 min: +0.12–0.22% (formalin present in major vessels; proximal cortical and subcortical territories begin to fix; some regions remain unfixed and ischemic; early chemical snapshotting already limits further ischemic progression locally; deeper microdomains and early micro-no-reflow pockets continue to evolve; metabolic and autolytic activity collapse rapidly wherever formalin has penetrated)
  • 10 min: +0.16–0.26% (large fractions of cortex and many subcortical regions are partially or fully fixed; residual degradation is confined to deeper structures and microvascular territories with delayed penetration, including vulnerable domains such as CA1 subfields; most non-ischemic processes — autolysis, molecular drift, synaptic instability — are already chemically arrested in perfused regions)
  • 20 min: +0.2–0.32% (most LTM-relevant substrates — synaptic topology, neuronal morphology, and associated molecular scaffolds — are fixed or actively fixing; residual ischemic loss is confined to territories with delayed perfusion or poor penetration; ongoing damage is limited to small, structurally compromised microdomains and persistent micro-no-reflow islands; ischemic and non-ischemic evolution is nearly fully halted at the mesoscale and macroscale)
  • 60 min: +0.22–0.34% (fixation essentially complete across all regions accessible to perfusion; additional ischemic loss is minimal and restricted to tiny volumes that were never adequately perfused or were structurally isolated; no further ischemic or non-ischemic evolution occurs beyond this point at any scale relevant to LTM inferability)
Note: These percentages represent the incremental inferential loss that occurs after perfusion has begun, on top of the damage already accumulated during warm ischemia and pre‑perfusion cold ischemia. They assume a reasonably efficient, head‑focused protocol (e.g., cephalic perfusion in an SBP‑style setup), where cannulation is achieved within the first 1–60 minutes of the total warm/cold ischemic phase. Formalin perfusion arrests ischemic and non-ischemic degradation as soon as it reaches tissue. Incremental loss reflects only the volume not yet fixed during the early perfusion window. Once fixation is established (typically within 20–60 minutes), further inferential loss collapses to zero. Any additional perfusion beyond this window serves operational goals (e.g., ensuring uniform penetration, minimizing residual water, or preparing for later cryoprotectant perfusion) but does not affect the percentages, which saturate once fixation is established. Osmotic, mechanical, and intrinsic fixation-related losses are excluded and modeled separately. Small pockets of unfixed water may persist in incompletely perfused regions, producing a very small residual ischemia (≤0.01%) during the perfusion phase, or later a more significant freezing damage during sub‑zero storage. Only the ischemic component is included in the percentages above; the freezing component is not included, but is prevented by removing the brain and soaking it in fixative and/or cryoprotectant. A small potential benefit (approximately −0.05 to −0.1%) might be achievable in future multi-site robotic perfusion protocols, but this is not included here.

For comparison, on‑perfusion cold ischemia in Alcor/CI cryoprotectant perfusion remains relatively small even after a very gradual 6‑hour cooldown to ~0 °C: approximately ~1.0–1.5% (metabolism arrested; damage dominated by persistent no-reflow pockets and diffusion-limited microdomains rather than autolysis; macro-connectome topology preserved; partial synaptic compromise; micro-connectome parameters marginally degraded; cytoskeletal stress evident in vulnerable regions; residual molecular patterns partially compromised).
  • Warm → ~10 °C (1.5 hours): ~0.80–1.10% (early cooldown; high-impact non-linear events)
  • ~10 °C → ~3 °C (4 hours total): ~0.90–1.30% (mid cooldown; hub-weighted vulnerability)
  • ~3 °C → 0 °C (6 hours total): ~1.00–1.50% (late on‑perfusion cooldown)
The dominant damage in these protocols arises not from cold ischemia itself but from residual ice formation in unavoidable pockets during cryoprotectant perfusion, leading to an LTM inferability-loss of ~4% even in the most successful perfusions.

Tardive on-perfusion cold ischemia (incremental loss beyond pre-perfusion)
  • 5 min: +0.03–0.06% (formalin enters major vessels in a brain already at deep hypothermia; metabolism and autolysis are effectively suppressed before perfusion; incremental loss arises almost exclusively from unfixed deeper microdomains and residual micro no reflow territories not yet reached by the fixative; proximal cortical and subcortical regions begin to fix; deeper microdomains remain unfixed until penetrated; early chemical snapshotting limits any remaining ischemic progression locally; metabolic and autolytic activity collapse rapidly wherever formalin has penetrated)
  • 10 min: +0.04–0.08% (perfusion becomes more homogeneous; most cortical and many subcortical regions, including large portions of CA1, are partially or fully fixed; residual degradation is confined to small, structurally compromised microdomains with delayed penetration; non-ischemic processes are essentially absent outside these pockets)
  • 20 min: +0.05–0.1% (almost all LTM-relevant substrates in perfused territories are fixed; ongoing damage is restricted to tiny, deeply embedded micro-no-reflow islands; both ischemic and non-ischemic evolution are effectively saturated and contribute only marginally to additional inferential loss)
  • 60 min: +0.06–0.12% (fixation complete for all perfusable regions; residual loss reflects only permanently unperfused or structurally isolated microdomains; no further ischemic or non-ischemic evolution is possible beyond this point, and additional loss is negligible relative to prior warm and cold phases)
Note: These percentages assume that perfusion begins ~60 minutes after the start of a reasonably efficient cold ischemic phase. Because perfusion begins after an extended cold ischemic phase, most non-ischemic processes are already suppressed by deep hypothermia. Formalin rapidly arrests the remaining degradative activity. Incremental loss is therefore minimal and saturates extremely early. Intrinsic fixation loss, osmotic effects, and mechanical stress are excluded and modeled separately.

Formalin-based SBP preservation at −20 °C
  • 10 years: 0.06–0.32% (Intrinsic post-fixation ceiling; formalin preserves molecular distinguishability well, with residual limits concentrated in extra-connectomic molecular features such as epitopes, PTMs and chromatin topology. Time-dependent oxidations and hydrolyses at this scale remain far below the threshold for measurable inferability-loss, and synaptic topology and neuronal morphology remain fully stable.)
  • 100 years: 0.07–0.32% (The ceiling remains essentially unchanged. Minor chemical aging may slightly reduce the distinguishability of marginal molecular motifs, but macro- and micro-topology remain intact. Quasi-reversible micro-modifications such as side-chain rearrangements and partial crosslink relaxation introduce local ambiguity without structural drift).
  • 200 years: 0.10–0.41% (The upper bound reflects the cumulative effect of ultra-slow many-to-one chemical transformations in extra-connectomic molecular features, including extremely slow oxidations, amide hydrolyses and secondary crosslink rearrangements. These transformations collapse distinct molecular micro-states into identical end-states without altering geometry. Fine interfaces show slight attenuation, but synaptic and morphological topology remain preserved.)
Glutaraldehyde preservation at −20 °C
  • 10 years: 0.04–0.18% (Superior ultrastructural rigidity reduces geometric drift and lowers the topological floor; dense crosslinking imposes a small intrinsic limit on molecular distinguishability. Time-dependent oxidations and hydrolyses at this scale remain far below the threshold for measurable inferability-loss, and synaptic and morphological topology remain fully preserved.)
  • 100 years: 0.05–0.18% (The ceiling remains stable. GA maintains synaptic and morphological topology with high fidelity. Aging produces quasi-reversible micro-modifications including side-chain rearrangements and partial crosslink relaxation, which introduce local ambiguity without altering geometry; all effects remain below the threshold for informational collapse.)
  • 200 years: 0.08–0.28% (GA continues to preserve macro- and micro-topology effectively. Ultra-slow transformations such as crosslink rearrangements, secondary GA-protein scission, marginal PTM ambiguity and extremely slow oxidations or hydrolyses produce rare molecular conflations comparable to FA but with slightly better geometric stability. Fine interfaces show mild attenuation, while overall macro geometry remains intact.)
Note: The “extra connectome” layer (AMPAR stoichiometry, non-AMPA proteins, epigenetic signals) contributes only ~8–12% of the intrinsic post-fixation inferability-loss ceiling, but it accounts for ~85–90% of the time-dependent inferability-loss accumulated during long-term storage. These limits arise from molecular distinguishability, not from accessibility or retrieval constraints. Fixation preserves these signals; long-term changes reflect theoretical limits of molecular resolution rather than structural degradation.

Brain extraction damage (inferential loss from surgical removal of the brain)
  • Nearly perfect robotic removal of brain: 0.05–0.2% (precise CT-guided depth mapping; depth-stopper burr; atraumatic osteotome separation; robotic micro-manipulation with minimal shear; no blade contact with cortex; negligible pia disruption; inferential loss limited to extremely small surface patches; effectively the lower bound of extraction-related damage)
  • Not perfect robotic removal of brain: 0.2–0.6% (CT-guided depth control but imperfect calibration; occasional uneven pressure; minor cortical indentation during lifting; small areas of pia stretch; microdomain instability in localized patches; overall topology preserved; inferential loss dominated by surface-level ambiguity)
  • Nearly perfect manual removal of brain: 0.3–1% (half-thickness skull cut; osteotome to crack inner table; access from both front and back; minimal over-handling; no direct blade contact with cortex; small risk of focal pia detachment; minor microdomain drift in exposed regions; inferential loss limited to surface micro-regions)
  • Not perfect manual removal of brain: 1–3% (oscillating saw or rotary cutter; shallow circumferential cuts penetrating dura and grazing cortical surface; focal compression or shear during extraction; minor tearing of pia; localized microdomain disruption; small-scale synaptic misregistration near the cut line; no global structural loss but clear regional inferential ambiguity)
  • Catastrophic extraction error (rare): 3–10% (uncontrolled penetration of saw or bur; deep cortical laceration; focal subarachnoid tearing; significant pia disruption; localized but severe microdomain collapse; inferential loss confined to the affected region but large enough to be non-negligible; extremely rare with modern techniques)
Note: Brain removal is typically required whenever perfusion is not fully successful, because small pockets of unfixed water may remain in the cortex even after hours of formalin-based perfusion; these pockets would freeze at −20 °C and cause severe local damage. Removal allows full immersion and eliminates these “shadow regions”. These percentages quantify the inferential loss associated with the mechanical removal of the brain from the skull. They do not represent visible tissue damage or ultrastructural destruction, but the degree to which surgical manipulation introduces small regions of microdomain instability, synaptic misregistration, or surface-level ambiguity that slightly reduce the inferability of pre-mortem LTM states. Manual extraction carries higher risk of focal cortical contact, shear, or pia disruption, whereas CT-guided robotic extraction minimizes these effects.

Cracking
  • Localized microcracking: 1–4%. Few discrete fracture points; continuity punctuated but largely preserved. Arises even under well-controlled cooling during vitrification when high-concentration cryoprotectants are used (Alcor/CI protocols), or from minor handling shocks, as low-level internal stresses release along sparse fracture paths. These fractures typically reflect limited, localized exceedances of the tissue’s capacity for viscoelastic stress relaxation, without widespread propagation. Molecular fingerprinting could, in principle, allow re-matching of severed processes, but this only resolves fiber identity, not the exact synaptic destination; residual uncertainty remains at the level of spine-head geometry, active-zone alignment, vesicle distributions, and perisynaptic glia, producing a small but non-zero inference loss.
  • Diffuse microcracking: 7–14%. Distributed fractures; global reconstruction significantly more difficult. Plausible when internal stresses accumulate beyond the tissue’s limited capacity for local viscoelastic stress relaxation, including at isolated micro‑sites — a condition that can occur even in well‑controlled Alcor/CI cooling protocols using high-CPA vitrification solutions — and further exacerbated by imperfect thermal uniformity or accidental mechanical perturbations during storage (earthquakes, explosions, resonant oscillations of the dewar). Multiple engram‑spanning pathways are interrupted, and matching across many irregular, micro‑branched interfaces introduces substantial ambiguity at synaptic resolution; even if some fibers can be re‑matched by molecular fingerprint, the precise synaptic embedding (which spine, which micro‑cluster, which micro‑column, which vesicle pattern, which perisynaptic glia) remains partially ambiguous.
  • Macrocracking: 18–32%. Large fractures separating entire regions; global continuity compromised. Observed in cryonics practice as acoustic fracturing at −196 °C, typically arising from severe thermal gradients or strong mechanical shocks (drops, collisions). By analogy with brittle fracture, interfaces are expected to be irregular rather than perfectly planar, with micro‑branching and small uncertain volumes at the crack front; alignment becomes ambiguous for long‑range axons and high‑centrality nodes (e.g., CA1, hypothalamus), where even small topological errors can disproportionately affect long‑term memory inference.
Note: These percentages are conservatively reduced to reflect a relevant probability that some cracks may behave more like nearly planar, re‑alignable surfaces. The literature on vitrified systems shows that fracture is a structural risk even under controlled cooling with high CPA concentrations, but does not yet provide quantitative lower bounds for organ‑scale brain tissue. The 1–4% range should therefore be read as a modeling assumption rather than an empirically validated frequency.
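
Finally, to show how I compose the stages above into a single end-to-end figure, here is a minimal Python sketch for one hypothetical scenario (illustrative only: every value is the midpoint of one of my ranges, the chaining of stages is my own choice, and the small quasi-multiplicative systemic coupling is omitted for simplicity).
```python
# Illustrative only: one hypothetical end-to-end scenario, summing the
# midpoints of my ranges for each stage. The quasi-multiplicative systemic
# coupling (~0.2% here) is omitted for simplicity.
scenario = {
    "warm ischemia, 1 hour":                        4.00,  # 2-6%
    "systemic collapse, moderate hypoxia":          1.75,  # 1-2.5%
    "pre-perfusion cold ischemia, 60 min":          2.50,  # 2-3%
    "tardive on-perfusion cold ischemia, 60 min":   0.09,  # 0.06-0.12%
    "robotic brain extraction, not perfect":        0.40,  # 0.2-0.6%
    "formalin storage at -20 degC, 100 years":      0.20,  # 0.07-0.32%
    "cracking (none expected in fixative storage)": 0.00,
}

total = sum(scenario.values())
print(f"Estimated LTM inferability loss for this scenario: ~{total:.1f}%")  # ~8.9%
```
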
Last edited by PCmorphy72 on Fri Jan 02, 2026 10:36 am, edited 2 times in total.
jordansparks
Site Admin
Posts: 292
Joined: Thu Aug 27, 2015 3:59 pm

Re: Comparing resilience of brain preservation with digital data preservation

Post by jordansparks »

Your percentages for warm ischemia seem way off. There is variation between patients, and the death process itself can cause significant damage, but I would tend to have ballpark numbers more like this:
3 hrs warm ischemia: 2% loss
6 hrs warm ischemia: 4% loss
18 hrs warm ischemia: 15% loss
The reason for this is that brain banks get brains all the time with hours of warm ischemia. If they had 75% loss of memory, that would also mean 75% loss of structure. Those brains would be useless to the brain banks. Instead, they generally look just fine.

I also think your numbers for fixation degradation over time are way off. I think there is essentially no degradation of the memories. Also, I wouldn't worry at all about cryoprotectant osmotic or toxic damage. I think that damage is essentially zero. I would also put cracking as essentially zero loss in spite of my guess about pulverization. The real concern with cryo is not those things. The concern is ice. I looked into it more, and all cryo patients have ice, not just some of them. I do know that a little bit of ice might not be a big deal, but there's going to be a point where the ice is causing real damage. I cannot estimate that. I'll leave it to a cryobiologist.

I'm going to expand on cracking. If you take a cryopreserved brain and crack it cleanly into two halves with no pulverization, then that is zero damage. I'm very surprised that you would peg that at 50% loss. It's rudimentary to fit those two pieces back together again. You know exactly where everything belongs and it can easily be stitched up. I'm starting to think you are completely misunderstanding what's meant by "damage". It's damage to the information of where each molecule belongs. Damage means not knowing how to put it back together again. When you have full knowledge of the original state, that is zero damage. You should re-evaluate all of your percentages with the definition of damage as relates to "known state" rather than the more naive "physical damage" that you seem to have used. Because this same exact definition issue applies to all the smaller molecular changes as well. If you know where the molecule should go, that's zero damage. I really thought you already knew that because you were talking about inference.
PCmorphy72
Posts: 38
Joined: Sun May 26, 2019 12:39 pm

Re: Comparing resilience of brain preservation with digital data preservation

Post by PCmorphy72 »

If you know where the molecule should go, that's zero damage. I really thought you already knew that because you were talking about inference.
I knew that. But even using definitions like “maximum theoretical inference achievable with any conceivable future technology for atomic-scale readout”, an AI would give the same answers as when using “information of how to put each molecule back together again to the original state”. The hard part is not the wording — it’s guiding the AI step by step through the entire analytical structure that makes the inference possible. This ended up becoming a much longer analysis than I expected, and the topic just pulled me in.

Things become non-trivial when we try to measure inferability. Here I am speaking strictly about inferability in the information-theoretic sense, not about any assumed relationship between physical damage and memory loss. To measure it, we must first decide what counts as the long-term memory (LTM) we are trying to infer. And here the definition matters.
  • If LTM is defined as the physical pattern stored in the tissue, then inferability does not depend on the brain’s decoding machinery. It depends only on our ability to read the underlying physical synaptic, ultrastructural, and molecular structure of the tissue.
  • If LTM is defined as the already-decoded information, then reconstructing it in a real brain becomes far from trivial. It would require rebuilding the entire decoding machinery (hippocampal indexing, pattern-completion dynamics, cortical reinstatement, and the dynamic constraints that support these processes and make stored patterns readable). This is vastly more complex than inferring the physical synaptic patterns themselves.
A rough analogy, with all its limitations, is the following. Imagine LTM as a collection of bank cards (the stored data in the physical synaptic patterns). If damage hits the cards, the total inferability is easy to measure: the percentage of damage in cards tells you the percentage of lost information.

Now imagine LTM as a collection of bank cards plus a corresponding set of passwords (the decoding keys in the hippocampal–cortical system that allows those synaptic patterns to be read). If damage hits both cards and passwords uniformly, the situation changes radically: a password with one missing digit is still inferable; with two missing digits it becomes harder; with three missing digits it becomes exponentially harder. Here “harder” quickly approaches “practically impossible” (see NOTE below), and the loss of inferability is clearly not proportional to the number of missing digits.

This is exactly what happens if structures like the hippocampus are damaged: the loss of inferability is not proportional to the number of damaged cells. It grows faster — sometimes dramatically faster. And this asymmetry is exactly what complicates any attempt to quantify “how much LTM is inferable” after damage.

Of course, structures like the hippocampus and related medial temporal lobe circuits are not “passwords” in any literal sense. They are decoding subsystems that support pattern completion, stabilization, and retrieval of distributed cortical memories. They also provide dynamic constraints, which are not static molecular structures but are essential for making stored patterns functionally accessible. Whether such constraints are inferable from the remaining structure is a separate question. The analogy only captures one aspect: the fact that damage to the decoding machinery can have a non-linear impact on inferability, even when the stored patterns themselves are mostly intact.

NOTE: This concept is more subtle than simply imagining the number of missing digits growing to 100. Even if reconstruction becomes “practically impossible”, the theoretical measure of inferability would not change. What makes inferability collapse in practice are additional factors. In information-theoretic terms, inferability decreases as the entropy of the set of compatible original states increases. Each factor below is simply a different way in which distinct initial configurations can collapse into the same observable outcome, thereby increasing the entropy of the solution space. This includes the “many-to-one” case, where different molecular states converge to an identical final configuration after fixation.
  • Intrinsic ambiguity. Small differences in the original state lead to nearly indistinguishable outcomes, making the reconstruction imprecise but not fundamentally non-unique.
  • True non-uniqueness (different states → same outcome). Distinct original states become indistinguishable in the final specimen. For example, certain patterns of chemical modification can make different methylation histories compatible with the same observed structure after crosslinking. This is the canonical “many-to-one” mechanism.
  • Non-linear explosion of possibilities. Small amounts of damage produce a disproportionately large increase in the number of compatible solutions. This is the regime where “harder” quickly approaches “practically impossible”.
  • Loss of constraint-structure (constraint collapse). The damage does not alter the stored data directly, but destroys the structure of constraints that limited the solution space. When these constraints collapse, the number of possible reconstructions grows dramatically even if the data themselves are minimally altered.
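To make this entropy framing concrete, here is a minimal Python sketch. The base-10 "password" alphabet and the constraint counts are purely illustrative placeholders, not a model of tissue; the point is only that the number of compatible original states, and hence the entropy of the solution space, grows multiplicatively with each lost constraint, which is why inferability loss is not proportional to the fraction of damaged elements.

Code: Select all

import math

def compatible_states(missing_constraints: int, alphabet: int = 10) -> int:
    """Toy model: number of candidate originals consistent with the observed,
    partially erased state (one lost constraint = one erased base-10 digit)."""
    return alphabet ** missing_constraints

def ambiguity_bits(missing_constraints: int, alphabet: int = 10) -> float:
    """Entropy (in bits) of the set of compatible original states."""
    return math.log2(compatible_states(missing_constraints, alphabet))

if __name__ == "__main__":
    for k in range(6):
        print(f"{k} lost constraints -> {compatible_states(k):>7} compatible states "
              f"({ambiguity_bits(k):5.1f} bits of ambiguity)")
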
On the empirical basis and current limits of inferential quantification

The information-theoretic analysis above clarifies what inferability means, but it does not yet tell us how to quantify it. A fully rigorous estimate would require a biophysical model capable of tracking molecular degradation, synaptic state drift, and the collapse of constraint structures over time, and then computing the entropy of the set of pre-mortem configurations compatible with a given post-mortem snapshot. No such model exists today — not in the literature, and not in any current AI system.

For this reason, the quantitative estimates that follow should not be interpreted as the output of a complete molecular-level inference engine. They are structurally informed, physiologically constrained approximations. Their empirical anchor is the only real quantitative dataset we have: your own “ballpark numbers” for structural degradation, which I am using as the foundational empirical constraint for this analysis. If I had access to a full molecular-level model, I would use it. In its absence, approximation is not a weakness — it is the only mathematically honest option.

These values are not precise measurements; unfortunately, they are also not the result of advanced iterative solvers or convergence loops capable of refining parameters through repeated simulation cycles; such computational machinery is simply not part of the present framework. These values are estimates grounded in the practical experience of brain banks, which routinely receive tissue with several hours of warm ischemia that nevertheless retains structurally intact morphology. If such tissue had suffered 50–75% inferential loss, it would exhibit corresponding structural degradation — which it does not. Your numbers therefore provide the only realistic quantitative constraints available, and I am using them as the empirical foundation for the analysis that follows.

The inferential components that will be introduced below (e.g., “deletion, blurring, constraint collapse”) are therefore conceptual categories, not separable numerical functions. They explain why inferability degrades in a non-linear way, but they cannot yet be assigned independent values. Any attempt to do so would require a level of molecular modeling that no existing system — human or artificial — can currently perform.

Future AI systems may eventually be able to construct such models by integrating molecular dynamics, synaptic proteostasis, regional vulnerability, and constraint-structure evolution into a unified probabilistic framework. When that becomes possible, inferability will become a measurable quantity rather than an informed approximation. The present framework is intended as the conceptual scaffolding for that future work, not as a substitute for it.

A final methodological note is required here. All quantitative estimates in this analysis assume a retrieval process operating close to the physical lower bound achievable by any future technology. This bound is non-zero: even idealized atomic-scale readout cannot avoid small perturbations, shot noise, or the indistinguishability of extremely similar molecular states. However, the magnitude of this irreducible retrieval-loss is expected to be far below the intrinsic inferability-loss ceilings discussed earlier. Basic physical limits on signal-to-noise ratios, photon/electron statistics, and molecular perturbation imply a lower bound on the order of 0.001–0.01% for atomic-scale discrimination of densely crosslinked biological structures. With realistic redundancy and averaging across repeated measurements, the effective contribution is likely no more than 0.001–0.005%. This value refers only to the minimal retrieval-loss imposed by physics. It is distinct from — and much smaller than — the intrinsic inferability-loss ceilings (0.03–0.30%) that arise from structural conflation within the preserved tissue itself (e.g., when adjacent membranes or spines come into contact). For this reason, retrieval-loss is treated as negligible in the present context: the values reported below reflect the intrinsic limits of the preserved structure, not the contingent limitations of any particular readout method.

Ischemic damage
If they had 75% loss of memory, that would also mean 75% loss of structure. Instead, they generally look just fine.
This follows from your statement that "structural preservation is fully equivalent to memory preservation. There is no distinction. They are the same thing."

I had confused “consolidating features” with “consolidated features”, but later I was more precise in estimating that the Structure/LTM ratio should not be 100%, but roughly 93–99%, dropping to 88–92% if synaptic plasticity components are excluded (I based this estimate on considerations you may find, for example, in Abraham et al., 2019). In an earlier post I also sketched how this plasticity component behaves under fixation (FA vs GA), although I did not quantify those effects or explain their mechanisms.

As I’ve already made a few “confused” digressions — including several overestimates in my last post — I will probably make more, though hopefully fewer over time. That said, I fully admit that my 4-hour warm-ischemia estimate was too high; given your direct experience with connectomes from brains fixed more than 3 hours post mortem (and possibly a few fresher samples from your former free research program), I will adjust accordingly.

Also note that my 5-minute warm-ischemia estimate (“5–10%”) was never meant as “pure ischemia”. It was implicitly incorporating the shock typical of systemic-collapse deaths — multi-organ failure, pre-arrest hypoxia, inflammatory load, metabolic exhaustion — which are very different from sudden cardiac arrest in an otherwise healthy person. To be practical, since I was using approximate percentages, I treated those systemic-collapse contributions as a kind of “zero-offset” that I simply added to the 5-minute value.

In hindsight, and especially in light of your remark that “the death process itself can cause significant damage”, it is clearer that these two contributions should not be lumped together. For this reason, what I previously treated as a single “warm-ischemia percentage” must now be decomposed into two components:

Total LTM inferability loss(t) = Ischemic component(t) + Systemic-collapse component(t)

To make this decomposition meaningful, each term must be defined separately.
  • Ischemic component: why this term requires further inferential subcomponents. Warm ischemia is often treated as a single scalar quantity — “X minutes of no-flow equals Y% damage” — but this simplification hides the fact that ischemia is not a unitary phenomenon. From the standpoint of LTM inferability, warm ischemia produces three distinct classes of informational degradation, each governed by different biophysical mechanisms and each contributing differently to the total inferential loss.
    • Engram deletion (loss of physical substrate). This is the outright destruction of neurons, dendrites, axons, or synapses. It is the slowest component to appear, but the most irreversible. Ultrastructure can remain traceable for hours after functional viability is lost, which means that this component becomes significant only later, as physical structures begin to fail.
    • Engram blurring (loss of molecular and microdomain states). Long before physical structures disappear, ischemia disrupts the stable molecular configurations that constrain synaptic identity and strength: phosphorylation patterns, receptor clustering, scaffold organization, proteostatic balance, and microdomain integrity. These degradations are often invisible to EM, even when morphology appears intact, but they reduce inferability by producing a widening of the set of pre-mortem states that remain indistinguishable at the level of local structure.
    • Constraint collapse (loss of global structural constraints). Some aspects of LTM depend not only on local synaptic states but also on global circuit-level constraints — hippocampal indexing tendencies, pattern-completion biases, mesoscale connectivity motifs, and region-specific vulnerabilities — that restrict the space of possible interpretations of local structure. When ischemia disrupts these global constraints (for example through selective vulnerability of CA1, early metabolic failure in association cortices, or pre-arrest systemic deterioration), the same local snapshot becomes compatible with a larger number of global configurations, thereby amplifying inferential uncertainty even when local morphology remains apparently preserved.
    Taken together, deletion, blurring, and constraint collapse provide a full decomposition of the ischemic contribution to LTM inferability loss. Each evolves on its own temporal profile, so that the ischemic component at time t can be understood as the sum of three terms: D(t), B(t) and C(t).
  • Systemic-collapse component. Warm ischemia often begins in a physiological context that is already deteriorating. In many deaths, the brain does not enter the ischemic interval from a state of sudden, clean cardiac arrest, but from hours of progressive systemic failure: declining oxygenation, circulatory instability, metabolic exhaustion, and multi-organ dysfunction. These global processes do not directly delete synapses or distort microstructure, but they erode the metabolic and homeostatic conditions that stabilize neural state. As systemic parameters drift outside viable ranges, the brain’s ability to buffer or delay ischemic degradation collapses, and repair capacity diminishes. The tissue therefore enters the ischemic interval in a pre-weakened state that increases its susceptibility to the intrinsic ischemic mechanisms described above (accelerating them and producing a small quasi-multiplicative effect, though the formula remains practically additive). This adds a temporally intertwined but mechanistically distinct contribution to LTM inferability loss relative to the ischemic decomposition into deletion, blurring, and constraint collapse.
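To make the additive decomposition above concrete, here is a minimal Python sketch. Every curve and parameter in it is a hypothetical placeholder (saturating exponentials with invented ceilings and time constants), not a calibration against the ballpark figures discussed in this thread; it only illustrates how D(t), B(t), and C(t) evolve on different timescales and how a scenario-dependent systemic-collapse offset is added on top.

Code: Select all

import math

def saturating(t_hours: float, ceiling_pct: float, tau_hours: float) -> float:
    """Loss rising toward a ceiling with time constant tau (placeholder form)."""
    return ceiling_pct * (1.0 - math.exp(-t_hours / tau_hours))

def ischemic_component(t_hours: float) -> float:
    """D(t) + B(t) + C(t): deletion (slow), blurring (fast), constraint collapse (intermediate)."""
    deletion = saturating(t_hours, ceiling_pct=20.0, tau_hours=24.0)
    blurring = saturating(t_hours, ceiling_pct=2.0, tau_hours=4.0)
    constraint_collapse = saturating(t_hours, ceiling_pct=3.0, tau_hours=10.0)
    return deletion + blurring + constraint_collapse

def total_ltm_loss(t_hours: float, systemic_collapse_pct: float = 0.0) -> float:
    """Total LTM inferability loss(t) = ischemic component(t) + systemic-collapse component(t)."""
    return ischemic_component(t_hours) + systemic_collapse_pct

if __name__ == "__main__":
    for t in (3, 6, 18):
        clean = total_ltm_loss(t)
        collapsed = total_ltm_loss(t, systemic_collapse_pct=2.0)  # invented offset
        print(f"{t:>2} h: {clean:.1f}% (sudden arrest) vs {collapsed:.1f}% (pre-arrest collapse)")
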
The updated “warm-ischemia baseline” list in the comparison framework (see my last post above) reports the total LTM inferential loss produced by intrinsic warm ischemia at 37 °C, under a baseline scenario (sudden cardiac arrest, no systemic-collapse deterioration), upon which all subsequent cold-phase increments are added. These values integrate the full ischemic decomposition — deletion, blurring, and constraint collapse — together with temperature-dependent non-ischemic degradation (autolysis, proteostatic failure, molecular drift), which are fully active and inseparable at normothermia.

In the previous version of this comparison framework, the warm-ischemia percentages implicitly included not only the intrinsic ischemic mechanisms (deletion, blurring, constraint collapse) but also the physiological deterioration that often precedes cardiac arrest in real-world deaths. This “systemic collapse” contribution was treated as an unspoken offset added to the early warm-ischemia values.

To make the model more transparent and mechanistically accurate, this component is now separated into its own list. Systemic collapse does not delete synapses or distort ultrastructure directly; instead, it weakens the metabolic and homeostatic conditions that stabilize neural state, making the brain more vulnerable to ischemic degradation. In other words, the brain often enters warm ischemia already “pre-damaged”, and the ischemic curve rises faster as a result.
The “Systemic collapse component” list in the updated framework quantifies this scenario-based contribution.

A dedicated “pre-perfusion cold ischemia” list now captures the incremental inferential loss that occurs after warm ischemia ends and before any perfusate re-enters the cerebral vasculature. In the updated comparison framework, this phase is treated explicitly because ischemia remains at full strength while metabolism progressively slows as temperature falls. Autolysis and molecular drift decline steeply with cooling: Q10 scaling implies a ~15–20× slowdown of autolytic processes between 37 °C and 0–5 °C. Cooling is non-uniform, and deeper regions such as CA1 and the hypothalamus lag behind superficial layers, producing spatial heterogeneity in the early minutes. Because non-ischemic degradation is rapidly suppressed but ischemia persists, incremental loss in this phase reflects both intrinsic biophysics and the practical variability of external cooling methods (ice packs, nasopharyngeal irrigation, head ice bath). The 5–20 minute window is particularly critical in real-world scenarios where transport and setup delay cannulation. The updated percentages represent incremental inferential loss beyond the warm baseline and are calibrated for consistency with warm ischemia and on-perfusion phases.
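As a quick back-of-the-envelope check on that slowdown factor, here is a minimal sketch; the Q10 values are assumed inputs in the typical 2.2–2.4 range for enzymatic processes, not measurements specific to autolysis.

Code: Select all

def q10_slowdown(t_warm_c: float, t_cold_c: float, q10: float) -> float:
    """Rate ratio between two temperatures under a simple Q10 model:
    rate(warm) / rate(cold) = q10 ** ((t_warm - t_cold) / 10)."""
    return q10 ** ((t_warm_c - t_cold_c) / 10.0)

if __name__ == "__main__":
    for q10 in (2.2, 2.4):
        factor = q10_slowdown(37.0, 2.5, q10)  # 37 °C versus the middle of the 0-5 °C range
        print(f"Q10 = {q10}: autolysis roughly {factor:.0f}x slower near 0-5 °C than at 37 °C")

With these assumed Q10 values the slowdown comes out at roughly 15–20×, consistent with the figure quoted above.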

In SBP protocols, once perfusion begins, fixation and ischemia enter a brief period of direct competition: whichever reaches a region first determines its trajectory. I included a separate list for this “on-perfusion cold ischemia” phase because it captures the short interval where formalin has entered the major vessels but has not yet reached the deeper microdomains. In most practical cases — especially when cannulation occurs within the typical 5–20 minute window — this phase is narrow, and the remaining loss is confined to territories with delayed penetration or micro‑no‑reflow. The percentages are small because, once fixation gains the upper hand, the ischemic curve collapses rapidly.

I also separated the tardive variant, which applies when perfusion begins only after the brain has already reached deep hypothermia. In this scenario, the “race” between fixation and degradation is largely resolved before fixation even arrives: metabolism and autolysis are already suppressed by temperature alone. The incremental loss is therefore minimal and saturates very quickly. This list exists to distinguish these cases from the standard on-perfusion phase, since the underlying dynamics are qualitatively different even if the time intervals appear similar.

Although the framework focuses on SBP protocols, I include below the two grey-tier lists for Alcor/CI because the contrast is informative. Unlike SBP, where fixation competes directly with ischemia and rapidly arrests it, Alcor/CI perfusion introduces a cryoprotectant rather than a fixative. The CPA cools the tissue from the inside but does not stop degradation; it merely slows it while adding mechanical, osmotic, and thermal stresses that SBP protocols avoid. The result is a longer and more irregular ischemic tail, with incremental losses that behave differently in both the standard and tardive variants. These lists are included only as comparative references, not as part of the core SBP framework.

Alcor/CI cryoprotectant on-perfusion cold ischemia (incremental loss beyond pre-perfusion)
  • 5 min: +0.2–0.3% (partial restoration of flow; some territories still under micro-no-reflow; cooling now acts from inside and outside; metabolism sharply suppressed but not fully arrested; autolysis and hydrolysis extremely slow but non-zero; early consolidation of warm/cold-phase ambiguity as drift halts; cold-shock and mechanical/thermal stress present but marginal relative to ischemic loss)
  • 10 min: +0.3–0.4% (most major vessels are perfused; intravascular temperature trending toward low-teens °C; residual ischemia confined to deeper and watershed microdomains; autolysis slowed ~20×; drift minimal as warm-legacy patterns stabilize; cold-shock and perfusion-induced structural microstress begin to accumulate but remain secondary to ischemia)
  • 20 min: +0.45–0.65% (macro-perfusion is stable; ischemia confined to persistent micro-no-reflow islands; temperature approaching ~10 °C; metabolism ~1–3%; autolysis and hydrolysis effectively halted but not chemically blocked; incremental loss dominated by unresolved microvascular territories and structural stress; cold-shock and non-ischemic mechanical/thermal micro-damage contribute only marginally)
  • 60 min: +0.7–1% (perfusion relatively stable; metabolism near zero; ischemia-driven biochemical activity minimal; autolysis and enzymatic processes fully halted; warm-legacy patterns entrenched; remaining loss reflects persistent micro-no-reflow territories and cumulative structural fatigue; cold-shock and perfusion-related microstress contribute a small but non-negligible fraction, still far below the ischemic component)
  • 90 min: +0.8–1.1% (brain temperature around ~10 °C in most regions; ischemic processes essentially saturated; metabolism fully suppressed; autolysis and enzymatic activity halted; incremental loss dominated by long-duration structural and microvascular stress in previously compromised territories; cold-shock and non-ischemic microstress accumulate slowly but remain modest, affecting borderline nanoscale patterns rather than creating new large-scale ambiguity)
  • 4 hours: +0.9–1.2% (brain temperature near ~3 °C; metabolism fully suppressed; ischemia-related processes effectively frozen; autolysis and enzymatic activity halted; incremental loss beyond this point is due almost entirely to long-duration structural, mechanical, and thermal stress under non-physiological perfusion conditions; cold-shock-related micro-damage rises above the background of earlier ischemic loss but remains a small correction)
  • 6 hours: +1.1–1.5% (brain temperature approaches ~0 °C with gradual cooling; residual ischemia unchanged and confined to stable micro-no-reflow pockets; metabolism zero; autolysis and enzymatic processes fully halted; warm-legacy patterns fully entrenched; remaining incremental loss reflects saturation of long-duration non-ischemic structural stress — cold-shock, mechanical, and thermal — in already vulnerable microdomains; no new large-scale degradation occurs, but borderline nanoscale ambiguities become fully entrenched; additional loss beyond this point negligible relative to warm + early cold; cooling fully uniform)
Note: These values represent the incremental inferential loss that occurs after cryoprotectant perfusion has started, on top of the damage already accumulated during warm ischemia and pre-perfusion cold ischemia. They assume a reasonably efficient, head-focused protocol (e.g., cephalic perfusion in an SBP-style setup), where cannulation is achieved within the first 1–60 minutes of the total warm/cold ischemic phase. Because cryoprotectant perfusion cools the brain from the inside, metabolism is rapidly suppressed; most additional inferential loss is therefore driven by residual micro-no-reflow and long-duration structural stress, not by ongoing metabolic collapse. Osmolarity, osmotic shrinking, and CPA chemical toxicity are explicitly excluded and are to be modeled in a separate block. Small pockets of unfixed water may persist in incompletely perfused regions, producing a very small residual ischemia (≤0.01%) during the perfusion phase, or later a more significant freezing damage during sub‑zero storage. Only the ischemic component is included in the percentages above; the freezing component is not included, but is prevented by removing the brain and soaking it in fixative and/or cryoprotectant. A small potential benefit (approximately −0.1 to −0.2%) might be achievable in future multi-site robotic perfusion protocols, but this is not included here
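Because the lists are designed to be stacked, a trivial bookkeeping sketch may help readers follow the arithmetic. The specific increments below are placeholders: the warm-ischemia value echoes the 3-hour ballpark discussed earlier, the pre-perfusion value is an invented placeholder, and the on-perfusion value is the midpoint of the 60-minute range in the list above. This is not a validated scenario.

Code: Select all

# Cumulative inferability loss across phases; increments are illustrative placeholders.
scenario = [
    ("warm ischemia, ~3 h, sudden arrest (baseline)",       2.00),
    ("pre-perfusion cold ischemia, ~20 min (placeholder)",   0.50),
    ("on-perfusion cold ischemia, ~60 min (midpoint above)", 0.85),
]

cumulative = 0.0
for phase, increment in scenario:
    cumulative += increment
    print(f"{phase:<55} +{increment:.2f}%  -> cumulative {cumulative:.2f}%")
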

Alcor/CI tardive cryoprotectant on-perfusion cold ischemia (incremental loss beyond pre-perfusion)
  • 5 min: +0.05–0.1% (partial restoration of flow in territories that were still under micro-no-reflow at the end of cold pre-perfusion; temperature remains in the 0–10 °C range; metabolism effectively near zero; autolysis and hydrolysis essentially halted; additional loss dominated by residual micro-ischemia and the onset of structural stress under non-physiological perfusion; nanoscale ambiguity from the warm and pre-perfusion cold phases continues to consolidate, but no new large-scale patterns emerge)
  • 10 min: +0.1–0.15% (perfusion more homogeneous; most macro-vessels and a larger fraction of microdomains are now reperfused; temperature still 0–10 °C with minimal further cooling; metabolism functionally absent; autolysis and enzymatic activity fully suppressed; incremental loss driven by persistent micro-no-reflow pockets and cumulative mechanical/thermal stress from perfusion; nanoscale ambiguities in already vulnerable regions become more entrenched, but no qualitatively new degradation modes appear)
  • 20 min: +0.15–0.25% (perfusion pattern largely stable; residual ischemia confined to small, structurally compromised microdomains; temperature stable in deep hypothermia; metabolism and autolysis effectively zero; additional loss reflects long-duration structural fatigue in previously stressed territories and subtle CPA-related mechanical stress; nanoscale ambiguity increases only marginally and remains tightly coupled to pre-existing vulnerabilities rather than creating new large-scale uncertainty)
  • 60 min: +0.25–0.35% (perfusion stable over tens of minutes; residual micro-no-reflow slowly saturates; structural stress from prolonged non-physiological perfusion accumulates but at a low rate; temperature uniform and deeply hypothermic; metabolism and autolysis fully arrested; incremental loss now dominated by slow mechanical and thermal micro-damage in already borderline regions; nanoscale ambiguity grows slowly but remains a small correction relative to the warm + pre-perf cold legacy)
  • 90 min: +0.3–0.4% (brain remains at stable deep hypothermia with uniform cooling; ischemic processes fully saturated; no new ischemic territories emerge; structural and microvascular stress continue to accumulate slowly in vulnerable domains; nanoscale ambiguity in synaptic and membrane-associated structures increases slightly but remains tightly bounded; additional loss over this interval is modest compared to the cumulative damage already present at the start of perfusion)
  • 4 hours: +0.35–0.45% (long-duration perfusion at deep hypothermia; residual micro-ischemia effectively static; structural, mechanical, and thermal stress from prolonged non-physiological perfusion dominate the incremental loss; no meaningful metabolic or autolytic contribution; nanoscale ambiguity increases mainly in already compromised microdomains; large-scale circuit topology and mesoscale architecture remain effectively unchanged relative to the state at the end of cold pre-perfusion)
  • 6 hours: +0.4–0.5% (perfusion and temperature fully stable for hours; residual ischemia unchanged and confined to fixed microdomains; metabolism zero; autolysis and enzymatic processes fully halted; incremental loss reflects saturation of long-duration structural stress in already vulnerable territories; no new large-scale degradation occurs; borderline nanoscale ambiguities become fully entrenched; additional loss beyond this point is negligible relative to warm plus pre-perfusion cold ischemia)
Note: Values assume that cryoprotectant perfusion begins ~60 minutes after the start of a reasonably efficient cold ischemic phase. The above percentages are lower than in the non-tardive scenario because a larger fraction of the total cold-phase degradation has already occurred during the extended pre-perfusion interval.
Last edited by PCmorphy72 on Fri Jan 02, 2026 8:11 am, edited 1 time in total.
PCmorphy72
Posts: 38
Joined: Sun May 26, 2019 12:39 pm

Re: Comparing resilience of brain preservation with digital data preservation

Post by PCmorphy72 »

Fixation damage

Regarding fixation degradation, my initial estimates shifted after orienting the AI using Andy’s references. I have also revised the explanatory notes that accompany the percentages — a component just as important as the numerical values themselves — especially those describing the intrinsic “molecular effects” that arise from the very onset of fixation. These revisions are based on a refined assessment I reached by querying the AI in a more targeted way, as a final attempt to see whether it could assist reliably before resorting to a more manual approach (prior to this, I had even considered a “10 years: ~0%” loss).

Your remark was decisive: “If it's some sort of "dynamic" state that is short lived, we don't care about it. If it's a longer lived state, then we already are capturing it.” A survey of the literature strongly supports this principle. Within this criterion of stability, I used Andy’s “epigenetic information” as a stability benchmark rather than as a substrate of memory. Relative to this I focused on identifying a static component of synaptic plasticity that is slightly less inferable than the epigenetic layer, yet still compatible with long‑term preservation. This led me to a plausible candidate: the subunit composition of AMPA receptors.
  • AMPARs introduction. Separating the "consolidated" contribution of these receptors to LTM from their usual "consolidating" contribution, I had to support the idea that LTM is partly encoded in AMPARs, so I drew on statements like "We review the reliance of synaptic plasticity on AMPAR variants and propose the 'AMPA receptor code' framework" (Diering & Huganir, 2018). To quantify this contribution I had to be speculative, but staying conservative, as this thread about resilience suggests, and reading statements like "Tetrameric AMPA receptors are primary transducers of fast excitatory transmission, and their abundance at the synaptic surface is a crucial determinant of synaptic efficacy in neuronal communication" (Muñoz de León-López et al., 2025), I would estimate a 5–10% contribution rate (until you show me why this "new guess" of mine is wrong again). Chemical fixation preserves receptor position and molecular architecture, yet this information remains only partially inferable, even with future technology. You have already justified this "partially inferable" by saying "DNA has many methyl groups attached to it. If a crosslink happened at one of those methyl groups, you might not be able to tell if that location was initially methylated or not.", which made me think I could still use as a reference that post of yours with the image "Examples of formaldehyde-mediated crosslinking of proteins and DNA", but I will try to justify the "partially inferable" in a more quantitative and exhaustive way.
  • Possible technologies. Current atomic-resolution structural methods — such as cryo-EM resolving side-chain orientations and subunit interfaces — already show that stable molecular configurations can be recovered with high fidelity. Their steady improvement outlines a clear scalability path toward full atomic-scale readout. In such a regime, algorithms could discriminate GluA isoforms by their structural fingerprints — loop and helix motifs, side-chain constellations, and crosslink patterns — allowing near-complete recovery of stable AMPA subunit composition in fixed tissue and far exceeding today’s immuno-based limits.
  • Epigenetics/DNA fixation. Taking as a reference the maximum theoretical inferability of "epigenetic information" and DNA in a chemically fixed brain, we can say that it is very close to 100%. Yet it is clear that there must be theoretical limits, plausibly due to the most difficult segments to recover: the three-dimensional topology of chromatin (compartments, loops) and marginal histone epitopes, which remain partially masked even under optimized retrieval. These factors concentrate the residual uncertainty, while DNA base methylation calls are essentially perfect. The estimate places epigenetic inferability-loss in the range of 0.03–0.1% for formalin, while glutaraldehyde fixation — though preserving 3D proximities more rigidly — introduces denser and less reversible crosslinks that constrain chromatin mobility and reduce the distinguishability of marginal histone-level signals, slightly increasing the theoretical inferability-loss to 0.04–0.15%.
  • AMPARs fixation. By analogy, the static subunit composition of AMPA receptors also has a theoretical inferability close to 100%. Here, however, the limiting segments differ: ultra-rare crosslink “scarring” that reduces the distinguishability of subunit- or isoform-specific structural motifs, ultra-low-abundance splice variants, and sub-nanometer registration between proteomic reads and receptor clusters. Dynamic conformations and rapid PTMs are excluded, since long-term memory inference depends only on stable tetramer identity and stoichiometry. Retrieval leverage is thinner than in epigenetics: once structural motifs become partially conflated by dense crosslinking, redundancy is limited. With advanced proteomics, subunit-specific signatures can be recovered at high fidelity, but residual uncertainty arises from structural conflation rather than simple epitope masking. Formalin fixation, with shorter and more reversible crosslinks, preserves molecular distinguishability slightly better, while glutaraldehyde, with denser and less reversible crosslinks, improves ultrastructural fidelity but increases the probability of subunit-level conflation. The revised estimate places AMPA subunit inferability-loss at 0.05–0.3% for formalin and 0.04–0.25% for glutaraldehyde.
  • Epigenetics/DNA vs. AMPARs comparative synthesis.
    • Structural continuity vs. assembly discreteness: Epigenetics span continuous nuclear architecture; uncertainty concentrates in 3D chromatin topology and marginal histone-level signals. AMPA composition is a discrete, membrane-embedded assembly; uncertainty arises from structural conflation at protein interfaces and isoform-level discrimination.
    • Retrieval leverage: Epigenetics benefit from dual leverage (DNA-level assays + histone PTM imaging), allowing cross-validation that pushes ceilings upward. AMPA relies primarily on protein-level structural signatures, with thinner redundancy.
    • Topology vs. registration: Epigenetics are limited by topology reconstruction; AMPA by nanometer-scale registration and isoform-specific structural motifs.
    • Sensitivity to fixation: Epigenetics are relatively robust to fixation variability. AMPA composition is more sensitive to crosslink chemistry: GA improves ultrastructural fidelity but increases the probability of structural conflation at subunit interfaces; FA favors molecular distinguishability but preserves ultrastructure less rigidly.
    • Dependency on dynamic states: Epigenetics depend minimally on dynamic accessibility; AMPA composition is explicitly static, with scarring sometimes mimicking isoform differences.
  • Summary. Epigenetic inferability in formalin-fixed tissue reaches slightly higher values, limited mainly by chromatin topology and histone-level signals, while glutaraldehyde introduces a modest increase in structural conflation despite better topology preservation. Static AMPA subunit composition, while not requiring dynamic states, remains marginally less inferable due to interface-level conflation and isoform discrimination; here too glutaraldehyde offers a structural advantage but increases the probability of conflation, yielding ceilings comparable to formalin. Both domains approach similarly low theoretical ceilings, but for fundamentally different reasons: epigenetic information is limited by the reconstruction of continuous chromatin topology, whereas AMPA receptor composition is limited by the distinguishability of discrete protein assemblies under dense crosslinking.
This comparison can be extended to the rest of the connectome, including synaptic topology, neuronal morphology, and other protein substrates. For each component, the following are considered: minimum theoretical inferability-loss (FA vs. GA), limiting factors, and estimated weight in LTM-inferability.
  • Synaptic topology (position, density, ultrastructure) — FA: 0.07–0.3%; GA: 0.04–0.15% (Limiting factors: sub-nanometer alignment of pre/post densities, vesicle pools, nanocolumn organization, and fine interface registration. GA preserves ultrastructure more rigidly, reducing geometric drift; FA preserves molecular distinguishability slightly better. The inferability-loss reflects only ultra-rare structural conflation at fine interfaces, not retrieval differences.)
    Weight: 78–85%. (Synaptic topology is the dominant substrate of consolidated memory because it encodes the engram directly: which neurons connect, where, and with what strength. Experimental evidence from hippocampus, frontal cortex, and parietal cortex shows that memory recall depends on reinstating these patterns. The overwhelming weight reflects the fact that the engram is fundamentally a structural graph: the spatial distribution of synapses, their density, and their ultrastructural organization carry the bulk of the informational load.)
  • Neuronal morphology (cell bodies, dendrites, axons) — FA: 0.05–0.2%; GA: 0.03–0.1% (Limiting factors: membrane continuity, spine morphology, and crosslink-induced micro-distortions that can slightly reduce the distinguishability of fine dendritic features. GA excels in preserving geometric fidelity; FA introduces minor geometric relaxation. These effects influence inferability only when distinct morphological micro-states become conflated.)
    Weight: 5–10%. (Morphology defines integrative capacity: dendritic arborization, spine density, and axonal branching determine how neurons can connect and process inputs. While conduction is irrelevant for inferability, the structural architecture of dendrites and spines constrains synaptic placement and clustering. Morphology is therefore a supporting substrate: it does not encode the engram directly, but it shapes the space in which synaptic changes occur. Studies on dendritic spine remodeling in hippocampal engrams confirm that morphological stabilization accompanies synaptic consolidation, justifying a 5–10% weight. See Bosch et al., 2014)
  • Static AMPA receptor subunit composition — FA: 0.05–0.3%; GA: 0.04–0.25% (Limiting factors: crosslink-induced structural conflation at subunit interfaces, isoform discrimination, and nanometer-scale registration between proteomic reads and receptor clusters. GA improves ultrastructural fidelity but increases the probability of conflation among closely related subunit motifs; FA preserves molecular distinguishability slightly better. These differences affect inferability only through structural conflation, not through epitope masking or accessibility.)
    Weight: 5–10%. (AMPA receptor stoichiometry (GluA1–4 isoforms, Q/R editing, tetrameric arrangement) directly stabilizes synaptic efficacy and is a molecular correlate of long-term potentiation and depression. It does not encode the engram by itself, but it modulates synaptic strength in a stable, long-lasting manner. This places AMPA composition above epigenetics in weight, but still secondary to synaptic topology)
  • Epigenetic information / DNA — FA: 0.03–0.1%; GA: 0.04–0.15% (Limiting factors: 3D chromatin topology (compartments, loops), marginal histone-level signals, and occasional crosslink rearrangements that reduce the distinguishability of nucleosome-scale configurations. DNA base methylation calls remain near-ideal. GA preserves 3D proximities more rigidly but increases the probability of conflation among marginal histone epitopes; FA preserves distinguishability slightly better.)
    Weight: 2–5%. (Epigenetic marks regulate transcriptional thresholds and plasticity potential. They are essential for memory consolidation and stabilization but do not encode the engram directly. Their contribution is supportive: they set the rules for synaptic change rather than storing the specific memory trace. Hence a modest weight. Note: Although epigenetic patterns are highly redundant across similar neuronal populations — making them exceptionally easy to reconstruct from sparse sampling — this redundancy does not eliminate their informational weight. What matters for LTM inference is not the local pattern itself, but the global regulatory constraints that these patterns collectively impose on synaptic plasticity. This regulatory layer is far less redundant than the underlying patterns, and its loss would introduce a modest but non‑negligible ambiguity in reconstructing consolidated LTM. See Holliday, 1999)
  • Other molecular substrates (non-AMPA proteins, scaffolds, adhesion molecules) — FA: 0.1–0.4%; GA: 0.1–0.4% (Limiting factors: isoform discrimination, PTM-level distinguishability, and crosslink heterogeneity. FA introduces minor geometric relaxation; GA increases structural conflation among closely related motifs. Inferability is similar because redundancy is high and informational content is low.)
    Weight: 0.1–1.5% (Includes scaffolding proteins, such as PSD-95 and Homer, and adhesion molecules, such as neuroligins and neurexins. They stabilize synapses but do not encode specific information. Their role is supportive, ensuring synaptic persistence, but their direct informational content of the engram is minimal. Hence a very small weight. See: Ortiz-Sanz et al., 2020)
In conclusion, considering these weights, the global inferability-loss floor for consolidated LTM is slightly lower with GA (0.04–0.17%), owing to its dominance in preserving synaptic topology, while FA (0.06–0.31%) offers only advantages that are not strictly related to inferability, such as superior molecular accessibility. Although the two ranges overlap substantially, their upper bounds differ by just under 0.14%, a small difference in absolute terms but one that still reflects the overwhelming importance of synaptic topology relative to the more supportive role of molecular substrates.
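For transparency, here is a minimal sketch of how the component ranges and weights listed above can be combined into a global floor. The combination rule (a weight-normalized sum of matching interval endpoints) is my own simplification, so it reproduces the quoted ranges only approximately.

Code: Select all

# (weight_low, weight_high, loss_low, loss_high) in %, copied from the list above.
FA = {
    "synaptic topology":          (78, 85, 0.07, 0.30),
    "neuronal morphology":        (5, 10, 0.05, 0.20),
    "AMPA subunit composition":   (5, 10, 0.05, 0.30),
    "epigenetics / DNA":          (2, 5, 0.03, 0.10),
    "other molecular substrates": (0.1, 1.5, 0.10, 0.40),
}
GA = {
    "synaptic topology":          (78, 85, 0.04, 0.15),
    "neuronal morphology":        (5, 10, 0.03, 0.10),
    "AMPA subunit composition":   (5, 10, 0.04, 0.25),
    "epigenetics / DNA":          (2, 5, 0.04, 0.15),
    "other molecular substrates": (0.1, 1.5, 0.10, 0.40),
}

def floor_range(components: dict) -> tuple:
    """Weight-normalized sums of the lower and upper endpoints (a simplification)."""
    w_lo = sum(v[0] for v in components.values())
    w_hi = sum(v[1] for v in components.values())
    lo = sum(v[0] * v[2] for v in components.values()) / w_lo
    hi = sum(v[1] * v[3] for v in components.values()) / w_hi
    return lo, hi

for name, comps in (("FA", FA), ("GA", GA)):
    lo, hi = floor_range(comps)
    print(f"{name}: global inferability-loss floor roughly {lo:.2f}-{hi:.2f}%")

With these inputs the simplified rule lands near the quoted FA 0.06–0.31% and GA 0.04–0.17% ranges, though not exactly, since the exact combination rule behind those figures was not stated.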

Now, regarding "fixation degradation over time", I have assumed a cryoprotective variant of the standard "neutral saline storage medium with sodium azide for long term storage" (thereby reducing water activity, molecular mobility, and the effective Arrhenius/Q10 scaling), at pH 7.0, with storage at a constant temperature of −20 °C. After initial fixation with either FA or GA, the time-dependent increase in loss is concentrated ~85–90% in the "extra-connectome", with the remaining ~10–15% affecting synaptic macro-topology and micro-registration (fine pre/post-synaptic interfaces). The percentages below represent true inferability-loss, while the accompanying noise values indicate the sub-threshold chemical drift accumulated under storage.
  • 10 years: negligible increase (~0–0.01%), Noise: ~0.001%. Loss is limited to ultra-rare local chemical noise (single-residue oxidations, extremely slow amide hydrolyses) that remains fully below redundancy and does not alter geometry or registration.
  • 100 years: still negligible increase (~0–0.01%). Noise: ~0.005%. Slow accumulation of reversible or quasi-reversible micro-modifications (side-chain rearrangements, partial crosslink relaxation) produces minor local ambiguity, including a growing ambiguity in protein–lipid interface discrimination, but all effects remain below the threshold for informational collapse; synaptic topology remains unchanged.
  • 200 years: cumulative increase (~0.04–0.1%). Noise: ~0.01%. At this point, ultra-slow many-to-one chemical transformations finally exceed local redundancy, producing measurable inferability-loss. These transformations include local chemical modifications (aromatic oxidation, extremely slow amide hydrolysis, secondary scission of GA–protein crosslinks), sub-molecular micro-fragmentations (loss of small functional groups, single C–N or C–C bond breaks), and ultra-slow Maillard-type reactions and secondary crosslinks. These processes do not move structures or degrade synapses; they collapse distinct molecular micro-states into the same static end-state, producing informational loss without structural damage. Fine interfaces and labile molecular signals show slight attenuation, while macro-geometry remains intact.
When the component-specific limits are weighted by their actual contribution to consolidated LTM, the 200-year inferability-loss floor becomes tightly constrained. Connectomic geometry shows only ~0.12% theoretical loss, driven exclusively by ultra-rare many-to-one transformations that do not propagate. Static AMPA receptor subunit composition adds ~0.08%, reflecting the limited redundancy of isoform-specific epitopes under dense crosslinking. Stable epigenetic information would add ~0.02%, concentrated in chromatin topology and marginal histone epitopes rather than DNA methylation itself, which remains essentially invertible.

If there were brief annual excursions from −20 °C to −10 °C lasting only a few hours, plus rare decadal events reaching +4 °C for up to 10 days, then the 10/100/200y estimates above would rise, yielding additional increases of +0.0008/+0.006/+0.011% for FA initial fixation, and +0.0007/+0.005/+0.009% for GA (using an Arrhenius/Q10-type model). Under such cryoprotective −20 °C conditions, long-term storage kinetics are largely independent of whether FA or GA was used for the initial fixation: the small FA vs GA differences in the values above mainly reflect differences in the initial crosslinking pattern, which lead to two slightly different structural behaviors.
  • FA preserves initial molecular readability well, but long-term storage at −20 °C increases chemical aging load primarily in extra-connectomic molecular features (epigenetic marks, PTMs, AMPARs, non-AMPA proteins), with minor attenuation of fine interfaces. Macro-topology remains intact.
  • GA forms more stable crosslinks and preserves ultrastructure better, at the cost of epitope masking. With aging, the accessibility penalty persists, but topological drift is smaller. Over 100–200 y, GA’s superior ultrastructural stability yields a slightly lower inferability-loss floor than FA.
By contrast, in a non-cryoprotective +4 °C scenario, continuous storage for 10–200 years in a standard aldehyde-containing saline medium without cryoprotectant would accelerate the relevant ultra-slow chemical reactions by roughly 200–400× relative to −20 °C. Under these conditions, the corresponding inferability loss, accumulated during storage relative to the post-fixation baseline, would be approximately +0.15/+1.0/+1.8% for FA and +0.12/+0.8/+1.5% for GA. FA-fixed tissue can exhibit slow lipid leaching at +4 °C, whereas GA-fixed tissue does not, which contributes to GA’s slightly lower long-term inferability-loss floor.
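For context on those acceleration figures, here is a back-of-the-envelope sketch. A plain Q10 term covers only part of the stated 200–400× factor; the remainder would have to come from the reduced water activity and molecular mobility of the cryoprotective −20 °C medium. That decomposition is my reading of the assumption stated earlier, not an established result, and the Q10 values are assumed inputs.

Code: Select all

def q10_factor(t_c: float, t_ref_c: float, q10: float) -> float:
    """Relative reaction rate at t_c versus t_ref_c under a plain Q10 model."""
    return q10 ** ((t_c - t_ref_c) / 10.0)

if __name__ == "__main__":
    stated_low, stated_high = 200.0, 400.0     # +4 °C vs -20 °C, as stated above
    for q10 in (2.5, 3.0):                     # assumed Q10 for the ultra-slow reactions
        thermal = q10_factor(4.0, -20.0, q10)  # purely thermal part of the acceleration
        print(f"Q10 = {q10}: thermal factor ~{thermal:.0f}x; "
              f"implied extra factor from water activity / mobility "
              f"~{stated_low / thermal:.0f}-{stated_high / thermal:.0f}x")
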

At present, practical aspects of perfusion may in some cases lead to avoiding glutaraldehyde after an initial formalin perfusion, so these micro-differences in theoretical inferability between FA and GA may appear "essentially zero". However, it seems wise to recognize that such micro-differences are likely to be amplified if uploading technologies (likely performed only once per patient) and the subsequent technologies of "revival" (or, more accurately, reconstruction) fail to fully exploit the theoretical negligibility of the FA vs. GA gap. This is because those technologies will probably not employ the later refinements that, arriving only after those technologies have been used, would bring both approaches closer to the theoretical maximum of inference.

It is also wise not to be overly confident that capturing any “longer lived state” is sufficient to fully recover long-term memory. Prudential statistics and worst-case weighting from Andy’s collaboration on PLOS1 already soften such confidence: “70.5% of participants agreed that long-term memories are primarily maintained by neuronal connectivity patterns and synaptic strengths […] Despite this, the median probability estimate that any long-term memories could potentially be extracted from a static snapshot of brain structure was around 40%”. Two tests from 2001 and 2009 — mentioned in that study — are sometimes invoked to suggest that 40% could be conservative, since these tests suggest that continuous electrophysiological states are not required. From a broader scholarly perspective, it should be emphasized that no comprehensive or systematically articulated hypotheses have yet been advanced with the explicit aim of refuting the static snapshot hypothesis of memory inferability (though some authors — e.g., Gallistel & Balsam, 2014 — have suggested that long-term memory might depend on intrinsically dynamic or non-structural states, without offering a precise alternative framework).

Smolen et al. (2020) propose that memory persistence likely requires “synaptic positive feedback loops” to counter molecular turnover, such as “persistent activation of CaMKII and self-activated synthesis of PKMζ”. This line of reasoning implies that without reconstructing equivalent feedback mechanisms, a reconstructed brain would soon begin to forget. Nevertheless, this perspective does not categorically exclude static inference: such feedback could, in principle, be re-initiated as functional equivalents from a sufficiently detailed snapshot, without requiring that the original dynamic configuration be inferred or reproduced in its pre-fixation form. This does not imply, however, that all dynamic states are guaranteed to re-emerge (or be re-initiated): some may depend on conditions or processes that are themselves functionally irretrievable. In the worst-case scenario, this could lead to partial memory loss. The crucial point is not whether dynamic states can be reconstructed to resemble the originals at the moment of fixation, but whether they can be re-initiated as equivalent functional processes by the natural operation of the static architecture. In this sense, the snapshot hypothesis does not demand historical similarity, but functional equivalence.

One might imagine a decisive test, if ever feasible, that approximates this logic: if long-term memories re-emerged after a complete reset of every so-called "dynamic" state, without exception and understood in the broad functional sense often implied, then no dynamic state would require preservation. Such a test might indeed lead us to conclude, in a literal sense, "If it's some sort of "dynamic" state that is short lived, we don't care about it".

For this reason, it is useful to enumerate candidate dynamic states — including electrophysiologic and metabolic states, spanning both passive processes (e.g., turnover) and active stabilizing mechanisms (e.g., the feedback-dependent processes mentioned above) — and assign them provisional epistemic percentages with respect to LTM inferability from fixed tissue. In the table below, "FA inferability-loss" and "GA inferability-loss" estimate how much of each state's information is lost and cannot be recovered from the static traces left by fixation (GA generally achieves stronger crosslinking and preserves ultrastructural and molecular detail more faithfully). "LTM relevance" estimates the likelihood that each state contributes to LTM reconstruction (bounded by the survey-based upper limit of ~40%). "LTM weight" quantifies the unique informational contribution of each state, adjusted for overlaps (with a cumulative threshold of ~30%).

Code: Select all

Type of dynamic state                    | FA infer.-loss | GA infer.-loss | LTM relevance | LTM weight
-----------------------------------------+----------------+----------------+---------------+--------------
Electrophysiological stabilizers         | 40–60%         | 30–50%         | ~20%          | 1.5–2.0%
Short-timescale synaptic dynamics        | 40–50%         | 20–30%         | ~20%          | 1.0–1.5%
Cytoskeletal / structural micro-dynamics | 40–50%         | 20–30%         | ~20%          | 1.0–1.5%
Astrocytic modulation / gliotransmission | 50–60%         | 30–40%         | ~12%          | 0.8–1.2%
Neuroprotective signaling cascades       | 60–70%         | 40–50%         | ~10%          | 0.5–1.0%
Metabolic / stress-response dynamics     | 50–60%         | 30–40%         | ~10%          | 0.5–0.8%
Immune–glial activation states           | 60–70%         | 40–50%         | ~8%           | 0.3–0.5%
Myelin plasticity                        | 40–50%         | 20–30%         | ~8%           | 0.3–0.5%
Post-translational modifications         | 60–70%         | 40–50%         | ~5%           | 0.1–0.3%
Protein turnover *                       | 80–90%         | 70–80%         | ~3%           | 0.05–0.1%
Ion gradients (Na,K,Ca,Cl)               | 80–90%         | 70–80%         | ~3%           | 0.05–0.1%
Continuous electrophysiological states **| 100%           | 100%           | ~0%           | ~0%
Global physiological regimes ***         | 100%           | 100%           | n/a           | n/a

Electrophysiological stabilizers (e.g., local assemblies, engram reactivation) are included as a single category because, 
  unlike continuous global activity, they may contribute to the stabilization of memory traces through local synchrony and circuit‑level feedback loops.
Firing patterns are not listed as a separate category, since they are considered emergent outputs of electrophysiological stabilizers (see note above).
Short-timescale synaptic dynamics (e.g., vesicular trafficking, docking–undocking cycles, rapid receptor cycling)
  are grouped as a single category because they share similar ultrastructural correlates and similar partial inferability from fixed tissue.
States such as protein turnover are included for completeness, but their contribution to passive memory persistence 
  is minimal because they do not encode memory-specific information and are functionally redundant.
Dynamic states collectively account for only ~10% of the non-redundant LTM weight, well below the ~30% upper bound,
  reflecting the assumption that most long-term information is structurally instantiated while dynamic states primarily act as mediators of accessibility and stability.
* Protein turnover is assigned a small but non-zero LTM relevance (~3%), consistent with classical protein synthesis inhibition studies (e.g., Davis & Squire 1984, cited in the PLOS1 survey),
  which show that blocking protein synthesis six or more days after training does not impair recall. 
  These results indicate that ongoing synthesis is not required for retrieval, while still allowing a modest indirect contribution to long term stability.
** Continuous electrophysiological states (e.g., spike trains, global oscillations, persistent ion fluxes)
   are listed with ~0% relevance and weight to emphasize that they were considered but are,
   at the moment, experimentally regarded as unnecessary for passive memory persistence.
*** Global physiological regimes do not constitute dynamic states, encode memory‑specific information, 
    or leave reconstructible traces; for this reason, they do not receive LTM relevance or LTM weight.
    They are included for completeness because they can modulate neural dynamics and the functional stability of long-term memories,
    even though their configurations do not distinguish one memory from another and therefore do not carry LTM‑relevant information.
    Their absence, however, may still impair the re‑emergence of dynamic processes required for functional recall, despite not being themselves informational substrates.
    Well‑established examples include sleep–wake cycles, global neuromodulatory states, systemic stress responses, and neuroinflammatory suppression.
    Other regimes — such as hibernation‑like states or deep hypothermia — may also influence dynamic-state relevance,
    although this remains speculative and unsupported by direct experimental evidence.
These percentages reflect the kinds of technologies that could, in principle, extract static correlates of dynamic processes from fixed tissue. Current structural and molecular methods already recover partial ultrastructural and biochemical signatures — such as vesicular-trafficking footprints, cytoskeletal organization patterns, or molecular distributions linked to signaling cascades and metabolic states — and their steady improvement outlines a clear path toward future atomic-scale readout. Such readout would recover any dynamic state that leaves a stable static trace, while failing where no such trace exists. AI-based integrative modeling and digital twins may simulate electrophysiological patterns compatible with preserved morphology, but such simulations do not constitute inferability: they generate plausible activity, not the actual pre-fixation state. Where static traces exist, GA’s stronger crosslinking reduces inferability loss; where no static trace exists (e.g., ion gradients or continuous electrophysiological states), both FA and GA converge to complete inferability loss.

Further speculative concerns may remain, even regarding the apparent dispensability of continuous electrophysiological states — as suggested by hypothermia experiments. In particular, such experiments may simply mask the true relevance of these states: during deep hypothermia, the information intrinsically carried by them — their dynamic constraints relevant for LTM inferability and for sustaining functional equivalence — may be transiently encoded, mediated, or redistributed across other dynamic processes, in a form of dynamic compensation potentially supported by neuroprotective mechanisms such as suppression of neuroinflammation and hibernation-like states. This may preserve the functional equivalence of those electrophysiological states despite their suppression. A similar masking may occur in other contexts involving dynamic states that are often regarded as dispensable for the inferability (and, by extension, the maintenance) of long-term memory, yet might nonetheless contribute to functional equivalence through partially redundant dynamics and mutually reinforcing feedback.

In all such cases, even if the full static snapshot — connectomic and molecular — were preserved with high fidelity, it might fail to restore the dynamic constraints required for LTM inferability if those constraints do not spontaneously re-emerge during “revival”. In a worst-case reading, this would align with the residual 60% scientific uncertainty remaining after the survey’s 40% estimate in favor of the “static snapshot” hypothesis, illustrating how a preserved mass of connectomic code could remain undecipherable. Yet such concerns remain unjustified until a more plausibly resilient — post-mortem — preservation method is supported by scientific evidence. (“Mainstream scientists would reply that, while not perfect, aldehyde fixation is far and away going to preserve more information than any other method.”)
Last edited by PCmorphy72 on Fri Jan 02, 2026 11:07 am, edited 2 times in total.
PCmorphy72
Posts: 38
Joined: Sun May 26, 2019 12:39 pm

Re: Comparing resilience of brain preservation with digital data preservation

Post by PCmorphy72 »

Brain extraction damage

In updating the original post, I have added a new list specifically dedicated to the inferential loss introduced by the surgical removal of the brain. This category was not present in the previous version and deserves its own treatment. The idea emerged partly from the discussion on robotic surgery — which highlighted how future surgical precision may reduce extraction-related damage — and partly from your observation that fully successful perfusion is expected to be an exception rather than the rule. When perfusion is incomplete, small pockets of unfixed water may remain in the cortex even after hours of formalin‑based perfusion. These pockets would freeze at −20 °C and cause severe local damage; in some cases they may also reflect tiny regions that never received full perfusion, producing a very small amount of residual ischemia during the perfusion phase. Removing the brain allows full immersion and eliminates these “shadow regions”, which is why extraction becomes necessary in most real‑world cases.

Although this section focuses strictly on the mechanical inferential loss introduced by extraction itself, it is worth noting that future improvements in perfusion technology — such as multi-site robotic perfusion (i.e., perfusion initiated from multiple anatomical access points), ventricular perfusion, subarachnoid perfusion, or epidural/subdural cooling — could reduce the frequency of incomplete perfusion. These approaches are technically demanding and potentially risky if performed poorly. The effects of these perfusion-related techniques are conceptually distinct from extraction damage, and their small potential benefits (likely well below 0.1% in LTM inferential terms, though possibly up to twice as large under Alcor/CI protocols) are already accounted for in the warm-ischemia and on-perfusion cold-ischemia lists; they are therefore not quantified again in this section.

Rather than modeling visible tissue destruction, the estimates reflect how different extraction techniques alter microdomain stability, synaptic registration, and surface‑level continuity in ways that slightly reduce the theoretical reconstructability of pre‑mortem states.

Cracking [and ice]

Unlike brain extraction damage, which is strictly superficial and limited to cortical micro‑regions, cracking and ice damage disrupt deep volumetric continuity and therefore operate on a completely different inferential scale. Fortunately, in SBP protocols the chemically stabilized brain does not form the brittle, cooling-induced glassy matrix produced by high-concentration vitrification solutions in Alcor/CI protocols; the fixation cross-linking itself prevents the formation of a fragile amorphous solid, so only accidental mechanical shocks during storage remain a concern.

You suggested that cracking might be “essentially zero loss”, but the justification you added (“in spite of my guess about pulverization”) only explains a reduction in confidence, not the conclusion itself. Likewise, the shift to “the real concern is ice” does not justify the assumption that cracking is negligible.

My concern was not ice in SBP protocols, since you told me ice is 0% there. For Alcor/CI, however, ice is a real factor, so I prepared a structured list.

Alcor/CI long-term-memory inferability-loss from ice damage
(post-2009 cases; CNV = Cryoprotectant Net Volume, not strictly equal to “non-ice” because it can include under-perfused voxels, sub-vitrification, or local CPA gradients; a small bookkeeping sketch in Python follows the list)
  • Top 22% Alcor cases (CNV 96.5% ± 2.0%): 4–8% (ice ~0.1–0.5%, “essentially ice-free” (Fahy & Wowk, 2021), functional impact remains amplified by microdiscontinuities; even minimal fractures interrupt synaptic chains and neuroendocrine nodes in CA1/hypothalamus, producing losses greater than volumetric estimates; local mechanical propagation → 0.5–1.0%; intrinsic osmotic damage from freezing: intracellular dehydration and extracellular compaction during glass transition → 0.2–0.4%).
  • Top 50% Alcor head-only (CNV 85% ± 3%): 7–12% (ice ~1–2% confined to ventricles and subarachnoid spaces; CNV higher and more stable than whole body due to smaller thermal mass and targeted perfusion; microfractures in CA1/hypothalamus with amplified impact on LTM → 3–5%; local propagation → 1–2%; intrinsic osmotic damage: synaptic compression and extracellular matrix alteration from water-solute redistribution → 0.5–1%).
  • Top 50% Alcor whole-body (CNV 78% ± 5%): 11–18% (ice ~2–4% with peripheral macrocrystals and diffuse microfractures; CNV lower and more variable than neuro cases, with systemic cooling stress and less targeted perfusion; CA1/hypothalamus interruptions aggravated by volumetric gradients → 6–9%; extended propagation/recrystallization → 2–3%; intrinsic osmotic damage: solute gradients and cellular shrinkage during deep cooling → 0.5–1%).
  • Top 50% Cryonics Institute (CNV 72% ± 6%): 14–22% (ice ~3–5% with less uniform distribution and broader nucleation; perfusion/cooling protocols less optimized than Alcor; CA1/hypothalamus discontinuities more severe, amplifying functional loss → 7–10%; propagation/recrystallization more extensive → 2–3%; intrinsic osmotic damage: stronger shrinkage and solute concentration due to less controlled CPA/thermal profiles → 1–1.5%).
NOTE: Neocortical damage is implicitly included via volumetric ice and osmotic terms but is not anatomically isolated due to distributed and partially redundant encoding. Hippocampal CA1, entorhinal cortex, and hypothalamus act as high-centrality bottlenecks and are disproportionately vulnerable.
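To make the composition of these tiers explicit, here is a small Python bookkeeping sketch — my own illustrative reading, not an Alcor/CI figure. Each tier stores the stated total range and the itemized component ranges; whatever the itemized parts do not cover is treated as the non-itemized amplification described above (microdiscontinuities interrupting synaptic chains in high-centrality nodes):

# Bookkeeping sketch for the ice-damage tiers above (illustrative reading only).
# Values restate the list: stated total LTM inferability-loss range (%) and itemized component ranges (%).
TIERS = {
    "Top 22% Alcor (CNV 96.5%)": {
        "total": (4, 8),
        "parts": {"ice": (0.1, 0.5), "propagation": (0.5, 1.0), "osmotic": (0.2, 0.4)},
    },
    "Top 50% Alcor head-only (CNV 85%)": {
        "total": (7, 12),
        "parts": {"ice": (1, 2), "CA1/hypothalamus": (3, 5), "propagation": (1, 2), "osmotic": (0.5, 1)},
    },
    "Top 50% Alcor whole-body (CNV 78%)": {
        "total": (11, 18),
        "parts": {"ice": (2, 4), "CA1/hypothalamus": (6, 9), "propagation": (2, 3), "osmotic": (0.5, 1)},
    },
    "Top 50% Cryonics Institute (CNV 72%)": {
        "total": (14, 22),
        "parts": {"ice": (3, 5), "CA1/hypothalamus": (7, 10), "propagation": (2, 3), "osmotic": (1, 1.5)},
    },
}

for name, tier in TIERS.items():
    itemized_lo = sum(lo for lo, hi in tier["parts"].values())
    itemized_hi = sum(hi for lo, hi in tier["parts"].values())
    total_lo, total_hi = tier["total"]
    # Anything the itemized parts leave uncovered is attributed to the non-itemized amplification term.
    print(f"{name}: itemized {itemized_lo:.1f}-{itemized_hi:.1f}%, stated total {total_lo}-{total_hi}%, "
          f"non-itemized amplification up to ~{max(total_hi - itemized_hi, 0):.1f} points")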

I’m not sure whether these ice‑related damage percentages I listed are as “impressive” as this statement of yours: “As ice expands, it crushes and smears the tissue with a force of about 75,000 psi. Any physical process that stirs or smears molecules causes a kind of damage which would not allow future inference of the original structure. You just can't unmix something. There's not enough information.”

Since your description implicitly treats ice damage as fully non-invertible, I will temporarily adopt a high safety-margin as well: I have generally used a global safety-margin of ~40–50% for my new percentages, but for cracking I will use ~60–70%. Note that, unlike cracking, ice does not admit any “planar” or nearly lossless scenario: volumetric expansion and recrystallization cannot produce smooth or re‑alignable interfaces. There is no benign hypothesis for ice: only the brittle, chaotic, non‑invertible regime exists. Cracking instead requires a different treatment because two physical regimes remain plausible: the “planar” vs “brittle” hypothesis. To explain this point, I prefer to “expand on cracking” as well.

Cracking introduces a different inferential problem: loss of continuity in distributed engram pathways. LTM is not a static mosaic but a distributed code across hippocampus, prefrontal cortex, amygdala, and cortical ensembles, where recall depends on coordinated integration across regions (Tonegawa et al., 2015). You correctly noted:

“Neurons have long delicate axons and dendrites … a single neuron can span the entire brain … matching two ends could be ambiguous … A molecular fingerprint could allow matching up two ends in a damaged area.”

This is true — but only solves fiber identity, not fiber destination. If the fracture occurs at the point of arrival, the system loses information about where that process was supposed to terminate. More generally, if the fracture disrupts a considerable volume around an axon or dendrite, even if you can match their two ends, you still do not recover:
  • which synapse it was forming
  • which spine head was the target
  • which micro-cluster or micro-column it belonged to
  • which high-centrality node (CA1, hypothalamus) it linked into
  • which vesicle-distribution pattern characterized the active zone
  • which perisynaptic glia were part of the micro-circuit
These features are long-term memory. Matching the two ends of a fiber does not reconstruct the topology of its synaptic embedding — and this can be even more relevant than losing an entire less-central neuron.

For the purpose of inferability analysis, connectomic and network-based models of memory should probably decompose cracking-related loss into three interacting components:

T = topological disruption — interruption of engram‑spanning pathways and high‑centrality nodes
I = interface uncertainty — irregular, micro‑branched fracture surfaces and small uncertain volumes
S = synaptic embedding ambiguity — uncertainty about which spine, micro‑cluster, micro‑column, vesicle pattern, or perisynaptic glia constituted the original synaptic target.

Cracking increases all three terms simultaneously: Inferential-loss = T × I × S. For the updated percentages, the AI modeled these components in a way that can be approximated on a rough 1–5 scale:

Localized microcracking → T=2, I=2, S=3
Diffuse microcracking → T=3, I=3, S=4
Macrocracking → T=4, I=4, S=5
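Read numerically, the product T × I × S has to be anchored to some calibration point, which the framework does not fix; the following minimal Python sketch — my own illustration with an assumed worst-case calibration, not the AI's actual model — simply normalizes each product against the maximum 5 × 5 × 5 = 125:

# Rough relative reading of Inferential-loss = T x I x S on the 1-5 scale (illustrative only).
WORST_CASE_LOSS = 10.0  # assumed % of LTM inferability lost when T = I = S = 5 (a calibration guess)

scenarios = {
    "Localized microcracking": (2, 2, 3),
    "Diffuse microcracking":   (3, 3, 4),
    "Macrocracking":           (4, 4, 5),
}

for name, (T, I, S) in scenarios.items():
    product = T * I * S
    relative = product / 125  # fraction of the worst case (T = I = S = 5)
    print(f"{name}: T*I*S = {product}, relative severity = {relative:.2f}, "
          f"loss ~ {relative * WORST_CASE_LOSS:.1f}% under the assumed calibration")

The point of the sketch is only that the multiplicative form makes the three components reinforce each other: doubling any one of T, I, or S doubles the modeled loss.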

We still need to clarify how to treat the previously mentioned “two physical regimes” of cracking. Ultramicrotome sectioning produces smooth, controlled surfaces that preserve ultrastructural continuity with high confidence. By contrast, brittle macro- or microcracking in vitrified tissue is not equivalent to controlled sectioning. While direct neuroscientific studies have not documented pulverization or debris loss, analogies with glassy materials indicate that fracture propagation typically generates irregular, non-planar interfaces through well-known mechanisms such as crack-front instability and microbranching. In glassy solids, the absence of visible pulverization does not imply planar or symmetric fracture surfaces. Pulverization is a macroscopic manifestation of energy dissipation, but the same energy can be dissipated during crack propagation through modes that remain below the resolution of naked-eye inspection:
  • sub-micron crack bifurcations
  • surface roughening
  • microbranching
  • sub-micron chipping
  • nanoscale debris
  • shear-induced amorphization
  • sublimation-like transitions
Vitrified tissue, which mechanically behaves as a brittle glass, is therefore expected to fracture through unstable crack-propagation modes that produce rough, non-alignable interfaces even when no loose fragments are macroscopically detectable. Consequently, observing a clean macroscopic split cannot be taken as evidence for planar cracking at synaptic scales. Each uncertain fragment or rough micro-gap can generate non-zero uncertainty at synaptic scales. Operationally, only the macroscopic geometry of the crack is visible during handling; the micro-scale features that determine inferability remain concealed within the macrocracking event itself.

Within the mentioned 60–70% safety-margin band, I keep the estimates anchored to the inferability perspective that even small cracks may have a disproportionate impact. Since the planar scenario has been proposed as a potentially low-loss regime, I model cracking under two mutually exclusive hypotheses:
  • Planar hypothesis. Cracks behave almost like ultramicrotome cuts (nearly lossless at synaptic scales). This hypothesis remains plausible because macroscopic observations in vitrified human brains do not contradict it, while current cryobiological work on cracking — rather than characterizing microscale interface structure — remains primarily focused on minimizing thermal-stress risks for cracking that is essentially inevitable in whole-brain vitrification (at least “presently”, in Wowk’s 2011 words, which also specify that cracking “probably does not compromise” LTM inference, likely justifying this “probably” via the planarity hypothesized here).
  • Brittle hypothesis. Cracks behave like brittle fractures in glassy materials (rough, micro‑branched, with small uncertain volumes). This hypothesis is grounded in the broader physics of glassy solids, where unstable fracture modes and non‑planar interfaces are common (Lawn, 1993). Consequently, a brittle-type interface remains compatible with the same inevitability assumed in the planar hypothesis, also under macroscopic cooling conditions that appear well controlled — a point that is essential for interpreting the updated cracking inferability‑loss percentages in the framework.
This also connects to the PLOS1 survey, where 70.5% of participants agreed that LTM is highly dependent on structure and synaptic strengths, yet the median probability that LTM could be extracted from a static snapshot was only ~40%. In a worst‑case reading, this leaves ~60% residual uncertainty — which is exactly the range of safety-margin I am adopting here.

Since the goal is not to estimate probabilities but to produce safe estimates under the worst plausible hypothesis, I apply a conservative logarithmic reduction, subtracting ~10% from one of my previous damage estimates, corresponding to a non-informative 50% prior — essentially a coin-flip assumption between the planar and brittle scenarios. If future evidence were to show that the planar hypothesis holds with high probability — for example, on the order of 90% — then the brittle-oriented estimates would be reduced accordingly. The exact reduction is not fixed here, since it depends on the strength and structure of the empirical evidence, but the framework is designed to update conservatively as the probability of the planar scenario increases. The resulting percentages therefore reflect a high safety-margin, worst‑case‑compatible estimate of maximum inferential loss.
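Since the exact functional form of that reduction is left open above, here is one hedged Python reading — an assumption of mine, not the rule actually used: the reduction grows logarithmically as the probability of the brittle scenario shrinks, calibrated so that the 50% coin-flip prior reproduces the ~10-point reduction, while higher confidence in the planar scenario shrinks the brittle-oriented estimate more strongly:

import math

def reduced_estimate(brittle_estimate_pp, p_brittle, base_reduction_pp=10.0):
    # One possible "conservative logarithmic reduction" (my reading, not a fixed rule):
    # the subtraction scales with log2(1 / p_brittle), so a non-informative 50% prior
    # gives the ~10-percentage-point reduction mentioned above.
    reduction = base_reduction_pp * math.log2(1.0 / p_brittle)
    return max(brittle_estimate_pp - reduction, 0.0)

# Hypothetical brittle-oriented cracking estimate of 25 percentage points (illustrative only):
for p_brittle in (0.5, 0.25, 0.10):
    print(f"P(brittle) = {p_brittle:.2f} -> safe estimate ~ {reduced_estimate(25.0, p_brittle):.1f} pp")
# 0.50 -> 15.0 pp (coin-flip prior: subtract ~10 pp)
# 0.25 -> 5.0 pp
# 0.10 -> 0.0 pp (the reduction exceeds the hypothetical estimate and is clamped at zero)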

Osmotic shrinking/swelling

This time I agree with this reduction of confidence:
“… ramped through cryoprotectant, which itself could be damaging. It's known that osmotic pressure can cause damage, …” (Oct 03, 2023)
“I wouldn't worry at all about cryoprotectant osmotic or toxic damage. I think that damage is essentially zero.” (Dec 10, 2025)

Osmotic shrinking and swelling occur whenever a fixative or cryoprotective solution enters the tissue with an osmolarity or permeability different from the intracellular environment. This includes perfusion with formalin (10% NBF ≈ 4% FA), glutaraldehyde (2–2.5% GA), low-concentration DMSO solutions such as those used in SBP protocols, and high-concentration vitrification mixtures such as M22 (Alcor) and VM-1 (CI), as well as earlier glycerol-based mixtures and modern ethylene-glycol–based mixtures.

In all these cases, neurites, spines, and microdomains undergo transient deformation during loading and equilibration. However, deformation alone does not imply inferential loss. As long as the process remains continuous and invertible, the relative geometry of synapses, active zones, vesicle pools, and perisynaptic glia is preserved up to a coordinate transform. Shrinking and swelling change shape, not structure. For this reason, the osmotic contribution to LTM inferability-loss should already be ~0% at the resolution of this framework.

A hypothetical regime does exist in which osmotic stress exceeds the elastic limit of membranes and cytoskeleton, producing tearing, fusion, cavitation, or irreversible crossings of thin processes. In that regime, osmotic damage could in principle be worse than cracking, because it would not merely introduce interfaces but locally destroy or fuse fine structure.

Deformation stops being continuous and reversible when it involves:
  • membrane rupture
  • spine detachment
  • cytoskeletal tearing
  • cavitation
  • massive blebbing
  • irreversible surface fusions
That is, when you're no longer "stretching" a viscoelastic tissue, but tearing, crushing, gluing, and breaking it.

However, such a regime is incompatible with the ultrastructural appearance of formalin-fixed, glutaraldehyde-fixed, or CPA-treated brain tissue: we do not see widespread membrane rupture, spine fusion, or non-reversible crossings. Real-world perfusion protocols operate far below this threshold.

Even so, within the continuous regime, a non-equilibrium, non-uniform osmotic state could in principle make the mapping from pre-stress to fixed morphology non-injective: multiple pre-stress configurations may converge to the same final geometry. But if such underdetermination existed at the 0.1–1% level, it would already be visible in EM as inconsistent or collapsed microstructure. The absence of such signatures implies that any inferential ambiguity must be several orders of magnitude smaller than spine destruction or ice/cracking damage, and therefore below the noise floor of this model.

Although the absolute osmotic contribution to inferability-loss is ~0%, different perfusion and CPA protocols impose different magnitudes of osmotic stress. In the continuous, sub-elastic regime, any infinitesimal inferential ambiguity scales proportionally with the amplitude and duration of these gradients. For this reason, it is convenient to express osmotic effects as simple multiples of a reference infinitesimal ε rather than as explicit percentages.

Let ε denote the infinitesimal osmotic inferability-loss associated with standard formalin perfusion. Other protocols can be placed on a relative scale:
  • Formalin perfusion (10% NBF): ε
  • Glutaraldehyde perfusion (2–2.5%): ~1.5 × ε
  • SBP Low-DMSO solutions: ~2 × ε
  • Alcor M22: ~2–3 × ε
  • CI VM‑1: ~2–3 × ε
  • Glycerol‑based mixtures: ~2–3 × ε
  • Ethylene‑glycol–based mixtures: ~1.5–2 × ε
These values express relative osmotic stress, not absolute inferential loss. In absolute terms, osmotic shrinking/swelling contributes ~0% to LTM inferability-loss, and any residual ambiguity is dominated by other damage modes such as ice or cracking.
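For completeness, the same relative scale can be written as a trivial Python lookup — the multiples simply restate the list above, and ε stays symbolic because, in absolute terms, the contribution is ~0%:

# Relative osmotic-stress bookkeeping: multiples of a symbolic epsilon, not absolute losses.
EPSILON_MULTIPLES = {
    "Formalin perfusion (10% NBF)":      (1.0, 1.0),
    "Glutaraldehyde perfusion (2-2.5%)": (1.5, 1.5),
    "SBP low-DMSO solutions":            (2.0, 2.0),
    "Alcor M22":                         (2.0, 3.0),
    "CI VM-1":                           (2.0, 3.0),
    "Glycerol-based mixtures":           (2.0, 3.0),
    "Ethylene-glycol-based mixtures":    (1.5, 2.0),
}

for protocol, (lo, hi) in EPSILON_MULTIPLES.items():
    band = f"~{lo:g}x" if lo == hi else f"~{lo:g}-{hi:g}x"
    print(f"{protocol}: {band} epsilon (relative stress only; absolute LTM inferability-loss ~0%)")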

Cryoprotectant toxicity

AI estimates that cryoprotectant toxicity contributes 3–7% LTM inferability-loss in Alcor protocols, reflecting persistent chemical interactions with proteins, lipids, and membrane-associated water in unfixed tissue. In contrast, low-DMSO formulations such as VM-1, used by CI, reduce these interactions and are better modeled as 2–5%, but this reduction — driven by their lower DMSO content — also lowers protection against perfusion-related ice formation. Experimental low-DMSO cryoprotectants developed for different purposes, such as the formulations reported in recent Chinese research (JoVE 2025), may fall in the 1–3% toxicity range for the same reason, but are not suitable for long-term vitrification. A xenon-augmented vitrification cocktail, though it directly affects neither the vitrification process nor ice protection, could push the chemically mediated component of inferability-loss toward the ~1% range: xenon is osmotically neutral, chemically inert, and its effects are overwhelmingly reversible, leaving no detectable structural or molecular footprint above the 0.01% noise threshold. Reaching this ~1% corresponds to optimized EG/PG-based vitrification chemistry operating at the minimum concentrations compatible with threshold freezing-point depression and stable vitrification. For SBP protocols, once the tissue is chemically fixed, cryoprotectant toxicity becomes effectively irrelevant for inferential purposes.

A separate consideration concerns ice nucleation. As noted by Wowk: “However a phenomenon called ice nucleation happens at a high rate near the glass transition temperature, and in some studies doesn’t become undetectable until 20 degrees below it. Ice nucleation — the local reorientation of water molecules into nanoscale ice crystals — doesn’t cause immediate structural damage. However it can make avoiding ice growth and associated structural damage during future rewarming more difficult. The extent and significance of ice nucleation in highly concentrated cryoprotectant solutions is still poorly understood.”

If long-term storage occurs at temperatures well below Tg − 20 °C, nucleation becomes thermodynamically suppressed and remains inert for centuries. The resulting inferability-loss is expected to stay within the same negligible range discussed for chemically fixed brains over 200-year horizons at −20 °C, although here the scenario concerns brains vitrified far below −135 °C — in particular those whose cells are kept ready for “revival”, as in Alcor/CI protocols. In such a scenario, the biological burden would be limited primarily to the residual ischemic damage accumulated before hypothermia arrested metabolic activity: a form of injury that future medicine might plausibly reverse without requiring full molecular reconstruction.

However, reaching this deep-storage regime requires crossing Tg during cooling, and for a macroscale organ such as the brain this transition produces mechanical stresses that make cracking practically unavoidable with current technologies. For this reason, one cannot “choose” to fight nucleation instead of cracking: the path that eliminates nucleation necessarily passes through the temperature range where cracking dominates. In this sense, the classical cryonics ideal of a true “revival” from long-term vitrification — without ice, without cracking, without molecular reconstruction, and without a revival-incompatible fixation — fractures under its own physical constraints.
PCmorphy72
Posts: 38
Joined: Sun May 26, 2019 12:39 pm

Re: Comparing resilience of brain preservation with digital data preservation

Post by PCmorphy72 »

I would like to add a brief bibliographic clarification. In my previous post I wrote that no genuinely systematic hypotheses have been advanced with the explicit aim of refuting the snapshot hypothesis. This remains true, but one partial exception is worth mentioning. Compared to the already mentioned Gallistel & Balsam, whose critique is more general and not systematically articulated, Trettenbrein (2016) offers a more structured critique of the synaptic theory of memory, discussing informational limits, turnover, and representational issues. I found this paper through Aurelia Song’s page on Nectome (specifically by following the link to Langille & Brown), and from that same page I could have drawn on several additional useful references, including those concerning the molecular stability of synaptic proteins and the preservation of receptor interfaces relevant to AMPA receptors. That page also provides strong support for the use of glutaraldehyde.

And of course, no pressure to reply — I know the posts were long, and I don’t expect anyone to review a personal essay unless they genuinely feel like engaging.
Post Reply