- cross-posted to:
- technews@radiation.party
Introducing immortal objects into Python provides true immutability guarantees for the first time. It lets those objects bypass both reference count updates and garbage collection checks, which means we can now share immortal objects across threads without relying on the GIL for thread safety.
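A quick way to observe the bypassed reference counts (a minimal sketch; it assumes CPython 3.12+, and the exact sentinel value is an implementation detail):

```python
import sys

# On CPython 3.12+ (PEP 683), built-in singletons such as None are immortal:
# their reference count field holds a fixed sentinel and never changes, so
# creating new references does not write to the object. On older interpreters
# the two values below differ by roughly a million.
before = sys.getrefcount(None)
refs = [None] * 1_000_000      # would normally add a million references
after = sys.getrefcount(None)
print(before, after)           # identical sentinel value on 3.12+
```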
This is actually really cool. In general, if you can make things immutable or avoid state, that helps you structure things concurrently. With immortal objects you can now guarantee that immutability without costly locks. It will be interesting to see what the final round of benchmarks shows once this is fully implemented.
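As a rough illustration of the "immutable data needs no locks" point (a sketch only; the names are made up and this is not tied to any PEP 683 API):

```python
import threading

# Immutable data shared across threads: since nothing mutates CONFIG,
# the readers need no lock for correctness.
CONFIG = ("prod", 8080, ("us-east", "eu-west"))

def worker(results, i):
    # Pure reads of immutable data are safe without synchronization.
    results[i] = f"{CONFIG[0]}:{CONFIG[1]}"

results = [None] * 4
threads = [threading.Thread(target=worker, args=(results, i)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)
```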
Can’t this result in ugly memory leaks?
@Sternout @jnovinger
Maybe, but (apart from the “Accidental Immortalisation” case mentioned in PEP 683) these things are created deliberately by C extensions.
A sane extension shouldn’t be building loads of them on the fly in the first place.
I have not read the PEP itself or the PEPs that it claims to simplify, but this feels like a very bad idea that only really benefits Meta and a few other mega servers. It enables a micro-optimization that is only usable in a niche use case (forking long-running processes with shared memory). Unfortunately, it makes all other Python applications, on average, 2% slower. That is a minor regression, but it hurts everyone and benefits almost no one.
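For context on that niche use case, here is a sketch of the pre-fork, copy-on-write pattern immortal objects are meant to help (Unix-only; names and sizes are illustrative):

```python
import os

# Pre-fork pattern: the parent builds large, effectively read-only data, then
# forks workers that only read it. Without immortal objects, even pure reads
# in the child update each object's reference count, dirtying copy-on-write
# pages so the "shared" memory quietly gets duplicated per worker.
SHARED = [str(i) for i in range(1_000_000)]  # built once in the parent

pid = os.fork()  # Unix-only
if pid == 0:
    # Child: iterating touches every element's refcount (pre-PEP 683),
    # copying pages even though nothing is logically modified.
    total = sum(len(s) for s in SHARED)
    os._exit(0)
else:
    os.waitpid(pid, 0)
```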
Shouldn’t this be useful for pandas and other data analytics libraries? That would be useful well beyond Meta and similar companies. A lot of mid-to-large-sized orgs use those.