Many functional languages and libraries offer "persistent" data structures, whose "update" operations are specified to return a copy (or "version") of the data structure with the update applied. This fused copy-and-update sidesteps mutation by precluding writes except to newly-allocated memory that is not yet reachable from any existing references.
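For instance, here is a minimal sketch of that contract in Java (the `PersistentArray` name and its methods are illustrative, not any particular library's API): the "update" writes only to a freshly-allocated copy and leaves the receiver untouched.

```java
import java.util.Arrays;

// Minimal sketch of fused copy-and-update: no existing memory is ever written.
final class PersistentArray {
    private final int[] elems; // never written after construction

    PersistentArray(int[] elems) { this.elems = elems.clone(); } // defensive copy

    // "Update": returns a new version; the receiver is left untouched.
    PersistentArray set(int index, int value) {
        int[] copy = Arrays.copyOf(elems, elems.length); // copies the entire array (O(N))
        copy[index] = value;                             // write only to fresh memory
        return new PersistentArray(copy);
    }

    int get(int index) { return elems[index]; }
}
```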
However, naively copying a data structure gets expensive as it grows. To stay efficient, persistent data structures are typically optimized to reduce the amount of copying needed during an "update", by having the new version reuse unchanged internal structure from the old version where possible. This decreases the amount of memory that needs to be copied, from linear in the size of the old version (O(N)) to sublinear (e.g. O(log N)).
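As a concrete (if simplified) illustration, here is "path copying" in an immutable binary search tree: an insert copies only the nodes along the root-to-leaf path - O(log N) when the tree is balanced - and shares every untouched subtree with the old version. (Balancing is omitted to keep the sketch short.)

```java
// Structural sharing via path copying: untouched subtrees are reused as-is.
final class Tree {
    final int key;
    final Tree left, right; // safe to share across versions: never mutated

    Tree(int key, Tree left, Tree right) {
        this.key = key;
        this.left = left;
        this.right = right;
    }

    // Returns the root of a new version; the old root remains fully valid.
    static Tree insert(Tree root, int key) {
        if (root == null) return new Tree(key, null, null);
        if (key < root.key)
            return new Tree(root.key, insert(root.left, key), root.right); // right subtree shared
        if (key > root.key)
            return new Tree(root.key, root.left, insert(root.right, key)); // left subtree shared
        return root; // key already present: share everything
    }
}
```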
Unfortunately, this reduction in cost is sometimes insufficient, which prompted the invention of so-called "transient" data structures. These are, in all but name, mutable data structures - but shoehorned into the same interface as persistent ones. Update operations still return a data structure, which may be the same version with a mutation applied in place, but is also allowed to be a true copy / new version. (This creates an obvious footgun: it is a misuse to access the "old" version after an update, as it may or may not be an alias of the "new" version.)
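A sketch of that contract and its footgun (the `TransientSet` interface shape is hypothetical, not any specific library's API):

```java
// Transient contract: add() must be treated as if it returned a new version,
// even though it is allowed to return `this` with the mutation applied.
interface TransientSet {
    TransientSet add(int value); // may mutate in place, may copy
    boolean contains(int value);
}

class TransientFootgun {
    static void demo(TransientSet s1) {
        TransientSet s2 = s1.add(42);
        boolean fine = s2.contains(42); // always true: s2 is "the result"
        boolean oops = s1.contains(42); // misuse: true if add() mutated in
                                        // place, false if it returned a copy
    }
}
```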
Transients were developed for a primarily-functional paradigm, with the intent that a persistent data structure would be converted to a transient when performance is critical, and swiftly converted back afterward. (This explains their reuse of the persistent-style interface.) What if we poked at this assumption? Say we were coming from a "not-strictly-functional" paradigm, where we regard mutation more amicably. Do transient and persistent data structures still have anything to offer?
In this "not-strictly-functional" paradigm, we acknowledge mutation as a fact of life. We hopefully still approach it cautiously - deliberating where and when it should happen - but with intelligible boundaries in place, we have no impulse to circumvent it. Clearly, in this paradigm, forcing a copy before every update feels overzealous. But, there are some situations where we inherently need a copy.
Say we are passing the data structure to another method, or thread, that needs a version it can independently mutate - while other parts of the code need the original, or their own mutable versions. It would be great if creating these copies were sublinear in the size of the data structure.
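Sketched in Java, with an assumed `ForkableMap` interface and `fork()` operation (not an existing API) standing in for that sublinear copy:

```java
import java.util.Map;

// Hypothetical interface: fork() is imagined to be sublinear, implemented
// with the same structural sharing that persistent versions use.
interface ForkableMap<K, V> extends Map<K, V> {
    ForkableMap<K, V> fork(); // cheap copy; both sides mutate independently
}

class Worker {
    // The worker gets its own independently mutable version, without an
    // O(N) copy up front; the caller's `scores` is unaffected by `mine`.
    static void process(ForkableMap<String, Integer> scores) {
        ForkableMap<String, Integer> mine = scores.fork();
        mine.merge("alice", 10, Integer::sum); // invisible to the caller
    }
}
```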
Additionally, the "structural sharing" technique that persistent data structures use to achieve sublinear copies can, in some cases, be leveraged to achieve a sublinear "merge" of two different versions. This feature would be particularly attractive for data-parallel programs, that repeatedly merge together sub-results of independent parallel computations.
Curiously, it seems like we might have performance to gain from borrowing a few ideas from persistent data structures - and conveniently, transients lay some of the groundwork for shifting those ideas to a mutation-friendly paradigm.
Perhaps it's even possible to implement existing standard mutation-friendly interfaces, or extend them slightly with copy/merge operations (we'll call them "fork"/"join"). Then the new implementation could be used as a drop-in replacement, unlike many functional data structure libraries that require refactoring to a new set of base interfaces and a different paradigm. Or, if we prefer the functional paradigm, it would be trivial to reintroduce persistent-style abstractions that internally delegate to a full-featured mutable implementation, simply prefacing external "updates" with a call to the cheap internal copy operation.
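To make both directions concrete, here is a sketch (with `ForkJoinMap` and its `fork`/`join` methods still hypothetical): an extension of `java.util.Map`, plus a persistent-style facade that delegates to it.

```java
import java.util.Map;

// Direction 1: extend a standard mutable interface. Existing call sites that
// expect a java.util.Map keep working; new call sites can fork/join.
interface ForkJoinMap<K, V> extends Map<K, V> {
    ForkJoinMap<K, V> fork();           // sublinear copy
    void join(ForkJoinMap<K, V> other); // sublinear merge; conflict handling elided
}

// Direction 2: a persistent-style facade over the mutable implementation,
// prefacing every external "update" with the cheap internal copy. (Assumes
// the caller hands over ownership of `impl` at construction.)
final class PersistentMap<K, V> {
    private final ForkJoinMap<K, V> impl; // owned: never mutated again

    PersistentMap(ForkJoinMap<K, V> impl) { this.impl = impl; }

    // Persistent-style update: returns a new version; this one stays valid.
    PersistentMap<K, V> put(K key, V value) {
        ForkJoinMap<K, V> next = impl.fork(); // cheap copy via structural sharing
        next.put(key, value);                 // mutate only the fresh version
        return new PersistentMap<>(next);
    }

    V get(K key) { return impl.get(key); }
}
```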
Does that sound interesting?