I'm just reading "Purely Functional Worst Case Constant Time Catenable Sorted Lists" by Brodal et al., and their introduction to the different kinds of persistence in the context of data structures leaves me with an obvious question:
Confluent persistence: all the versions can be updated and queried and additionally, two versions can be combined to produce a new version. Note that in this case it is possible to create in polynomial time an exponentially sized structure by repeatedly joining it with itself.
What are the practical applications of being able to create an "exponentially-sized" structure in polynomial time by repeatedly joining it with itself?
Here's an example use case. Imagine a general-purpose "sequence" datatype being used as an array in a situation where the array is expected to be sparse (i.e., most of the elements will contain the same value, with relatively few spots set to some other value). If the sequence datatype has this property, then you can build the (possibly very large) array using the technique you mention, and it will still be reasonably space- and time-efficient under this usage.
Sure, you could make a special-purpose datatype just for sparse arrays, and it would probably be a little more space- and time-efficient than the general-purpose datatype, but the point is that the general-purpose datatype adapts gracefully enough to this usage pattern that you might not even need to make the special-purpose one.
(Admittedly, this example is about confluent persistence in general, not about the "sorted lists" of the paper. Then again, in the paper, they were making the remark in question about confluent persistence in general, not specifically about their own data structure.)
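To make this concrete, here's a minimal Python sketch (the names and representation are my own invention, not anything from the paper) of a persistent sequence whose join is O(1) because it just allocates a concatenation node that shares both halves. Joining a version with itself n times yields a sequence of length 2^n using only O(n) memory, and a few path-copying updates then turn it into the sparse array described above.

```python
class Leaf:
    """A one-element sequence."""
    def __init__(self, value):
        self.value = value
        self.length = 1

class Concat:
    """A concatenation node: 'join' shares both halves instead of copying."""
    def __init__(self, left, right):
        self.left, self.right = left, right
        self.length = left.length + right.length

def index(seq, i):
    """Fetch element i by walking the shared tree: O(depth)."""
    while isinstance(seq, Concat):
        if i < seq.left.length:
            seq = seq.left
        else:
            i -= seq.left.length
            seq = seq.right
    return seq.value

def update(seq, i, value):
    """Persistently set position i, copying only one root-to-leaf path."""
    if isinstance(seq, Leaf):
        return Leaf(value)
    if i < seq.left.length:
        return Concat(update(seq.left, i, value), seq.right)
    return Concat(seq.left, update(seq.right, i - seq.left.length, value))

# Join the sequence with itself 40 times: length 2**40, but only 41 nodes.
s = Leaf(0)
for _ in range(40):
    s = Concat(s, s)

print(s.length)         # 1099511627776
print(index(s, 10**9))  # 0

# Sparsely override a couple of positions; s itself is unchanged.
s2 = update(update(s, 10**9, 7), 3, 8)
print(index(s2, 10**9), index(s2, 3), index(s, 10**9))  # 7 8 0
```

Each update copies only the ~40 nodes on one path, so a sparse array with k non-default entries costs O(k · log(size)) extra space on top of the fully shared spine.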
I have not read the paper, but from the definition of confluent persistence you quote, I would relate this to a data structure such as the binomial heap, whose merge operation runs in logarithmic time and which is well suited to set-union style algorithms. Because binomial trees grow exponentially with their order, a heap consists of only logarithmically many trees, so we may hold several heaps and merge any two of them cheaply. Repeatedly carrying out such union operations, all within polynomial time, is the best application I can see for this kind of data structure.
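As a rough illustration of that merge (a minimal, ephemeral Python sketch of my own, not the paper's structure): a binomial heap is a collection of heap-ordered trees with at most one tree per order, and merging works like binary addition, where linking two equal-order trees produces a "carry" of the next order.

```python
class Node:
    """Root of a heap-ordered binomial tree."""
    def __init__(self, key):
        self.key = key
        self.order = 0
        self.children = []

def link(a, b):
    """Link two trees of equal order: larger root becomes a child."""
    if b.key < a.key:
        a, b = b, a
    a.children.append(b)
    a.order += 1
    return a

def merge(h1, h2):
    """Merge two heaps in O(log n), like binary addition with carries."""
    by_order = {}
    for t in h1 + h2:
        # Linking equal-order trees may cascade, like a carry chain.
        while t.order in by_order:
            t = link(by_order.pop(t.order), t)
        by_order[t.order] = t
    return list(by_order.values())

def find_min(heap):
    """The minimum is at one of the O(log n) roots."""
    return min(t.key for t in heap)

# Build a 5-element heap by merging in singletons, then query it.
h = []
for x in [5, 3, 8, 1, 9]:
    h = merge(h, [Node(x)])
print(find_min(h))  # 1
print(len(h))       # 2 trees, since 5 = 101 in binary
```

Since each heap holds O(log n) trees, a merge performs O(log n) link operations; a confluently persistent variant would additionally keep the trees immutable so that both input versions survive the merge.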