I would never create enough entities to run out of the 63-bit id space, but suppose I used allocateIdRange to allocate the id 9223372036854775807 (which is 2^63 - 1, the maximum long value). Does that break automatic id assignment for new entities of that kind?
I tried this out in a test app. It seems that some shards of the auto-id allocator can continue to produce valid ids, but other shards just throw a DatastoreFailureException. The success rate is about 30%. Will it ever go up?
This is actually a serious question because, in my naivete, I created some rather huge ids. I still have several trillion entities to go before I reach this limit, but I've noticed that the ids can jump by millions between entities, and I enter new entities at a rate of about a million per year. So... I'm scared of hitting this limit.
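For reference, here is roughly what my test app did, as a minimal sketch using the low-level Java Datastore API (the kind name "Widget" and the exact range are just placeholders, and in a real app this would run inside a request handler):

```java
import com.google.appengine.api.datastore.DatastoreFailureException;
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.KeyRange;

public class HighIdTest {
    public static void run() {
        DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();

        // Reserve an id range right at the top of the 63-bit space for a
        // hypothetical kind "Widget".
        KeyRange range = new KeyRange(null, "Widget",
                Long.MAX_VALUE - 10, Long.MAX_VALUE);
        datastore.allocateIdRange(range);

        // Now try to put an entity with a blank (auto-allocated) id.
        try {
            Entity e = new Entity("Widget");
            datastore.put(e);  // may throw if the allocator has no ids left above the reserved range
        } catch (DatastoreFailureException ex) {
            System.err.println("Auto-id allocation failed: " + ex.getMessage());
        }
    }
}
```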
With a test app I reserved a bunch of very high ids with allocateIdRange. At first, about half of my attempts to put new entities succeeded. Now, no new entities can be put with a blank id - a DatastoreFailureException is thrown every time. I presume this is because the key allocator implementation does not keep track of gaps in keys, but only of the highest id given out so far.
I don't see any way to reset the counter for this Kind, so I think the only solution would be to pick a new Kind name and move the data over, as sketched below.
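If it helps anyone else, this is roughly how such a migration could look (a sketch only; "Widget" and "Widget2" are hypothetical kind names, and for a large kind you would want to batch the work through task queues rather than a single loop):

```java
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.Query;

public class KindMigration {
    // Copy every entity of the exhausted kind into a new kind so that
    // auto-allocated ids start from a low counter again.
    public static void migrate(DatastoreService datastore) {
        Query q = new Query("Widget");  // hypothetical old kind
        for (Entity old : datastore.prepare(q).asIterable()) {
            Entity copy = new Entity("Widget2");  // blank key -> fresh auto-id
            copy.setPropertiesFrom(old);
            datastore.put(copy);
            datastore.delete(old.getKey());
        }
    }
}
```

Note that this loses the old ids, so any other entities that reference the old keys would need to be updated as well.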
Lesson: don't use ids anywhere near 2^63!