I have noticed that many iterators and data readers are forward-only: DataReader, XmlReader, IEnumerator, and many more (you get the idea).
So I am simply asking why they are forward-only. When I create a data iterator for my custom needs, I usually try to add support for navigation in both directions. I agree that most of the time we don't need backward traversal, but sometimes we do, and so we end up creating temp
variables or something similar to hold on to the data until it is required.
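To make that concrete, here is a minimal sketch of the kind of buffering wrapper I mean. The names here (BufferedCursor, MovePrevious) are invented for illustration; this is not a framework type, just one way to bolt backward traversal onto a forward-only IEnumerator<T> by remembering what has been read:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: a bidirectional cursor over a forward-only
// IEnumerator<T>. It buffers every item it reads, so earlier items can
// be revisited -- trading memory for backward access.
public class BufferedCursor<T>
{
    private readonly IEnumerator<T> _source;
    private readonly List<T> _buffer = new List<T>();
    private int _position = -1; // before the first element

    public BufferedCursor(IEnumerable<T> source) => _source = source.GetEnumerator();

    public T Current => _buffer[_position];

    public bool MoveNext()
    {
        // Replay from the buffer if we previously moved backwards...
        if (_position + 1 < _buffer.Count) { _position++; return true; }
        // ...otherwise pull the next item from the underlying source.
        if (_source.MoveNext()) { _buffer.Add(_source.Current); _position++; return true; }
        return false;
    }

    public bool MovePrevious()
    {
        if (_position <= 0) return false;
        _position--;
        return true;
    }
}
```

Note the cost: the buffer grows with every item read, which is exactly the trade-off the framework avoids making on your behalf.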
So my questions are:
Why are most data iterators forward-only?
Am I wrong in creating a backward-traversable iterator/data reader? If not, why doesn't the framework have such support for its built-in data iterators?
Is there a serious performance drawback, or is it simply not considered good design to have such a feature?
This question has bugged me from the start, but I never got a satisfactory answer, so I'm asking it here. I believe many developers will agree that backward traversal can be useful at times.
"forwards only" is:
for example, if you are reading data from a database, a network stream, etc, you can only guarantee "forwards". We certainly don't want to arbitrarily buffer all that data - it could be huge potentially.
If the client knows the amount of data is sane, they can always call ToList()
etc. to buffer it in memory and allow random access.
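For example, a sketch of that buffering step (Squares here is just a stand-in for any forward-only source):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A forward-only source: values are produced lazily, one at a time.
IEnumerable<int> Squares(int count)
{
    for (int i = 0; i < count; i++) yield return i * i;
}

// ToList() materializes the whole sequence once, in memory.
var buffered = Squares(5).ToList();

// Now random access and backward traversal are trivial:
for (int i = buffered.Count - 1; i >= 0; i--)
    Console.Write(buffered[i] + " "); // prints: 16 9 4 1 0
```

The decision to pay the memory cost is made explicitly by the caller, not silently by the iterator.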
For example, consider this perfectly valid sequence:
public static IEnumerable<int> LotsOfData() {
    var random = new Random();
    while (true) yield return random.Next();
}
Obviously that example is a little unlikely, but reading from a socket, a database, or even a large file can be essentially the same scenario.
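To illustrate the large-file case: File.ReadLines enumerates a file lazily, so each pass over it is forward-only and only one line is held in memory at a time (a small temp file stands in here for the "large file"):

```csharp
using System;
using System.IO;
using System.Linq;

// Write a small stand-in file; imagine it being gigabytes instead.
var path = Path.Combine(Path.GetTempPath(), "lots-of-data.log");
File.WriteAllLines(path, Enumerable.Range(0, 1000).Select(i => $"line {i}"));

// File.ReadLines streams the file line by line (forward-only, lazy);
// nothing is buffered beyond the current line.
int count = File.ReadLines(path).Count(line => line.EndsWith("9"));
Console.WriteLine(count); // 100 (every 10th line number ends in 9)
```

Contrast this with File.ReadAllLines, which loads the entire file into an array up front precisely so that random access becomes possible.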