Suppose you want to find the middle node of a linked list as efficiently as possible. The most commonly given "best" answer is to maintain two pointers, middle and current, and to advance the middle pointer whenever the number of elements seen so far is divisible by 2. That way we find the middle in one pass. Efficient, right? Better than brute force, which takes one pass to the end and then another pass until we reach node size/2.
BUT... not so fast: why is the first method faster than the "brute force" way? In the first method, we increment the middle pointer approximately size/2 times on top of advancing current about size times. In the brute-force way, the second pass traverses the list until we reach the size/2-th node, again size/2 advances on top of the size advances of the first pass. So both do roughly 3·size/2 pointer advances in total. Aren't these two methods the same? Why is the first better than the second?
//finding middle element of LinkedList in single pass
LinkedList.Node current = head;
int length = 0; // counts links traversed, so it ends at size - 1
LinkedList.Node middle = head;
while (current.next() != null) {
    length++;
    if (length % 2 == 0) {
        middle = middle.next(); // advance middle every second link
    }
    current = current.next();
}
if (length % 2 == 1) {
    // an odd link count means an even list size: step to the second middle
    middle = middle.next();
}
Traverse the linked list using two pointers, a slow one and a fast one. Move the slow pointer one node at a time and the fast pointer two nodes at a time; when the fast pointer reaches the end (null), the slow pointer is at the middle element.
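For concreteness, here is a minimal, self-contained sketch of that slow/fast-pointer approach; the Node class and the demo list are my own illustrative assumptions, not from the answer above:

class MiddleDemo {
    static class Node {
        int data;
        Node next;
        Node(int data) { this.data = data; }
    }

    // slow advances one node per step, fast advances two, so slow
    // sits on the middle node when fast runs off the end of the list
    static Node middle(Node head) {
        Node slow = head, fast = head;
        while (fast != null && fast.next != null) {
            slow = slow.next;
            fast = fast.next.next;
        }
        return slow;
    }

    public static void main(String[] args) {
        // build 1 -> 2 -> 3 -> 4 -> 5 and print its middle node
        Node head = new Node(1), tail = head;
        for (int i = 2; i <= 5; i++) {
            tail.next = new Node(i);
            tail = tail.next;
        }
        System.out.println(middle(head).data); // prints 3
    }
}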
Method 1: Traverse the whole linked list and count the number of nodes. Then traverse the list again up to node count/2 and return that node (a sketch follows after Method 2).
Method 2: Use two pointers, incrementing one at each iteration and the other at every second iteration. When the first pointer reaches the end of the linked list, the second pointer will be pointing at the middle node.
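Here is a minimal sketch of Method 1, reusing the illustrative Node class from the demo above (one pass to count the nodes, a second pass to walk count/2 links):

// sketch of Method 1: count the nodes, then traverse to node count/2
static Node middleTwoPass(Node head) {
    int count = 0;
    for (Node n = head; n != null; n = n.next) {
        count++; // first pass: count the nodes
    }
    Node mid = head;
    for (int i = 0; i < count / 2; i++) {
        mid = mid.next; // second pass: stop at node count/2
    }
    return mid;
}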
If we modify the code from the question to be:
// current and middle both start at head, as in the original
while (current.next() != null) {
    current = current.next();
    middle = middle.next(); // middle keeps pace at half speed
    if (current.next() != null) {
        current = current.next(); // take the second of current's two steps
    }
}
Now there are fewer assignments, since length no longer has to be incremented and tested, and I believe this gives an identical result.
At the end of the day both solutions are O(N) so it is a micro-optimization.
As @Oleg Mikheev suggested, why can't we use Floyd's cycle-finding algorithm to find the middle element, as follows:
private int findMiddleElement() {
    if (head == null)
        return -1; // empty list: return -1 as a sentinel
    Node oneHop, twoHop;
    oneHop = twoHop = head;
    // twoHop moves two nodes per iteration, oneHop moves one,
    // so oneHop is at the middle when twoHop reaches the end
    while (twoHop != null && twoHop.next != null) {
        oneHop = oneHop.next;
        twoHop = twoHop.next.next;
    }
    return oneHop.data;
}
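One behavior worth noting (easy to verify by tracing): with this initialization, an even-length list yields the second of the two middle nodes, so 1 -> 2 -> 3 -> 4 returns 3, while 1 -> 2 -> 3 returns 2.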
The first answer has multiple advantages:
Since the two methods have the same complexity, O(N), any efficiency analysis needs to be careful and may depend on the specific implementation and cost model. For the most naive implementation, however, the first method saves some loop-variable increments.
It saves you one variable's worth of space: two pointers versus a length counter and a pointer. Also, what if the list is huge and the length counter overflows?
However, under a specific cost model the second method might be much better. If the nodes are adjacent in memory and the list is too large to fit in cache, the first method's two pointers touch two regions of memory that are far apart, so it may incur extra memory-access cost, whereas each pass of the second method walks memory sequentially. At the end of the day, the two methods are mostly equivalent. Of course, the technique used in the first method is flashier, and the thought process may be useful in other contexts.