I was solving the basic problem of finding the number of distinct integers in a given array.
My idea was to declare an std::unordered_set, insert all the given integers into the set, then output the size of the set. Here's my code implementing this strategy:
#include <iostream>
#include <fstream>
#include <cmath>
#include <algorithm>
#include <vector>
#include <unordered_set>

using namespace std;

int main() {
    int N;
    cin >> N;

    int input;
    unordered_set<int> S;
    for (int i = 0; i < N; ++i) {
        cin >> input;
        S.insert(input);
    }

    cout << S.size() << endl;
    return 0;
}
This strategy worked on almost every input, but on a few test cases it timed out.
I was curious to see why my program was timing out, so I added a cout << i << endl; line inside the for-loop. What I found was that on the offending input, the first 53000 or so iterations of the loop passed nearly instantly, but after that only a few hundred iterations completed each second.
I've read up on how a hash set can end up with O(N) insertion if a lot of collisions occur, so I thought the input was causing a lot of collisions within the std::unordered_set.
However, this is not possible: the hash function that std::unordered_set uses for integers maps them to themselves (at least on my computer), so no collisions should ever happen between different integers. I accessed the hash function using the code written at this link.
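For reference, here is a minimal sketch (not the exact code from that link) showing what std::hash<int> returns for a few non-negative values; on my implementation each value hashes to itself, although the standard doesn't guarantee this:

#include <functional>
#include <iostream>

int main() {
    std::hash<int> h;
    // On libstdc++ (and many other implementations) each non-negative int
    // hashes to itself, so this prints every value unchanged.
    for (int x : {0, 1, 42, 53000}) {
        std::cout << x << " -> " << h(x) << '\n';
    }
    return 0;
}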
My question is: is it possible that the input itself is causing the std::unordered_set to slow down after around 53000 elements have been inserted? If so, why?
Here is the input case that I tested my program on. It's pretty large, so it might lag a bit.
Use unordered_set when you need to keep a set of distinct elements and no ordering is required.
std::unordered_set is an STL container introduced in C++11. It provides the functionality of a set, i.e. it can contain unique elements only. Internally, unordered_set stores its elements in a hash table. An unordered_set is declared for a particular element type only.
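For example, a minimal sketch:

#include <string>
#include <unordered_set>

int main() {
    // The set is declared for one element type and keeps only unique values;
    // duplicate insertions are ignored.
    std::unordered_set<std::string> names;
    names.insert("alice");
    names.insert("bob");
    names.insert("alice");   // duplicate, the set still holds 2 elements
    return 0;
}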
The input file you've provided consists of successive integers congruent to 1 modulo 107897. So what is most likely happening is that, at some point when the load factor crosses a threshold, the particular library implementation you're using resizes the table to one with 107897 buckets, so that a key with hash value h is mapped to the bucket h % 107897. Since each integer's hash is itself, this means all the integers already in the table are suddenly mapped to the same bucket. The resizing itself should only take linear time. However, each subsequent insertion after that point has to traverse a linked list containing all the existing values, in order to make sure the new key is not equal to any of them. So each insertion takes linear time until the next time the table is resized.
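Here is a small sketch that reproduces the effect (the step 107897 is taken from the analysis above; the exact bucket counts and the point at which the rehash happens depend on your standard library):

#include <iostream>
#include <unordered_set>

int main() {
    std::unordered_set<long long> s;   // long long so the keys don't overflow
    const long long step = 107897;     // modulus assumed from the input file

    for (long long i = 0; i < 60000; ++i) {
        s.insert(1 + i * step);        // every key is congruent to 1 mod 107897
        if ((i + 1) % 10000 == 0) {
            std::cout << "size = " << s.size()
                      << ", bucket_count = " << s.bucket_count()
                      << ", bucket holding key 1 has " << s.bucket_size(s.bucket(1))
                      << " elements\n";
        }
    }
    return 0;
}

On a libstdc++-style implementation, once the table is rehashed to 107897 buckets, the bucket holding key 1 grows to nearly the full size of the set, and the remaining insertions slow down accordingly.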
In principle the unordered_set implementation could avoid this issue by also resizing the table when any one bucket becomes too long. However, this raises the question of whether the long bucket is a hash collision under a reasonable hash function (thus requiring a resize), or the user simply hashed every key to the same value (in which case the issue will persist regardless of the table size). So maybe that's why it wasn't done in this particular library implementation.
See also https://codeforces.com/blog/entry/62393 (an application of this phenomenon to get points on Codeforces contests).
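If you need unordered_set to survive this kind of adversarial input, the usual workaround (along the lines of what that blog post describes) is to supply your own hash that mixes the bits and depends on a run-time value, so the key-to-bucket mapping cannot be precomputed. A sketch:

#include <chrono>
#include <cstdint>
#include <unordered_set>

struct custom_hash {
    // splitmix64-style mixing; the constants are the standard splitmix64 ones
    static uint64_t splitmix64(uint64_t x) {
        x += 0x9e3779b97f4a7c15ULL;
        x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9ULL;
        x = (x ^ (x >> 27)) * 0x94d049bb133111ebULL;
        return x ^ (x >> 31);
    }
    size_t operator()(uint64_t x) const {
        // offset by a value chosen at run time so collisions can't be planned
        static const uint64_t seed =
            std::chrono::steady_clock::now().time_since_epoch().count();
        return splitmix64(x + seed);
    }
};

// Usage: same interface as before, but bucket placement is no longer the
// identity function on the keys.
std::unordered_set<int, custom_hash> S;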
Your program works absolutely fine. There is nothing wrong with the hash algorithm, collisions, or anything of the like.
The throttling you are seeing comes from the console I/O when you paste 200000 numbers into the window. That's why it chokes. Redirect the input from a file and it works fine, returning the result almost instantly:
C:\Users\selbie\source\repos\ConsoleApplication126\Debug>ConsoleApplication126.exe < d:/test.txt
200000

All the numbers in your test input file are unique, so the output is 200000.
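If redirecting at the command line is inconvenient, here is a sketch that reads the same data directly from a file instead of the console (the path d:/test.txt is simply the one used in the command above):

#include <fstream>
#include <iostream>
#include <unordered_set>

int main() {
    std::ifstream in("d:/test.txt");   // path taken from the command above
    int n;
    in >> n;

    std::unordered_set<int> s;
    for (int i = 0, x; i < n && (in >> x); ++i)
        s.insert(x);

    std::cout << s.size() << '\n';
    return 0;
}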