I have a really large file with approximately 15 million entries. Each line in the file contains a single string (call it a key).
I need to find the duplicate entries in the file using Java. I tried using a HashMap to detect duplicate entries, but that approach throws a "java.lang.OutOfMemoryError: Java heap space" error.
How can I solve this problem?
I could increase the heap space and try again, but I wanted to know whether there are more efficient solutions that don't require tweaking the heap space.
The key point is that your data will not fit into memory. You can use an external merge sort for this:
Partition your file into multiple smaller chunks that fit into memory. Sort each chunk and eliminate the duplicates (which are now neighboring elements).
Merge the chunks, again eliminating duplicates as you merge. Since this is an n-way merge, you only need to keep the next k elements from each chunk in memory; once a chunk's buffered items are depleted (they have already been merged), read more from disk.
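A minimal sketch of that idea in Java follows. Since your goal is to find the duplicates rather than just drop them, this version only sorts each chunk (it keeps within-chunk duplicates) and reports equal neighbors in the merged stream. The input file name keys.txt and CHUNK_SIZE are illustrative assumptions, not something from the question.

import java.io.*;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.util.*;

public class ExternalSortDuplicates {

    private static final int CHUNK_SIZE = 1_000_000; // lines per in-memory chunk, tune to your heap

    public static void main(String[] args) throws IOException {
        Path input = Paths.get("keys.txt"); // hypothetical input file
        List<Path> chunks = sortChunks(input);
        mergeAndReportDuplicates(chunks);
    }

    // Phase 1: read the file in chunks, sort each chunk in memory, spill it to a temp file.
    private static List<Path> sortChunks(Path input) throws IOException {
        List<Path> chunkFiles = new ArrayList<>();
        try (BufferedReader reader = Files.newBufferedReader(input, StandardCharsets.UTF_8)) {
            List<String> buffer = new ArrayList<>(CHUNK_SIZE);
            String line;
            while ((line = reader.readLine()) != null) {
                buffer.add(line);
                if (buffer.size() >= CHUNK_SIZE) {
                    chunkFiles.add(writeSortedChunk(buffer));
                    buffer.clear();
                }
            }
            if (!buffer.isEmpty()) {
                chunkFiles.add(writeSortedChunk(buffer));
            }
        }
        return chunkFiles;
    }

    private static Path writeSortedChunk(List<String> buffer) throws IOException {
        Collections.sort(buffer);
        Path chunk = Files.createTempFile("chunk", ".txt");
        chunk.toFile().deleteOnExit();
        Files.write(chunk, buffer, StandardCharsets.UTF_8);
        return chunk;
    }

    // Phase 2: n-way merge of the sorted chunks; only one buffered line per chunk
    // is held in memory, and equal neighbors in the merged stream are duplicates.
    private static void mergeAndReportDuplicates(List<Path> chunks) throws IOException {
        PriorityQueue<ChunkCursor> heap =
                new PriorityQueue<>(Comparator.comparing((ChunkCursor c) -> c.current));
        List<BufferedReader> readers = new ArrayList<>();
        try {
            for (Path chunk : chunks) {
                BufferedReader r = Files.newBufferedReader(chunk, StandardCharsets.UTF_8);
                readers.add(r);
                String first = r.readLine();
                if (first != null) {
                    heap.add(new ChunkCursor(first, r));
                }
            }
            String previous = null;
            while (!heap.isEmpty()) {
                ChunkCursor cursor = heap.poll();
                if (cursor.current.equals(previous)) {
                    System.out.println("duplicate: " + cursor.current); // a key seen k times is reported k-1 times
                }
                previous = cursor.current;
                String next = cursor.reader.readLine();
                if (next != null) {
                    heap.add(new ChunkCursor(next, cursor.reader));
                }
            }
        } finally {
            for (BufferedReader r : readers) {
                r.close();
            }
        }
    }

    // One open chunk file plus its smallest not-yet-merged line.
    private static final class ChunkCursor {
        final String current;
        final BufferedReader reader;
        ChunkCursor(String current, BufferedReader reader) {
            this.current = current;
            this.reader = reader;
        }
    }
}

Each BufferedReader keeps its own small read buffer, so the "next k elements per chunk" the answer describes is effectively handled for you by the I/O layer.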
I'm not sure if you'd consider doing this outside of Java, but if so, this is very simple in a shell:
sort file | uniq -d
sort spills to temporary files when the input doesn't fit in memory, and uniq -d prints only the lines that occur more than once.
You probably can't load the entire file at once, but you can hold the hash code and line number of every line in a map without a problem.
Pseudo code...
for each line in file
    entries.get(line.hashCode).add(lineNumber)   // multimap: hash code -> line numbers
for each bucket in entries
    if bucket has more than one line number
        fetch each of those lines by line number and compare the actual strings
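A minimal sketch of that two-pass idea in Java, assuming the keys live one per line in a hypothetical keys.txt. Only the int hash codes and line numbers are kept in memory during the first pass; the actual strings are compared in a second pass, since distinct keys can share a hash code.

import java.io.*;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.util.*;

public class HashPassDuplicates {

    public static void main(String[] args) throws IOException {
        Path input = Paths.get("keys.txt"); // hypothetical input file

        // Pass 1: map each hash code to the line numbers where it occurs.
        Map<Integer, List<Long>> linesByHash = new HashMap<>();
        try (BufferedReader reader = Files.newBufferedReader(input, StandardCharsets.UTF_8)) {
            String line;
            long lineNumber = 0;
            while ((line = reader.readLine()) != null) {
                lineNumber++;
                linesByHash.computeIfAbsent(line.hashCode(), h -> new ArrayList<>(1))
                           .add(lineNumber);
            }
        }

        // Keep only the line numbers that share a hash code with at least one other line.
        Set<Long> candidateLines = new HashSet<>();
        for (List<Long> bucket : linesByHash.values()) {
            if (bucket.size() > 1) {
                candidateLines.addAll(bucket);
            }
        }
        linesByHash = null; // let the big map be collected before pass 2

        // Pass 2: re-read only the candidate lines and compare the actual strings.
        Map<String, Long> firstSeen = new HashMap<>();
        try (BufferedReader reader = Files.newBufferedReader(input, StandardCharsets.UTF_8)) {
            String line;
            long lineNumber = 0;
            while ((line = reader.readLine()) != null) {
                lineNumber++;
                if (!candidateLines.contains(lineNumber)) {
                    continue;
                }
                Long first = firstSeen.putIfAbsent(line, lineNumber);
                if (first != null) {
                    System.out.println("line " + lineNumber + " duplicates line " + first + ": " + line);
                }
            }
        }
    }
}

Bear in mind that boxed Integer/Long collections still carry real overhead at 15 million entries; a primitive-collection library, or a sorted long[] of packed hash/line pairs, would shrink the footprint further.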