I have a CSV file with 42,000 lines in the following pattern:
03055,Milford,NH
03057,Mont Vernon,NH
03060,Nashua,NH
I tried to store the data in a HashMap, using the zip code as the key, like this:
while ((line = stream_in.readLine()) != null) {
    LocationBean temp_location_bean = new LocationBean();
    String line_trimmed = line.trim();
    String[] line_chunked = line_trimmed.split(",", 4);
    temp_location_bean.setZip_code(line_chunked[0]);
    temp_location_bean.setCity(line_chunked[1]);
    temp_location_bean.setState(line_chunked[2]);
    // key the map by the zip code parsed from the current line
    this.locations_as_beans_list.put(line_chunked[0], temp_location_bean);
}
But when I go to do a lookup:
for (Map.Entry<String, LocationBean> location_object : this.locations_as_beans_list.entrySet()) {
    LocationBean temp_location_bean = location_object.getValue();
    if (params[0].matches(temp_location_bean.getZip_code())) {
        master_location = temp_location_bean.getCity() + ","
                + temp_location_bean.getState()
                + ", (" + temp_location_bean.getZip_code() + ")";
    }
}
It takes over 20 seconds. Shouldn't the performance be relatively quick? How can I improve the performance here?
tl;dr: how can I optimize the lookups in this example?
If you're looking for performance, you shouldn't iterate over the entrySet to look up a keyed zip code; that scans all 42,000 entries on every query. Instead, use the HashMap for what it's built for and get the value directly by its key, which is an O(1) operation on average. Like,
LocationBean temp_location_bean = this.locations_as_beans_list.get(params[0]);
if (temp_location_bean != null) {
    master_location = temp_location_bean.getCity() + ","
            + temp_location_bean.getState()
            + ", (" + temp_location_bean.getZip_code() + ")";
}
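For reference, here is a minimal, self-contained sketch of the whole approach under the same assumptions: read the CSV once into a HashMap keyed by zip code, then answer each query with a single get(). The class name ZipLookup, the file name zipcodes.csv, and the nested LocationBean are placeholders standing in for your own classes and paths.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class ZipLookup {

    // Stand-in for your existing LocationBean.
    public static class LocationBean {
        private String zip_code;
        private String city;
        private String state;

        public void setZip_code(String zip_code) { this.zip_code = zip_code; }
        public void setCity(String city)         { this.city = city; }
        public void setState(String state)       { this.state = state; }
        public String getZip_code() { return zip_code; }
        public String getCity()     { return city; }
        public String getState()    { return state; }
    }

    private final Map<String, LocationBean> locationsByZip = new HashMap<>();

    // Load the CSV once; each line looks like "03055,Milford,NH".
    public void load(String csvPath) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(csvPath))) {
            String line;
            while ((line = reader.readLine()) != null) {
                String[] chunks = line.trim().split(",", 3);
                if (chunks.length < 3) {
                    continue; // skip malformed lines
                }
                LocationBean bean = new LocationBean();
                bean.setZip_code(chunks[0]);
                bean.setCity(chunks[1]);
                bean.setState(chunks[2]);
                locationsByZip.put(chunks[0], bean); // zip code is the key
            }
        }
    }

    // O(1) average-case lookup: no iteration over 42,000 entries.
    public String describe(String zipCode) {
        LocationBean bean = locationsByZip.get(zipCode);
        if (bean == null) {
            return null; // unknown zip code
        }
        return bean.getCity() + ", " + bean.getState() + ", (" + bean.getZip_code() + ")";
    }

    public static void main(String[] args) throws IOException {
        ZipLookup lookup = new ZipLookup();
        lookup.load("zipcodes.csv");                  // hypothetical path
        System.out.println(lookup.describe("03060")); // prints "Nashua, NH, (03060)"
    }
}

With 42,000 entries, the get() call itself is effectively instantaneous; the one-time file read becomes the dominant cost.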