There is one CSV file with 100 columns, but we only need 3-5 of those columns loaded into the database.
I don't want to specify all 100 columns in the LineTokenizer in the job XML.
Please suggest how we can proceed in this case.
Try using a custom FieldSetMapper. You can use it much like a ResultSet, reading fields by index. You only have to list all the column names if you want automatic mapping; otherwise specify just the delimiter, in your case ",".
<bean id="flatFileItemReader" class="org.springframework.batch.item.file.FlatFileItemReader" scope="step">
    <property name="resource" value="YOURFILE" />
    <property name="lineMapper">
        <bean class="org.springframework.batch.item.file.mapping.DefaultLineMapper">
            <property name="fieldSetMapper">
                <bean class="CUSTOMFIELDSETMAPPER" />
            </property>
            <property name="lineTokenizer">
                <bean class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer">
                    <property name="delimiter" value="," />
                </bean>
            </property>
        </bean>
    </property>
</bean>
The custom mapper could look something like this, say if you want to read the 1st and 25th columns:
import org.springframework.batch.item.file.mapping.FieldSetMapper;
import org.springframework.batch.item.file.transform.FieldSet;
import org.springframework.validation.BindException;

public class CustomMapper implements FieldSetMapper<CustomPOJO> {

    @Override
    public CustomPOJO mapFieldSet(FieldSet fieldSet) throws BindException {
        CustomPOJO result = new CustomPOJO();
        // Field indexes are zero-based: 0 is the 1st column, 24 is the 25th.
        result.setName(fieldSet.readString(0));
        result.setAddress(fieldSet.readString(24));
        return result;
    }
}
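For completeness, the mapper above assumes a CustomPOJO with name and address properties. A minimal sketch of such a class (the field names are placeholders, use whatever your target table actually needs):

// Hypothetical target object for the mapper above; adjust fields to your own schema.
public class CustomPOJO {

    private String name;
    private String address;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public String getAddress() { return address; }
    public void setAddress(String address) { this.address = address; }
}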
For further explanation on how to use the reader, please refer to this tutorial.
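If you want to sanity-check the tokenizer/mapper wiring outside of a full job, a rough sketch along these lines should work (the 100-column sample line and the MapperSmokeTest class name are made up for illustration):

import java.util.Arrays;

import org.springframework.batch.item.file.transform.DelimitedLineTokenizer;
import org.springframework.batch.item.file.transform.FieldSet;

public class MapperSmokeTest {
    public static void main(String[] args) throws Exception {
        // Fake a 100-column CSV line; only the 1st and 25th columns hold real values.
        String[] columns = new String[100];
        Arrays.fill(columns, "x");
        columns[0] = "John Doe";
        columns[24] = "221B Baker Street";
        String line = String.join(",", columns);

        // Same setup as the XML: no column names, only the delimiter.
        DelimitedLineTokenizer tokenizer = new DelimitedLineTokenizer(",");
        FieldSet fieldSet = tokenizer.tokenize(line);

        CustomPOJO pojo = new CustomMapper().mapFieldSet(fieldSet);
        System.out.println(pojo.getName() + " / " + pojo.getAddress());
    }
}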