There's some legacy code that I would like to refactor. It reads data from some hardware registers and writes it out as CSV and XML files.
The current approach is messy: there's no separation between the data and the view (XML, CSV), so the data collection is duplicated for each format.
To give you a picture, it currently looks like this:
A::Timestamp()
{
    // collects data and dumps it to a CSV file;
    // the header for the file is built in PreTimeStamp.
    // Depending on command line options, certain columns are added.
    filehndle << data1 << "," << data2 << "," << data3;
    if (cmd_line_opt1)
    {
        filehndle << "," << statdata1 << "," << statdata2;
    }
}

A::PreTimeStamp()
{
    // header for the CSV file
    filehndle << "start,end,delta";
    if (cmd_line_opt1)
    {
        filehndle << ",statdata1,statdata2";
    }
}
There's another class, B::Profile(), which collects the data the same way A::Timestamp does, but dumps it as XML.
I want to refactor so that the data collection lives in one common place, with adaptors for CSV and XML that take the collected data and dump it in their format.
First, I need help choosing a model to represent the data. The set of collected fields is not fixed, so I can't model it as a struct or other static type; the columns added to the CSV file depend on command line options.
Second, how could I plug classes like, say, XmlWriter and CsvWriter into that data model?
I recommend the Strategy pattern. Declare Timestamp() and PreTimeStamp() as pure virtual (i.e. virtual void Timestamp() = 0;) in a 'Dumper' interface, and override them in the Dumper_A (CSV) and Dumper_B (XML) implementations. The class that collects the data is then handed the appropriate Dumper implementation, which handles dumping the data in its format.