FileHelpers throws OutOfMemoryException when parsing large csv file

I'm trying to parse a very large CSV file with FileHelpers (http://www.filehelpers.net/). The file is 1 GB zipped and about 20 GB unzipped.

        string fileName = @"c:\myfile.csv.gz";
        using (var fileStream = File.OpenRead(fileName))
        {
            using (GZipStream gzipStream = new GZipStream(fileStream, CompressionMode.Decompress, false))
            {
                using (TextReader textReader = new StreamReader(gzipStream))
                {
                    var engine = new FileHelperEngine<CSVItem>();
                    // ReadStream reads the entire file into a single array of records
                    CSVItem[] items = engine.ReadStream(textReader);
                }
            }
        }

FileHelpers then throws an OutOfMemoryException.

    Test failed: Exception of type 'System.OutOfMemoryException' was thrown.
    System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
       at System.Text.StringBuilder.ExpandByABlock(Int32 minBlockCharCount)
       at System.Text.StringBuilder.Append(Char value, Int32 repeatCount)
       at System.Text.StringBuilder.Append(Char value)
       at FileHelpers.StringHelper.ExtractQuotedString(LineInfo line, Char quoteChar, Boolean allowMultiline)
       at FileHelpers.DelimitedField.ExtractFieldString(LineInfo line)
       at FileHelpers.FieldBase.ExtractValue(LineInfo line)
       at FileHelpers.RecordInfo.StringToRecord(LineInfo line)
       at FileHelpers.FileHelperEngine`1.ReadStream(TextReader reader, Int32 maxRecords, DataTable dt)
       at FileHelpers.FileHelperEngine`1.ReadStream(TextReader reader)

Is it possible to parse a file this big with FileHelpers? If not, can anyone recommend an approach to parsing files this big? Thanks.

asked Mar 05 '13 by BowserKingKoopa

1 Answer

You must work record by record with the async engine, like this:

    string fileName = @"c:\myfile.csv.gz";
    using (var fileStream = File.OpenRead(fileName))
    {
        using (GZipStream gzipStream = new GZipStream(fileStream, CompressionMode.Decompress, false))
        {
            using (TextReader textReader = new StreamReader(gzipStream))
            {
                // FileHelperAsyncEngine yields one record at a time instead of
                // materializing the whole file into an array.
                var engine = new FileHelperAsyncEngine<CSVItem>();
                using (engine.BeginReadStream(textReader))
                {
                    foreach (var record in engine)
                    {
                        // Work with each item
                    }
                }
            }
        }
    }
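
Both snippets assume a CSVItem class mapped to the file's columns, which isn't shown in the question. A minimal sketch of what it might look like, with hypothetical column names and types (adjust the delimiter, fields, and quote settings to the real layout):

    using FileHelpers;

    // Hypothetical layout - replace these fields with the file's actual columns.
    [DelimitedRecord(",")]
    public class CSVItem
    {
        [FieldQuoted('"', QuoteMode.OptionalForBoth)]
        public string Name;

        [FieldQuoted('"', QuoteMode.OptionalForBoth)]
        public string Description;

        public int Quantity;
    }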

With this async approach you only keep one record in memory at a time, and it will also be much faster.
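
As a usage example, here is a sketch of what the loop body could do while keeping memory flat: copy matching records into a new, smaller CSV with a second async engine. The output path and the Quantity filter are hypothetical and tied to the CSVItem sketch above.

    // requires: using System.IO; using System.IO.Compression; using FileHelpers;
    string fileName = @"c:\myfile.csv.gz";
    var readEngine = new FileHelperAsyncEngine<CSVItem>();
    var writeEngine = new FileHelperAsyncEngine<CSVItem>();

    using (var fileStream = File.OpenRead(fileName))
    using (var gzipStream = new GZipStream(fileStream, CompressionMode.Decompress, false))
    using (var textReader = new StreamReader(gzipStream))
    using (readEngine.BeginReadStream(textReader))
    using (writeEngine.BeginWriteFile(@"c:\filtered.csv"))   // hypothetical output path
    {
        foreach (var record in readEngine)
        {
            // Only the current record is held in memory at any time.
            if (record.Quantity > 0)                          // hypothetical filter
                writeEngine.WriteNext(record);
        }
    }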

answered Nov 10 '22 by Marcos Meli