Read from StreamReader in batches

I have been running into OutOfMemoryExceptions while trying to load an 800MB text file into a DataTable via StreamReader. I was wondering if there is a way to load the DataTable from the stream in batches, i.e., read the first 10,000 rows of the text file from the StreamReader, create a DataTable, do something with the DataTable, then read the next 10,000 rows from the StreamReader, and so on.

My Google searches weren't very helpful here, but it seems like there should be an easy way to do this. Ultimately I will be writing the DataTables to an MS SQL database using SqlBulkCopy, so if there is an easier approach than what I have described, I would be thankful for a quick pointer in the right direction.
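
Something like this is roughly what I have in mind (just a sketch, not working code; CreateEmptyTable, AddLineToTable and ProcessBatch are made-up placeholders for building the columns, parsing a line, and doing the per-batch work):

using (StreamReader sr = new StreamReader(txtSource))
{
    DataTable batch = CreateEmptyTable();   // made-up helper that sets up the columns
    string line;
    while ((line = sr.ReadLine()) != null)
    {
        AddLineToTable(batch, line);        // made-up helper that parses one tab-delimited row
        if (batch.Rows.Count == 10000)
        {
            ProcessBatch(batch);            // do something with these 10,000 rows
            batch.Rows.Clear();             // start the next batch fresh
        }
    }
    if (batch.Rows.Count > 0)
    {
        ProcessBatch(batch);                // whatever is left at the end
    }
}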

Edit - Here is the code that I am running:

public static DataTable PopulateDataTableFromText(DataTable dt, string txtSource)
{
    int dtCount = dt.Columns.Count;
    int i = 0; // running line count, handy for reporting which line failed

    using (StreamReader sr = new StreamReader(txtSource))
    {
        string input;
        while ((input = sr.ReadLine()) != null)
        {
            try
            {
                // The source file is tab-delimited: one field per column.
                string[] stringRows = input.Split(new char[] { '\t' });
                DataRow dr = dt.NewRow();
                for (int a = 0; a < dtCount; a++)
                {
                    string dataType = dt.Columns[a].DataType.ToString();
                    // Empty numeric fields default to 0 so Convert.ChangeType doesn't throw.
                    if (stringRows[a] == "" && (dataType == "System.Int32" || dataType == "System.Int64"))
                    {
                        stringRows[a] = "0";
                    }
                    dr[a] = Convert.ChangeType(stringRows[a], dt.Columns[a].DataType);
                }
                dt.Rows.Add(dr);
            }
            catch (Exception ex)
            {
                Console.WriteLine("Line " + i + ": " + ex.ToString());
            }
            i++;
        }
    }
    return dt;
}

And here is the error that is returned:

"System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
at System.String.Split(Char[] separator, Int32 count, StringSplitOptions options)
at System.String.Split(Char[] separator)
at Harvester.Config.PopulateDataTableFromText(DataTable dt, String txtSource) in C:...."

Regarding the suggestion to load the data directly into SQL Server - I'm a bit of a noob when it comes to C#, but I thought that is basically what I am doing? SqlBulkCopy.WriteToServer takes the DataTable that I create from the text file and imports it into SQL Server. Is there an even easier way to do this that I am missing?
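
For context, the bulk copy step itself is basically just the following (the connection string and destination table name here are placeholders, not my real ones):

using (SqlConnection conn = new SqlConnection("Server=ServerA;Database=MyDb;Integrated Security=true")) // placeholder connection string
{
    conn.Open();
    using (SqlBulkCopy bulk = new SqlBulkCopy(conn))
    {
        bulk.DestinationTableName = "dbo.MyTable"; // placeholder table name
        bulk.WriteToServer(dt); // dt is the DataTable built by PopulateDataTableFromText
    }
}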

Edit: Oh, I forgot to mention - this code will not be running on the same server as the SQL Server. The data text file is on Server B and needs to be written to a table on Server A. Does that preclude using bcp?

Asked by tt2

1 Answer

Have you considered loading the data directly into SQL Server and then manipulating it in the database? The database engine is already designed to handle large volumes of data efficiently. This may yield better results overall and lets you leverage the capabilities of the database and the SQL language to do the heavy lifting. It's the old "work smarter, not harder" principle.

There are a number of different methods to load data into SQL Server, so you may want to examine them to see if any are a good fit. If you are using SQL Server 2005 or later and you really need to do some manipulation of the data in C#, you can always use a managed stored procedure.
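
For illustration only (the staging table and column names below are invented, and this is a sketch rather than production code), a managed stored procedure is just a static method decorated with the SqlProcedure attribute, deployed to the server as a CLR assembly and run inside the database process:

using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public class ImportProcedures
{
    [SqlProcedure]
    public static void CleanStagingData()
    {
        // "context connection=true" runs on the caller's connection inside SQL Server.
        using (SqlConnection conn = new SqlConnection("context connection=true"))
        {
            conn.Open();
            // Hypothetical clean-up: replace empty numeric fields with zero in a staging table.
            SqlCommand cmd = new SqlCommand(
                "UPDATE dbo.ImportStaging SET Quantity = 0 WHERE Quantity IS NULL", conn);
            cmd.ExecuteNonQuery();
        }
    }
}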

Something to realize here is that the OutOfMemoryException is a bit misleading. Memory is more than just the amount of physical RAM you have. What you are likely running out of is addressable memory. This is a very different thing.

When you load a large file into memory and transform it into a DataTable, it will likely require a lot more than 800 MB to represent the same data. Since 32-bit .NET processes are limited to just under 2 GB of addressable memory, you will likely never be able to process this quantity of data in a single batch.

What you will likely need to do is process the data in a streaming manner. In other words, don't try to load it all into a DataTable and then bulk insert into SQL Server. Rather, process the file in chunks, clearing out each set of rows once you're done with them.
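
Here is a rough sketch of what I mean, assuming a tab-delimited file, a placeholder destination table called dbo.TargetTable, and a template DataTable whose columns match it (adjust all of those to your real schema):

// Requires System, System.Data, System.Data.SqlClient and System.IO.
public static void StreamFileToSql(string txtSource, string connectionString, DataTable template)
{
    const int batchSize = 10000;

    using (StreamReader sr = new StreamReader(txtSource))
    using (SqlBulkCopy bulk = new SqlBulkCopy(connectionString))
    {
        bulk.DestinationTableName = "dbo.TargetTable"; // placeholder destination table
        bulk.BulkCopyTimeout = 0;                      // don't time out on a large load

        DataTable dt = template.Clone(); // same columns as the destination, no rows
        string line;
        while ((line = sr.ReadLine()) != null)
        {
            string[] fields = line.Split('\t');
            DataRow dr = dt.NewRow();
            for (int a = 0; a < dt.Columns.Count; a++)
            {
                Type colType = dt.Columns[a].DataType;
                if (fields[a] == "" && (colType == typeof(int) || colType == typeof(long)))
                {
                    fields[a] = "0"; // same empty-numeric handling as your original code
                }
                dr[a] = Convert.ChangeType(fields[a], colType);
            }
            dt.Rows.Add(dr);

            // Push each full batch to the server, then clear and reuse the table
            // so only one batch is ever held in memory.
            if (dt.Rows.Count >= batchSize)
            {
                bulk.WriteToServer(dt);
                dt.Rows.Clear();
            }
        }

        // Write out the final, partial batch.
        if (dt.Rows.Count > 0)
        {
            bulk.WriteToServer(dt);
        }
    }
}

SqlBulkCopy also exposes a BatchSize property, but that only controls how many rows go to the server per round trip within a single WriteToServer call; the memory saving here comes from clearing the DataTable between calls.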

Now, if you have access to a 64-bit machine with plenty of memory (to avoid VM thrashing) and the 64-bit .NET runtime, you could probably get away with running the code unchanged. But I would suggest making these changes anyway, since they will likely improve performance even in that environment.

Answered by LBushkin