
What is the maximum number of records I can insert through InsertAllOnSubmit() in LINQ to SQL?

I'm reading an Excel sheet, applying my business logic, and trying to insert the results using LINQ to SQL.

In my loop I have more than 5,000 and fewer than 15,000 records to insert.

public List<tblLog> tblLogList = new List<tblLog>(); 

This method is called inside the loop:

public void SaveLog()
{
    tblLog tnlog = new tblLog();
    tnlog.id = Guid.NewGuid();
    tnlog.comp_id = Comp_id;
    tnlog.branch_id = Branch_id;
    tnlog.row_id = rowID;
    tnlog.his_id = his_id;
    
    //Add records to List
    tblLogList.Add(tnlog);

Earlier I tried this code to submit the records one by one:

    //_trContext.tblLogs.InsertOnSubmit(tblLog);
    //_trContext.SubmitChanges();

Due to the performance hit, I changed InsertOnSubmit to InsertAllOnSubmit:

    if (tblLogList.Count >= 1000)
    {
         _trContext.tblLogs.InsertAllOnSubmit(tblLogList);
         _trContext.SubmitChanges();
         tblLogList.Clear();
    }
}

My questions are:

  1. What is the maximum number of records I can insert through InsertAllOnSubmit() in LINQ to SQL?

  2. With the above code I managed batches of up to 1,000 records, but I suspect that once it has to handle 10,000 or more records it may throw a timeout exception, since it runs inside my **Windows service**.

    I'm really confused; what is the best way to handle the above logic?

Thanks in advance.

asked Jun 20 '13 by RajeshKdev


2 Answers

While the others correctly state that there are better solutions for this task, a direct answer to your question has not yet been given.

The answer is that technically the number is either limited by the amount of memory the context will consume by swallowing all these new objects, or by the maximum size of the database transaction log. The former is far more likely to be the bottleneck, but you can prevent reaching both limits by committing the records in chunks of several hundreds or thousands and using a new context (!) for each batch.

As you said, a practical limit can be the command time-out (if you don't want to make it eternal). This too can be circumvented by committing in batches and, thus, starting a new command for each batch.

The functional limit depends on things like whether or not the process is unattended (patience), and whether the data are expected to be available within a certain amount of time. If these requirements are tight you may have to resort to one of the solutions suggested in the other answers.
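A minimal sketch of that batching approach, assuming a generated LINQ to SQL context class called MyDataContext (a placeholder; substitute your own) and an in-memory list like the question's tblLogList; the batch size and time-out value are only illustrative:

    // Requires: using System.Linq; (for Skip/Take)
    const int batchSize = 1000;

    for (int i = 0; i < tblLogList.Count; i += batchSize)
    {
        var batch = tblLogList.Skip(i).Take(batchSize).ToList();

        // A fresh context per batch keeps the change tracker (and memory use) small
        // and issues a new command, so the command time-out applies per batch.
        using (var context = new MyDataContext())
        {
            context.CommandTimeout = 120; // seconds; adjust as needed
            context.tblLogs.InsertAllOnSubmit(batch);
            context.SubmitChanges();
        }
    }

If you need all-or-nothing semantics across batches, you can wrap the loop in a TransactionScope, at the cost of a larger transaction log.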

answered Oct 12 '22 by Gert Arnold


At that volume, I'd be using SqlBulkCopy, perhaps via a DataTable, or perhaps via FastMember's ObjectReader API, which creates an IDataReader from a list of typed objects, suitable for feeding to SqlBulkCopy.
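A rough sketch of the FastMember route, assuming the FastMember NuGet package is installed and that the destination table and column names match the tblLog model from the question (the connection string and names are placeholders to adapt to your own schema):

    // Requires: using System.Data.SqlClient; using FastMember;
    using (var bcp = new SqlBulkCopy(connectionString))
    using (var reader = ObjectReader.Create(tblLogList,
            "id", "comp_id", "branch_id", "row_id", "his_id"))
    {
        bcp.DestinationTableName = "tblLog";
        bcp.BulkCopyTimeout = 0;   // 0 = no time-out; tune for your environment
        bcp.WriteToServer(reader); // streams the objects straight to SQL Server
    }

This bypasses the LINQ to SQL change tracker entirely, so memory use stays flat regardless of how many rows you push.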

answered Oct 13 '22 by Marc Gravell