What is the best way to load a huge result set into memory?

I am trying to load two huge result sets (source and target) coming from different RDBMSs, but the problem I am struggling with is getting those two huge result sets into memory.

Below are the queries used to pull data from source and target:

SQL Server - select Id as LinkedColumn,CompareColumn from Source order by LinkedColumn

Oracle - select Id as LinkedColumn,CompareColumn from Target order by LinkedColumn

Records in Source: 12,377,200

Records in Target: 12,266,800

Following are the approaches I have tried, with some statistics:

1) Open data reader approach for reading source and target data:

Total jobs running in parallel = 3

Time taken by Job1 = 01:47:25

Time taken by Job2 = 01:47:25

Time taken by Job3 = 01:48:32

There is no index on the Id column.

Major time is spent here: var dr = command.ExecuteReader();

Problems: there are also timeout issues, for which I had to set CommandTimeout to 0 (infinite), which is bad.

2) Chunk by chunk reading approach for reading source and target data:

   Total jobs = 1
   Chunk size : 100000
   Time Taken : 02:02:48
   There is no index on the Id column.

3) Chunk by chunk reading approach for reading source and target data:

   Total jobs = 1
   Chunk size : 100000
   Time Taken : 00:39:40
   An index is present on the Id column.

4) Open data reader approach for reading source and target data:

   Total jobs = 1
   Index : Yes
   Time: 00:01:43

5) Open data reader approach for reading source and target data:

   Total jobs running in parallel = 3
   Index : Yes
   Time: 00:25:12
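
For reference, here is a minimal sketch of the "chunk by chunk" reading used in approaches 2 and 3, using keyset pagination on the key column. The key type is an assumption (the sketch treats Id as a bigint); the table and column names mirror the queries above.

// Sketch only: chunked reading with keyset pagination (approaches 2 and 3).
// Assumes Id is a bigint; requires System.Collections.Generic and
// System.Data.SqlClient.
private static IEnumerable<KeyValuePair<long, object>> ReadInChunks(
    string connectionString, string table, int chunkSize = 100000)
{
    long? lastKey = null;
    while (true)
    {
        int rowsInChunk = 0;
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "select top (@chunk) Id as LinkedColumn, CompareColumn from " + table +
            (lastKey == null ? "" : " where Id > @lastKey") +
            " order by LinkedColumn", connection))
        {
            command.Parameters.AddWithValue("@chunk", chunkSize);
            if (lastKey != null)
                command.Parameters.AddWithValue("@lastKey", lastKey.Value);
            connection.Open();
            using (var dr = command.ExecuteReader())
            {
                while (dr.Read())
                {
                    rowsInChunk++;
                    lastKey = dr.GetInt64(0); // adjust to the real key type
                    yield return new KeyValuePair<long, object>(lastKey.Value, dr[1]);
                }
            }
        }
        if (rowsInChunk == 0) yield break; // no rows left
    }
}

The keyset predicate (where Id > @lastKey) is what benefits from the index in approach 3; without an index, every chunk turns into a scan, which matches the 02:02:48 vs 00:39:40 difference above.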

I observed that while having an index on LinkedColumn does improve performance, the problem is that we are dealing with a third-party RDBMS table which might not have an index.

We would like to keep the database server as free as possible, so the data reader approach doesn't seem like a good idea: lots of jobs running in parallel would put too much pressure on the database server, which we don't want.

Hence we want to fetch the records from source and target into the application's memory and do a 1-to-1 record comparison there, keeping the database server free.
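
Since both queries are ordered by LinkedColumn, the 1-to-1 comparison itself does not actually require holding both result sets in memory at once: the two streams can be merge-compared row by row. A minimal sketch, assuming both sides return the key as the same comparable .NET type (e.g. both int):

// Merge-compare two result sets that are both ordered by LinkedColumn.
// Only the current row of each reader is in memory at any time.
// Requires System, System.Collections.Generic and System.Data.
private static void CompareSortedReaders(IDataReader drA, IDataReader drB)
{
    bool hasA = drA.Read(), hasB = drB.Read();
    while (hasA && hasB)
    {
        // Throws if the two key values are not of the same comparable type.
        int cmp = Comparer<object>.Default.Compare(drA["LinkedColumn"], drB["LinkedColumn"]);
        if (cmp < 0)
        {
            Console.WriteLine("Only in source: " + drA["LinkedColumn"]);
            hasA = drA.Read();
        }
        else if (cmp > 0)
        {
            Console.WriteLine("Only in target: " + drB["LinkedColumn"]);
            hasB = drB.Read();
        }
        else
        {
            if (!Equals(drA["CompareColumn"], drB["CompareColumn"]))
                Console.WriteLine("Mismatch at " + drA["LinkedColumn"]);
            hasA = drA.Read();
            hasB = drB.Read();
        }
    }
    while (hasA) { Console.WriteLine("Only in source: " + drA["LinkedColumn"]); hasA = drA.Read(); }
    while (hasB) { Console.WriteLine("Only in target: " + drB["LinkedColumn"]); hasB = drB.Read(); }
}

This keeps the memory footprint constant regardless of row count, at the cost of holding two open readers (and therefore two open connections) for the duration of the comparison.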

Note: I want to do this in my C# application and don't want to use SSIS or Linked Server.

Update:

Source SQL query execution time in SQL Server Management Studio: 00:01:41

Target SQL query execution time in SQL Server Management Studio: 00:01:40

What will be the best way to read a huge result set into memory?

Code:

using System;
using System.Data;
using System.Data.SqlClient;
using System.Diagnostics;
using System.Threading.Tasks;

class Program
{
    static void Main(string[] args)
    {
        // Running 3 jobs in parallel:
        //Task<string>[] taskArray = { Task<string>.Factory.StartNew(() => Compare()),
        //    Task<string>.Factory.StartNew(() => Compare()),
        //    Task<string>.Factory.StartNew(() => Compare())
        //};
        Compare(); // Run a single job
        Console.ReadKey();
    }

    public static string Compare()
    {
        Stopwatch stopwatch = new Stopwatch();
        stopwatch.Start();

        // Note: connections and readers are not disposed here because this
        // snippet only measures how long ExecuteReader takes; real code
        // should wrap them in using blocks. The target is Oracle, so it
        // would use the Oracle provider's connection/command types rather
        // than SqlConnection/SqlCommand.
        var srcConnection = new SqlConnection("Source Connection String");
        srcConnection.Open();
        var command1 = new SqlCommand("select Id as LinkedColumn,CompareColumn from Source order by LinkedColumn", srcConnection);

        var tgtConnection = new SqlConnection("Target Connection String");
        tgtConnection.Open();
        var command2 = new SqlCommand("select Id as LinkedColumn,CompareColumn from Target order by LinkedColumn", tgtConnection);

        var drA = GetReader(command1);
        var drB = GetReader(command2);

        stopwatch.Stop();
        string a = stopwatch.Elapsed.ToString(@"d\.hh\:mm\:ss");
        Console.WriteLine(a);
        return a;
    }

    private static IDataReader GetReader(SqlCommand command)
    {
        command.CommandTimeout = 0; // infinite - needed to avoid timeouts, but not ideal
        return command.ExecuteReader(); // Culprit: most of the time is spent here
    }
}


1 Answer

There is nothing (I know of) faster than a DataReader for fetching db records.

Working with large databases comes with its challenges; reading 10 million records in under 2 minutes is pretty good.
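
For reference, a tight read loop over a DataReader looks like the sketch below; ordinal, typed getters are about as cheap as row-by-row access gets in ADO.NET. The column types (and the open command) are assumptions here.

// Drain a reader with ordinal, typed getters. Assumes command is an open
// SqlCommand like the one in the question, Id is a non-null int and
// CompareColumn a non-null string.
using (var dr = command.ExecuteReader())
{
    long count = 0;
    while (dr.Read())
    {
        int linked = dr.GetInt32(0);
        string compare = dr.GetString(1);
        count++;
    }
    Console.WriteLine("Read " + count + " rows");
}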

If you want faster you can:

  1. jdwend's suggestion (a sketch follows this list):

Use sqlcmd.exe and the Process class to run the query and put the results into a csv file, then read the csv into c#. sqlcmd.exe is designed to archive large databases and runs 100x faster than the c# interface. Using linq methods is also faster than the SQL Client class.

  2. Parallelize your queries and fetch them concurrently, merging the results: https://shahanayyub.wordpress.com/2014/03/30/how-to-load-large-dataset-in-datagridview/

  3. The easiest (and IMO the best for a SELECT * all) is to throw hardware at it: https://blog.codinghorror.com/hardware-is-cheap-programmers-are-expensive/
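
As promised above, a rough sketch of the sqlcmd.exe route from suggestion 1. The server, database and file names are placeholders; -E uses Windows authentication, and -s/-W/-h shape the CSV output. Requires System.Diagnostics and System.IO.

// Export the query to CSV with sqlcmd.exe via the Process class, then
// stream the file back into C# line by line.
var psi = new ProcessStartInfo
{
    FileName = "sqlcmd.exe",
    Arguments = "-S MyServer -d MyDatabase -E " +
                "-Q \"select Id as LinkedColumn, CompareColumn from Source order by LinkedColumn\" " +
                "-o source.csv -s \",\" -W -h -1",
    UseShellExecute = false
};
using (var process = Process.Start(psi))
{
    process.WaitForExit();
}

foreach (var line in File.ReadLines("source.csv")) // streams, does not load the whole file
{
    var parts = line.Split(',');
    // ... feed parts[0] (LinkedColumn) and parts[1] (CompareColumn) into the comparison ...
}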

Also make sure you test on the PROD hardware, in Release mode, as anything else could skew your benchmarks.
