I'm working on a project and I need to read a CSV file and then fill a DataSet with its data. I've been searching and I have found some interesting things in OleDB.
I have a class CSVReader:
class CSVReader
{
    public DataTable GetDataTable(string filePath)
    {
        OleDbConnection conn = new OleDbConnection(
            "Provider=Microsoft.Jet.OleDb.4.0; Data Source = " + Path.GetDirectoryName(filePath) +
            "; Extended Properties = \"Text;HDR=YES;FMT=Delimited\"");
        conn.Open();

        string strQuery = "SELECT * FROM [" + Path.GetFileName(filePath) + "]";
        OleDbDataAdapter adapter = new OleDbDataAdapter(strQuery, conn);
        DataSet ds = new DataSet("CSV File");
        adapter.Fill(ds);
        return ds.Tables[0];
    }
}
And I call it from here:
CSVReader datareader = new CSVReader();
DataTable dt = datareader.GetDataTable(filepath);
The problem is that it parses the first line (the header line) as JUST ONE column identifier. I mean, this is the header of the CSV file:
Name, Product Name, Server, Vendor, Start Time, End Time, Host Name, User Name, Project Name, Usage time (hours)
After that comes all the data, separated by commas.
When I read the file, fill the DataSet, and print dt.Columns.Count, it shows that the table only has 1 column.
Any help?
Thanks in advance.
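For what it's worth, one common cause of this symptom is that the Jet text driver takes its delimiter from the machine's regional list-separator setting rather than from the connection string, so on machines where the list separator is not a comma the whole line comes back as one column. A schema.ini file placed in the same folder as the CSV can force comma delimiting; a minimal sketch, assuming the file is named data.csv:

[data.csv]
Format=CSVDelimited
ColNameHeader=True
MaxScanRows=0

With MaxScanRows=0 the driver scans the whole file when guessing column types.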
You can read a CSV file into a DataFrame using the read_csv() function (this function should be familiar to you, but you can run help(pd.read_csv) in the console to refresh your memory!). Then, you can call the .to_sql() method on the DataFrame to load it into a SQL table in a database.
First, we get the CSV file path from our project. After getting the file path, we use the NuGet package's LoadCSV function, which returns a WorkBook object. To process it further, we can convert that object into a DataTable, as sketched below.
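That answer does not name the package, but the WorkBook type matches IronXL; assuming that library and its LoadCSV / DefaultWorkSheet / ToDataTable members (treat the exact names as assumptions), the sketch would be roughly:

using System.Data;
using IronXL;

// Assumed IronXL API: LoadCSV reads the CSV into a WorkBook,
// and ToDataTable(true) uses the first row as column headers.
WorkBook workBook = WorkBook.LoadCSV("data.csv");
WorkSheet sheet = workBook.DefaultWorkSheet;
DataTable dt = sheet.ToDataTable(true);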
A CSV (comma-separated values) file is a plain-text file with a specific layout that allows data to be saved in a table-structured form.
The best option I have found, and one that also avoids problems with different installed versions of Office and with 32/64-bit mismatches, is FileHelpers.
It can be added to your project references using NuGet and it provides a one-liner solution:
CommonEngine.CsvToDataTable(path, "ImportRecord", ',', true);
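For context, a short sketch of consuming the DataTable that the one-liner returns (assuming path points at the CSV; the table walk itself is plain ADO.NET):

using System;
using System.Data;
using FileHelpers;

DataTable dt = CommonEngine.CsvToDataTable(path, "ImportRecord", ',', true);

// Print the header names, then every row, tab-separated.
foreach (DataColumn col in dt.Columns)
    Console.Write(col.ColumnName + "\t");
Console.WriteLine();
foreach (DataRow row in dt.Rows)
    Console.WriteLine(string.Join("\t", row.ItemArray));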
I always use this CSV library for reading CSV files in C#; it has always worked well for me.
http://www.codeproject.com/KB/database/CsvReader.aspx
Here's an example of reading a CSV file using the library:
using System;
using System.IO;
using LumenWorks.Framework.IO.Csv;

void ReadCsv()
{
    // open the file "data.csv", which is a CSV file with headers
    using (CsvReader csv = new CsvReader(new StreamReader("data.csv"), true))
    {
        int fieldCount = csv.FieldCount;
        string[] headers = csv.GetFieldHeaders();

        while (csv.ReadNextRecord())
        {
            for (int i = 0; i < fieldCount; i++)
                Console.Write(string.Format("{0} = {1};", headers[i], csv[i]));
            Console.WriteLine();
        }
    }
}
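Since the question specifically wants a DataTable: this reader implements IDataReader (at least in the versions I have used), so it can be handed straight to DataTable.Load. A minimal sketch:

using System.Data;
using System.IO;
using LumenWorks.Framework.IO.Csv;

DataTable LoadCsvIntoDataTable(string path)
{
    using (CsvReader csv = new CsvReader(new StreamReader(path), true))
    {
        // DataTable.Load accepts any IDataReader and builds columns from the header row.
        DataTable table = new DataTable();
        table.Load(csv);
        return table;
    }
}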
KBCsv has built-in support for reading into a DataSet:
using (var reader = new CsvReader(@"C:\data.csv"))
{
    reader.ReadHeaderRecord();

    var dataSet = new DataSet();
    reader.Fill(dataSet, "csv-data");
}
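After the Fill call (still inside the using block) the data is ordinary ADO.NET; a short usage sketch, assuming Fill names the new table after its second argument (Tables[0] works either way):

DataTable table = dataSet.Tables[0];   // or dataSet.Tables["csv-data"], if Fill uses that name
Console.WriteLine(table.Columns.Count + " columns, " + table.Rows.Count + " rows");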
If there is nothing special about the file, I use this kind of code:
using (TextReader tr1 = new StreamReader(@"c:\pathtofile\filename", true))
{
    var Data = tr1.ReadToEnd().Split('\n')
        .Where(l => l.Length > 0)                                     // non-empty lines only
        .Skip(1)                                                      // skip the header line
        .Select(s => s.Trim())                                        // delete surrounding whitespace
        .Select(l => l.Split(','))                                    // split each line into an array of values
        .Select(l => new { Field1 = l[0], Field2 = l[1], Field3 = l[2] });
}
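Since the question ultimately wants a DataTable, the same hand-rolled approach can build one directly. A minimal sketch (the CsvToDataTable helper name is made up here; it assumes a simple file with no quoted or embedded commas, like the header shown in the question):

using System.Data;
using System.IO;
using System.Linq;

DataTable CsvToDataTable(string path)
{
    string[] lines = File.ReadAllLines(path)
                         .Where(l => l.Trim().Length > 0)
                         .ToArray();

    DataTable table = new DataTable();

    // The first line holds the column names.
    foreach (string header in lines[0].Split(','))
        table.Columns.Add(header.Trim());

    // Remaining lines become rows; no handling of quoted fields here.
    foreach (string line in lines.Skip(1))
        table.Rows.Add(line.Split(',').Select(f => (object)f.Trim()).ToArray());

    return table;
}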