How to skip multiple lines using read.csv in PySpark

I have a .csv file with a few columns, and I wish to skip 4 (or 'n' in general) lines when importing it into a dataframe using the spark.read.csv() function. The .csv file looks like this -

ID;Name;Revenue
Identifier;Customer Name;Euros
cust_ID;cust_name;€
ID132;XYZ Ltd;2825
ID150;ABC Ltd;1849

In plain Python, when using the pandas read_csv() function, this is simple and can be done with the skiprows=n option, like -

import pandas as pd
df=pd.read_csv('filename.csv',sep=';',skiprows=3) # Since we wish to skip top 3 lines

With PySpark, I am importing this .csv file as follows -

df=spark.read.csv("filename.csv",sep=';')

This imports the file as -

ID          |Name         |Revenue
Identifier  |Customer Name|Euros
cust_ID     |cust_name    |€
ID132       |XYZ Ltd      |2825
ID150       |ABC Ltd      |1849

This is not correct, because I wish to ignore the first three lines. I can't use the option header=True because it only excludes the first line (a quick demonstration follows below). One could use the comment= option, but that requires the skipped lines to start with a particular character, which is not the case with my file. I could not find anything in the documentation. Is there any way this can be accomplished?
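For reference, here is a minimal sketch (assuming the same filename.csv shown above) of why header=True alone is not enough - it only consumes the very first line, so the two extra label rows still end up as data:

df = spark.read.csv("filename.csv", sep=';', header=True)
df.show()
# +----------+-------------+-------+
# |        ID|         Name|Revenue|
# +----------+-------------+-------+
# |Identifier|Customer Name|  Euros|
# |   cust_ID|    cust_name|      €|
# |     ID132|      XYZ Ltd|   2825|
# |     ID150|      ABC Ltd|   1849|
# +----------+-------------+-------+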

asked by cph_sto
1 Answer

I couldn't find a simpler solution to your problem, but the following will work no matter how the header lines are written:

n = 3  # number of leading lines to skip (for this file: the three header/label rows)

df = (spark.read.csv("filename.csv", sep=';')
      .rdd.zipWithIndex()            # pair each Row with its line index
      .filter(lambda x: x[1] >= n)   # drop the first n lines
      .map(lambda x: x[0])           # keep only the Row, discard the index
      .toDF())
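
Continuing the snippet above as a usage sketch (assuming n = 3 and the sample file from the question): because the real header line is skipped along with the extra rows, the columns come back as _c0, _c1, _c2 and can be renamed afterwards with DataFrame.toDF():

df = df.toDF("ID", "Name", "Revenue")  # assumed column names, since the header was skipped
df.show()
# +-----+-------+-------+
# |   ID|   Name|Revenue|
# +-----+-------+-------+
# |ID132|XYZ Ltd|   2825|
# |ID150|ABC Ltd|   1849|
# +-----+-------+-------+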

answered by mayank agrawal