I'm diving into pandas and experimenting with reading data from an Excel file, and I wonder what the difference is between using ExcelFile and read_excel. Both seem to work (albeit with slightly different syntax, as could be expected), and the documentation supports both. In both cases the documentation describes the method in essentially the same way: "Read an Excel table into DataFrame" and "Read an Excel table into a pandas DataFrame" (documentation for read_excel, and for ExcelFile).
I'm seeing answers here on SO that use either one without addressing the difference, and a Google search didn't turn up anything that discusses it.
As far as my testing goes, these seem equivalent:
path = "test/dummydata.xlsx" xl = pd.ExcelFile(path) df = xl.parse("dummydata") # sheet name
and
path = "test/dummydata.xlsx" df = pd.io.excel.read_excel(path, sheetname=0)
Other than the fact that the latter saves me a line, is there a difference between the two, and is there a reason to prefer one over the other?
Thanks!
pandas.read_excel() reads an Excel sheet (e.g. an .xlsx file) into a pandas DataFrame. Reading a single sheet returns a DataFrame; reading two or more sheets returns a dict of DataFrames keyed by sheet.
read_excel() can be really slow; even with small datasets (<50,000 rows) it can take minutes. To speed things up, a common workaround is to convert the Excel files from .xlsx to .csv and use pandas.read_csv() instead.
To tell pandas which row of a sheet to start reading from, use header=<0-indexed row>. By default header=0, so the first row supplies the names of the DataFrame columns. To skip rows at the end of a sheet, use skipfooter=<number of rows to skip>.
For the sheet_name argument, strings select sheets by name, integers select zero-indexed sheet positions, and a list of strings/integers requests multiple sheets.
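For illustration, a minimal sketch of those behaviours (the file name, sheet names, and row offsets here are hypothetical; note that recent pandas spells the argument sheet_name, while older versions used sheetname):

import pandas as pd

# A single sheet name returns one DataFrame.
df = pd.read_excel("dummydata.xlsx", sheet_name="Sheet1")

# A list of sheet names (or positions) returns a dict of DataFrames keyed by sheet.
frames = pd.read_excel("dummydata.xlsx", sheet_name=["Sheet1", "Sheet2"])
df2 = frames["Sheet2"]

# header picks the 0-indexed row that supplies column names; skipfooter drops trailing rows.
df3 = pd.read_excel("dummydata.xlsx", sheet_name=0, header=1, skipfooter=2)

# One workaround for slow .xlsx parsing: convert once to CSV, then use read_csv.
pd.read_excel("dummydata.xlsx", sheet_name=0).to_csv("dummydata.csv", index=False)
df4 = pd.read_csv("dummydata.csv")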
There's no particular difference beyond the syntax. Technically, ExcelFile is a class and read_excel is a function. In either case, the actual parsing is handled by the _parse_excel method defined within ExcelFile.
In earlier versions of pandas, read_excel consisted entirely of a single statement (other than comments):
return ExcelFile(path_or_buf,kind=kind).parse(sheetname=sheetname, kind=kind, **kwds)
And ExcelFile.parse didn't do much more than call ExcelFile._parse_excel.
In recent versions of pandas, read_excel ensures that it has an ExcelFile object (and creates one if it doesn't), and then calls the _parse_excel method directly:
if not isinstance(io, ExcelFile):
    io = ExcelFile(io, engine=engine)
return io._parse_excel(...)
and with the updated (and unified) parameter handling, ExcelFile.parse really is just the single statement:
return self._parse_excel(...)
That is why the docs for ExcelFile.parse now say:
Equivalent to read_excel(ExcelFile, ...) See the read_excel docstring for more info on accepted parameters
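A quick way to see that equivalence concretely (reusing the hypothetical path from the question) is to check that both routes produce the same DataFrame:

import pandas as pd

path = "test/dummydata.xlsx"  # hypothetical file from the question
xl = pd.ExcelFile(path)

df_parse = xl.parse(0)            # ExcelFile.parse
df_read = pd.read_excel(xl, 0)    # read_excel handed the ExcelFile object

assert df_parse.equals(df_read)   # both end up in the same internal parser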
As for another answer which claims that ExcelFile.parse is faster in a loop, that really just comes down to whether you are creating the ExcelFile object from scratch every time. You could certainly create your ExcelFile once, outside the loop, and pass that to read_excel inside your loop:
xl = pd.ExcelFile(path)
for name in xl.sheet_names:
    df = pd.read_excel(xl, name)
This would be equivalent to
xl = pd.ExcelFile(path)
for name in xl.sheet_names:
    df = xl.parse(name)
If your loop involves different paths (in other words, you are reading many different workbooks, not just multiple sheets within a single workbook), then you can't get around having to create a brand-new ExcelFile instance for each path anyway, and then once again, both ExcelFile.parse and read_excel will be equivalent (and equally slow).
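A sketch of that many-workbooks case (the paths are made up): each file needs its own ExcelFile under the hood, so neither spelling has an edge:

import pandas as pd

paths = ["reports/jan.xlsx", "reports/feb.xlsx", "reports/mar.xlsx"]  # hypothetical files

# read_excel builds a fresh ExcelFile internally for every path...
frames = [pd.read_excel(p, sheet_name=0) for p in paths]

# ...which is exactly what the explicit version does anyway.
frames = [pd.ExcelFile(p).parse(0) for p in paths]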
ExcelFile.parse is faster.
Suppose you are reading DataFrames in a loop. With ExcelFile.parse you just pass the ExcelFile object (xl in your case), so the workbook is loaded only once and you reuse it to get your DataFrames. With read_excel you pass the path instead of the ExcelFile object, so essentially the workbook is loaded again on every call. That adds up if your workbook has lots of sheets and tens of thousands of rows.
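In other words, the pattern to avoid is re-opening the workbook inside the loop. A rough sketch of both variants (the file name is hypothetical):

import pandas as pd

path = "big_workbook.xlsx"  # hypothetical multi-sheet file

# Slow: read_excel(path, ...) re-opens and re-parses the workbook on every call.
sheet_names = pd.ExcelFile(path).sheet_names
dfs_slow = {name: pd.read_excel(path, sheet_name=name) for name in sheet_names}

# Faster: open the workbook once and reuse the ExcelFile object.
xl = pd.ExcelFile(path)
dfs_fast = {name: xl.parse(name) for name in xl.sheet_names}

(Passing sheet_name=None to read_excel also returns all sheets as a dict in a single call.)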