I have a pandas.DataFrame with too many columns.
I call:
In [2]: X.dtypes
Out[2]: VAR_0001     object
        VAR_0002      int64
                 ...   
        VAR_5000      int64
        VAR_5001      int64
I can't tell what data types the columns between VAR_0002 and VAR_5000 have. They could be int64, int8, float64, and so on. I saw the native dtypes of pandas.DataFrame in this blog, but I think that information is wrong. How can I get a summary of the dtypes?
And another question. When I work on a PC (Windows) and call this:
In [3]: X.dtypes[X.dtypes.map(lambda x: x=='bool')]
I get several columns with this dtype. But when I run the same command on a Mac, I get nothing. WAT?
To answer your first question, do the following:
df.dtypes.value_counts()
Example:
In [4]:
df = pd.DataFrame({'a':[0], 'b':['asds'], 'c':[0]})
df.dtypes
Out[4]:
a     int64
b    object
c     int64
dtype: object
In [5]:
df.dtypes.value_counts()
Out[5]:
int64     2
object    1
dtype: int64
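For the second question, a more robust way to pick out columns of a given dtype (rather than comparing dtype strings, which can be brittle) is `DataFrame.select_dtypes`. A minimal sketch, using a small hypothetical frame:

```python
import pandas as pd

# Hypothetical frame with mixed dtypes, including one bool column
df = pd.DataFrame({'a': [0], 'b': ['asds'], 'c': [True]})

# select_dtypes filters columns by dtype; include/exclude accept dtype names
bool_cols = df.select_dtypes(include=['bool'])
print(bool_cols.columns.tolist())  # ['c']
```

This avoids string comparison against `X.dtypes` entirely, so it behaves the same regardless of platform.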