小言_互联网's Blog

Data Science Packages: pandas Basics (Handling Missing Data, Statistics, Merging, Grouping)


I. Handling Missing Data

pandas uses numpy.nan (np.nan) to represent missing data; NaN values are excluded from computations.

import pandas as pd
import numpy as np
dates = pd.date_range('20160301', periods=6)
df = pd.DataFrame(data=np.random.randn(6, 4), index=dates, columns=list('ABCD'))
df
>>>
                   A         B         C         D
2016-03-01 -0.768789  1.347539  0.715367  0.581275
2016-03-02  0.163001 -0.509262 -0.615675 -0.220110
2016-03-03  0.815005 -0.343642  2.097537  1.706366
2016-03-04  0.755148  0.022449 -0.033679 -0.411535
2016-03-05  1.361040 -0.813593 -1.230564 -0.957386
2016-03-06  0.277162 -0.650544 -0.152688  1.070919
  • Add a new column (reindexing introduces column E, filled with NaN)
df1 = df.reindex(index=dates[0:4], columns=list(df.columns) + ['E'])
df1
>>>
                   A         B         C         D   E
2016-03-01 -0.768789  1.347539  0.715367  0.581275 NaN
2016-03-02  0.163001 -0.509262 -0.615675 -0.220110 NaN
2016-03-03  0.815005 -0.343642  2.097537  1.706366 NaN
2016-03-04  0.755148  0.022449 -0.033679 -0.411535 NaN

1. Assigning values to missing entries

df1.loc[dates[1:3], 'E'] = 1
df1
>>>
                   A         B         C         D    E
2016-03-01 -0.768789  1.347539  0.715367  0.581275  NaN
2016-03-02  0.163001 -0.509262 -0.615675 -0.220110  1.0
2016-03-03  0.815005 -0.343642  2.097537  1.706366  1.0
2016-03-04  0.755148  0.022449 -0.033679 -0.411535  NaN

The same assignment can also be written with a label slice:

df1.loc['20160302':'20160303', 'E'] = 5
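Note that, unlike positional slicing, a label slice in .loc includes both endpoints. A small self-contained sketch (the frame here is a toy stand-in for df1):

```python
import pandas as pd
import numpy as np

dates = pd.date_range('20160301', periods=4)
df1 = pd.DataFrame({'E': [np.nan] * 4}, index=dates)

# Label-based slicing with .loc includes BOTH endpoints,
# so rows 2016-03-02 and 2016-03-03 are both set to 5.
df1.loc['20160302':'20160303', 'E'] = 5
print(df1['E'].tolist())  # [nan, 5.0, 5.0, nan]
```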

2. Dropping rows and columns with missing data

dropna returns a new object; it does not change df1's data or structure:

df1.dropna(how='any')
>>>
                   A         B         C         D    E
2016-03-02  0.163001 -0.509262 -0.615675 -0.220110  1.0
2016-03-03  0.815005 -0.343642  2.097537  1.706366  1.0

The axis parameter instead drops every column that contains missing data:

df1 = df1.dropna(axis=1)

The subset parameter drops every row whose value in column E is missing:

df1 = df1.dropna(subset=["E"])
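Besides how and subset, dropna also accepts a thresh parameter (keep only rows with at least that many non-NaN values). A small sketch on a toy frame:

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({'A': [1.0, np.nan, 3.0],
                   'B': [np.nan, np.nan, 6.0],
                   'C': [7.0, 8.0, 9.0]})

print(df.dropna(how='any'))       # keeps only the last row (no NaN at all)
print(df.dropna(how='all'))       # keeps all rows (no row is entirely NaN)
print(df.dropna(thresh=2))        # keeps rows with at least 2 non-NaN values
print(df.dropna(subset=['A']))    # keeps rows where column A is non-NaN
```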

3. Filling missing data

fillna likewise returns a new object without changing df1's data or structure:

df1.fillna(value=5)
>>>
                   A         B         C         D    E
2016-03-01 -0.768789  1.347539  0.715367  0.581275  5.0
2016-03-02  0.163001 -0.509262 -0.615675 -0.220110  1.0
2016-03-03  0.815005 -0.343642  2.097537  1.706366  1.0
2016-03-04  0.755148  0.022449 -0.033679 -0.411535  5.0
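fillna can also take a per-column dictionary of fill values, and ffill forward-fills from the last valid observation. A quick sketch on a toy frame:

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({'A': [1.0, np.nan, 3.0],
                   'B': [np.nan, 5.0, np.nan]})

# Fill with a different value per column.
print(df.fillna(value={'A': 0, 'B': -1}))

# Forward-fill: propagate the last valid observation downward
# (a leading NaN stays NaN, since there is nothing to propagate).
print(df.ffill())
```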

4. Testing for NaN values

NaN (Not a Number) is a special floating-point value meaning "not a number". It is neither positive infinity nor negative infinity.
isnull returns True wherever a NaN is present:

pd.isnull(df1)
>>>
                A      B      C      D      E
2016-03-01  False  False  False  False   True
2016-03-02  False  False  False  False  False
2016-03-03  False  False  False  False  False
2016-03-04  False  False  False  False   True
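Chaining any() or sum() onto the boolean mask answers "is there any NaN at all?" and "how many per column?". For example:

```python
import pandas as pd
import numpy as np

df1 = pd.DataFrame({'A': [1.0, 2.0], 'E': [np.nan, 1.0]})

# Per column: does the column contain any NaN?
print(df1.isnull().any())         # A: False, E: True

# Whole frame: is there any NaN anywhere?
print(df1.isnull().values.any())  # True

# Count NaNs per column.
print(df1.isnull().sum())         # A: 0, E: 1
```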

II. Statistics

numpy.nan does not participate in computations.

1. Mean, sum, and cumulative sum

df1.mean()               # column means (the default)
df1.mean(axis=1)         # row means
df.sum()                 # column sums
df.sum(axis='columns')   # row sums
df.cumsum()              # cumulative sum along the axis, keeping every intermediate result
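A minimal example of NaN being skipped in a reduction; passing skipna=False forces it to propagate instead:

```python
import pandas as pd
import numpy as np

s = pd.Series([1.0, np.nan, 3.0])

# NaN is skipped, so the mean is (1 + 3) / 2 = 2.0, not NaN.
print(s.mean())              # 2.0

# With skipna=False, NaN propagates into the result.
print(s.mean(skipna=False))  # nan
```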

2. The shift function

This function shifts the data in a frame. With freq=None, the index stays fixed and the values move along the chosen axis (up/down across rows, or left/right across columns), with NaN filling the vacated positions. If the index is a time series, the freq parameter can be set instead: the index is then shifted by an offset of periods * freq, while the values stay attached to their original positions and none are lost.

# shift(0)
s = pd.Series([1,3,5,np.nan,6,8], index=dates).shift(0)
s
>>>
2016-03-01    1.0
2016-03-02    3.0
2016-03-03    5.0
2016-03-04    NaN
2016-03-05    6.0
2016-03-06    8.0
Freq: D, dtype: float64

#shift(1)
s = pd.Series([1,3,5,np.nan,6,8], index=dates).shift(1)
s
>>>
2016-03-01    NaN
2016-03-02    1.0
2016-03-03    3.0
2016-03-04    5.0
2016-03-05    NaN
2016-03-06    6.0
Freq: D, dtype: float64
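The freq behavior described above can be sketched as follows: the index moves, and no values are lost to NaN:

```python
import pandas as pd

dates = pd.date_range('20160301', periods=3, freq='D')
s = pd.Series([1, 2, 3], index=dates)

# With freq set, the INDEX shifts (by periods * freq) while the values
# stay aligned with their original positions.
shifted = s.shift(1, freq='D')
print(shifted.index[0])  # first index is now 2016-03-02
print(shifted.tolist())  # [1, 2, 3]: nothing was replaced by NaN
```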

3. The sub function

>>> df
                   A         B         C         D
2016-03-01 -0.768789  1.347539  0.715367  0.581275
2016-03-02  0.163001 -0.509262 -0.615675 -0.220110
2016-03-03  0.815005 -0.343642  2.097537  1.706366
2016-03-04  0.755148  0.022449 -0.033679 -0.411535
2016-03-05  1.361040 -0.813593 -1.230564 -0.957386
2016-03-06  0.277162 -0.650544 -0.152688  1.070919
>>> s
2016-03-01    NaN
2016-03-02    1.0
2016-03-03    3.0
2016-03-04    5.0
2016-03-05    NaN
2016-03-06    6.0
Freq: D, dtype: float64
  • sub performs element-wise subtraction; with axis='index', each row of the 2-D df subtracts the matching value of the 1-D s
>>> df.sub(s,axis='index')
                   A         B         C         D
2016-03-01       NaN       NaN       NaN       NaN
2016-03-02 -0.836999 -1.509262 -1.615675 -1.220110
2016-03-03 -2.184995 -3.343642 -0.902463 -1.293634
2016-03-04 -4.244852 -4.977551 -5.033679 -5.411535
2016-03-05       NaN       NaN       NaN       NaN
2016-03-06 -5.722838 -6.650544 -6.152688 -4.929081
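The axis argument controls which labels s is aligned with. A deterministic sketch with small fixed data:

```python
import pandas as pd

df = pd.DataFrame({'A': [10, 20], 'B': [30, 40]}, index=['x', 'y'])

# axis='index' aligns s with the ROW labels: each row subtracts its value.
row_vals = pd.Series([1, 2], index=['x', 'y'])
print(df.sub(row_vals, axis='index'))
#     A   B
# x   9  29
# y  18  38

# The default (axis='columns') aligns with the COLUMN labels instead.
col_vals = pd.Series([1, 2], index=['A', 'B'])
print(df.sub(col_vals))
```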

4. The apply function

  • Use apply to run cumsum over each column
df.apply(np.cumsum)
>>>
                   A         B         C         D
2016-03-01 -0.768789  1.347539  0.715367  0.581275
2016-03-02 -0.605788  0.838277  0.099692  0.361166
2016-03-03  0.209217  0.494635  2.197229  2.067531
2016-03-04  0.964365  0.517084  2.163549  1.655996
2016-03-05  2.325405 -0.296509  0.932986  0.698610
2016-03-06  2.602567 -0.947053  0.780298  1.769529
  • Use apply with a custom function to return each column's max-min difference
df.apply(lambda x: x.max() - x.min())
>>>
A    2.129829
B    2.161131
C    3.328100
D    2.663752
dtype: float64
  • Return each row's max-min difference
df.apply(lambda x: x.max() - x.min(), axis=1)

5. Value counts and mode

s = pd.Series(np.random.randint(0, 7, size=10))
s
>>>
0    4
1    6
2    6
3    0
4    5
5    4
6    6
7    4
8    1
9    1

Counting: how many times each value appears

>>> s.value_counts()
6    3
4    3
1    2
5    1
0    1
dtype: int64

Mode: 4 and 6 appear most often

>>> s.mode()
0    4
1    6
dtype: int64
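The series above is random, so here is a fixed series that reproduces the counts shown; value_counts(normalize=True) additionally gives proportions:

```python
import pandas as pd

s = pd.Series([4, 6, 6, 0, 5, 4, 6, 4, 1, 1])

counts = s.value_counts()
print(counts.head(2))     # 4 and 6 each appear 3 times

# mode() returns ALL values tied for the highest count, sorted ascending.
print(s.mode().tolist())  # [4, 6]

# normalize=True gives proportions instead of raw counts.
print(s.value_counts(normalize=True)[4])  # 3 out of 10 values are 4
```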

III. Merging Data

1. The concat function

>>> df = pd.DataFrame(np.random.randn(10, 4), columns=list('ABCD'))
>>> df
          A         B         C         D
0 -0.650238  1.293171  1.091541  0.178952
1 -0.164027 -0.053264  0.490023 -0.992589
2  0.782788  0.254571 -1.867243  0.529259
3 -0.429579  0.562526 -0.436258  2.364229
4  2.234823 -1.420623 -0.809523  0.165442
5 -0.277173 -0.151315 -0.132357 -0.992252
6  0.261993  0.018131  1.355897  0.151521
7 -0.696561 -2.785051 -0.771156  0.278054
8  0.995268  0.083372 -0.498132  0.946659
9  0.675779 -0.853811 -0.674370 -1.470860

>>> df.iloc[:3]
          A         B         C         D
0 -0.650238  1.293171  1.091541  0.178952
1 -0.164027 -0.053264  0.490023 -0.992589
2  0.782788  0.254571 -1.867243  0.529259

>>> df.iloc[3:7]
          A         B         C         D
3 -0.429579  0.562526 -0.436258  2.364229
4  2.234823 -1.420623 -0.809523  0.165442
5 -0.277173 -0.151315 -0.132357 -0.992252
6  0.261993  0.018131  1.355897  0.151521

>>> df.iloc[7:]
          A         B         C         D
7 -0.696561 -2.785051 -0.771156  0.278054
8  0.995268  0.083372 -0.498132  0.946659
9  0.675779 -0.853811 -0.674370 -1.470860
  • concat stitches the pieces back together
df1 = pd.concat([df.iloc[:3], df.iloc[3:7], df.iloc[7:]])
df1
>>>
          A         B         C         D
0 -0.650238  1.293171  1.091541  0.178952
1 -0.164027 -0.053264  0.490023 -0.992589
2  0.782788  0.254571 -1.867243  0.529259
3 -0.429579  0.562526 -0.436258  2.364229
4  2.234823 -1.420623 -0.809523  0.165442
5 -0.277173 -0.151315 -0.132357 -0.992252
6  0.261993  0.018131  1.355897  0.151521
7 -0.696561 -2.785051 -0.771156  0.278054
8  0.995268  0.083372 -0.498132  0.946659
9  0.675779 -0.853811 -0.674370 -1.470860
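concat can also stack side by side with axis=1, or label the pieces with keys to get a hierarchical index. For example:

```python
import pandas as pd

df_a = pd.DataFrame({'A': [1, 2]})
df_b = pd.DataFrame({'B': [3, 4]})

# axis=1 concatenates side by side (columns), aligning on the index.
print(pd.concat([df_a, df_b], axis=1))

# keys labels each piece, producing a hierarchical row index,
# so each original piece can be recovered with .loc.
stacked = pd.concat([df_a, df_a], keys=['first', 'second'])
print(stacked.loc['first'])
```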

2. SQL-style joins (the merge function)

>>> left = pd.DataFrame({'key': ['foo', 'foo'], 'lval': [1, 2]})
>>> right = pd.DataFrame({'key': ['foo', 'foo'], 'rval': [4, 5]})
>>> left
   key  lval
0  foo     1
1  foo     2
>>> right
   key  rval
0  foo     4
1  foo     5

Merging with merge; because the key 'foo' appears twice on each side, the inner join produces every pairwise combination:

# SELECT * FROM left INNER JOIN right ON left.key = right.key;
pd.merge(left, right, on='key')
>>>
   key  lval  rval
0  foo     1     4
1  foo     1     5
2  foo     2     4
3  foo     2     5
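The how parameter selects the join type, just as in SQL. A sketch with keys that only partially overlap:

```python
import pandas as pd

left = pd.DataFrame({'key': ['foo', 'bar'], 'lval': [1, 2]})
right = pd.DataFrame({'key': ['foo', 'baz'], 'rval': [4, 5]})

# Inner join (default): only keys present on both sides survive.
print(pd.merge(left, right, on='key'))               # just 'foo'

# Outer join: union of keys; the missing side is filled with NaN.
print(pd.merge(left, right, on='key', how='outer'))  # foo, bar, baz

# Left join: keep every row of `left`.
print(pd.merge(left, right, on='key', how='left'))   # foo, bar
```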

3. The append function

append is a method of both Series and DataFrame; by default it concatenates along axis=0, i.e. appends rows:

>>> s = pd.Series(np.random.randint(1, 5, size=4), index=list('ABCD'))
>>> df.append(s, ignore_index=True)
           A         B         C         D
0  -0.650238  1.293171  1.091541  0.178952
1  -0.164027 -0.053264  0.490023 -0.992589
2   0.782788  0.254571 -1.867243  0.529259
3  -0.429579  0.562526 -0.436258  2.364229
4   2.234823 -1.420623 -0.809523  0.165442
5  -0.277173 -0.151315 -0.132357 -0.992252
6   0.261993  0.018131  1.355897  0.151521
7  -0.696561 -2.785051 -0.771156  0.278054
8   0.995268  0.083372 -0.498132  0.946659
9   0.675779 -0.853811 -0.674370 -1.470860
10  2.000000  2.000000  4.000000  2.000000
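Note that DataFrame.append was deprecated in pandas 1.4 and removed in 2.0; the equivalent row append in modern pandas goes through concat:

```python
import pandas as pd
import numpy as np

df = pd.DataFrame(np.zeros((2, 4)), columns=list('ABCD'))
s = pd.Series([2, 2, 4, 2], index=list('ABCD'))

# Equivalent of df.append(s, ignore_index=True):
# to_frame().T turns the Series into a one-row frame (its index
# becomes the columns), which concat then appends along axis=0.
out = pd.concat([df, s.to_frame().T], ignore_index=True)
print(out)
```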

IV. Grouping and Aggregation

  • Split the data into groups
  • Apply a function to each group, producing one result per group
  • Combine the per-group results into the final output
df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
                          'foo', 'bar', 'foo', 'foo'],
                   'B' : ['one', 'one', 'two', 'three',
                           'two', 'two', 'one', 'three'],
                   'C' : np.random.randn(8),
                   'D' : np.random.randn(8)})
df
>>>
     A      B         C         D
0  foo    one -0.907531  0.245216
1  bar    one  0.575741  0.917884
2  foo    two -0.725860  1.146291
3  bar  three -0.421367 -0.885779
4  foo    two -0.363707 -1.905409
5  bar    two -1.307323  1.316524
6  foo    one  0.436804  1.164812
7  foo  three -1.146570  2.534074
  • Group by 'A' and sum the remaining columns per group
>>> df.groupby('A').sum()
            C         D
A
bar -1.152949  1.348629
foo -2.706864  3.184985
  • Group by both 'A' and 'B', then sum the remaining columns per group
# df.groupby(['B', 'A']).sum()
df.groupby(['A', 'B']).sum()
>>>
                  C         D
A   B
bar one    0.575741  0.917884
    three -0.421367 -0.885779
    two   -1.307323  1.316524
foo one   -0.470727  1.410028
    three -1.146570  2.534074
    two   -1.089567 -0.759117
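Beyond sum, groupby can run several aggregations at once via agg. A small fixed-data sketch:

```python
import pandas as pd

df = pd.DataFrame({'A': ['foo', 'bar', 'foo', 'bar'],
                   'C': [1.0, 2.0, 3.0, 4.0]})

# agg applies several reductions per group in one pass.
print(df.groupby('A')['C'].agg(['sum', 'mean', 'count']))

# size() counts rows per group (unlike count(), it includes NaN rows).
print(df.groupby('A').size())
```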

Reposted from: https://blog.csdn.net/muguangjingkong/article/details/105890555