
Python DataFrame storage formats (excel, csv, pickle, feather, parquet, jay, hdf5): write/read efficiency compared

Posted: 2020-05-22 05:34:39


Today I came across an article.

Reference: Comparing the read efficiency of mainstream storage formats (csv, feather, jay, h5, parquet, pickle)

I then tried it out myself and felt like I had discovered a new continent. Only now did I learn these storage formats exist, and they are far faster than Excel or CSV.

During my last internship, not knowing there were other formats, I processed several DataFrames of a few dozen GB each and saved them all as CSV, only to have to read them back in again later.

My morale collapsed on the spot.

Just shuffling the data around took ages. What a waste of time.

Contents

Reading the CSV file with dask
1. Excel storage format (xlsx)
2. CSV storage format
3. pickle storage format
4. feather storage format
5. parquet storage format
6. jay storage format
7. HDF5 storage format

Without further ado, straight to the results.

The summary conclusion is borrowed straight from the reference article; read it if you want the details.
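For a quick side-by-side of my own runs below (wall time; your numbers will vary with data and hardware):

format   | write time (wall)                 | read time (wall)
excel    | gave up after 10+ minutes         | (not measured)
csv      | 32.49 s                           | 7.57 s
pickle   | 1.20 s                            | 1.25 s
feather  | 0.55 s                            | 0.67 s
parquet  | 2.87 s                            | 1.00 s
jay      | 6.17 s                            | 0.03 s
hdf5     | 2.19 s plain / 1.99 s compressed  | 1.45 s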

One more thing: once your data gets even moderately large, do not save it as Excel. It is really, really slow.

Finally, let me borrow one more line from that author.

Reading the CSV file with dask

import time
import sys

import dask
import dask.dataframe as dd
from dask.diagnostics import ProgressBar
from numba import jit
import numpy as np
import pandas as pd

# ----------------------------------------------------------------------------
switchDict = {0: 'TEST', 1: 'ALL'}
# data-volume switch: 0 = test run (read part of the data), 1 = full data
status = switchDict[1]

# note: numba cannot compile pandas/dask calls, so this function needs object
# mode and the decorator adds no real speedup here
@jit(forceobj=True)
def importData(fileName):
    if status == 'TEST':
        df = dd.read_csv(fileName, header=None, blocksize="100MB").head(17000)
    else:
        df = dd.read_csv(fileName, blocksize="64MB").compute()
    df.index = pd.RangeIndex(start=0, stop=len(df))
    return df

# read the positive samples
t0 = time.time()
t1 = time.perf_counter()
with ProgressBar():
    data = importData('train.csv')
t2 = time.time()
t3 = time.perf_counter()
print("cpu time:", t3 - t1)
print("wall time:", t2 - t0)
'''
cpu time: 3.421277699999337
wall time: 3.421303749084473
'''
print(f"Current DataFrame memory footprint: {sys.getsizeof(data)/1024/1024:.2f} MB")
data.shape
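A side note on that last print: sys.getsizeof only works here because pandas implements __sizeof__; the more idiomatic way to measure a DataFrame's footprint is memory_usage:

# deep=True also counts the contents of object (string) columns
mem_mb = data.memory_usage(deep=True).sum() / 1024 / 1024
print(f"DataFrame memory footprint: {mem_mb:.2f} MB")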

1. Excel storage format (xlsx)

Saving to an Excel sheet is slow beyond belief. A total letdown.

import time

t0 = time.time()
t1 = time.perf_counter()
data.to_excel("data.xlsx")  # requires an Excel engine such as openpyxl
t2 = time.time()
t3 = time.perf_counter()
print("cpu time:", t3 - t1)
print("wall time:", t2 - t0)

After waiting over ten minutes, I gave up.

import time

t0 = time.time()
t1 = time.perf_counter()
data_excel = pd.read_excel("./data.xlsx")
t2 = time.time()
t3 = time.perf_counter()
print("cpu time:", t3 - t1)
print("wall time:", t2 - t0)

2. CSV storage format

import time

t0 = time.time()
t1 = time.perf_counter()
data.to_csv("data.csv")
t2 = time.time()
t3 = time.perf_counter()
print("cpu time:", t3 - t1)
print("wall time:", t2 - t0)
'''
cpu time: 32.49002720000135
wall time: 32.48996901512146
'''

import time

t0 = time.time()
t1 = time.perf_counter()
data_csv = pd.read_csv("./data.csv")
t2 = time.time()
t3 = time.perf_counter()
print("cpu time:", t3 - t1)
print("wall time:", t2 - t0)
'''
cpu time: 7.5742819999995845
wall time: 7.574833154678345
'''
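One gotcha worth knowing: to_csv writes the index as an unnamed first column by default, so reading the file back adds an extra "Unnamed: 0" column. Either of these avoids it:

# treat the first column as the index when reading back
data_csv = pd.read_csv("./data.csv", index_col=0)
# or skip writing the index in the first place:
# data.to_csv("data.csv", index=False)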

3. pickle storage format

Pickle: used for serializing and deserializing Python object structures.

Search online for the details.
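As a minimal sketch of what pickle itself does, independent of pandas (the file name obj.pkl is just for illustration):

import pickle

obj = {"name": "train", "rows": [1, 2, 3]}
with open("obj.pkl", "wb") as f:
    pickle.dump(obj, f)        # serialize the object to bytes on disk
with open("obj.pkl", "rb") as f:
    restored = pickle.load(f)  # deserialize it back into a dict
assert restored == obj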

import time

t0 = time.time()
t1 = time.perf_counter()
# note: pandas infers compression from the file extension, and ".gzip" is not
# a recognized suffix (".gz" is), so this file is actually written uncompressed
data.to_pickle("data.pkl.gzip")
t2 = time.time()
t3 = time.perf_counter()
print("cpu time:", t3 - t1)
print("wall time:", t2 - t0)
'''
cpu time: 1.1933384000002625
wall time: 1.1980044841766357
'''

import time

t0 = time.time()
t1 = time.perf_counter()
data_pickle = pd.read_pickle("./data.pkl.gzip")
t2 = time.time()
t3 = time.perf_counter()
print("cpu time:", t3 - t1)
print("wall time:", t2 - t0)
'''
cpu time: 1.246990000000551
wall time: 1.246736764907837
'''

4. feather storage format

Feather: a fast, lightweight storage format.

It gets recommended a lot online, e.g.:

"Goodbye CSV, 150x faster!"

Search online for the details.

import time

t0 = time.time()
t1 = time.perf_counter()
data.to_feather("data.feather")  # requires pyarrow (pip install pyarrow)
t2 = time.time()
t3 = time.perf_counter()
print("cpu time:", t3 - t1)
print("wall time:", t2 - t0)
'''
cpu time: 0.5462657999996736
wall time: 0.5466225147247314
'''

t0 = time.time()
t1 = time.perf_counter()
data_feather = pd.read_feather("./data.feather")
t2 = time.time()
t3 = time.perf_counter()
print("cpu time:", t3 - t1)
print("wall time:", t2 - t0)
'''
cpu time: 0.6685380999997506
wall time: 0.6682815551757812
'''

5. parquet storage format

Parquet: the columnar storage format from the Apache Hadoop ecosystem.

Search online for the details.

import time

t0 = time.time()
t1 = time.perf_counter()
data.to_parquet("data.parquet")  # requires pyarrow or fastparquet
t2 = time.time()
t3 = time.perf_counter()
print("cpu time:", t3 - t1)
print("wall time:", t2 - t0)
'''
cpu time: 2.874607599999763
wall time: 2.874359369277954
'''

t0 = time.time()
t1 = time.perf_counter()
data_parquet = pd.read_parquet("./data.parquet")
t2 = time.time()
t3 = time.perf_counter()
print("cpu time:", t3 - t1)
print("wall time:", t2 - t0)
'''
cpu time: 0.9940449000000153
wall time: 0.9959096908569336
'''
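Because parquet is columnar, a reader can pull in just the columns it needs instead of the whole file. A minimal sketch, where "col_a" is a placeholder for one of your real column names:

# read a single column; "col_a" stands in for an actual column name
subset = pd.read_parquet("./data.parquet", columns=["col_a"])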

6. jay storage format

Install the datatable package:

pip install datatable

import datatable as dt

t0 = time.time()
t1 = time.perf_counter()
dt.Frame(data).to_jay("data.jay")  # wrap the pandas DataFrame in a datatable Frame, then save
t2 = time.time()
t3 = time.perf_counter()
print("cpu time:", t3 - t1)
print("wall time:", t2 - t0)
'''
cpu time: 6.169269200000599
wall time: 6.168536901473999
'''

When I inspect the contents after reading the file back, the object is a datatable Frame, not a pandas DataFrame.

t0 = time.time()
t1 = time.perf_counter()
data_jay = dt.fread("./data.jay")  # .jay files are memory-mapped, so this returns almost instantly
t2 = time.time()
t3 = time.perf_counter()
print("cpu time:", t3 - t1)
print("wall time:", t2 - t0)
'''
cpu time: 0.03480849999959901
wall time: 0.034420013427734375
'''
data_jay.shape
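Since fread returns a datatable Frame, convert it back if you need pandas operations; note the conversion takes extra time that the fread figure above does not include:

data_df = data_jay.to_pandas()  # materialize the memory-mapped Frame as a pandas DataFrame
type(data_df)                   # pandas.core.frame.DataFrame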

7. HDF5 storage format

Plain (uncompressed) storage

import time

t0 = time.time()
t1 = time.perf_counter()
# plain-format storage (needs PyTables: pip install tables)
h5 = pd.HDFStore('./data.h5', 'w')
h5['data'] = data
h5.close()
t2 = time.time()
t3 = time.perf_counter()
print("cpu time:", t3 - t1)
print("wall time:", t2 - t0)
'''
cpu time: 2.1860209000005852
wall time: 2.186391592025757
'''

Compressed storage

import time

t0 = time.time()
t1 = time.perf_counter()
# compressed storage
h5 = pd.HDFStore('./data.h5', 'w', complevel=4, complib='blosc')
h5['data'] = data
h5.close()
t2 = time.time()
t3 = time.perf_counter()
print("cpu time:", t3 - t1)
print("wall time:", t2 - t0)
'''
cpu time: 1.9893786000002365
wall time: 1.9896411895751953
'''

t0 = time.time()
t1 = time.perf_counter()
data_hdf5 = pd.read_hdf('./data.h5', key='data')
t2 = time.time()
t3 = time.perf_counter()
print("cpu time:", t3 - t1)
print("wall time:", t2 - t0)
'''
cpu time: 1.4497185000000172
wall time: 1.4497275352478027
'''
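The same round trip can also be written with the higher-level pandas wrappers, which should be equivalent to the HDFStore calls above:

# one-liner equivalents of the HDFStore usage above
data.to_hdf('./data.h5', key='data', mode='w', complevel=4, complib='blosc')
data_hdf5 = pd.read_hdf('./data.h5', key='data')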
