DataFrame.to_csv
Write object to a comma-separated values (csv) file.
Note
pandas-on-Spark to_csv writes files to a path or URI. Unlike pandas', pandas-on-Spark respects HDFS's properties such as 'fs.default.name'.
pandas-on-Spark writes CSV files into the directory, path, and writes multiple part-… files in that directory when path is specified. This behaviour was inherited from Apache Spark. The number of files can be controlled by num_files.
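For illustration, a minimal sketch of this directory behaviour (the local paths and the example frame below are ours, not from this page; any filesystem URI Spark understands behaves the same way):

>>> import pyspark.pandas as ps
>>> psdf = ps.DataFrame({'x': range(10)})
>>> # Writing to a path creates a directory of part-... files,
>>> # one file per Spark partition of the data.
>>> psdf.to_csv('/tmp/example_csv_dir')
>>> # num_files=1 repartitions the output into a single part file.
>>> psdf.to_csv('/tmp/example_csv_single', num_files=1)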
Parameters
path : str, default None
    File path. If None is provided the result is returned as a string.
sep : str, default ','
    String of length 1. Field delimiter for the output file.
na_rep : str, default ''
    Missing data representation.
columns : sequence, optional
    Columns to write.
header : bool or list of str, default True
    Write out the column names. If a list of strings is given it is assumed to be aliases for the column names.
quotechar : str, default '"'
    String of length 1. Character used to quote fields.
date_format : str, default None
    Format string for datetime objects.
escapechar : str, default None
    String of length 1. Character used to escape sep and quotechar when appropriate.
num_files : int, optional
    The number of files to be written in the path directory when this is a path.
mode : str, default 'w'
    Python write mode. mode can also accept the strings for Spark writing mode, such as 'append', 'overwrite', 'ignore', 'error', 'errorifexists' (a usage sketch follows this parameter list):
    - 'append' (equivalent to 'a'): Append the new data to existing data.
    - 'overwrite' (equivalent to 'w'): Overwrite existing data.
    - 'ignore': Silently ignore this operation if data already exists.
    - 'error' or 'errorifexists': Throw an exception if data already exists.
partition_cols : str or list of str, optional, default None
    Names of partitioning columns.
index_col : str or list of str, optional, default None
    Column names to be used in Spark to represent pandas-on-Spark's index. The index name in pandas-on-Spark is ignored. By default, the index is always lost.
options : keyword arguments for additional options specific to PySpark
    These kwargs are passed through as PySpark CSV options; check the available options in PySpark's API documentation for spark.write.csv(…). They take higher priority and overwrite all other options. This parameter only works when path is specified; see the second sketch below.
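A rough sketch of the Spark writing modes (the path and data are illustrative):

>>> psdf = ps.DataFrame({'x': [1, 2, 3]})
>>> psdf.to_csv('/tmp/example_modes', num_files=1)    # default 'w' overwrites
>>> psdf.to_csv('/tmp/example_modes', mode='append')  # appends new part files
>>> psdf.to_csv('/tmp/example_modes', mode='ignore')  # no-op: data already exists
>>> # mode='error' (or 'errorifexists') would raise instead.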
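And a sketch of forwarding a PySpark CSV option through the trailing keyword arguments; compression is one of spark.write.csv's documented options (the path is again illustrative):

>>> # 'compression' is forwarded to spark.write.csv and gzips each part file.
>>> psdf.to_csv('/tmp/example_gzip', num_files=1, compression='gzip')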
See also
read_csv
DataFrame.to_delta
DataFrame.to_table
DataFrame.to_parquet
DataFrame.to_spark_io
Examples
>>> df = ps.DataFrame(dict(
...    date=list(pd.date_range('2012-1-1 12:00:00', periods=3, freq='M')),
...    country=['KR', 'US', 'JP'],
...    code=[1, 2, 3]), columns=['date', 'country', 'code'])
>>> df.sort_values(by="date")
                   date country  code
... 2012-01-31 12:00:00      KR     1
... 2012-02-29 12:00:00      US     2
... 2012-03-31 12:00:00      JP     3
>>> print(df.to_csv())
date,country,code
2012-01-31 12:00:00,KR,1
2012-02-29 12:00:00,US,2
2012-03-31 12:00:00,JP,3
>>> df.cummax().to_csv(path=r'%s/to_csv/foo.csv' % path, num_files=1)
>>> ps.read_csv(
...     path=r'%s/to_csv/foo.csv' % path
... ).sort_values(by="date")
                   date country  code
... 2012-01-31 12:00:00      KR     1
... 2012-02-29 12:00:00      US     2
... 2012-03-31 12:00:00      US     3
In the case of a Series:
>>> print(df.date.to_csv())
date
2012-01-31 12:00:00
2012-02-29 12:00:00
2012-03-31 12:00:00
>>> df.date.to_csv(path=r'%s/to_csv/foo.csv' % path, num_files=1)
>>> ps.read_csv(
...     path=r'%s/to_csv/foo.csv' % path
... ).sort_values(by="date")
                  date
... 2012-01-31 12:00:00
... 2012-02-29 12:00:00
... 2012-03-31 12:00:00
You can preserve the index in the round trip as shown below.
>>> df.set_index("country", append=True, inplace=True)
>>> df.date.to_csv(
...     path=r'%s/to_csv/bar.csv' % path,
...     num_files=1,
...     index_col=["index1", "index2"])
>>> ps.read_csv(
...     path=r'%s/to_csv/bar.csv' % path, index_col=["index1", "index2"]
... ).sort_values(by="date")
                              date
index1 index2
...    ...    2012-01-31 12:00:00
...    ...    2012-02-29 12:00:00
...    ...    2012-03-31 12:00:00