pyspark.pandas.groupby.GroupBy.cumprod

GroupBy.cumprod() → FrameLike

Cumulative product for each group.

Returns
    Series or DataFrame
See also
Series.cumprod
DataFrame.cumprod
Examples
>>> df = ps.DataFrame(
...     [[1, None, 4], [1, 0.1, 3], [1, 20.0, 2], [4, 10.0, 1]],
...     columns=list('ABC'))
>>> df
   A     B  C
0  1   NaN  4
1  1   0.1  3
2  1  20.0  2
3  4  10.0  1
By default, iterates over rows and computes the cumulative product in each column.
>>> df.groupby("A").cumprod().sort_index()
      B   C
0   NaN   4
1   0.1  12
2   2.0  24
3  10.0   1
It works the same way with a Series:
>>> df.B.groupby(df.A).cumprod().sort_index()
0     NaN
1     0.1
2     2.0
3    10.0
Name: B, dtype: float64
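For comparison, a minimal sketch of the same computation in plain pandas (assuming pandas is installed; plain pandas preserves row order, so the sort_index() call used above for the distributed case is not needed):

```python
import math

import pandas as pd

# Same data as the pandas-on-Spark example above.
df = pd.DataFrame(
    [[1, None, 4], [1, 0.1, 3], [1, 20.0, 2], [4, 10.0, 1]],
    columns=list("ABC"))

# Cumulative product within each group keyed by column A;
# the grouping column itself is excluded from the result.
result = df.groupby("A").cumprod()
print(result)

# NaN propagates through the cumulative product of its own row
# but is skipped for subsequent rows in the group (skipna default).
assert result["C"].tolist() == [4, 12, 24, 1]
assert math.isnan(result["B"].iloc[0])
```

The values match the distributed example: within group A == 1, column B accumulates NaN, 0.1, then 0.1 * 20.0 == 2.0, while group A == 4 starts a fresh product at 10.0.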