# csv-dataset

`CsvDataset` helps to read a CSV file and create descriptive and efficient input pipelines for deep learning. It iterates over the records of the CSV file in a streaming fashion, so the full dataset does not need to fit into memory.
## Install

```sh
$ pip install csv-dataset
```
## Usage

Suppose we have a CSV file whose absolute path is `filepath`:

```
open_time,open,high,low,close,volume
1576771200000,7145.99,7150.0,7141.01,7142.33,21.094283
1576771260000,7142.89,7142.99,7120.7,7125.73,118.279931
1576771320000,7125.76,7134.46,7123.12,7123.12,41.03628
1576771380000,7123.74,7128.06,7117.12,7126.57,39.885367
1576771440000,7127.34,7137.84,7126.71,7134.99,25.138154
1576771500000,7134.99,7144.13,7132.84,7141.64,26.467308
...
```
```python
from csv_dataset import (
    Dataset,
    CsvReader
)

dataset = Dataset(
    CsvReader(
        filepath,
        float,
        # Skip the first column and only pick the following ones
        indexes=[1, 2, 3, 4, 5],
        header=True
    )
).window(3, 1).batch(2)

for element in dataset:
    print(element)
```
The following output shows one printed batch:

```
[[[7145.99, 7150.0, 7141.01, 7142.33, 21.094283]
  [7142.89, 7142.99, 7120.7, 7125.73, 118.279931]
  [7125.76, 7134.46, 7123.12, 7123.12, 41.03628 ]]

 [[7142.89, 7142.99, 7120.7, 7125.73, 118.279931]
  [7125.76, 7134.46, 7123.12, 7123.12, 41.03628 ]
  [7123.74, 7128.06, 7117.12, 7126.57, 39.885367]]]
...
```
## APIs

### Dataset(reader: AbstractReader)

### dataset.window(size, shift=None, stride=1) -> self

Defines the window size, shift and stride.

### dataset.batch(batch) -> self

Defines the batch size.

### dataset.get() -> Optional[np.ndarray]

Gets the data of the next batch.

### dataset.reset() -> None

Resets the reader position.
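To make the `window` and `batch` semantics concrete, here is a minimal pure-Python sketch of how overlapping windows and batches compose. This is an illustration, not the library's implementation; in particular, the assumption that `shift` defaults to `size` when omitted is ours, and the helper names `windows` and `batches` are invented for this example.

```python
def windows(rows, size, shift=None, stride=1):
    """Yield sliding windows over rows.

    size: rows per window; shift: step between window starts
    (assumed here to default to ``size`` when None); stride:
    step between rows inside a window.
    """
    if shift is None:
        shift = size  # assumption, not confirmed by the library docs
    span = (size - 1) * stride + 1  # rows covered by one window
    start = 0
    while start + span <= len(rows):
        yield rows[start:start + span:stride]
        start += shift


def batches(items, batch_size):
    """Group consecutive items into lists of ``batch_size``, dropping the remainder."""
    items = list(items)
    for i in range(0, len(items) - batch_size + 1, batch_size):
        yield items[i:i + batch_size]


rows = [[7145.99], [7142.89], [7125.76], [7123.74], [7127.34], [7134.99]]

# window(3, 1): windows of 3 rows whose starts are shifted by 1 row,
# so consecutive windows overlap, as in the printed batch above
wins = list(windows(rows, size=3, shift=1))

# batch(2): two windows per element, matching the shape printed above
first_batch = next(batches(wins, 2))
```

With `window(3, 1).batch(2)`, each batch holds two overlapping 3-row windows, which matches the `(2, 3, 5)`-shaped arrays in the usage example.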
### CsvReader(filepath, dtype, indexes, **kwargs)

- **filepath** `str` absolute path of the CSV file
- **dtype** `Callable` data type. Only `float` or `int` should be used for this argument.
- **indexes** `List[int]` column indexes to pick from each line of the CSV file
- **kwargs**
  - **header** `bool = False` whether the header line should be skipped
  - **splitter** `str = ','` the column splitter of the CSV file
  - **normalizer** `List[NormalizerProtocol]` list of normalizers to normalize each column of data. A `NormalizerProtocol` should contain two methods, `normalize(float) -> float` to normalize the given datum and `restore(float) -> float` to restore the normalized datum.
  - **max_lines** `int = -1` max number of lines of the CSV file to be read. Defaults to `-1`, which means no limit.
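Since a `NormalizerProtocol` is just any object with `normalize` and `restore` methods, one can be written in a few lines. The sketch below uses a min-max scheme; the class name and scaling choice are illustrative assumptions, not part of csv-dataset.

```python
class MinMaxNormalizer:
    """Scales values from [lo, hi] into [0, 1] and back.

    Illustrative implementation of the two-method protocol described
    above; the name and min-max scheme are not part of csv-dataset.
    """

    def __init__(self, lo: float, hi: float):
        self._lo = lo
        self._hi = hi

    def normalize(self, datum: float) -> float:
        return (datum - self._lo) / (self._hi - self._lo)

    def restore(self, datum: float) -> float:
        return datum * (self._hi - self._lo) + self._lo


# One normalizer per picked column, e.g. for the `open` price column
# (the 7000-7200 range is a made-up bound for this sketch):
open_normalizer = MinMaxNormalizer(7000.0, 7200.0)
```

`restore` inverts `normalize`, so a model's normalized predictions can be mapped back to the original scale.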