
A clean and simple library for Continual Learning in PyTorch.


Continuum


A library of data loaders for Continual Learning in PyTorch.

Continual Learning is also known as Lifelong Learning, Incremental Learning, etc.

Read the documentation.

Example:

Install from PyPI:

pip3 install continuum

And run!

from torch.utils.data import DataLoader

from continuum import ClassIncremental, split_train_val
from continuum.datasets import MNIST

clloader = ClassIncremental(
    MNIST("my/data/path", download=True, train=True),
    increment=1,
    initial_increment=5)

print(f"Number of classes: {clloader.nb_classes}.")
print(f"Number of tasks: {clloader.nb_tasks}.")

for task_id, train_dataset in enumerate(clloader):
    train_dataset, val_dataset = split_train_val(train_dataset, val_split=0.1)
    train_loader = DataLoader(train_dataset)
    val_loader = DataLoader(val_dataset)

    for x, y, t in train_loader:
        # x: batch of samples, y: class labels, t: task ids
        # Do your cool stuff here
        ...
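
The test counterpart of the scenario can be built the same way. A minimal sketch, assuming train=False selects MNIST's test split (the train=True flag is shown above):

from continuum import ClassIncremental
from continuum.datasets import MNIST

# Sketch: test-split version of the scenario above,
# assuming train=False selects the test data.
clloader_test = ClassIncremental(
    MNIST("my/data/path", download=True, train=False),
    increment=1,
    initial_increment=5)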

Supported Scenarios

| Name | Acronym | Supported | Scenario |
|------|---------|-----------|----------|
| New Instances | NI | :white_check_mark: | Instances Incremental |
| New Classes | NC | :white_check_mark: | Classes Incremental |
| New Instances & Classes | NIC | :white_check_mark: | Data Incremental |

Supported Datasets:

Note that the task sizes are fully customizable (see the sketch after the table).

| Name | Nb classes | Image Size | Automatic Download | Type |
|------|------------|------------|--------------------|------|
| MNIST | 10 | 28x28x1 | :white_check_mark: | :eyes: |
| Fashion MNIST | 10 | 28x28x1 | :white_check_mark: | :eyes: |
| KMNIST | 10 | 28x28x1 | :white_check_mark: | :eyes: |
| EMNIST | 10 | 28x28x1 | :white_check_mark: | :eyes: |
| QMNIST | 10 | 28x28x1 | :white_check_mark: | :eyes: |
| MNIST Fellowship | 30 | 28x28x1 | :white_check_mark: | :eyes: |
| CIFAR10 | 10 | 32x32x3 | :white_check_mark: | :eyes: |
| CIFAR100 | 100 | 32x32x3 | :white_check_mark: | :eyes: |
| CIFAR Fellowship | 110 | 32x32x3 | :white_check_mark: | :eyes: |
| ImageNet100 | 100 | 224x224x3 | :x: | :eyes: |
| ImageNet1000 | 1000 | 224x224x3 | :x: | :eyes: |
| Permuted MNIST | 10 | 28x28x1 | :white_check_mark: | :eyes: |
| Rotated MNIST | 10 | 28x28x1 | :white_check_mark: | :eyes: |
| CORe50 | 50 | 224x224x3 | :white_check_mark: | :eyes: |
| CORe50-v2-79 | 50 | 224x224x3 | :white_check_mark: | :eyes: |
| CORe50-v2-196 | 50 | 224x224x3 | :white_check_mark: | :eyes: |
| CORe50-v2-391 | 50 | 224x224x3 | :white_check_mark: | :eyes: |
| MultiNLI | 5 | | :white_check_mark: | :book: |
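
As a sketch of customizing task sizes, assuming the increment argument also accepts a per-task list of class counts (the single-integer form is shown in the first example):

from continuum import ClassIncremental
from continuum.datasets import MNIST

# Sketch, assuming `increment` also accepts a per-task list:
# a first task of 5 classes, then tasks of 2, 2 and 1 new classes (10 in total).
clloader = ClassIncremental(
    MNIST("my/data/path", download=True, train=True),
    increment=[5, 2, 2, 1])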

Furthermore, some "meta"-datasets are available:

InMemoryDataset, for in-memory numpy arrays:

# gen_numpy_array() is a placeholder for your own code returning
# numpy arrays of samples and integer labels.
x_train, y_train = gen_numpy_array()

clloader = CLLoader(
    InMemoryDataset(x_train, y_train),
    increment=10,
)

PyTorchDataset, for any dataset defined in torchvision:

clloader = CLLoader(
    PyTorchDataset("/my/data/path", dataset_type=torchvision.datasets.CIFAR10),
    increment=10,
)

ImageFolderDataset, for datasets having a tree-like structure, with one folder per class:

clloader = CLLoader(
    ImageFolderDataset("/my/train/folder", "/my/test/folder"),
    increment=10,
)
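
For reference, such a tree could look like this (hypothetical class and file names):

/my/train/folder/
    cat/
        001.png
        002.png
    dog/
        001.png
        ...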

Fellowship, to combine several continual datasets:

clloader = CLLoader(
    Fellowship("/my/data/path", dataset_list=[CIFAR10, CIFAR100]),
    increment=10,
)

Some datasets cannot be downloaded automatically for miscellaneous reasons. For example, for ImageNet you'll need to download the data from the official page. Then load it as follows:

clloader = CLLoader(
    ImageNet1000("/my/train/folder", "/my/test/folder"),
    increment=10,
)

Some papers use a subset, called ImageNet100 or ImageNetSubset. They are automatically downloaded for you, but you can also provide your own.

Indexing

All our continual loaders are iterable (i.e., you can loop over them with a for) and are also indexable.

This means that clloader[2] returns the third task (indexing starts at 0). Likewise, to evaluate after each task on all tasks seen so far, use clloader_test[:n].
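
Put together, a sketch of evaluating on all seen tasks after training on each one (clloader_test being a test-split scenario built as shown earlier):

from torch.utils.data import DataLoader

for task_id, train_dataset in enumerate(clloader):
    # ... train on train_dataset as in the first example ...

    # Evaluate on every task seen so far.
    seen_test_dataset = clloader_test[:task_id + 1]
    test_loader = DataLoader(seen_test_dataset)
    for x, y, t in test_loader:
        # Compute your metrics here
        ...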

Sample Images

MNIST: sample images from Task 0 to Task 4

FashionMNIST: sample images from Task 0 to Task 4

CIFAR10: sample images from Task 0 to Task 4

MNIST Fellowship (MNIST + FashionMNIST + KMNIST): sample images from Task 0 to Task 2

PermutedMNIST: sample images from Task 0 to Task 4

RotatedMNIST: sample images from Task 0 to Task 4

ImageNet100: sample images from Task 0, Task 1, Task 2, Task 3, ...

Citation

If you find this library useful in your work, please consider citing it:

@misc{douillardlesort2020continuum,
  author={Douillard, Arthur and Lesort, Timothée},
  title={Continuum, Data Loaders for Continual Learning},
  howpublished={https://github.com/Continvvm/continuum},
  year={2020},
  doi={10.5281/zenodo.3759673}
}

Maintainers

This project was started as a joint effort by Arthur Douillard & Timothée Lesort.

Feel free to contribute! If you want to propose new features, please create an issue.

On PyPI

Our project is available on PyPI!

pip3 install continuum

Note that another project, a CI tool, previously used that name; it is now available as continuum_ci.
