
Finetuner lets you tune the weights of any deep neural network for better embeddings on search tasks.

Project description

Finetuner helps you create experiments to improve embeddings on search tasks. It accompanies you in delivering the last mile of performance tuning for neural search applications.

Fine-tuning embeddings on domain-specific data for better performance on neural search tasks.

Fine-tuning deep neural networks (DNNs) significantly improves performance on domain-specific neural search tasks. However, fine-tuning for neural search is not trivial: it requires a combination of expertise in machine learning and information retrieval. Finetuner makes fine-tuning simple and fast by handling all of the related complexity and infrastructure in the cloud. With Finetuner, you can easily make models more performant and production-ready.
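In neural search, documents and queries are embedded as vectors and ranked by a similarity measure, most commonly cosine similarity; better embeddings mean the right documents rank higher. A minimal, illustrative sketch of this ranking step (plain Python, not Finetuner code):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors:
    # 1.0 means identical direction, 0.0 means orthogonal.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings: the query points mostly along the first axis,
# so doc_a (aligned with that axis) should rank first.
query = [0.9, 0.1, 0.0]
docs = {'doc_a': [1.0, 0.0, 0.0], 'doc_b': [0.0, 1.0, 0.0]}

ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)
print(ranked)  # -> ['doc_a', 'doc_b']
```

Fine-tuning shifts the embedding space so that semantically matching pairs score higher under exactly this kind of comparison.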

📈 Performance boost: Finetuner significantly increases the performance of pretrained models on domain-specific neural search applications.

🔱 Simple yet powerful: Interacting with Finetuner is simple and seamless, yet it supports rich features such as a choice of loss functions (e.g. siamese/triplet loss), metric learning, layer pruning, weight freezing, dimensionality reduction, and much more.
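For intuition, the triplet loss mentioned above pulls an anchor embedding toward a positive (matching) example and pushes it away from a negative one, up to a margin. A minimal sketch of the standard formula (not Finetuner's internal implementation):

```python
import math

def euclidean(a, b):
    # Euclidean distance between two embedding vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Standard triplet loss: max(d(a, p) - d(a, n) + margin, 0).
    # Zero loss once the negative is at least `margin` farther
    # from the anchor than the positive.
    return max(euclidean(anchor, positive) - euclidean(anchor, negative) + margin, 0.0)

# The anchor is already close to the positive and far from the
# negative, so the margin is satisfied and the loss is zero.
print(triplet_loss([0.0, 0.0], [0.1, 0.0], [3.0, 4.0]))  # -> 0.0
```

Minimizing this loss over many (anchor, positive, negative) triplets is what reshapes the embedding space during fine-tuning.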

Fine-tune in the cloud: Finetuner runs your fine-tuning jobs in the cloud. You never have to worry about provisioning (cloud) resources! Finetuner handles all related complexity and infrastructure.

What is the purpose of Finetuner?

Finetuner enables performance gains on domain specific neural search tasks by fine-tuning models in the cloud. We have conducted experiments on various neural search tasks in different domains to illustrate these performance improvements.

Finetuner also aims to make fine-tuning simple and fast. When interacting with Finetuner, the API takes care of all your fine-tuning jobs in the cloud. This only requires a few lines of code from you, as demonstrated below.

How does it work?

Install

Requires Python 3.7+ on Linux or macOS.

pip install -U finetuner-client

Fine-tuning ResNet50 on Totally Looks Like dataset

import finetuner
from finetuner.callback import EvaluationCallback

finetuner.login()

finetuner.create_experiment(name='tll-experiment')

run = finetuner.fit(
    model='resnet50',
    train_data='resnet-tll-train-data',
    callbacks=[EvaluationCallback(query_data='resnet-tll-eval-data')],
)

print(run.status())
print(run.logs())

run.save_model('resnet-tll')

This minimal example starts a fine-tuning run with only the required arguments. It performs the following steps:

  • Login to Finetuner: required to run fine-tuning jobs with Finetuner in the cloud.
  • Create an experiment: the experiment groups runs with different configurations.
  • Start a fine-tuning run: select the backbone model, the training data, and the evaluation data for the evaluation callback.
  • Monitor: check the status and logs of your fine-tuning run.
  • Save the model: once the run has completed successfully, save the model for further use and integration.
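The monitor-then-save flow above can be wrapped in a small polling helper. The sketch below exercises the pattern against a stand-in run object so it runs locally; the real `run` returned by `finetuner.fit` exposes `status()`, `logs()`, and `save_model()` as in the example, but the specific status strings used here are assumptions, not Finetuner's documented values:

```python
import time

def wait_and_save(run, model_name, poll_interval=1.0):
    # Poll until the run leaves the in-progress states, then save the
    # model on success. The status strings are illustrative assumptions.
    while run.status() in ('CREATED', 'STARTED'):
        time.sleep(poll_interval)
    if run.status() == 'FINISHED':
        run.save_model(model_name)
        return True
    return False

class FakeRun:
    # Stand-in run object so the helper can be exercised without the cloud.
    def __init__(self, statuses):
        self._statuses = list(statuses)
        self.saved = None
    def status(self):
        # Advance through the queued statuses, then stay on the last one.
        return self._statuses.pop(0) if len(self._statuses) > 1 else self._statuses[0]
    def save_model(self, name):
        self.saved = name

run = FakeRun(['STARTED', 'FINISHED'])
print(wait_and_save(run, 'resnet-tll', poll_interval=0.0))  # -> True
```

The same helper would accept the real run object, with the status constants swapped for whatever values the client actually reports.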

Support

Join Us

Finetuner is backed by Jina AI and licensed under Apache-2.0. We are actively hiring AI engineers and solution engineers to build the next neural search ecosystem in open source.

Download files

Source Distribution

finetuner-client-0.2.2.tar.gz (25.6 kB)
