Finetuner makes neural network fine-tuning easier and faster by streamlining the workflow and handling all the complexity and infrastructure requirements in the cloud. With Finetuner, one can easily enhance the performance of pre-trained models and make them production-ready without expensive hardware.
This release covers Finetuner version 0.6.4, including dependencies finetuner-api 0.4.4 and finetuner-core 0.11.0.
This release contains 6 new features, 1 bug fix and 1 documentation improvement.
## 🆕 Features
### User-friendly login from Python notebooks (#576)

We've added `finetuner.notebook_login()`, a new method for logging in from notebooks like Jupyter in a more user-friendly way.
### Change device specification argument in `finetuner.fit()` (#577)

We've deprecated the `cpu` argument to the `finetuner.fit()` method, replacing it with the `device` argument.
Instead of specifying `cpu=False` for a GPU run, you should now use `device='cuda'`; and for a CPU run, instead of `cpu=True`, use `device='cpu'`. The default is equivalent to `device='cuda'`. Unless you're certain that your Finetuner job will run quickly on a CPU, you should use the default.

We expect to remove the `cpu` argument entirely in version 0.7, which will break any old code still using it.
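For code that still passes the old flag, the migration is a one-to-one mapping. The helper below is a hypothetical sketch for illustrating that mapping; it is not part of the Finetuner API.

```python
# Hypothetical migration helper (NOT part of Finetuner): maps the
# deprecated `cpu` boolean flag to the new `device` string argument.
def cpu_flag_to_device(cpu: bool) -> str:
    # cpu=True  -> device='cpu'
    # cpu=False -> device='cuda' (matches the new default)
    return 'cpu' if cpu else 'cuda'

print(cpu_flag_to_device(True))   # prints "cpu"
print(cpu_flag_to_device(False))  # prints "cuda"
```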
### Validate Finetuner run arguments on the client side (#579)
The Finetuner client now checks that the arguments to Finetuner runs are valid before transmitting them to the cloud infrastructure. Not all arguments can be validated on the client side, but the Finetuner client now checks all those that can be.
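To illustrate the idea of client-side validation, here is a minimal sketch; the function name and the specific rules below are illustrative assumptions, not Finetuner's actual implementation.

```python
# Hypothetical sketch of client-side run-argument validation.
# The rules checked here are illustrative, not Finetuner's real ones.
def validate_run_args(args: dict) -> list:
    """Return a list of problems found; an empty list means the
    arguments passed all client-side checks."""
    errors = []
    if not args.get('model'):
        errors.append("'model' is required")
    device = args.get('device', 'cuda')
    if device not in ('cpu', 'cuda'):
        errors.append(f"unknown device: {device!r}")
    epochs = args.get('epochs', 1)
    if not isinstance(epochs, int) or epochs < 1:
        errors.append("'epochs' must be a positive integer")
    return errors

assert validate_run_args({'model': 'bert-base-cased'}) == []
assert validate_run_args({'device': 'tpu'})  # missing model, bad device
```

Catching such errors locally fails fast, instead of waiting for a cloud job to be scheduled and then rejected.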
### Update names of OpenCLIP models (#580)
We have changed the names of open-access CLIP models available via Finetuner to be compatible with CLIP-as-Service. For example, the model previously referenced as `ViT-B-16#openai` is now `ViT-B-16::openai`.
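Old-style names can be translated mechanically, since only the separator changed. The helper below is a hypothetical convenience, not part of the Finetuner API.

```python
# Hypothetical helper (NOT part of Finetuner): translate an old-style
# CLIP model name using '#' into the new '::'-separated form.
def update_clip_model_name(name: str) -> str:
    return name.replace('#', '::', 1)

print(update_clip_model_name('ViT-B-16#openai'))  # prints "ViT-B-16::openai"
```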
### Add method `finetuner.build_model()` to load pre-trained models without fine-tuning (#584)
Previously, it was not possible to load a pre-trained model via Finetuner without performing some retraining or 'fine-tuning' on it. Now it is possible to get a pre-trained model, as is, and use it via Finetuner immediately.
For example, to use a BERT model with Finetuner without any fine-tuning:
```python
import finetuner
from docarray import Document, DocumentArray

model = finetuner.build_model('bert-base-cased')  # load pre-trained model
documents = DocumentArray([Document(text='example text 1'), Document(text='example text 2')])
finetuner.encode(model=model, data=documents)  # encode texts without having done any fine-tuning
assert documents.embeddings.shape == (2, 768)
```
### Show progress while encoding documents (#586)

You will now see a progress bar when using `finetuner.encode()`.
## 🐞 Bug Fixes
### Fix GPU-availability issues
We have observed some problems with GPU availability in Finetuner's use of Jina AI's cloud infrastructure. We've fully analyzed and repaired these issues.
## 📗 Documentation Improvements
### Add Colab links to Finetuning Tasks pages (#583)
We have added runnable Google Colab notebooks for the examples in the Finetuning Tasks documentation pages: Text-to-Text, Image-to-Image, and Text-to-Image.
## 🤟 Contributors
We would like to thank all contributors to this release:
- Wang Bo (@bwanglzu)
- Michael Günther (@guenthermi)
- George Mastrapas (@gmastrapas)
- Louis Milliken (@LMMilliken)