Finetuner makes neural network fine-tuning easier and faster by streamlining the workflow and handling all the complexity and infrastructure requirements in the cloud. With Finetuner, one can easily enhance the performance of pre-trained models and make them production-ready without expensive hardware.
This release covers Finetuner version 0.7.5, including dependencies finetuner-api 0.5.6 and finetuner-core 0.13.3.
This release contains 2 refactorings and 2 bug fixes.
⚙ Refactoring
Downloading pre-trained weights is no longer necessary
Previously, when a fine-tuning job was completed and the `get_model` function was called, we would construct the model, load the pre-trained weights, and then overwrite them with the fine-tuned weights. We have now disabled the downloading of pre-trained weights, which speeds up the `get_model` function and eliminates unneeded network traffic.
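For context, here is a minimal sketch of how a fine-tuned model is typically retrieved after a run completes; the run name is a placeholder:

```python
import finetuner

finetuner.login()  # authenticate before accessing runs

# 'my-run' is a hypothetical run name used for illustration.
run = finetuner.get_run('my-run')

# get_model now loads only the fine-tuned weights, without
# first downloading the original pre-trained checkpoint.
model = finetuner.get_model(run.artifact_id)
```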
Provide informative error messages when a user is not logged in (#708)
Before creating a `Run`, users are required to call `finetuner.login()` and log in via third-party authentication. Previously, if they had not already done so, they would receive an error message that did not tell them to log in. We now display a more informative error message when a user forgets to log in or their login attempt is unsuccessful.
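To illustrate the expected flow, here is a short sketch; the model and dataset names are placeholders:

```python
import finetuner

# Logging in first is required; skipping this step now produces
# an error message that explicitly asks the user to log in.
finetuner.login()

run = finetuner.fit(
    model='bert-base-en',        # placeholder model name
    train_data='my-train-data',  # placeholder dataset name
)
print(run.name)
```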
🐞 Bug Fixes
Fix model name validation error when using the model display name
When users request a model by name, they use names in the format `name-size-lang`, for example `bert-base-en`. However, these display names were not included in our internal schema, so such jobs failed validation. This has now been rectified.
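As an illustration, a hedged sketch of requesting a model by its display name, assuming `bert-base-en` is offered under that name:

```python
import finetuner

finetuner.login()

# 'bert-base-en' follows the name-size-lang display-name format
# and now passes validation.
model = finetuner.build_model(name='bert-base-en')
```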
Fix automatic batch size selection for PointNet++ models
In the past, `BatchSizeFinder` was unable to select batch sizes properly for PointNet++ models. This has been fixed.
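For illustration, a sketch of a PointNet++ fine-tuning job that relies on automatic batch size selection; the model identifier, the dataset name, and the assumption that leaving `batch_size` unset triggers the finder are all illustrative:

```python
import finetuner

finetuner.login()

run = finetuner.fit(
    model='pointnet++',         # assumed 3D point-cloud model identifier
    train_data='my-mesh-data',  # placeholder dataset name
    # batch_size is left unset here, on the assumption that the
    # BatchSizeFinder then selects a suitable value automatically.
)
```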
🤟 Contributors
We would like to thank all contributors to this release:
- Wang Bo (@bwanglzu)
- Louis Milliken (@LMMilliken)
- Michael Günther (@guenthermi)
- George Mastrapas (@gmastrapas)
- Scott Martens (@scott-martens)