Contribute to Metalhead.jl
We welcome contributions to Metalhead.jl from anyone! Thank you for taking the time to make our ecosystem better.
You can contribute by fixing bugs, adding new models, or adding pre-trained weights. If you aren't ready to write some code, but you think you found a bug or have a feature request, please post an issue.
Before continuing, make sure you read the FluxML contributing guide for general guidelines and tips.
Fixing bugs
To fix a bug in Metalhead.jl, you can open a PR. It would be helpful to file an issue first so that we can confirm the bug.
Adding models
To add a new model architecture to Metalhead.jl, you can open a PR. Keep in mind a few guiding principles for how this package is designed:
- reuse layers from Flux as much as possible (e.g. use `Parallel` before defining a `Bottleneck` struct)
- adhere as closely as possible to a reference such as a published paper (i.e. the structure of your model should follow intuitively from the paper)
- use generic functional builders (e.g. `Metalhead.resnet` is the underlying function that builds "ResNet-like" models)
- use multiple dispatch to add convenience constructors that wrap your functional builder (a sketch of this pattern follows the list)
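To make these principles concrete, here is a minimal, hypothetical sketch: the names `toymodel` and `ToyNet` are made up for illustration and are not part of Metalhead.jl or Flux. It shows a generic functional builder that reuses stock Flux layers, wrapped by a thin convenience constructor added via multiple dispatch.

```julia
using Flux

# Generic functional builder: returns a plain Flux `Chain`, reusing stock
# layers such as `SkipConnection` instead of defining custom block structs.
function toymodel(block_repeats::Integer; inchannels = 3, nclasses = 1000)
    blocks = [SkipConnection(Chain(Conv((3, 3), 64 => 64, relu; pad = 1),
                                   Conv((3, 3), 64 => 64; pad = 1)), +)
              for _ in 1:block_repeats]
    return Chain(Conv((7, 7), inchannels => 64, relu; pad = 3, stride = 2),
                 blocks...,
                 AdaptiveMeanPool((1, 1)), Flux.flatten,
                 Dense(64 => nclasses))
end

# Thin wrapper type with convenience constructors added via multiple dispatch.
struct ToyNet{T}
    layers::T
end
Flux.@functor ToyNet   # expose the wrapped layers' parameters to Flux
ToyNet(depth::Integer; kwargs...) = ToyNet(toymodel(depth; kwargs...))
(m::ToyNet)(x) = m.layers(x)
```

A call like `ToyNet(4)` then builds the model through the functional builder, mirroring how `Metalhead.resnet` underpins the `ResNet` convenience constructors.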
When in doubt, just open a PR! We are more than happy to review your code and help it align with the rest of the library. After adding a model, you might consider adding some pre-trained weights (see below).
Adding pre-trained weights
To add pre-trained weights for an existing model or new model, you can open a PR. Below, we describe the steps you should follow to get there.
All Metalhead.jl model artifacts are hosted on HuggingFace. You can find the FluxML account here. This documentation from HuggingFace provides an introduction to their Model Hub. In short, the Model Hub is a collection of Git repositories, similar to Julia packages on GitHub. This means you can make a pull request to our HuggingFace repositories to upload updated weight artifacts, just like you would make a PR on GitHub to upload code.
- Train your model or port the weights from another framework.
- Save the model state using BSON.jl with `BSON.@save "modelname.bson" model_state=Flux.state(model)`. It is important that your model is saved under the key `model_state` (a sketch of this is shown after these steps).
- Compress the saved model as a tarball using `tar -cvzf modelname.tar.gz modelname.bson`.
- Obtain the SHAs (see the Pkg docs). Edit the `Artifacts.toml` file in the Metalhead.jl repository and add an entry for your model. You can leave the URL empty for now (a hashing sketch also follows these steps).
- Open a PR on Metalhead.jl. Be sure to ping a maintainer (e.g. `@darsnack` or `@theabhirath`) to let us know that you are adding a pre-trained weight. We will create a model repository on HuggingFace if it does not already exist.
- Open a PR to the corresponding HuggingFace repo by going to the "Community" tab in the HuggingFace repository. PRs and discussions are shown as the same thing in the HuggingFace web app. You can also use your local Git client to clone the repo and make PRs if you wish. Check out the guide on PRs to HuggingFace for more information.
- Copy the download URL for the model file that you added to HuggingFace. Make sure to grab the URL for a specific commit and not for the `main` branch.
- Update your Metalhead.jl PR by adding the URL to the `Artifacts.toml`.
- If the tests pass for your weights, we will merge your PR! Your model should pass the `acctest` function in the Metalhead.jl test suite. If your model already exists in the repo, then these tests are already in place, and you can add your model configuration to the `PRETRAINED_MODELS` list in the `runtests.jl` file. Please refer to the ResNet tests as an example.
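As a companion to the saving step above, here is a minimal sketch. The `ResNet(18)` line is only a stand-in for whatever model you trained or ported, and `modelname.bson` is a placeholder filename.

```julia
using BSON, Flux, Metalhead

model = ResNet(18)               # stand-in: substitute your trained or ported model
model_state = Flux.state(model)  # extract just the state to be stored

# The key in the BSON file must be `model_state`.
BSON.@save "modelname.bson" model_state
```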
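For the hashing step, one option is to let `Pkg.Artifacts` compute both hashes for you. This is a sketch under the assumption that `modelname.bson` sits in the current directory and `modelname` is a placeholder artifact name; `archive_artifact` also produces the compressed tarball, so you may not need the manual `tar` command in that case.

```julia
using Pkg.Artifacts

# Build a local artifact containing the BSON file and record its git-tree-sha1.
artifact_id = create_artifact() do dir
    cp("modelname.bson", joinpath(dir, "modelname.bson"))
end

# Archive the artifact into a tarball and record the tarball's sha256.
tarball_sha256 = archive_artifact(artifact_id, "modelname.tar.gz")

# Write the entry into Artifacts.toml; the download URL can stay empty until
# the weights have been uploaded to HuggingFace.
bind_artifact!("Artifacts.toml", "modelname", artifact_id;
               download_info = [("", tarball_sha256)],
               lazy = true, force = true)
```

Once the weights are uploaded to HuggingFace, replace the empty URL in `Artifacts.toml` with the commit-specific download URL from the later step.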
If you want to fix existing weights, then you can follow the same set of steps.
See the `scripts/` folder in the repo for helper scripts that automate some of these steps.