Flux provides a single, intuitive way to define models, just like mathematical notation. Julia transparently compiles your code, optimising and fusing kernels for the GPU to get the best performance.
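For instance, a small multilayer perceptron reads much like the maths. This is a minimal sketch; the 28×28-pixel input size and layer widths are purely illustrative:

```julia
using Flux

# A small multilayer perceptron, written much as you would on paper.
model = Chain(
    Dense(28^2, 128, relu),   # 784 inputs -> 128 hidden units
    Dense(128, 10),           # 10 output classes
    softmax)

x = rand(Float32, 28^2)       # a dummy input vector
ŷ = model(x)                  # forward pass
```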
Existing Julia libraries are differentiable and can be incorporated directly into Flux models. Cutting-edge models such as Neural ODEs are first class, and Zygote enables overhead-free gradients.
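As a small illustration of those overhead-free gradients, Zygote differentiates ordinary Julia functions directly (a minimal sketch; the function `f` is just an example):

```julia
using Zygote

# Any plain Julia function is differentiable; no tapes or special array types.
f(x) = 3x^2 + sin(x)

df_dx = gradient(f, 2.0)[1]   # derivative of f at x = 2.0, i.e. 6x + cos(x) evaluated there
```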
GPU kernels can be written directly in Julia via CUDAnative. Flux is uniquely hackable and any part can be tweaked, from GPU code to custom gradients and layers.
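For example, a custom layer is nothing more than a struct with a call method. This is a minimal sketch of a hand-rolled layer; `Flux.@functor` exposes its fields as trainable parameters:

```julia
using Flux

# A hand-rolled layer: just a struct and a call method.
struct Affine
    W
    b
end

Affine(in::Integer, out::Integer) = Affine(randn(Float32, out, in), zeros(Float32, out))

# Applying the layer is ordinary Julia code, so Zygote can differentiate it.
(a::Affine)(x) = a.W * x .+ a.b

Flux.@functor Affine          # register W and b as trainable parameters

layer = Affine(10, 5)
layer(rand(Float32, 10))      # a 5-element output
```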
The model zoo is a rich collection of Flux scripts to learn from, or to tweak to your own data. Trained Flux models can be used from TextAnalysis or Metalhead.
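Loading a pretrained classifier takes only a couple of lines. This is a hedged sketch assuming Metalhead's older `VGG19()` constructor and `classify` helper; the image file name is made up, and newer Metalhead releases construct models differently (e.g. `ResNet(18; pretrain = true)`):

```julia
using Metalhead, Images

# Load a pretrained ImageNet classifier and run it on an image.
vgg = VGG19()
img = load("elephant.jpeg")    # hypothetical image file
println(classify(vgg, img))    # prints the predicted ImageNet label
```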
Import trained ONNX models as Flux scripts, for high-quality inference or for transfer learning.
Export your models to JavaScript for the browser, or see our experiments page for demos.
The Turing.jl and Stheno.jl libraries enable probabilistic programming, Bayesian inference and Gaussian processes on top of Flux.
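As a taste of the probabilistic side, here is a minimal Turing.jl coin-flip model (a sketch; the data and sampler settings are illustrative, and the `Beta`/`Bernoulli` distributions are re-exported by Turing):

```julia
using Turing

# Unknown coin bias p with a flat Beta prior; each flip is a Bernoulli draw.
@model function coinflip(y)
    p ~ Beta(1, 1)
    for i in eachindex(y)
        y[i] ~ Bernoulli(p)
    end
end

flips = [1, 1, 0, 1, 0, 1, 1, 1]
chain = sample(coinflip(flips), NUTS(), 1_000)   # posterior over p via the no-U-turn sampler
```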
Flux models can be compiled to TPUs for cloud supercomputing, and run from Google Colab notebooks.
We introduce Policy Guided Monte Carlo (PGMC), a computational framework using reinforcement learning to improve Markov chain Monte Carlo (MCMC) sampling. arXiv
We draw inspiration from both classical system identification and modern machine learning in order to solve estimation problems for real-world, physical systems. link
This paper presents reverse-mode algorithmic differentiation (AD) based on source code transformation, in particular of the Static Single Assignment (SSA) form used by modern compilers. arXiv
We describe a method and implementation for offloading suitable sections of Julia programs to TPUs via the Google XLA compiler. arXiv