There are many deep learning frameworks available today. Why use Keras rather than any other?
Here are some of the areas in which Keras compares favorably to existing alternatives.
In early 2019, we ran a survey among teams that finished in the top 5 of any Kaggle competition in the two previous years (N=120), asking which deep learning framework they used as their primary tool and which frameworks they used at all.
Keras ranked #1 for deep learning both among primary frameworks and among all frameworks used.
With over 375,000 individual users as of early 2020, Keras has strong adoption across both industry and the research community. Together with TensorFlow 2.0, Keras has more adoption than any other deep learning solution -- in every vertical.
You are already constantly interacting with features built with Keras -- it is in use at Netflix, Uber, Yelp, Instacart, Zocdoc, Square, and many others. It is especially popular among startups that place deep learning at the core of their products.
Keras & TensorFlow 2.0 are also a favorite among researchers, ranking #1 by mentions in scientific papers indexed by Google Scholar. Keras has also been adopted by researchers at large scientific organizations, such as CERN and NASA.
Your Keras models can be easily deployed across a greater range of platforms than any other deep learning API.
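As one concrete illustration (a minimal sketch, not an exhaustive deployment recipe), a trained Keras model can be exported to the TensorFlow SavedModel format, the interchange format consumed by server-side and on-device deployment tools:

```python
import os
import tempfile

import tensorflow as tf

# A minimal model; tf.keras.Input fixes the input signature for export.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Export as a SavedModel -- the format that TF Serving, the TF Lite
# converter, and the TensorFlow.js converter all consume.
export_dir = os.path.join(tempfile.mkdtemp(), "my_model")
tf.saved_model.save(model, export_dir)
print(os.listdir(export_dir))  # includes saved_model.pb and variables/
```

From here, a serving runtime loads the exported directory directly; no Keras-specific code is needed on the deployment side.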
Keras is scalable. Using the TensorFlow `DistributionStrategy` API, which is supported natively by Keras, you can easily run your models on large GPU clusters (up to thousands of devices) or on an entire TPU pod, representing over one exaFLOP of computing power.
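The idiomatic pattern is to build and compile the model inside the strategy's scope. A sketch using the synchronous `MirroredStrategy` for local GPUs (`TPUStrategy` and `MultiWorkerMirroredStrategy` are the cluster-scale counterparts):

```python
import tensorflow as tf

# MirroredStrategy replicates the model on every local GPU and keeps the
# copies in sync; on a machine with no GPU it falls back to one replica.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

# Variables created inside the scope are mirrored across replicas.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(32,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )

# model.fit(...) then transparently splits each batch across the replicas.
```

Nothing else in the training code changes: the same `fit()` call distributes work across however many devices the strategy sees.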
Keras also has native support for mixed-precision training on the latest NVIDIA GPUs as well as on TPUs, which can offer up to 2x speedup for training and inference.
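Enabling mixed precision is a one-line change via the global dtype policy (a minimal sketch; on TPUs the corresponding policy is `"mixed_bfloat16"`):

```python
import tensorflow as tf

# Compute in float16 while keeping variables in float32 for stability.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(16,)),
    tf.keras.layers.Dense(64, activation="relu"),
    # Keep the final softmax in float32 to avoid float16 numeric issues.
    tf.keras.layers.Activation("softmax", dtype="float32"),
])

print(model.layers[0].compute_dtype)   # float16
print(model.layers[0].variable_dtype)  # float32
```

The speedup materializes on hardware with dedicated low-precision units (e.g. NVIDIA Tensor Cores); on other hardware the policy still runs but yields no performance benefit.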
For more information, see our guide to multi-GPU & distributed training.
Like you, we know firsthand that building and training a model is only one slice of a machine learning workflow. Keras is built for the real world, and in the real world, a successful model begins with data collection and ends with production deployment.
Keras is at the center of a wide ecosystem of tightly connected projects that together cover every step of the machine learning workflow, in particular: