## Overview

[SparseZoo is a constantly-growing repository](https://sparsezoo.neuralmagic.com) of sparsified (pruned and pruned-quantized) models with matching sparsification recipes for neural networks.
It simplifies and accelerates your time-to-value in building performant deep learning models with a collection of inference-optimized models and recipes to prototype from.
Read more about sparsification [here](https://docs.neuralmagic.com/main/source/getstarted.html#sparsification).

Models in the SparseZoo are identified by the following naming components:

| Component | Description | Example Values |
| --------- | ----------- | -------------- |
| DOMAIN | The type of solution the model is architected and trained for | cv, nlp |
| SUB_DOMAIN | The sub-type of solution the model is architected and trained for | classification, segmentation |
| ARCHITECTURE | The name of the guiding setup for the network's graph | resnet_v1, mobilenet_v1 |
| SUB_ARCHITECTURE | (optional) The scaled version of the architecture, such as width or depth | 50, 101, 152 |
| FRAMEWORK | The machine learning framework the model was defined and trained in | pytorch, tensorflow_v1 |
| REPO | The model repository the model and baseline weights originated from | sparseml, torchvision |
| DATASET | The dataset the model was trained on | imagenet, cifar10 |
| TRAINING_SCHEME | (optional) A description of how the model was trained | augmented, lower_lr |
| SPARSE_NAME | An overview of what was done to sparsify the model | base, pruned, quant (quantized), pruned_quant, arch (architecture modified) |
| SPARSE_CATEGORY | Descriptor of the degree to which the model is sparsified as compared with the baseline metric | none, conservative (100% baseline), moderate (>= 99% baseline), aggressive (< 99%) |
| SPARSE_TARGET | (optional) Descriptor of the target environment the model was sparsified for | disk, edge, deepsparse, gpu |
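
Put together in order, these components form the stub that identifies a model in the SparseZoo. As a purely illustrative example of the format (not a guarantee that this exact model exists in the zoo), a moderately pruned ResNet-50 trained on ImageNet in PyTorch would be referenced roughly like this:

```
zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned-moderate
```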

The contents of each model are made up of the following:

- model.md: The model card containing metadata, descriptions, and information for the model.
- model.onnx: The [ONNX](https://onnx.ai/) representation of the model's graph.
- model.onnx.tar.gz: A compressed format for the ONNX file. Currently ONNX does not support sparse tensors and quantized sparse tensors well for compression.
- [FRAMEWORK]/model.[EXTENSION]: The native ML framework file(s) for the model in which it was originally trained, such as PyTorch, Keras, or TensorFlow V1.
- recipes/original.[md|yaml]: The original sparsification recipe used to create the model.
- recipes/[NAME].[md|yaml]: Additional sparsification recipes that can be used with the model, such as transfer learning.
- sample-originals: The original sample data without any preprocessing for use with the model.
- sample-inputs: The sample data after preprocessing for use with the model.
- sample-outputs: The outputs after running the sample inputs through the model.
- sample-labels: The labels that classify the sample inputs.
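
For a concrete picture, a downloaded model directory might look roughly like the following. This is a hypothetical layout: the native framework file extension and the extra recipe name are placeholders, and the exact set of files varies per model.

```
<model directory>/
├── model.md
├── model.onnx
├── model.onnx.tar.gz
├── pytorch/
│   └── model.pth            # native framework file; extension varies by framework
├── recipes/
│   ├── original.md
│   └── transfer_learn.md    # placeholder for an additional recipe
├── sample-originals/
├── sample-inputs/
├── sample-outputs/
└── sample-labels/
```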

### Python APIs

The Python APIs respect this format, enabling you to search and download models. Some code examples are given below.
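
The sketch below is a minimal illustration of that workflow, assuming a sparsezoo release that exposes `Model` and `search_models` at the package root; import paths and keyword arguments differ between versions (older releases used a `Zoo` class), and the stub shown is only illustrative.

```python
# Minimal sketch, not the canonical sparsezoo example. Assumes `Model` and
# `search_models` are importable from the package root; adjust to your version.
from sparsezoo import Model, search_models

# Search the zoo using the naming components described above
# (the keyword argument names here are assumptions).
for result in search_models(domain="cv", sub_domain="classification"):
    print(result)

# Download a specific model by its stub (illustrative stub).
stub = "zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned-moderate"
model = Model(stub)
model.download()   # fetches the files described above into a local cache
print(model.path)  # local directory containing model.md, model.onnx, recipes/, ...
```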