MixtralBackbone
keras_hub.models.MixtralBackbone(
vocabulary_size,
num_layers,
num_query_heads,
hidden_dim,
intermediate_dim,
num_key_value_heads,
num_experts,
top_k=2,
router_jitter_noise=0.0,
rope_max_wavelength=10000,
rope_scaling_factor=1.0,
layer_norm_epsilon=1e-06,
router_aux_loss_coef=0.02,
sliding_window=512,
dropout=0,
dtype=None,
output_router_logits=False,
**kwargs
)
The Mixtral Transformer core architecture with hyperparameters.
This network implements a Mixture of Experts (MoE) based decoder network, Mixtral, as described in "Mixtral of Experts". It includes the embedding lookups and transformer layers.
The default constructor gives a fully customizable, randomly initialized
Mixtral model with any number of layers, heads, and embedding
dimensions. To load preset architectures and weights, use the from_preset
constructor.
Arguments
- vocabulary_size: int. The size of the token vocabulary.
- num_layers: int. The number of transformer decoder layers.
- num_query_heads: int. The number of query attention heads for each transformer layer.
- hidden_dim: int. The size of the transformer hidden states (the embedding dimension).
- intermediate_dim: int. The output dimension of the first Dense layer in the feedforward network of each expert.
- num_key_value_heads: int. The number of key and value attention heads for each transformer layer.
- num_experts: int. The number of experts in each MoE layer.
- top_k: int. The number of experts each token is routed to. Defaults to 2.
- router_jitter_noise: float. The amount of jitter noise applied to the router logits. Defaults to 0.0.
- rope_max_wavelength: int. The maximum angular wavelength of the sine/cosine curves, for rotary embeddings. Defaults to 10000.
- rope_scaling_factor: float. The scaling factor for the rotary embedding calculation. Defaults to 1.0.
- layer_norm_epsilon: float. Epsilon for the layer normalization layers in the transformer decoder. Defaults to 1e-6.
- router_aux_loss_coef: float. The coefficient of the router's auxiliary load-balancing loss. Defaults to 0.02.
- sliding_window: int. The sliding window for the attention layers. This controls the maximum cache size for the attention layers in each transformer decoder. Only sliding_window number of tokens are saved in the cache and used to generate the next token. Defaults to 512.
- dropout: float. Dropout probability for the transformer decoder. Defaults to 0.
- dtype: string or keras.mixed_precision.DTypePolicy. The dtype to use for model computations and weights. Note that some computations, such as softmax and layer normalization, will always be done at float32 precision regardless of dtype.
- output_router_logits: bool. Whether to return the router logits for each layer in the model outputs. Defaults to False.
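The num_experts, top_k, and router_jitter_noise arguments configure the expert routing in each MoE layer: every token is scored against all experts and only the top_k highest-scoring experts process it. The snippet below is a minimal NumPy sketch of top-k routing for illustration only; it is not the layer KerasHub uses, and all names in it are hypothetical.
import numpy as np

# Minimal sketch of top-k expert routing (illustration only).
# Each token's hidden vector is scored against `num_experts` router
# weights; the `top_k` best experts are selected and their outputs are
# combined with softmax-normalized gate weights.
num_experts, top_k, hidden_dim = 8, 2, 16
hidden = np.random.randn(4, hidden_dim)             # 4 tokens
router_w = np.random.randn(hidden_dim, num_experts)

logits = hidden @ router_w                           # (4, num_experts)
top_idx = np.argsort(logits, axis=-1)[:, -top_k:]    # indices of the top_k experts
top_logits = np.take_along_axis(logits, top_idx, axis=-1)
gates = np.exp(top_logits) / np.exp(top_logits).sum(axis=-1, keepdims=True)
print(top_idx.shape, gates.shape)                    # (4, 2) (4, 2)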
Examples
import numpy as np
import keras_hub

input_data = {
    "token_ids": np.ones(shape=(1, 12), dtype="int32"),
    "padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]]),
}
# Pretrained Mixtral decoder.
model = keras_hub.models.MixtralBackbone.from_preset("mixtral_8_7b_en")
model(input_data)
# Randomly initialized Mixtral decoder with custom config.
model = keras_hub.models.MixtralBackbone(
    vocabulary_size=10,
    hidden_dim=512,
    num_layers=2,
    num_query_heads=32,
    num_key_value_heads=8,
    num_experts=8,
    intermediate_dim=1024,
    sliding_window=512,
    layer_norm_epsilon=1e-6,
    dtype="float32",
)
model(input_data)
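The backbone returns the final hidden states for every input token. As a quick sanity check on the custom config above (hidden_dim=512, 12 input tokens), the output shape can be inspected directly:
# Expected shape: (batch_size, sequence_length, hidden_dim) == (1, 12, 512).
output = model(input_data)
print(output.shape)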
from_preset
MixtralBackbone.from_preset(preset, load_weights=True, **kwargs)
Instantiate a keras_hub.models.Backbone from a model preset.
A preset is a directory of configs, weights and other file assets used
to save and load a pre-trained model. The preset can be passed as
one of:
1. a built-in preset identifier like 'bert_base_en'
2. a Kaggle Models handle like 'kaggle://user/bert/keras/bert_base_en'
3. a Hugging Face handle like 'hf://user/bert_base_en'
4. a path to a local preset directory like './bert_base_en'
This constructor can be called in one of two ways: either from the base
class like keras_hub.models.Backbone.from_preset(), or from a model
class like keras_hub.models.GemmaBackbone.from_preset().
If calling from the base class, the subclass of the returning object
will be inferred from the config in the preset directory.
For any Backbone subclass, you can run cls.presets.keys() to list
all built-in presets available on the class.
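For example, the Mixtral presets registered on the subclass can be listed directly (a short sketch; the exact keys depend on the installed KerasHub version):
import keras_hub

# List the built-in preset names registered for this architecture.
print(keras_hub.models.MixtralBackbone.presets.keys())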
Arguments
- preset: string. A built-in preset identifier, a Kaggle Models handle, a Hugging Face handle, or a path to a local directory.
- load_weights: bool. If True, the weights will be loaded into the model architecture. If False, the weights will be randomly initialized.
Examples
# Load a Gemma backbone with pre-trained weights.
model = keras_hub.models.Backbone.from_preset(
"gemma_2b_en",
)
# Load a Bert backbone with a pre-trained config and random weights.
model = keras_hub.models.Backbone.from_preset(
"bert_base_en",
load_weights=False,
)
Preset | Parameters | Description |
---|---|---|
mixtral_8_7b_en | 46.70B | 32-layer Mixtral MoE model with 7 billion active parameters and 8 experts per MoE layer. |
mixtral_8_instruct_7b_en | 46.70B | Instruction fine-tuned 32-layer Mixtral MoE model with 7 billion active parameters and 8 experts per MoE layer. |
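As a sketch, from_preset also accepts constructor keyword arguments such as dtype, which is useful for loading the larger presets in reduced precision (this assumes the backend in use supports bfloat16):
# Load the instruction-tuned preset with bfloat16 weights and computations.
model = keras_hub.models.MixtralBackbone.from_preset(
    "mixtral_8_instruct_7b_en",
    dtype="bfloat16",
)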
token_embedding
keras_hub.models.MixtralBackbone.token_embedding
A keras.layers.Embedding
instance for embedding token ids.
This layer embeds integer token ids to the hidden dim of the model.
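A short sketch of using the embedding directly, assuming a backbone instance named model like the ones constructed above:
import numpy as np

# Embed a batch of token ids without running the decoder stack.
# Output shape: (batch_size, sequence_length, hidden_dim).
token_ids = np.ones(shape=(1, 12), dtype="int32")
embeddings = model.token_embedding(token_ids)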
enable_lora
MixtralBackbone.enable_lora(rank, target_names=None)
Enable LoRA on the backbone.
Calling this method will freeze all weights on the backbone, while
enabling LoRA on the query & value EinsumDense layers of the
attention layers.
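A minimal usage sketch, assuming a backbone instance named model; rank=8 is an arbitrary illustrative choice:
# Freeze the backbone weights and add rank-8 LoRA adapters to the
# attention query and value projections.
model.enable_lora(rank=8)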