XLMRobertaClassifier class

keras_nlp.models.XLMRobertaClassifier(
    backbone,
    num_classes,
    preprocessor=None,
    activation=None,
    hidden_dim=None,
    dropout=0.0,
    **kwargs
)
An end-to-end XLM-RoBERTa model for classification tasks.

This model attaches a classification head to a keras_nlp.models.XLMRobertaBackbone instance, mapping from the backbone outputs to logits suitable for a classification task. For usage of this model with pre-trained weights, see the from_preset() constructor.
This model can optionally be configured with a preprocessor layer, in which case it will automatically apply preprocessing to raw inputs during fit(), predict(), and evaluate(). This is done by default when creating the model with from_preset().
Disclaimer: Pre-trained models are provided on an "as is" basis, without warranties or conditions of any kind. The underlying model is provided by a third party and subject to a separate license, available here.
Arguments

backbone: A keras_nlp.models.XLMRobertaBackbone instance.
num_classes: int. Number of classes to predict.
preprocessor: A keras_nlp.models.XLMRobertaPreprocessor or None. If None, this model will not apply preprocessing, and inputs should be preprocessed before calling the model.
activation: str or callable. The activation function to use on the model outputs. Set activation="softmax" to return output probabilities. Defaults to None.

Examples
Raw string data.
features = ["The quick brown fox jumped.", "نسيت الواجب"]
labels = [0, 3]
# Pretrained classifier.
classifier = keras_nlp.models.XLMRobertaClassifier.from_preset(
"xlm_roberta_base_multi",
num_classes=4,
)
classifier.fit(x=features, y=labels, batch_size=2)
classifier.predict(x=features, batch_size=2)
# Re-compile (e.g., with a new learning rate).
classifier.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=keras.optimizers.Adam(5e-5),
jit_compile=True,
)
# Access backbone programmatically (e.g., to change `trainable`).
classifier.backbone.trainable = False
# Fit again.
classifier.fit(x=features, y=labels, batch_size=2)
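The activation argument described above changes what predict() returns. A minimal sketch, reusing the preset and features from this example, of building the classifier with activation="softmax" so predictions are probabilities; taking the argmax with numpy to recover class ids is an added illustration, not part of the original example.

import numpy as np

# Hedged sketch: with activation="softmax", predict() returns class
# probabilities instead of raw logits.
probabilistic_classifier = keras_nlp.models.XLMRobertaClassifier.from_preset(
    "xlm_roberta_base_multi",
    num_classes=4,
    activation="softmax",
)
probs = probabilistic_classifier.predict(x=features, batch_size=2)
predicted_ids = np.argmax(probs, axis=-1)  # most likely class per example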
Preprocessed integer data.
features = {
    "token_ids": np.ones(shape=(2, 12), dtype="int32"),
    "padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]] * 2),
}
labels = [0, 3]

# Pretrained classifier without preprocessing.
classifier = keras_nlp.models.XLMRobertaClassifier.from_preset(
    "xlm_roberta_base_multi",
    num_classes=4,
    preprocessor=None,
)
classifier.fit(x=features, y=labels, batch_size=2)
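The same "token_ids" and "padding_mask" dictionary can also be produced from raw strings by running a preprocessor as a separate step, which is how inputs would typically be prepared when preprocessing is disabled on the model. A minimal sketch, assuming the matching preprocessor can be loaded from the same preset name and that sequence_length=12 mirrors the hand-built arrays above:

# Hedged sketch: preprocess raw strings up front, then feed the resulting
# integer features to the classifier created with preprocessor=None.
preprocessor = keras_nlp.models.XLMRobertaPreprocessor.from_preset(
    "xlm_roberta_base_multi",
    sequence_length=12,
)
raw_features = ["The quick brown fox jumped.", "نسيت الواجب"]
preprocessed = preprocessor(raw_features)  # dict of "token_ids" and "padding_mask"
classifier.fit(x=preprocessed, y=labels, batch_size=2)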
Custom backbone and vocabulary.
features = ["The quick brown fox jumped.", "نسيت الواجب"]
labels = [0, 3]
def train_sentencepiece(ds, vocab_size):
bytes_io = io.BytesIO()
sentencepiece.SentencePieceTrainer.train(
sentence_iterator=ds.as_numpy_iterator(),
model_writer=bytes_io,
vocab_size=vocab_size,
model_type="WORD",
unk_id=0,
bos_id=1,
eos_id=2,
)
return bytes_io.getvalue()
ds = tf.data.Dataset.from_tensor_slices(
["the quick brown fox", "the earth is round"]
)
proto = train_sentencepiece(ds, vocab_size=10)
tokenizer = keras_nlp.models.XLMRobertaTokenizer(
proto=proto
)
preprocessor = keras_nlp.models.XLMRobertaPreprocessor(
tokenizer,
sequence_length=128,
)
backbone = keras_nlp.models.XLMRobertaBackbone(
vocabulary_size=250002,
num_layers=4,
num_heads=4,
hidden_dim=256,
intermediate_dim=512,
max_sequence_length=128,
)
classifier = keras_nlp.models.XLMRobertaClassifier(
backbone=backbone,
preprocessor=preprocessor,
num_classes=4,
)
classifier.fit(x=features, y=labels, batch_size=2)
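fit() also accepts a tf.data.Dataset in place of in-memory lists, which is convenient for larger corpora. A minimal sketch reusing the features and labels above, under the assumption that a dataset of raw strings is preprocessed by the attached preprocessor the same way the list inputs are:

# Hedged sketch: wrap the raw strings and labels in a tf.data pipeline and
# train on it directly.
train_ds = tf.data.Dataset.from_tensor_slices((features, labels)).batch(2)
classifier.fit(train_ds)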
from_preset method

XLMRobertaClassifier.from_preset()

Instantiate XLMRobertaClassifier model from preset architecture and weights.

Arguments

preset: string. Must be one of "xlm_roberta_base_multi", "xlm_roberta_large_multi".
load_weights: Whether to load pre-trained weights into model. Defaults to True.

Examples
# Load architecture and weights from preset
model = XLMRobertaClassifier.from_preset("xlm_roberta_base_multi")
# Load randomly initialized model from preset architecture
model = XLMRobertaClassifier.from_preset(
    "xlm_roberta_base_multi",
    load_weights=False
)
Preset name | Parameters | Description |
---|---|---|
xlm_roberta_base_multi | 277.45M | 12-layer XLM-RoBERTa model where case is maintained. Trained on CommonCrawl in 100 languages. |
xlm_roberta_large_multi | 558.84M | 24-layer XLM-RoBERTa model where case is maintained. Trained on CommonCrawl in 100 languages. |
backbone property

keras_nlp.models.XLMRobertaClassifier.backbone

A keras.Model instance providing the backbone sub-model.
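Because the backbone is a regular keras.Model, the usual model utilities are available on it, beyond toggling trainable as in the first example. A minimal illustration:

# Hedged sketch: inspect the encoder sub-model on its own.
classifier.backbone.summary()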
preprocessor property

keras_nlp.models.XLMRobertaClassifier.preprocessor

A keras.layers.Layer instance used to preprocess inputs.
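The attached preprocessor can also be called directly on raw strings, which is a handy way to inspect the integer inputs the backbone will receive. A minimal sketch, assuming its output uses the "token_ids" and "padding_mask" keys shown in the preprocessed-input example above:

# Hedged sketch: run the classifier's own preprocessor on raw strings.
tokenized = classifier.preprocessor(["The quick brown fox jumped."])
print(tokenized["token_ids"])
print(tokenized["padding_mask"])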