ESMProteinClassifierPreprocessor class
keras_hub.models.ESMProteinClassifierPreprocessor(
tokenizer, sequence_length=512, truncate="round_robin", **kwargs
)
An ESM preprocessing layer which tokenizes and packs inputs.
This preprocessing layer will do three things:
1. Tokenize any number of input segments using the tokenizer.
2. Pack the inputs together using a keras_hub.layers.StartEndPacker with the appropriate start, end and pad tokens.
3. Construct a dictionary with keys "token_ids", that can be passed directly to an ESM model.
This layer can be used directly with tf.data.Dataset.map to preprocess string data in the (x, y, sample_weight) format used by keras.Model.fit.
Arguments
tokenizer: A keras_hub.models.ESMTokenizer instance.
sequence_length: The length of the packed inputs.
truncate: string. The algorithm to truncate a list of batched segments to fit within sequence_length. The value can be either "round_robin" or "waterfall" (a standalone sketch of the two strategies follows the call arguments below):
- "round_robin": Available space is assigned one token at a time in a round-robin fashion to the inputs that still need some, until the limit is reached.
- "waterfall": The allocation of the budget is done using a "waterfall" algorithm that allocates quota in a left-to-right manner and fills up the buckets until we run out of budget. It supports an arbitrary number of segments.
Call arguments
x: A tensor of single string sequences, or a tuple of multiple tensor sequences to be packed together. Inputs may be batched or unbatched. For single sequences, raw python inputs will be converted to tensors. For multiple sequences, pass tensors directly.
y: Any label data. Will be passed through unaltered.
sample_weight: Any label weight data. Will be passed through unaltered.
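To make the two truncate strategies concrete, here is a small standalone sketch of the budget allocation in plain Python. This is an illustration only, not keras_hub's implementation, and allocate_budget is a hypothetical helper name.
# Hypothetical helper (not part of keras_hub): split a token budget across
# segments using the two truncation strategies described above.
def allocate_budget(segment_lengths, budget, strategy="round_robin"):
    allocation = [0] * len(segment_lengths)
    if strategy == "round_robin":
        # Hand out one token at a time to each segment that still wants more.
        while budget > 0 and any(a < n for a, n in zip(allocation, segment_lengths)):
            for i, n in enumerate(segment_lengths):
                if budget > 0 and allocation[i] < n:
                    allocation[i] += 1
                    budget -= 1
    else:  # "waterfall"
        # Fill segments left to right until the budget runs out.
        for i, n in enumerate(segment_lengths):
            allocation[i] = min(n, budget)
            budget -= allocation[i]
    return allocation

# Two segments of length 6 and 4, with room for only 8 tokens.
allocate_budget([6, 4], 8, "round_robin")  # [4, 4]
allocate_budget([6, 4], 8, "waterfall")    # [6, 2]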
Examples
Directly calling the layer on data.
preprocessor = keras_hub.models.ESMProteinClassifierPreprocessor.from_preset(
    "hf://facebook/esm2_t6_8M_UR50D"
)
# Tokenize and pack a single sentence.
preprocessor("The quick brown fox jumped.")
# Tokenize a batch of single sentences.
preprocessor(["The quick brown fox jumped.", "Call me Ishmael."])
# Preprocess a batch of sentence pairs.
# When handling multiple sequences, always convert to tensors first!
first = tf.constant(["The quick brown fox jumped.", "Call me Ishmael."])
second = tf.constant(["The fox tripped.", "Oh look, a whale."])
preprocessor((first, second))
# Custom vocabulary.
vocab = ["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"]
vocab += ["The", "quick", "brown", "fox", "jumped", "."]
tokenizer = keras_hub.models.ESMTokenizer(vocabulary=vocab)
preprocessor = keras_hub.models.ESMProteinClassifierPreprocessor(tokenizer)
preprocessor("The quick brown fox jumped.")
Mapping with tf.data.Dataset.
preprocessor = keras_hub.models.ESMProteinClassifierPreprocessor.from_preset(
    "hf://facebook/esm2_t6_8M_UR50D"
)
first = tf.constant(["The quick brown fox jumped.", "Call me Ishmael."])
second = tf.constant(["The fox tripped.", "Oh look, a whale."])
label = tf.constant([1, 1])
# Map labeled single sentences.
ds = tf.data.Dataset.from_tensor_slices((first, label))
ds = ds.map(preprocessor, num_parallel_calls=tf.data.AUTOTUNE)
# Map unlabeled single sentences.
ds = tf.data.Dataset.from_tensor_slices(first)
ds = ds.map(preprocessor, num_parallel_calls=tf.data.AUTOTUNE)
# Map labeled sentence pairs.
ds = tf.data.Dataset.from_tensor_slices(((first, second), label))
ds = ds.map(preprocessor, num_parallel_calls=tf.data.AUTOTUNE)
# Map unlabeled sentence pairs.
ds = tf.data.Dataset.from_tensor_slices((first, second))
# Watch out for tf.data's default unpacking of tuples here!
# Best to invoke the `preprocessor` directly in this case.
ds = ds.map(
    lambda first, second: preprocessor(x=(first, second)),
    num_parallel_calls=tf.data.AUTOTUNE,
)
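The preprocessor also works on batched elements, so one common variant (a sketch reusing first and label from above) is to batch before mapping and prefetch the result:
ds = tf.data.Dataset.from_tensor_slices((first, label))
ds = ds.batch(2)  # preprocess whole batches at a time
ds = ds.map(preprocessor, num_parallel_calls=tf.data.AUTOTUNE)
ds = ds.prefetch(tf.data.AUTOTUNE)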
from_preset method
ESMProteinClassifierPreprocessor.from_preset(
preset, config_file="preprocessor.json", **kwargs
)
Instantiate a keras_hub.models.Preprocessor from a model preset.
A preset is a directory of configs, weights and other file assets used to save and load a pre-trained model. The preset can be passed as one of:
1. a built-in preset identifier like 'bert_base_en'
2. a Kaggle Models handle like 'kaggle://user/bert/keras/bert_base_en'
3. a Hugging Face handle like 'hf://user/bert_base_en'
4. a path to a local preset directory like './bert_base_en'
For any Preprocessor subclass, you can run cls.presets.keys() to list all built-in presets available on the class.
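For example (the set of keys depends on the installed keras_hub version):
# List the built-in preset names registered on this preprocessor class.
keras_hub.models.ESMProteinClassifierPreprocessor.presets.keys()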
As there are usually multiple preprocessing classes for a given model, this method should be called on a specific subclass like keras_hub.models.BertTextClassifierPreprocessor.from_preset().
Arguments
Examples
# Load a preprocessor for Gemma generation.
preprocessor = keras_hub.models.CausalLMPreprocessor.from_preset(
"gemma_2b_en",
)
# Load a preprocessor for Bert classification.
preprocessor = keras_hub.models.TextClassifierPreprocessor.from_preset(
"bert_base_en",
)
Preset | Parameters | Description |
---|---|---|
esm2_t6_8M | 7.41M | 6 transformer layers version of the ESM-2 protein language model, trained on the UniRef50 clustered protein sequence dataset. |
esm2_t12_35M | 33.27M | 12 transformer layers version of the ESM-2 protein language model, trained on the UniRef50 clustered protein sequence dataset. |
esm2_t30_150M | 147.73M | 30 transformer layers version of the ESM-2 protein language model, trained on the UniRef50 clustered protein sequence dataset. |
esm2_t33_650M | 649.40M | 33 transformer layers version of the ESM-2 protein language model, trained on the UniRef50 clustered protein sequence dataset. |
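For instance, assuming the identifiers in the table above are registered as built-in presets and that constructor arguments such as sequence_length can be overridden at load time (as with other keras_hub preprocessors), loading the smallest ESM-2 preset might look like:
preprocessor = keras_hub.models.ESMProteinClassifierPreprocessor.from_preset(
    "esm2_t6_8M",
    sequence_length=256,
)
# Tokenize and pack a single protein sequence.
preprocessor("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")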
tokenizer property
keras_hub.models.ESMProteinClassifierPreprocessor.tokenizer
The tokenizer used to tokenize strings.
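For example (a sketch, assuming a preprocessor loaded as above), the underlying tokenizer can be called on its own when raw, unpacked token ids are needed:
tokenizer = preprocessor.tokenizer
# Tokenize without adding start/end tokens or padding; packing is done
# by the preprocessor, not the tokenizer.
token_ids = tokenizer("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")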