### SAM3PromptableConceptImageSegmenterPreprocessor class

```python
keras_hub.models.SAM3PromptableConceptImageSegmenterPreprocessor(
    tokenizer,
    image_converter,
    sequence_length=32,
    add_start_token=True,
    add_end_token=True,
    point_pad_value=-10,
    **kwargs
)
```
SAM3 Promptable Concept Image Segmenter preprocessor.
This preprocessing layer is meant for use with
`keras_hub.models.SAM3PromptableConceptImageSegmenter`.
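In practice this layer is rarely constructed by hand; loading the task from a preset typically builds and attaches a matching preprocessor. A minimal sketch of that workflow, assuming the `sam3_pcs` preset listed below:

```python
import keras_hub

# Loading the task from a preset attaches this preprocessor, so raw
# images and prompts can be passed straight to the task's methods.
segmenter = keras_hub.models.SAM3PromptableConceptImageSegmenter.from_preset(
    "sam3_pcs"
)
```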
**Arguments**

- **tokenizer**: A `keras_hub.models.SAM3Tokenizer` instance.
- **image_converter**: A `keras_hub.layers.SAM3ImageConverter` instance.
- **sequence_length**: The length of the packed inputs. Defaults to `32`.
- **add_start_token**: If `True`, the preprocessor will prepend the tokenizer start token to each input sequence. Defaults to `True`.
- **add_end_token**: If `True`, the preprocessor will append the tokenizer end token to each input sequence. Defaults to `True`.
- **point_pad_value**: The padding value for point and box prompts. Defaults to `-10`.

**Call arguments**
- **images**: A tensor of shape `(height, width, 3)` or `(batch_size, height, width, 3)`.
- **boxes**: A tensor of shape `(num_boxes, 4)` or `(batch_size, num_boxes, 4)` containing box coordinates in `(x_min, y_min, x_max, y_max)` format. Coordinates should be in absolute pixel values. If not provided, no box prompts will be used. `-10` is used as the padding value.
- **box_labels**: A tensor of shape `(num_boxes,)` or `(batch_size, num_boxes)` containing box labels. If not provided, no box labels will be used. `-10` is used as the padding value.

**Examples**
```python
import numpy as np
import tensorflow as tf

import keras_hub

# Load the preprocessor from a preset.
preprocessor = keras_hub.models.SAM3PromptableConceptImageSegmenterPreprocessor.from_preset(
"sam3_pcs"
)
# Unbatched inputs, with one image and one text prompt.
preprocessor(
{
"prompts": "ear",
"images": np.ones((896, 896, 3), dtype="float32")
}
)
# Unbatched inputs, with one image and one box prompt.
preprocessor(
{
"boxes": [[0, 0, 300, 300]],
"box_labels": [1],
"images": np.ones((896, 896, 3), dtype="float32")
}
)
# Batched inputs, one image per text prompt.
preprocessor(
{
"prompts": [
"ear",
"head"
],
"images": [
np.ones((896, 896, 3), dtype="float32"),
np.ones((896, 896, 3), dtype="float32")
]
}
)
# Batched inputs, one image per box prompt.
preprocessor(
{
"boxes": [
[[0, 0, 300, 300]],
[[50, 50, 100, 100]]
],
"box_labels": [
[1],
[1]
],
"images": [
np.ones((896, 896, 3), dtype="float32"),
np.ones((896, 896, 3), dtype="float32")
]
}
)
# Different number of box prompts in every sample.
preprocessor(
{
"boxes": [
[[0, 0, 300, 300]],
[[50, 50, 100, 100], [150, 150, 200, 200]]
],
"box_labels": [
[1],
[1, 1]
],
"images": [
np.ones((896, 896, 3), dtype="float32"),
np.ones((896, 896, 3), dtype="float32")
]
}
)
# Apply preprocessing to a tf.data.Dataset.
inputs = {
"prompts": [
"ear",
"head",
],
"images": np.ones((2, 896, 896, 3), dtype="float32")
}
ds = tf.data.Dataset.from_tensor_slices(inputs)
ds = ds.map(preprocessor, num_parallel_calls=tf.data.AUTOTUNE)
```
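The constructor arguments above can also be wired up by hand rather than loaded from a preset. A sketch, assuming the tokenizer and image converter are available under the same `sam3_pcs` preset (the usual KerasHub preset layout):

```python
# Assemble the preprocessor from its components; loading the components
# from the "sam3_pcs" preset is an assumption based on the preset table below.
tokenizer = keras_hub.models.SAM3Tokenizer.from_preset("sam3_pcs")
image_converter = keras_hub.layers.SAM3ImageConverter.from_preset("sam3_pcs")
preprocessor = keras_hub.models.SAM3PromptableConceptImageSegmenterPreprocessor(
    tokenizer=tokenizer,
    image_converter=image_converter,
    sequence_length=32,
)
```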
### from_preset method

```python
SAM3PromptableConceptImageSegmenterPreprocessor.from_preset(
    preset, config_file="preprocessor.json", **kwargs
)
```
Instantiate a `keras_hub.models.Preprocessor` from a model preset.
A preset is a directory of configs, weights and other file assets used
to save and load a pre-trained model. The preset can be passed as
one of:
1. a built-in preset identifier like `'bert_base_en'`
2. a Kaggle Models handle like `'kaggle://user/bert/keras/bert_base_en'`
3. a Hugging Face handle like `'hf://user/bert_base_en'`
4. a path to a local preset directory like `'./bert_base_en'`

For any `Preprocessor` subclass, you can run `cls.presets.keys()` to
list all built-in presets available on the class.
As there are usually multiple preprocessing classes for a given model,
this method should be called on a specific subclass like
`keras_hub.models.BertTextClassifierPreprocessor.from_preset()`.
**Arguments**

- **preset**: string. A built-in preset identifier, a Kaggle Models handle, a Hugging Face handle, or a path to a local directory.

**Examples**
```python
# Load a preprocessor for Gemma generation.
preprocessor = keras_hub.models.CausalLMPreprocessor.from_preset(
    "gemma_2b_en",
)

# Load a preprocessor for Bert classification.
preprocessor = keras_hub.models.TextClassifierPreprocessor.from_preset(
    "bert_base_en",
)
```
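As noted above, `cls.presets.keys()` lists the built-in presets. For this class that looks like:

```python
# Print every built-in preset registered for this preprocessor class.
print(
    keras_hub.models.SAM3PromptableConceptImageSegmenterPreprocessor.presets.keys()
)
```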
| Preset | Parameters | Description |
|---|---|---|
| sam3_pcs | 30.00M | 30 million parameter Promptable Concept Segmentation (PCS) SAM model. |
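Because `from_preset` forwards `**kwargs` to the constructor, constructor defaults can usually be overridden at load time. A sketch assuming that behavior:

```python
# Load the preset but override the packed sequence length.
preprocessor = keras_hub.models.SAM3PromptableConceptImageSegmenterPreprocessor.from_preset(
    "sam3_pcs",
    sequence_length=64,  # Overrides the default of 32.
)
```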
### image_converter property

```python
keras_hub.models.SAM3PromptableConceptImageSegmenterPreprocessor.image_converter
```
The image converter used to preprocess image data.
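The converter can be pulled off a loaded preprocessor to inspect or reuse its resizing logic on its own. A sketch; the `image_size` attribute is an assumption based on the common `ImageConverter` API:

```python
preprocessor = keras_hub.models.SAM3PromptableConceptImageSegmenterPreprocessor.from_preset(
    "sam3_pcs"
)
converter = preprocessor.image_converter
print(converter.image_size)  # Assumed attribute; target size after resizing.
```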