Embedding
The embedding classes store and retrieve word embeddings from their indices. bitsandbytes provides two embedding classes: the standard PyTorch Embedding class and the StableEmbedding class.

The StableEmbedding class was introduced in the 8-bit Optimizers via Block-wise Quantization paper to reduce the gradient variance that results from the non-uniform distribution of input tokens. It is designed to support quantization.
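For intuition, here is a minimal sketch showing StableEmbedding used as a drop-in replacement for torch.nn.Embedding inside a model. The toy model, sizes, and names below are illustrative assumptions, not part of the bitsandbytes API:

import torch
import torch.nn as nn
import bitsandbytes as bnb

class TinyClassifier(nn.Module):
    # Hypothetical toy model; only the embedding layer comes from bitsandbytes.
    def __init__(self, vocab_size=1000, dim=128, num_classes=2):
        super().__init__()
        # StableEmbedding uses Xavier uniform initialization followed by layer norm
        # and is intended to be paired with 32-bit optimizer states.
        self.embed = bnb.nn.StableEmbedding(vocab_size, dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, token_ids):
        # Mean-pool the token embeddings, then classify.
        return self.head(self.embed(token_ids).mean(dim=1))

model = TinyClassifier()
logits = model(torch.randint(0, 1000, (4, 16)))  # 4 sequences of 16 token ids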
Embedding
class bitsandbytes.nn.Embedding
( num_embeddings: int, embedding_dim: int, padding_idx: typing.Optional[int] = None, max_norm: typing.Optional[float] = None, norm_type: float = 2.0, scale_grad_by_freq: bool = False, sparse: bool = False, _weight: typing.Optional[torch.Tensor] = None, device: typing.Optional[torch.device] = None )
Embedding class to store and retrieve word embeddings from their indices.
__init__
( num_embeddings: int, embedding_dim: int, padding_idx: typing.Optional[int] = None, max_norm: typing.Optional[float] = None, norm_type: float = 2.0, scale_grad_by_freq: bool = False, sparse: bool = False, _weight: typing.Optional[torch.Tensor] = None, device: typing.Optional[torch.device] = None )
Parameters
- num_embeddings (int) — The number of unique embeddings (vocabulary size).
- embedding_dim (int) — The dimensionality of the embedding.
- padding_idx (Optional[int]) — Pads the output with zeros at the given index.
- max_norm (Optional[float]) — Renormalizes embeddings to have a maximum L2 norm.
- norm_type (float, defaults to 2.0) — The p-norm to compute for the max_norm option.
- scale_grad_by_freq (bool, defaults to False) — Scale gradient by frequency during backpropagation.
- sparse (bool, defaults to False) — Computes dense gradients. Set to True to compute sparse gradients instead.
- _weight (Optional[Tensor]) — Pretrained embeddings.
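For reference, a small usage sketch (the names and sizes are arbitrary assumptions); as the parameter list above shows, bitsandbytes.nn.Embedding takes the same arguments as torch.nn.Embedding:

import torch
import bitsandbytes as bnb

# Vocabulary of 1000 tokens, 64-dimensional vectors; index 0 is the padding index.
embedding = bnb.nn.Embedding(num_embeddings=1000, embedding_dim=64, padding_idx=0)

token_ids = torch.tensor([[0, 5, 42], [7, 0, 3]])  # (batch, sequence) indices
vectors = embedding(token_ids)                     # shape: (2, 3, 64)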
StableEmbedding
class bitsandbytes.nn.StableEmbedding
( num_embeddings: int, embedding_dim: int, padding_idx: typing.Optional[int] = None, max_norm: typing.Optional[float] = None, norm_type: float = 2.0, scale_grad_by_freq: bool = False, sparse: bool = False, _weight: typing.Optional[torch.Tensor] = None, device = None, dtype = None )
Custom embedding layer designed to improve stability during training for NLP tasks by using 32-bit optimizer states. It is designed to reduce gradient variations that can result from quantization. This embedding layer is initialized with Xavier uniform initialization followed by layer normalization.
Example:
import torch
from bitsandbytes.nn import StableEmbedding

# Initialize StableEmbedding layer with vocabulary size 1000, embedding dimension 300
embedding_layer = StableEmbedding(num_embeddings=1000, embedding_dim=300)

# Reset embedding parameters
embedding_layer.reset_parameters()

# Perform a forward pass with input tensor
input_tensor = torch.tensor([1, 2, 3])
output_embedding = embedding_layer(input_tensor)
Methods:
- reset_parameters(): Reset embedding parameters using Xavier uniform initialization.
- forward(input: Tensor) -> Tensor: Forward pass through the stable embedding layer.
__init__
( num_embeddings: int, embedding_dim: int, padding_idx: typing.Optional[int] = None, max_norm: typing.Optional[float] = None, norm_type: float = 2.0, scale_grad_by_freq: bool = False, sparse: bool = False, _weight: typing.Optional[torch.Tensor] = None, device = None, dtype = None )
Parameters
- num_embeddings (int) — The number of unique embeddings (vocabulary size).
- embedding_dim (int) — The dimensionality of the embedding.
- padding_idx (Optional[int]) — Pads the output with zeros at the given index.
- max_norm (Optional[float]) — Renormalizes embeddings to have a maximum L2 norm.
- norm_type (float, defaults to 2.0) — The p-norm to compute for the max_norm option.
- scale_grad_by_freq (bool, defaults to False) — Scale gradient by frequency during backpropagation.
- sparse (bool, defaults to False) — Computes dense gradients. Set to True to compute sparse gradients instead.
- _weight (Optional[Tensor]) — Pretrained embeddings.
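As a sketch of the non-default parameters above (the values and the random pretrained tensor are illustrative assumptions):

import torch
import bitsandbytes as bnb

# Pretrained matrix matching (num_embeddings, embedding_dim); _weight appears in the signature above.
pretrained = torch.randn(1000, 300)

embedding = bnb.nn.StableEmbedding(
    num_embeddings=1000,
    embedding_dim=300,
    padding_idx=0,            # pad the output with zeros at index 0
    scale_grad_by_freq=True,  # scale gradient by frequency during backpropagation
    _weight=pretrained,       # initialize from the pretrained embeddings
)

output = embedding(torch.tensor([[0, 17, 256]]))  # shape: (1, 3, 300)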