Class Transformer
Later variants of the transformer architecture have been widely adopted for training large language models (LLMs). Text is split into numerical representations called tokens, and each token is mapped to a vector by looking it up in a word embedding table. At each layer, each token is then contextualized against the other (unmasked) tokens within the context window via a parallel multi-head attention mechanism, which amplifies the signal from key tokens and diminishes that from less important ones.
This architecture is now used not only in natural language processing and computer vision, but also in audio and multi-modal processing. It has also led to the development of pre-trained systems, such as GPTs (Generative Pre-trained Transformers) and BERT (Bidirectional Encoder Representations from Transformers).
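To make the attention mechanism described above concrete, the following is an illustrative sketch (not part of this API) of single-head scaled dot-product attention, attention(Q, K, V) = softmax(Q K^T / sqrt(d)) V, where the softmax weights amplify important tokens and diminish the rest:

```java
public class AttentionSketch {
    // Computes single-head scaled dot-product attention for n tokens
    // with embedding dimension d. q, k, v are n x d matrices.
    static double[][] attention(double[][] q, double[][] k, double[][] v) {
        int n = q.length, d = q[0].length;
        double[][] scores = new double[n][n];
        for (int i = 0; i < n; i++) {
            // raw scores: dot product of query i with every key, scaled by sqrt(d)
            for (int j = 0; j < n; j++) {
                double s = 0;
                for (int t = 0; t < d; t++) s += q[i][t] * k[j][t];
                scores[i][j] = s / Math.sqrt(d);
            }
            // numerically stable softmax over row i: weights sum to 1
            double max = Double.NEGATIVE_INFINITY;
            for (double s : scores[i]) max = Math.max(max, s);
            double sum = 0;
            for (int j = 0; j < n; j++) {
                scores[i][j] = Math.exp(scores[i][j] - max);
                sum += scores[i][j];
            }
            for (int j = 0; j < n; j++) scores[i][j] /= sum;
        }
        // output: attention-weighted sum of value vectors
        double[][] out = new double[n][d];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                for (int t = 0; t < d; t++)
                    out[i][t] += scores[i][j] * v[j][t];
        return out;
    }
}
```

A real transformer layer runs many such heads in parallel on learned projections of the input; this sketch shows only the core weighting computation.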
Nested Class Summary

Modifier and Type      Class                  Description
static final record    Transformer.Options    Transformer architecture configuration.
Constructor Summary

Constructor                                 Description
Transformer(int numTokens)                  Creates a Transformer model with default architecture configuration.
Transformer(Transformer.Options options)    Creates a Transformer model with custom architecture configuration.

Method Summary
Constructor Details

Transformer

public Transformer(int numTokens)

Creates a Transformer model with default architecture configuration.

Parameters:
    numTokens - the number of tokens in the vocabulary.
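Here numTokens is the vocabulary size, i.e. the number of rows in the word embedding table that maps each token id to a vector. A minimal standalone sketch of that relationship (the class and field names below are illustrative, not part of this API):

```java
public class EmbeddingSketch {
    // numTokens x dim table; row i is the embedding vector for token id i.
    final double[][] table;

    EmbeddingSketch(int numTokens, int dim) {
        // weights would normally be randomly initialized, then learned
        table = new double[numTokens][dim];
    }

    // Looks up the embedding vector for a token id in [0, numTokens).
    double[] lookup(int tokenId) {
        return table[tokenId];
    }
}
```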
Transformer

public Transformer(Transformer.Options options)

Creates a Transformer model with custom architecture configuration.

Parameters:
    options - Transformer architecture configuration.
Method Details

init

public void init()

Initializes the model weights.

forward

Forward propagation (or forward pass).

Parameters:
    source - the source sequence.
Returns:
    the log probability of prediction.
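Log probabilities are commonly obtained from a model's raw scores (logits) with a numerically stable log-softmax. The following is an illustrative sketch of that transformation, an assumption about the general technique rather than a statement of how this class computes its output:

```java
public class LogSoftmaxSketch {
    // Numerically stable log-softmax: log(exp(x_i) / sum_j exp(x_j)).
    static double[] logSoftmax(double[] logits) {
        // subtract the maximum before exponentiating to avoid overflow
        double max = Double.NEGATIVE_INFINITY;
        for (double x : logits) max = Math.max(max, x);
        double sum = 0;
        for (double x : logits) sum += Math.exp(x - max);
        double logSum = max + Math.log(sum);
        double[] out = new double[logits.length];
        for (int i = 0; i < logits.length; i++) out[i] = logits[i] - logSum;
        return out;
    }
}
```

Exponentiating the returned values recovers a probability distribution that sums to 1.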
to

Moves the model to a device.

Parameters:
    device - the compute device.
Returns:
    this model.