Adapters
modelgenerator.adapters.MLPAdapter
Bases: Sequential, TokenAdapter
Multi-layer perceptron (MLP) adapter.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
in_features | int | Number of features of the input | required |
out_features | int | Number of features of the output | required |
hidden_sizes | List[int] | List of the hidden feature dimensions. Defaults to []. | [] |
activation_layer | Callable[..., Module] | Activation function. Defaults to torch.nn.Tanh. | Tanh |
bias | bool | Whether to use bias in the linear layer. Defaults to True. | True |
dropout | float | The probability for the dropout layer. Defaults to 0.0. | 0.0 |
dropout_in_middle | bool | Whether to use dropout in the middle layers. Defaults to True. | True |
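A minimal construction sketch, assuming the constructor matches the parameter table above and that, as a TokenAdapter, the module maps per-token hidden states of shape (batch, seq_len, in_features) to (batch, seq_len, out_features). The forward shapes and the example dimensions are assumptions, not taken from this page.

```python
import torch
from modelgenerator.adapters import MLPAdapter

# Two hidden layers of 512 units with Tanh activations and dropout.
# Constructor arguments follow the parameter table above; the forward layout
# (batch, seq_len, in_features) -> (batch, seq_len, out_features) is an assumption.
adapter = MLPAdapter(
    in_features=768,
    out_features=2,
    hidden_sizes=[512, 512],
    activation_layer=torch.nn.Tanh,
    dropout=0.1,
)

hidden_states = torch.randn(4, 128, 768)  # (batch, seq_len, in_features), assumed layout
token_logits = adapter(hidden_states)     # expected shape (4, 128, 2)
```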
modelgenerator.adapters.MLPAdapterWithoutOutConcat
Bases: Module, TokenAdapter
Multi-layer perceptron (MLP) adapter without outer concatenation.
This class is generally used in PairwiseTokenClassification. The following two implementations are equivalent:
1. hidden_states -> outer_concat -> MLPAdapter
2. hidden_states -> MLPAdapterWithoutOutConcat
MLPAdapterWithoutOutConcat avoids the large memory consumption of outer_concat (see the sketch after the parameter table below).
Parameters:
Name | Type | Description | Default |
---|---|---|---|
in_features | int | Number of features of the input | required |
out_features | int | Number of features of the output | required |
hidden_sizes | List[int] | List of the hidden feature dimensions. Defaults to []. | [] |
activation_layer | Callable[..., Module] | Activation function. Defaults to torch.nn.Tanh. | Tanh |
bias | bool | Whether to use bias in the linear layer. Defaults to True. | True |
dropout | float | The probability for the dropout layer. Defaults to 0.0. | 0.0 |
dropout_in_middle | bool | Whether to use dropout in the middle layers. Defaults to True. | True |
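A hedged sketch of the equivalence stated above. Route 1 materializes an outer-concatenated tensor of shape (batch, seq_len, seq_len, 2 * embed_dim) before the MLP, which grows quadratically with sequence length; route 2 hands the raw hidden states to this adapter instead. The forward signature, output shape, and whether in_features refers to the per-token width or the concatenated width are assumptions to verify against the source.

```python
import torch
from modelgenerator.adapters import MLPAdapterWithoutOutConcat

batch, seq_len, embed_dim = 2, 256, 768

# Pairwise-token head without the quadratic outer_concat tensor.
# Whether in_features should be the per-token width (as here) or the
# concatenated width is an assumption, not confirmed by this page.
adapter = MLPAdapterWithoutOutConcat(
    in_features=embed_dim,
    out_features=2,
    hidden_sizes=[512],
)

hidden_states = torch.randn(batch, seq_len, embed_dim)
pairwise_logits = adapter(hidden_states)  # expected (batch, seq_len, seq_len, 2); assumed shape
```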
modelgenerator.adapters.LinearAdapter
Bases: MLPAdapter
Simple linear adapter for a 1D embedding
Parameters:
Name | Type | Description | Default |
---|---|---|---|
in_features | int | Number of input features | required |
out_features | int | Number of output features | required |
modelgenerator.adapters.LinearCLSAdapter
Bases: Module, SequenceAdapter
Simple linear adapter for a 1D embedding
Parameters:
Name | Type | Description | Default |
---|---|---|---|
in_features | int | Number of input features | required |
out_features | int | Number of output features | required |
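LinearAdapter and LinearCLSAdapter share the same two-argument constructor but implement different adapter interfaces (TokenAdapter via MLPAdapter vs. SequenceAdapter). A minimal sketch under the assumption that the sequence-level adapter reduces the backbone output to one prediction per sequence; the exact forward signature of LinearCLSAdapter (CLS token vs. pooled input, any mask argument) is not documented on this page and should be checked against the source.

```python
import torch
from modelgenerator.adapters import LinearAdapter, LinearCLSAdapter

embed_dim, num_classes = 768, 10

# Per-embedding projection: LinearAdapter subclasses MLPAdapter (an nn.Sequential),
# so applying it to a 1D embedding per sample is assumed to work as a plain Linear.
token_head = LinearAdapter(in_features=embed_dim, out_features=num_classes)
pooled = torch.randn(8, embed_dim)
logits = token_head(pooled)  # expected shape (8, 10)

# Sequence-level head: constructor follows the table above; how it consumes
# the hidden states at forward time is an assumption to verify.
seq_head = LinearCLSAdapter(in_features=embed_dim, out_features=num_classes)
```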
modelgenerator.adapters.LinearTransformerAdapter
Bases: Module, SequenceAdapter
Transformer adapter
Note: Supports cls_pooling only.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
embed_dim | int | Hidden size | required |
out_features | int | Number of output features | required |
modelgenerator.adapters.ConditionalLMAdapter
Bases: Module, ConditionalGenerationAdapter
Conditional sequence adapter
Parameters:
Name | Type | Description | Default |
---|---|---|---|
in_features | int | Number of input features | required |
embed_dim | int | Hidden size | required |
seq_len | int | Sequence length | required |
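A construction-only sketch: the three arguments come straight from the table above, while the role of the adapter at forward time (how conditioning inputs are passed and what is returned) is not documented here and is left out.

```python
from modelgenerator.adapters import ConditionalLMAdapter

# Conditional generation adapter over a fixed-length sequence.
# Values are illustrative; argument meanings follow the parameter table.
cond_adapter = ConditionalLMAdapter(
    in_features=768,  # number of input features
    embed_dim=512,    # hidden size
    seq_len=256,      # sequence length
)
```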
modelgenerator.adapters.MMFusionSeqAdapter
Bases: Module, FusionAdapter
Multimodal embeddings fusion with SequenceAdapter.
Note: Accepts 2-3 sequence embeddings as input and fuses them into a single multimodal embedding for the adapter.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
out_features | int | Number of output features | required |
input_size | int | Number of input features for the first modality | required |
input_size_1 | int | Number of input features for the second modality | required |
input_size_2 | (int, optional) | Number of input features for the third modality | None |
fusion | (Callable, Module) | The callable that returns a fusion module. Defaults to CrossAttentionFusion. | CrossAttentionFusion |
adapter | Callable[[int, int], SequenceAdapter] | The callable that returns an adapter. Defaults to LinearCLSAdapter. | LinearCLSAdapter |
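A hedged construction sketch for fusing two modalities. Only the required arguments from the table are passed, keeping the documented defaults (CrossAttentionFusion for fusion, LinearCLSAdapter for adapter); how forward consumes the per-modality hidden states is not documented on this page, so no call is shown.

```python
from modelgenerator.adapters import MMFusionSeqAdapter

# Fuse a 768-dim embedding from a first modality with a 1024-dim embedding
# from a second modality into a 2-class sequence-level head.
fusion_head = MMFusionSeqAdapter(
    out_features=2,
    input_size=768,      # first modality
    input_size_1=1024,   # second modality
    # input_size_2=None  # optional third modality
)
```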
modelgenerator.adapters.MMFusionTokenAdapter
Bases: Module, FusionAdapter
Multimodal embeddings fusion with TokenAdapter; fuses embeddings at the token level.
Note: Accepts 2-3 sequence embeddings as input and fuses them into a single multimodal embedding for the adapter.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
out_features | int | Number of output features | required |
input_size | int | Number of input features for the first modality | required |
input_size_1 | int | Number of input features for the second modality | required |
input_size_2 | (int, optional) | Number of input features for the third modality | None |
fusion | (Callable, Module) | The callable that returns a fusion module. Defaults to CrossAttentionFusion. | CrossAttentionFusion |
adapter | Callable[[int, int], TokenAdapter] | The callable that returns an adapter. Defaults to MLPAdapter. | MLPAdapter |