Adding Tasks

Tasks are use-cases for pre-trained foundation models.

Pre-trained foundation models (FMs, backbones) improve performance across a wide range of ML tasks. However, tasks utilize FMs in very different ways, often requiring a unique reimplementation or adaptation for every backbone-task pair, a process that is time-consuming and error-prone. For FM-enabled research and development to be practical, modularity and reusability are essential.

AIDO.ModelGenerator tasks enable rapid prototyping and experimentation through hot-swappable backbone and adapter components built on standard interfaces. This is made possible by the PyTorch Lightning framework, which provides the LightningModule interface for hardware-agnostic training, evaluation, and prediction, as well as config-driven experiment management and extensive CLI support.

Available Tasks: Inference, MLM, SequenceClassification, TokenClassification, PairwiseTokenClassification, Diffusion, ConditionalDiffusion, SequenceRegression, Embed

Note: Adapters and Backbones are typed as Callables, since some args are reserved to automatically configure the adapter with the backbone. Create an AdapterCallable signature for a task to specify which arguments are configurable, and which are reserved.
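
As an illustrative sketch (the alias name below is hypothetical, but the reserved-argument pattern matches the Callable[[int, int], SequenceAdapter] type used by SequenceRegression further down this page), a sequence-level task reserves the first two positional arguments for wiring and leaves everything else to the user:

# Sketch only: the alias name is hypothetical; the pattern mirrors SequenceRegression.
from typing import Callable

from modelgenerator.adapters import LinearCLSAdapter, SequenceAdapter

# Reserved args: (embedding_size, num_outputs) -> adapter instance.
# The task fills these in automatically inside configure_model.
SequenceAdapterCallable = Callable[[int, int], SequenceAdapter]

# The default adapter satisfies this signature directly.
adapter_fn: SequenceAdapterCallable = LinearCLSAdapter

# A custom adapter with extra, user-configurable arguments would bind them up
# front, leaving only the reserved args for the task to fill
# (hypothetical MyMLPAdapter shown for illustration):
# adapter_fn = functools.partial(MyMLPAdapter, hidden_size=256, dropout=0.1)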

Adding Adapters

Adapters link a backbone's outputs to a task's objective function.

They are simple nn.Module objects that use the backbone interface to configure their weights and forward pass. They are constructed within the task's configure_model method. Each task accepts only a specific adapter type, which all adapters for that task must subclass. See the SequenceAdapter type and the LinearCLSAdapter implementation used by SequenceRegression as an example below.
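
For instance, a minimal sketch of a custom adapter (not part of the library) that satisfies the same SequenceAdapter contract: it accepts the reserved (in_features, out_features) arguments and maps backbone hidden states to sequence-level predictions.

import torch
from torch import Tensor, nn

from modelgenerator.adapters import SequenceAdapter


class MeanPoolAdapter(nn.Module, SequenceAdapter):
    """Hypothetical adapter: mean-pools token embeddings, then projects."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, hidden_states: Tensor, attention_mask: Tensor = None) -> Tensor:
        # hidden_states: (n, seq_len, in_features); attention_mask: (n, seq_len)
        if attention_mask is None:
            pooled = hidden_states.mean(dim=1)
        else:
            mask = attention_mask.unsqueeze(-1).to(hidden_states.dtype)
            pooled = (hidden_states * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1)
        return self.linear(pooled)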

modelgenerator.tasks.TaskInterface

Bases: LightningModule

Interface class to ensure consistent implementation of essential methods for all tasks.

Note

Tasks will usually take a backbone and adapter as arguments, but these are not strictly required. See the SequenceRegression task for a succinct example implementation. The interface handles the boilerplate of setting up training, validation, and testing steps, as well as the optimizer and learning rate scheduler. Subclasses must implement the __init__, configure_model, transform, forward, and evaluate methods.
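
A minimal sketch of a custom task following this contract (assuming a backbone exposing the tokenize, get_embedding_size, and forward interfaces used by the built-in tasks; see SequenceRegression below for the full pattern):

import torch
from torch import Tensor, nn

from modelgenerator.tasks import TaskInterface


class MySequenceTask(TaskInterface):
    """Sketch only: a regression-style task wiring a backbone to an adapter."""

    def __init__(self, backbone, adapter, **kwargs):
        super().__init__(**kwargs)
        if self.__class__ is MySequenceTask:
            self.save_hyperparameters()
        self.backbone_fn = backbone
        self.adapter_fn = adapter
        self.backbone = None
        self.adapter = None
        self.loss = nn.MSELoss()

    def configure_model(self) -> None:
        if self.backbone is not None:
            return
        self.backbone = self.backbone_fn(None, None)
        self.adapter = self.adapter_fn(self.backbone.get_embedding_size(), 1)

    def transform(self, batch, batch_idx=None):
        # Tokenize with the backbone interface; move tensors to the right device.
        input_ids, attention_mask, _ = self.backbone.tokenize(batch["sequences"])
        return {
            "input_ids": torch.tensor(input_ids, dtype=torch.long, device=self.device),
            "attention_mask": torch.tensor(attention_mask, dtype=torch.long, device=self.device),
            "labels": batch["labels"].to(self.device, dtype=self.dtype),
        }

    def forward(self, collated_batch) -> Tensor:
        hidden_states = self.backbone(
            collated_batch["input_ids"], collated_batch["attention_mask"]
        )
        return self.adapter(hidden_states, collated_batch["attention_mask"])

    def evaluate(self, preds, collated_batch, stage=None, loss_only=False):
        loss = self.loss(preds, collated_batch["labels"])
        if loss_only:
            return {"loss": loss}
        for metric in self.get_metrics_by_stage(stage).values():
            self.call_or_update_metric(stage, metric, preds, collated_batch["labels"])
        return {"loss": loss}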

Parameters:

use_legacy_adapter (bool): Whether to use the adapter from the backbone (HF head support). Defaults to False.
strict_loading (bool): Whether to strictly load the model. Defaults to True. Set it to False if you want to replace the adapter (e.g. for continued pretraining).
batch_size (int): The batch size to use for training. Defaults to None.
optimizer (OptimizerCallable): The optimizer to use for training. Defaults to torch.optim.AdamW.
reset_optimizer_states (bool): Whether to reset the optimizer states. Defaults to False. Set it to True if you want to replace the adapter (e.g. for continued pretraining).
lr_scheduler (LRSchedulerCallable): The learning rate scheduler to use for training. Defaults to None.

Source code in modelgenerator/tasks/base.py
class TaskInterface(pl.LightningModule):
    """Interface class to ensure consistent implementation of essential methods for all tasks.

    Note:
        Tasks will usually take a backbone and adapter as arguments, but these are not strictly required.
        See [SequenceRegression](./#modelgenerator.tasks.SequenceRegression) task for a succinct example implementation.
        Handles the boilerplate of setting up training, validation, and testing steps,
        as well as the optimizer and learning rate scheduler. Subclasses must implement
        the __init__, configure_model, transform, forward, and evaluate methods.

    Args:
        use_legacy_adapter (bool, optional):
            Whether to use the adapter from the backbone (HF head support). Defaults to False.
        strict_loading (bool, optional): Whether to strictly load the model. Defaults to True.
            Set it to False if you want to replace the adapter (e.g. for continued pretraining)
        batch_size (int, optional): The batch size to use for training. Defaults to None.
        optimizer (OptimizerCallable, optional): The optimizer to use for training. Defaults to torch.optim.AdamW.
        reset_optimizer_states (bool, optional): Whether to reset the optimizer states. Defaults to False.
            Set it to True if you want to replace the adapter (e.g. for continued pretraining).
        lr_scheduler (LRSchedulerCallable, optional): The learning rate scheduler to use for training. Defaults to None.
    """

    def __init__(
        self,
        optimizer: OptimizerCallable = torch.optim.AdamW,
        lr_scheduler: Optional[LRSchedulerCallable] = None,
        batch_size: Optional[int] = None,
        use_legacy_adapter: bool = False,
        strict_loading: bool = True,
        reset_optimizer_states: bool = False,
        **kwargs,
    ):
        super().__init__(**kwargs)
        # NOTE: A very explicit way of preventing unwanted hparams from being
        # saved due to inheritance. All subclasses should include the
        # following condition under super().__init__().
        # Converting it to a reusable method could work but it would rely
        # on the implementation detail of save_hyperparameters() walking up
        # the call stack, which can change at any time.
        if self.__class__ is TaskInterface:
            self.save_hyperparameters()
        self.optimizer = optimizer
        self.lr_scheduler = lr_scheduler
        self.batch_size = batch_size
        self.use_legacy_adapter = use_legacy_adapter
        self.metrics = nn.ModuleDict(
            {
                "train_metrics": nn.ModuleDict(),
                "val_metrics": nn.ModuleDict(),
                "test_metrics": nn.ModuleDict(),
            }
        )
        self.metrics_to_pbar: Set[str] = set()
        self.strict_loading = strict_loading
        self.reset_optimizer_states = reset_optimizer_states

    def configure_model(self) -> None:
        """Configures the model for training and interence. Subclasses must implement this method."""
        raise NotImplementedError

    def transform(
        self, batch: dict[str, Union[list, Tensor]], batch_idx: int
    ) -> dict[str, Union[list, Tensor]]:
        """Collates and tokenizes a batch of data into a format that can be passed to the forward and evaluate methods. Subclasses must implement this method.

        Note:
            Tokenization is handled here using the backbone interface.
            Tensor typing and device moving should be handled here.

        Args:
            batch (dict[str, Union[list, Tensor]]): A batch of data from the DataLoader
            batch_idx (int): The index of the current batch in the DataLoader

        Returns:
            dict[str, Union[list, Tensor]]: The collated batch
        """
        raise NotImplementedError

    def forward(self, collated_batch: dict[str, Union[list, Tensor]]) -> Tensor:
        """Runs a forward pass of the model on the collated batch of data. Subclasses must implement this method.

        Args:
            collated_batch (dict[str, Union[list, Tensor]]): The collated batch of data from transform.

        Returns:
            Tensor: The model predictions
        """
        raise NotImplementedError

    def evaluate(
        self,
        preds: Tensor,
        collated_batch: dict[str, Union[list, Tensor]],
        stage: Optional[Literal["train", "val", "test"]] = None,
        loss_only: bool = False,
    ) -> dict[str, Union[Tensor, float]]:
        """Calculate loss and update metrics states. Subclasses must implement this method.

        Args:
            preds (Tensor): The model predictions from forward.
            collated_batch (dict[str, Union[list, Tensor]]): The collated batch of data from transform.
            stage (str, optional): The stage of training (train, val, test). Defaults to None.
            loss_only (bool, optional): If true, only update loss metric. Defaults to False.

        Returns:
            dict[str, Union[Tensor, float]]: The loss and any additional metrics.
        """
        raise NotImplementedError

    def configure_optimizers(self):
        """Configures the optimizer and learning rate scheduler for training.

        Returns:
            list: A list of optimizers and learning rate schedulers
        """
        config = {
            "optimizer": self.optimizer(
                filter(lambda p: p.requires_grad, self.parameters())
            )
        }
        if self.lr_scheduler is not None:
            scheduler = self.lr_scheduler(config["optimizer"])
            if isinstance(scheduler, LazyLRScheduler):
                scheduler.initialize(self.trainer)
            config["lr_scheduler"] = {
                "scheduler": scheduler,
                "interval": "step",
                "monitor": "train_loss",  # Only used for torch.optim.lr_scheduler.ReduceLROnPlateau
            }
        return config

    def on_save_checkpoint(self, checkpoint: dict):
        if hasattr(self.backbone, "on_save_checkpoint"):
            self.backbone.on_save_checkpoint(checkpoint)

    def on_load_checkpoint(self, checkpoint: dict):
        if self.reset_optimizer_states:
            checkpoint["optimizer_states"] = {}
            checkpoint["lr_schedulers"] = {}

    def training_step(
        self, batch: dict[str, Union[list, Tensor]], batch_idx: Optional[int] = None
    ) -> Tensor:
        """Runs a training step on a batch of data. Calls collate, forward, and evaluate methods in order.

        Args:
            batch (dict[str, Union[list, Tensor]]): A batch of data from the DataLoader
            batch_idx (int, optional): The index of the current batch in the DataLoader

        Returns:
            Tensor: The loss from the training step
        """
        collated_batch = self.transform(batch, batch_idx)
        preds = self.forward(collated_batch)
        outputs = self.evaluate(preds, collated_batch, "train", loss_only=False)
        self.log_loss_and_metrics(outputs["loss"], "train")
        return outputs

    def validation_step(
        self, batch: dict[str, Union[list, Tensor]], batch_idx: Optional[int] = None
    ) -> Tensor:
        """Runs a validation step on a batch of data. Calls collate, forward, and evaluate methods in order.

        Args:
            batch (dict[str, Union[list, Tensor]]): A batch of data from the DataLoader
            batch_idx (int, optional): The index of the current batch in the DataLoader

        Returns:
            Tensor: The loss from the validation step
        """
        collated_batch = self.transform(batch, batch_idx)
        preds = self.forward(collated_batch)
        outputs = self.evaluate(preds, collated_batch, "val", loss_only=False)
        self.log_loss_and_metrics(outputs["loss"], "val")
        return outputs

    def test_step(
        self, batch: dict[str, Union[list, Tensor]], batch_idx: Optional[int] = None
    ) -> Tensor:
        """Runs a test step on a batch of data. Calls collate, forward, and evaluate methods in order.

        Args:
            batch (dict[str, Union[list, Tensor]]): A batch of data from the DataLoader
            batch_idx (int, optional): The index of the current batch in the DataLoader

        Returns:
            Tensor: The loss from the test step
        """
        collated_batch = self.transform(batch, batch_idx)
        preds = self.forward(collated_batch)
        outputs = self.evaluate(preds, collated_batch, "test", loss_only=False)
        self.log_loss_and_metrics(outputs["loss"], "test")
        return {"predictions": preds, **collated_batch}

    def predict_step(
        self, batch: dict[str, Union[list, Tensor]], batch_idx: Optional[int] = None
    ) -> dict[str, Union[list, Tensor]]:
        """Infers predictions from a batch of data. Calls collate and forward methods in order.

        Args:
            batch (dict[str, Union[list, Tensor]]): A batch of data from the DataLoader
            batch_idx (int, optional): The index of the current batch in the DataLoader

        Returns:
            dict[str, Union[list, Tensor]]: The predictions from the model along with the collated batch.
        """
        collated_batch = self.transform(batch, batch_idx)
        preds = self.forward(collated_batch)
        return {"predictions": preds, **collated_batch}

    def get_metrics_by_stage(
        self, stage: Literal["train", "val", "test"]
    ) -> nn.ModuleDict:
        """Returns the metrics dict for a given stage.

        Args:
            stage (str): The stage of training (train, val, test)

        Returns:
            nn.ModuleDict: The metrics for the given stage
        """
        try:
            return self.metrics[f"{stage}_metrics"]
        except KeyError:
            raise ValueError(
                f"Stage must be one of 'train', 'val', or 'test'. Got {stage}"
            )

    def log_loss_and_metrics(
        self, loss: Tensor, stage: Literal["train", "val", "test"]
    ) -> None:
        """Logs the loss and metrics for a given stage.

        Args:
            loss (Tensor): The loss from the training, validation, or testing step
            stage (str): The stage of training (train, val, test)
        """
        self.log(f"{stage}_loss", loss, prog_bar=True, sync_dist=stage != "train")
        for k, v in self.metrics[f"{stage}_metrics"].items():
            self.log(f"{stage}_{k}", v, prog_bar=k in self.metrics_to_pbar)

    def call_or_update_metric(
        self, stage: Literal["train", "val", "test"], metric: tm.Metric, *args, **kwargs
    ):
        if stage == "train":
            # in addition to .update(), metric.__call__ also .compute() the metric
            # for the current batch. However, .compute() may fail if data is insufficient.
            try:
                metric(*args, **kwargs)
            except ValueError:
                metric.update(*args, **kwargs)
        else:
            # update only since per step metrics are not logged in val and test stages
            metric.update(*args, **kwargs)

    @classmethod
    def from_config(cls, config: dict) -> "TaskInterface":
        """Creates a task model from a configuration dictionary

        Args:
            config (Dict[str, Any]): Configuration dictionary

        Returns:
            TaskInterface: Task model
        """
        parser = ArgumentParser()
        parser.add_class_arguments(cls, "model")
        init = parser.instantiate_classes(parser.parse_object(config))
        init.model.configure_model()
        return init.model
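
A usage sketch of from_config (the nesting under a "model" key mirrors the add_class_arguments call above; the values shown are illustrative):

from modelgenerator.tasks import SequenceRegression

# Any direct __init__ argument of the task can appear under "model";
# omitted arguments fall back to their defaults.
config = {"model": {"num_outputs": 2}}
task = SequenceRegression.from_config(config)  # also calls configure_model()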

configure_model()

Configures the model for training and inference. Subclasses must implement this method.

Source code in modelgenerator/tasks/base.py
def configure_model(self) -> None:
    """Configures the model for training and interence. Subclasses must implement this method."""
    raise NotImplementedError

forward(collated_batch)

Runs a forward pass of the model on the collated batch of data. Subclasses must implement this method.

Parameters:

collated_batch (dict[str, Union[list, Tensor]], required): The collated batch of data from transform.

Returns:

Tensor: The model predictions

Source code in modelgenerator/tasks/base.py
def forward(self, collated_batch: dict[str, Union[list, Tensor]]) -> Tensor:
    """Runs a forward pass of the model on the collated batch of data. Subclasses must implement this method.

    Args:
        collated_batch (dict[str, Union[list, Tensor]]): The collated batch of data from transform.

    Returns:
        Tensor: The model predictions
    """
    raise NotImplementedError

evaluate(preds, collated_batch, stage=None, loss_only=False)

Calculate loss and update metrics states. Subclasses must implement this method.

Parameters:

preds (Tensor, required): The model predictions from forward.
collated_batch (dict[str, Union[list, Tensor]], required): The collated batch of data from transform.
stage (str): The stage of training (train, val, test). Defaults to None.
loss_only (bool): If true, only update loss metric. Defaults to False.

Returns:

dict[str, Union[Tensor, float]]: The loss and any additional metrics.

Source code in modelgenerator/tasks/base.py
def evaluate(
    self,
    preds: Tensor,
    collated_batch: dict[str, Union[list, Tensor]],
    stage: Optional[Literal["train", "val", "test"]] = None,
    loss_only: bool = False,
) -> dict[str, Union[Tensor, float]]:
    """Calculate loss and update metrics states. Subclasses must implement this method.

    Args:
        preds (Tensor): The model predictions from forward.
        collated_batch (dict[str, Union[list, Tensor]]): The collated batch of data from transform.
        stage (str, optional): The stage of training (train, val, test). Defaults to None.
        loss_only (bool, optional): If true, only update loss metric. Defaults to False.

    Returns:
        dict[str, Union[Tensor, float]]: The loss and any additional metrics.
    """
    raise NotImplementedError

Examples

modelgenerator.tasks.SequenceRegression

Bases: TaskInterface

Task for fine-tuning a model on a regression task.

Parameters:

backbone (BackboneCallable): The callable that returns a backbone. Defaults to aido_dna_dummy.
adapter (Callable[[int, int], SequenceAdapter]): The callable that returns an adapter. Defaults to LinearCLSAdapter.
num_outputs (int): The number of outputs in the regression task. Defaults to 1.
optimizer (OptimizerCallable): The optimizer to use for training. Defaults to torch.optim.AdamW.
lr_scheduler (LRSchedulerCallable): The learning rate scheduler to use for training. Defaults to None.
batch_size (int): The batch size to use for training. Defaults to None.
strict_loading (bool): Whether to strictly load the model. Defaults to True. Set it to False if you want to replace the adapter (e.g. for continued pretraining).
reset_optimizer_states (bool): Whether to reset the optimizer states. Defaults to False. Set it to True if you want to replace the adapter (e.g. for continued pretraining).

Source code in modelgenerator/tasks/tasks.py
class SequenceRegression(TaskInterface):
    """Task for fine-tuning a model on a regression task.

    Args:
        backbone (BackboneCallable, optional): The callable that returns a backbone. Defaults to aido_dna_dummy.
        adapter (Callable[[int, int], SequenceAdapter], optional): The callable that returns an adapter. Defaults to LinearCLSAdapter.
        num_outputs (int, optional): The number of outputs in the regression task. Defaults to 1.
        optimizer (OptimizerCallable, optional): The optimizer to use for training. Defaults to torch.optim.AdamW.
        lr_scheduler (LRSchedulerCallable, optional): The learning rate scheduler to use for training. Defaults to None.
        batch_size (int, optional): The batch size to use for training. Defaults to None.
        strict_loading (bool, optional): Whether to strictly load the model. Defaults to True.
            Set it to False if you want to replace the adapter (e.g. for continued pretraining)
        reset_optimizer_states (bool, optional): Whether to reset the optimizer states. Defaults to False.
            Set it to True if you want to replace the adapter (e.g. for continued pretraining).
    """

    def __init__(
        self,
        backbone: BackboneCallable = aido_dna_dummy,
        adapter: Optional[Callable[[int, int], SequenceAdapter]] = LinearCLSAdapter,
        num_outputs: int = 1,
        **kwargs,
    ):
        super().__init__(**kwargs)
        if self.__class__ is SequenceRegression:
            self.save_hyperparameters()
        self.backbone_fn = backbone
        self.adapter_fn = adapter
        self.num_outputs = num_outputs
        self.backbone = None
        self.adapter = None
        self.loss = nn.MSELoss()
        for stage in ["train", "val", "test"]:
            self.metrics[f"{stage}_metrics"] = nn.ModuleDict(
                {
                    "pearson": tm.PearsonCorrCoef(num_outputs=num_outputs),
                    "spearman": tm.SpearmanCorrCoef(num_outputs=num_outputs),
                    "mae": tm.MeanAbsoluteError(num_outputs=num_outputs),
                    "r2": tm.R2Score(),
                    "mse": tm.MeanSquaredError(num_outputs=num_outputs),
                }
            )
        self.metrics_to_pbar = set(self.metrics["train_metrics"].keys())

    def configure_model(self) -> None:
        if self.backbone is not None:
            return
        if self.use_legacy_adapter:
            self.backbone = self.backbone_fn(
                LegacyAdapterType.SEQ_CLS,
                DefaultConfig(
                    config_overwrites={
                        "problem_type": "regression",
                        "num_labels": self.num_outputs,
                    }
                ),
            )
            self.adapter = self.backbone.get_decoder()
        else:
            self.backbone = self.backbone_fn(None, None)
            self.adapter = self.adapter_fn(
                self.backbone.get_embedding_size(), self.num_outputs
            )

    def transform(
        self, batch: dict[str, Union[list, Tensor]], batch_idx: Optional[int] = None
    ) -> dict[str, Union[list, Tensor]]:
        """Collates a batch of data into a format that can be passed to the forward and evaluate methods.

        Args:
            batch (dict[str, Union[list, Tensor]]): A batch of data containing sequences and labels
            batch_idx (int, optional): The index of the current batch in the DataLoader

        Returns:
            dict[str, Union[list, Tensor]]: The collated batch containing sequences, input_ids, attention_mask, and labels
        """
        input_ids, attention_mask, special_tokens_mask = self.backbone.tokenize(
            batch["sequences"]
        )
        input_ids = torch.tensor(input_ids, dtype=torch.long).to(self.device)
        attention_mask = torch.tensor(attention_mask, dtype=torch.long).to(self.device)
        labels = None
        if batch.get("labels") is not None:
            labels = batch["labels"].to(self.device, dtype=self.dtype)
        return {
            "sequences": batch["sequences"],
            "input_ids": input_ids,
            "attention_mask": attention_mask,
            "special_tokens_mask": special_tokens_mask,
            "labels": labels,
        }

    def forward(self, collated_batch: dict[str, Union[list, Tensor]]) -> Tensor:
        """Runs a forward pass of the model.

        Args:
            collated_batch (dict[str, Union[list, Tensor]]): A collated batch of data containing input_ids and attention_mask.

        Returns:
            Tensor: The regression predictions
        """
        hidden_states = self.backbone(
            collated_batch["input_ids"], collated_batch["attention_mask"]
        )  # (bs, seq_len, dim)
        preds = self.adapter(hidden_states, collated_batch["attention_mask"])
        return preds

    def evaluate(
        self,
        preds: Tensor,
        collated_batch: dict[str, Union[list, Tensor]],
        stage: Optional[Literal["train", "val", "test"]] = None,
        loss_only: bool = False,
    ) -> dict[str, Union[Tensor, float]]:
        """Evaluates the model predictions against the ground truth labels.

        Args:
            preds (Tensor): The model predictions
            collated_batch (dict[str, Union[list, Tensor]]): The collated batch of data containing labels
            loss_only (bool, optional): Whether to only return the loss. Defaults to False.

        Returns:
            dict[str, Union[Tensor, float]]: A dictionary of metrics containing loss and mse
        """
        labels = collated_batch["labels"]
        loss = self.loss(preds, labels)
        if loss_only:
            return {"loss": loss}
        metrics = self.get_metrics_by_stage(stage)
        for metric in metrics.values():
            self.call_or_update_metric(stage, metric, preds, labels)
        return {"loss": loss}

configure_model()

Source code in modelgenerator/tasks/tasks.py
def configure_model(self) -> None:
    if self.backbone is not None:
        return
    if self.use_legacy_adapter:
        self.backbone = self.backbone_fn(
            LegacyAdapterType.SEQ_CLS,
            DefaultConfig(
                config_overwrites={
                    "problem_type": "regression",
                    "num_labels": self.num_outputs,
                }
            ),
        )
        self.adapter = self.backbone.get_decoder()
    else:
        self.backbone = self.backbone_fn(None, None)
        self.adapter = self.adapter_fn(
            self.backbone.get_embedding_size(), self.num_outputs
        )

forward(collated_batch)

Runs a forward pass of the model.

Parameters:

collated_batch (dict[str, Union[list, Tensor]], required): A collated batch of data containing input_ids and attention_mask.

Returns:

Tensor: The regression predictions

Source code in modelgenerator/tasks/tasks.py
def forward(self, collated_batch: dict[str, Union[list, Tensor]]) -> Tensor:
    """Runs a forward pass of the model.

    Args:
        collated_batch (dict[str, Union[list, Tensor]]): A collated batch of data containing input_ids and attention_mask.

    Returns:
        Tensor: The regression predictions
    """
    hidden_states = self.backbone(
        collated_batch["input_ids"], collated_batch["attention_mask"]
    )  # (bs, seq_len, dim)
    preds = self.adapter(hidden_states, collated_batch["attention_mask"])
    return preds

evaluate(preds, collated_batch, stage=None, loss_only=False)

Evaluates the model predictions against the ground truth labels.

Parameters:

preds (Tensor, required): The model predictions
collated_batch (dict[str, Union[list, Tensor]], required): The collated batch of data containing labels
stage (str): The stage of training (train, val, test). Defaults to None.
loss_only (bool): Whether to only return the loss. Defaults to False.

Returns:

dict[str, Union[Tensor, float]]: A dictionary of metrics containing loss and mse

Source code in modelgenerator/tasks/tasks.py
def evaluate(
    self,
    preds: Tensor,
    collated_batch: dict[str, Union[list, Tensor]],
    stage: Optional[Literal["train", "val", "test"]] = None,
    loss_only: bool = False,
) -> dict[str, Union[Tensor, float]]:
    """Evaluates the model predictions against the ground truth labels.

    Args:
        preds (Tensor): The model predictions
        collated_batch (dict[str, Union[list, Tensor]]): The collated batch of data containing labels
        loss_only (bool, optional): Whether to only return the loss. Defaults to False.

    Returns:
        dict[str, Union[Tensor, float]]: A dictionary of metrics containing loss and mse
    """
    labels = collated_batch["labels"]
    loss = self.loss(preds, labels)
    if loss_only:
        return {"loss": loss}
    metrics = self.get_metrics_by_stage(stage)
    for metric in metrics.values():
        self.call_or_update_metric(stage, metric, preds, labels)
    return {"loss": loss}

modelgenerator.adapters.SequenceAdapter

Base class only for type hinting purposes. Used for Callable[[int, int], SequenceAdapter] types.

Source code in modelgenerator/adapters/base.py
class SequenceAdapter:
    """Base class only for type hinting purposes. Used for Callable[[int, int] SequenceAdapter] types."""

    pass

modelgenerator.adapters.LinearCLSAdapter

Bases: Module, SequenceAdapter

Simple linear adapter for a 1D embedding

Parameters:

in_features (int, required): Number of input features
out_features (int, required): Number of output features

Source code in modelgenerator/adapters/adapters.py
class LinearCLSAdapter(nn.Module, SequenceAdapter):
    """Simple linear adapter for a 1D embedding

    Args:
        in_features (int): Number of input features
        out_features (int): Number of output features
    """

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, hidden_states: Tensor, attention_mask: Tensor = None) -> Tensor:
        """Forward pass

        Args:
            hidden_states (torch.Tensor): of shape (n, seq_len, in_features)
            attention_mask (torch.Tensor): of shape (n, seq_len)

        Returns:
            torch.Tensor: predictions (n, out_features)
        """
        output = self.linear(hidden_states[:, 0])
        return output

forward(hidden_states, attention_mask=None)

Forward pass

Parameters:

hidden_states (Tensor, required): of shape (n, seq_len, in_features)
attention_mask (Tensor): of shape (n, seq_len). Defaults to None.

Returns:

Tensor: predictions (n, out_features)

Source code in modelgenerator/adapters/adapters.py
def forward(self, hidden_states: Tensor, attention_mask: Tensor = None) -> Tensor:
    """Forward pass

    Args:
        hidden_states (torch.Tensor): of shape (n, seq_len, in_features)
        attention_mask (torch.Tensor): of shape (n, seq_len)

    Returns:
        torch.Tensor: predictions (n, out_features)
    """
    output = self.linear(hidden_states[:, 0])
    return output
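
Putting the pieces together, a hedged end-to-end sketch: SequenceRegression with its default backbone (aido_dna_dummy) and the LinearCLSAdapter above, run on a toy batch. The sequences and labels are illustrative only.

import torch

from modelgenerator.adapters import LinearCLSAdapter
from modelgenerator.tasks import SequenceRegression

task = SequenceRegression(adapter=LinearCLSAdapter, num_outputs=1)
task.configure_model()  # instantiates the backbone and adapter

batch = {
    "sequences": ["ACGTACGT", "TTGACGTA"],   # toy DNA sequences
    "labels": torch.tensor([[0.5], [1.2]]),  # (batch_size, num_outputs)
}
collated = task.transform(batch, 0)
preds = task.forward(collated)               # (batch_size, num_outputs)
loss = task.evaluate(preds, collated, stage="val", loss_only=True)["loss"]

# Swapping the adapter is a one-line change, e.g. the MeanPoolAdapter sketched
# in "Adding Adapters" above:
# task = SequenceRegression(adapter=MeanPoolAdapter, num_outputs=1)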