Adding Tasks

Tasks are use cases for pre-trained foundation models.

Pre-trained foundation models (FMs, backbones) improve performance across a wide range of ML tasks. However, tasks utilize FMs in very different ways, often requiring a unique reimplementation or adaptation for every backbone-task pair, a process that is time-consuming and error-prone. For FM-enabled research and development to be practical, modularity and reusability are essential.

AIDO.ModelGenerator tasks enable rapid prototyping and experimentation through hot-swappable backbone and adapter components built on standard interfaces (see the sketch below). This is made possible by the PyTorch Lightning framework, whose LightningModule interface provides hardware-agnostic training, evaluation, and prediction, along with config-driven experiment management and extensive CLI support.
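As a minimal sketch of this hot-swapping, the snippet below constructs a SequenceRegression task (documented later on this page) with a made-up ToyBackbone stub that satisfies only the interface pieces shown on this page (tokenize, get_embedding_size, and a forward returning hidden states). ToyBackbone is purely illustrative; in practice any BackboneCallable shipped with AIDO.ModelGenerator would be passed instead.

from functools import partial

import torch
from torch import nn

from modelgenerator.adapters import LinearCLSAdapter
from modelgenerator.tasks import SequenceRegression


class ToyBackbone(nn.Module):
    """Stand-in backbone exposing only what SequenceRegression relies on here."""

    def __init__(self, vocab_size: int = 8, dim: int = 16):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, dim)
        self.dim = dim

    def tokenize(self, sequences, **kwargs):
        # Toy tokenizer: one integer id per character, plus a matching mask.
        ids = [[ord(c) % 8 for c in s] for s in sequences]
        return {"input_ids": ids, "attention_mask": [[1] * len(i) for i in ids]}

    def get_embedding_size(self) -> int:
        return self.dim

    def forward(self, input_ids=None, attention_mask=None, **kwargs):
        return self.embedding(input_ids)  # (batch, seq_len, dim)


# Hot-swap components: the task only sees the standard interfaces.
task = SequenceRegression(
    backbone=lambda *_: ToyBackbone(),  # stand-in for a BackboneCallable
    adapter=LinearCLSAdapter,           # any Callable[[int, int], SequenceAdapter]
    num_outputs=1,
    optimizer=partial(torch.optim.AdamW, lr=1e-4),
)
task.configure_model()  # builds the backbone and adapter

Swapping in a different backbone or adapter only means changing the two callables; the task code itself stays untouched.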

Available Tasks: Inference, MLM, SequenceClassification, TokenClassification, PairwiseTokenClassification, Diffusion, ConditionalDiffusion, SequenceRegression, Embed

Note: Adapters and backbones are typed as Callables, since some arguments are reserved so the task can automatically configure the adapter to match the backbone. Create an AdapterCallable signature for a task to specify which arguments are user-configurable and which are reserved, as in the sketch below.
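A minimal sketch of this convention, using only names documented on this page (SequenceAdapter, LinearCLSAdapter). The alias SequenceAdapterCallable and the MyMLPAdapter in the final comment are introduced here for illustration only: for SequenceRegression-style tasks the reserved arguments are the backbone embedding size and the number of outputs, which the task fills in inside configure_model.

from typing import Callable

from modelgenerator.adapters import LinearCLSAdapter, SequenceAdapter

# The adapter callable signature used by SequenceRegression-style tasks:
# the task reserves (embedding_size, num_outputs) and supplies them itself
# inside configure_model.
SequenceAdapterCallable = Callable[[int, int], SequenceAdapter]

# A plain adapter class already satisfies the signature ...
adapter_fn: SequenceAdapterCallable = LinearCLSAdapter

# ... while user-configurable arguments of a richer (hypothetical) adapter
# would be bound ahead of time, e.g. with functools.partial, leaving only
# the reserved arguments for the task to fill in:
# adapter_fn = functools.partial(MyMLPAdapter, hidden_sizes=[128, 64], dropout=0.1)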

Adding Adapters

Adapters link a backbone's output to a task's objective function.

They are simple nn.Module objects that use the backbone interface to configure their weights and forward pass. Their construction is handled within the task's configure_model method. Each task accepts only a specific adapter type, which all adapters for that task must subclass. See the SequenceAdapter type and the LinearCLSAdapter implementation used by SequenceRegression later on this page, and the custom adapter sketch below.
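As a sketch of what a new adapter might look like, the class below follows the same Callable[[int, int], SequenceAdapter] signature as LinearCLSAdapter but mean-pools over the sequence instead of taking the first token. The name MeanPoolAdapter is illustrative and not part of the library.

from torch import Tensor, nn

from modelgenerator.adapters import SequenceAdapter


class MeanPoolAdapter(nn.Module, SequenceAdapter):
    """Mean-pools token embeddings over the sequence, then applies a linear head."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, hidden_states: Tensor, attention_mask: Tensor = None) -> Tensor:
        # hidden_states: (n, seq_len, in_features); attention_mask: (n, seq_len)
        if attention_mask is None:
            pooled = hidden_states.mean(dim=1)
        else:
            mask = attention_mask.unsqueeze(-1).to(hidden_states.dtype)
            pooled = (hidden_states * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1)
        return self.linear(pooled)

Because it matches the reserved signature, such a class can be passed directly as adapter=MeanPoolAdapter when constructing SequenceRegression.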

modelgenerator.tasks.TaskInterface

Bases: LightningModule

Interface class to ensure consistent implementation of essential methods for all tasks.

Note

Tasks will usually take a backbone and adapter as arguments, but these are not strictly required. See the SequenceRegression task for a succinct example implementation. TaskInterface handles the boilerplate of setting up training, validation, and testing steps, as well as the optimizer and learning rate scheduler. Subclasses must implement the __init__, configure_model, transform, forward, and evaluate methods.

Parameters:

- use_legacy_adapter (bool, optional): Whether to use the adapter from the backbone (HF head support). Defaults to False.
- strict_loading (bool, optional): Whether to strictly load the model. Defaults to True. Set it to False if you want to replace the adapter (e.g. for continued pretraining).
- batch_size (int, optional): The batch size to use for training. Defaults to None.
- optimizer (OptimizerCallable, optional): The optimizer to use for training. Defaults to torch.optim.AdamW.
- reset_optimizer_states (bool, optional): Whether to reset the optimizer states. Defaults to False. Set it to True if you want to replace the adapter (e.g. for continued pretraining).
- lr_scheduler (LRSchedulerCallable, optional): The learning rate scheduler to use for training. Defaults to None.

Source code in modelgenerator/tasks/base.py
class TaskInterface(pl.LightningModule):
    """Interface class to ensure consistent implementation of essential methods for all tasks.

    Note:
        Tasks will usually take a backbone and adapter as arguments, but these are not strictly required.
        See [SequenceRegression](./#modelgenerator.tasks.SequenceRegression) task for a succinct example implementation.
        Handles the boilerplate of setting up training, validation, and testing steps,
        as well as the optimizer and learning rate scheduler. Subclasses must implement
        the __init__, configure_model, transform, forward, and evaluate methods.

    Args:
        use_legacy_adapter (bool, optional):
            Whether to use the adapter from the backbone (HF head support). Defaults to False.
        strict_loading (bool, optional): Whether to strictly load the model. Defaults to True.
            Set it to False if you want to replace the adapter (e.g. for continued pretraining).
        batch_size (int, optional): The batch size to use for training. Defaults to None.
        optimizer (OptimizerCallable, optional): The optimizer to use for training. Defaults to torch.optim.AdamW.
        reset_optimizer_states (bool, optional): Whether to reset the optimizer states. Defaults to False.
            Set it to True if you want to replace the adapter (e.g. for continued pretraining).
        lr_scheduler (LRSchedulerCallable, optional): The learning rate scheduler to use for training. Defaults to None.
    """

    def __init__(
        self,
        optimizer: OptimizerCallable = torch.optim.AdamW,
        lr_scheduler: Optional[LRSchedulerCallable] = None,
        batch_size: Optional[int] = None,
        use_legacy_adapter: bool = False,
        strict_loading: bool = True,
        reset_optimizer_states: bool = False,
        **kwargs,
    ):
        super().__init__(**kwargs)
        # NOTE: A very explicit way of preventing unwanted hparams from being
        # saved due to inheritance. All subclasses should include the
        # following condition under super().__init__().
        # Converting it to a reusable method could work but it would rely
        # on the implementation detail of save_hyperparameters() walking up
        # the call stack, which can change at any time.
        if self.__class__ is TaskInterface:
            self.save_hyperparameters()
        self.optimizer = optimizer
        self.lr_scheduler = lr_scheduler
        self.batch_size = batch_size
        self.use_legacy_adapter = use_legacy_adapter
        self.metrics = nn.ModuleDict(
            {
                "train_metrics": nn.ModuleDict(),
                "val_metrics": nn.ModuleDict(),
                "test_metrics": nn.ModuleDict(),
            }
        )
        self.metrics_to_pbar: Set[str] = {}
        self.strict_loading = strict_loading
        self.reset_optimizer_states = reset_optimizer_states

    def configure_model(self) -> None:
        """Configures the model for training and interence. Subclasses must implement this method."""
        raise NotImplementedError

    def transform(
        self, batch: dict[str, Union[list, Tensor]], batch_idx: int
    ) -> dict[str, Union[list, Tensor]]:
        """Collates and tokenizes a batch of data into a format that can be passed to the forward and evaluate methods. Subclasses must implement this method.

        Note:
            Tokenization is handled here using the backbone interface.
            Tensor typing and device moving should be handled here.

        Args:
            batch (dict[str, Union[list, Tensor]]): A batch of data from the DataLoader
            batch_idx (int): The index of the current batch in the DataLoader

        Returns:
            dict[str, Union[list, Tensor]]: The collated batch
        """
        raise NotImplementedError

    def forward(self, collated_batch: dict[str, Union[list, Tensor]]) -> Tensor:
        """Runs a forward pass of the model on the collated batch of data. Subclasses must implement this method.

        Args:
            collated_batch (dict[str, Union[list, Tensor]]): The collated batch of data from transform.

        Returns:
            Tensor: The model predictions
        """
        raise NotImplementedError

    def evaluate(
        self,
        preds: Tensor,
        collated_batch: dict[str, Union[list, Tensor]],
        stage: Optional[Literal["train", "val", "test"]] = None,
        loss_only: bool = False,
    ) -> dict[str, Union[Tensor, float]]:
        """Calculate loss and update metrics states. Subclasses must implement this method.

        Args:
            preds (Tensor): The model predictions from forward.
            collated_batch (dict[str, Union[list, Tensor]]): The collated batch of data from transform.
            stage (str, optional): The stage of training (train, val, test). Defaults to None.
            loss_only (bool, optional): If true, only update loss metric. Defaults to False.

        Returns:
            dict[str, Union[Tensor, float]]: The loss and any additional metrics.
        """
        raise NotImplementedError

    def configure_optimizers(self):
        """Configures the optimizer and learning rate scheduler for training.

        Returns:
            list: A list of optimizers and learning rate schedulers
        """
        config = {
            "optimizer": self.optimizer(
                filter(lambda p: p.requires_grad, self.parameters())
            )
        }
        if self.lr_scheduler is not None:
            scheduler = self.lr_scheduler(config["optimizer"])
            if isinstance(scheduler, LazyLRScheduler):
                scheduler.initialize(self.trainer)
            config["lr_scheduler"] = {
                "scheduler": scheduler,
                "interval": "step",
                "monitor": "train_loss",  # Only used for torch.optim.lr_scheduler.ReduceLROnPlateau
            }
        return config

    def on_save_checkpoint(self, checkpoint: dict):
        if hasattr(self.backbone, "on_save_checkpoint"):
            self.backbone.on_save_checkpoint(checkpoint)

    def on_load_checkpoint(self, checkpoint: dict):
        if self.reset_optimizer_states:
            checkpoint["optimizer_states"] = {}
            checkpoint["lr_schedulers"] = {}

    def training_step(
        self, batch: dict[str, Union[list, Tensor]], batch_idx: Optional[int] = None
    ) -> Tensor:
        """Runs a training step on a batch of data. Calls collate, forward, and evaluate methods in order.

        Args:
            batch (dict[str, Union[list, Tensor]]): A batch of data from the DataLoader
            batch_idx (int, optional): The index of the current batch in the DataLoader

        Returns:
            Tensor: The loss from the training step
        """
        collated_batch = self.transform(batch, batch_idx)
        preds = self.forward(collated_batch)
        outputs = self.evaluate(preds, collated_batch, "train", loss_only=False)
        self.log_loss_and_metrics(outputs["loss"], "train")
        return outputs

    def validation_step(
        self, batch: dict[str, Union[list, Tensor]], batch_idx: Optional[int] = None
    ) -> Tensor:
        """Runs a validation step on a batch of data. Calls collate, forward, and evaluate methods in order.

        Args:
            batch (dict[str, Union[list, Tensor]]): A batch of data from the DataLoader
            batch_idx (int, optional): The index of the current batch in the DataLoader

        Returns:
            Tensor: The loss from the validation step
        """
        collated_batch = self.transform(batch, batch_idx)
        preds = self.forward(collated_batch)
        outputs = self.evaluate(preds, collated_batch, "val", loss_only=False)
        self.log_loss_and_metrics(outputs["loss"], "val")
        return {"predictions": preds, **batch, **collated_batch, **outputs}

    def test_step(
        self, batch: dict[str, Union[list, Tensor]], batch_idx: Optional[int] = None
    ) -> Tensor:
        """Runs a test step on a batch of data. Calls collate, forward, and evaluate methods in order.

        Args:
            batch (dict[str, Union[list, Tensor]]): A batch of data from the DataLoader
            batch_idx (int, optional): The index of the current batch in the DataLoader

        Returns:
            Tensor: The loss from the test step
        """
        collated_batch = self.transform(batch, batch_idx)
        preds = self.forward(collated_batch)
        outputs = self.evaluate(preds, collated_batch, "test", loss_only=False)
        self.log_loss_and_metrics(outputs["loss"], "test")
        return {"predictions": preds, **batch, **collated_batch, **outputs}

    def predict_step(
        self, batch: dict[str, Union[list, Tensor]], batch_idx: Optional[int] = None
    ) -> dict[str, Union[list, Tensor]]:
        """Infers predictions from a batch of data. Calls collate and forward methods in order.

        Args:
            batch (dict[str, Union[list, Tensor]]): A batch of data from the DataLoader
            batch_idx (int, optional): The index of the current batch in the DataLoader

        Returns:
            dict[str, Union[list, Tensor]]: The predictions from the model along with the collated batch.
        """
        collated_batch = self.transform(batch, batch_idx)
        preds = self.forward(collated_batch)
        return {"predictions": preds, **batch, **collated_batch}

    def get_metrics_by_stage(
        self, stage: Literal["train", "val", "test"]
    ) -> nn.ModuleDict:
        """Returns the metrics dict for a given stage.

        Args:
            stage (str): The stage of training (train, val, test)

        Returns:
            nn.ModuleDict: The metrics for the given stage
        """
        try:
            return self.metrics[f"{stage}_metrics"]
        except KeyError:
            raise ValueError(
                f"Stage must be one of 'train', 'val', or 'test'. Got {stage}"
            )

    def log_loss_and_metrics(
        self, loss: Tensor, stage: Literal["train", "val", "test"]
    ) -> None:
        """Logs the loss and metrics for a given stage.

        Args:
            loss (Tensor): The loss from the training, validation, or testing step
            stage (str): The stage of training (train, val, test)
        """
        self.log(f"{stage}_loss", loss, prog_bar=True, sync_dist=stage != "train")
        for k, v in self.metrics[f"{stage}_metrics"].items():
            self.log(f"{stage}_{k}", v, prog_bar=k in self.metrics_to_pbar)

    def call_or_update_metric(
        self, stage: Literal["train", "val", "test"], metric: tm.Metric, *args, **kwargs
    ):
        if stage == "train":
            # in addition to .update(), metric.__call__ also .compute() the metric
            # for the current batch. However, .compute() may fail if data is insufficient.
            try:
                metric(*args, **kwargs)
            except ValueError:
                metric.update(*args, **kwargs)
        else:
            # update only since per step metrics are not logged in val and test stages
            metric.update(*args, **kwargs)

    @classmethod
    def from_config(cls, config: dict) -> "TaskInterface":
        """Creates a task model from a configuration dictionary

        Args:
            config (Dict[str, Any]): Configuration dictionary

        Returns:
            TaskInterface: Task model
        """
        parser = ArgumentParser()
        parser.add_class_arguments(cls, "model")
        init = parser.instantiate_classes(parser.parse_object(config))
        init.model.configure_model()
        return init.model

configure_model()

Configures the model for training and inference. Subclasses must implement this method.

Source code in modelgenerator/tasks/base.py
def configure_model(self) -> None:
    """Configures the model for training and interence. Subclasses must implement this method."""
    raise NotImplementedError

forward(collated_batch)

Runs a forward pass of the model on the collated batch of data. Subclasses must implement this method.

Parameters:

- collated_batch (dict[str, Union[list, Tensor]]): The collated batch of data from transform. Required.

Returns:

- Tensor: The model predictions.

Source code in modelgenerator/tasks/base.py
def forward(self, collated_batch: dict[str, Union[list, Tensor]]) -> Tensor:
    """Runs a forward pass of the model on the collated batch of data. Subclasses must implement this method.

    Args:
        collated_batch (dict[str, Union[list, Tensor]]): The collated batch of data from transform.

    Returns:
        Tensor: The model predictions
    """
    raise NotImplementedError

evaluate(preds, collated_batch, stage=None, loss_only=False)

Calculate loss and update metrics states. Subclasses must implement this method.

Parameters:

- preds (Tensor): The model predictions from forward. Required.
- collated_batch (dict[str, Union[list, Tensor]]): The collated batch of data from transform. Required.
- stage (str, optional): The stage of training (train, val, test). Defaults to None.
- loss_only (bool, optional): If true, only update loss metric. Defaults to False.

Returns:

- dict[str, Union[Tensor, float]]: The loss and any additional metrics.

Source code in modelgenerator/tasks/base.py
def evaluate(
    self,
    preds: Tensor,
    collated_batch: dict[str, Union[list, Tensor]],
    stage: Optional[Literal["train", "val", "test"]] = None,
    loss_only: bool = False,
) -> dict[str, Union[Tensor, float]]:
    """Calculate loss and update metrics states. Subclasses must implement this method.

    Args:
        preds (Tensor): The model predictions from forward.
        collated_batch (dict[str, Union[list, Tensor]]): The collated batch of data from transform.
        stage (str, optional): The stage of training (train, val, test). Defaults to None.
        loss_only (bool, optional): If true, only update loss metric. Defaults to False.

    Returns:
        dict[str, Union[Tensor, float]]: The loss and any additional metrics.
    """
    raise NotImplementedError

Examples

modelgenerator.tasks.SequenceRegression

Bases: TaskInterface

Task for fine-tuning a model on single-/multi-task regression.

Parameters:

- backbone (BackboneCallable): The callable that returns a backbone. Required.
- adapter (Callable[[int, int], SequenceAdapter], optional): The callable that returns an adapter. Defaults to LinearCLSAdapter.
- num_outputs (int, optional): The number of outputs in the regression task. Defaults to 1.
- loss_func (Callable, optional): Loss function for regression tasks. Defaults to nn.MSELoss.
- optimizer (OptimizerCallable, optional): The optimizer to use for training. Defaults to torch.optim.AdamW.
- lr_scheduler (LRSchedulerCallable, optional): The learning rate scheduler to use for training. Defaults to None.
- batch_size (int, optional): The batch size to use for training. Defaults to None.
- strict_loading (bool, optional): Whether to strictly load the model. Defaults to True. Set it to False if you want to replace the adapter (e.g. for continued pretraining).
- reset_optimizer_states (bool, optional): Whether to reset the optimizer states. Defaults to False. Set it to True if you want to replace the adapter (e.g. for continued pretraining).

Source code in modelgenerator/tasks/tasks.py
class SequenceRegression(TaskInterface):
    """Task for fine-tuning a model on single-/multi-task regression.

    Args:
        backbone (BackboneCallable): The callable that returns a backbone.
        adapter (Callable[[int, int], SequenceAdapter], optional): The callable that returns an adapter. Defaults to LinearCLSAdapter.
        num_outputs (int, optional): The number of outputs in the regression task. Defaults to 1.
        loss_func (Callable, optional): Loss function for regression tasks. Defaults to nn.MSELoss.
        optimizer (OptimizerCallable, optional): The optimizer to use for training. Defaults to torch.optim.AdamW.
        lr_scheduler (LRSchedulerCallable, optional): The learning rate scheduler to use for training. Defaults to None.
        batch_size (int, optional): The batch size to use for training. Defaults to None.
        strict_loading (bool, optional): Whether to strictly load the model. Defaults to True.
            Set it to False if you want to replace the adapter (e.g. for continued pretraining).
        reset_optimizer_states (bool, optional): Whether to reset the optimizer states. Defaults to False.
            Set it to True if you want to replace the adapter (e.g. for continued pretraining).
    """

    def __init__(
        self,
        backbone: BackboneCallable,
        adapter: Optional[Callable[[int, int], SequenceAdapter]] = LinearCLSAdapter,
        num_outputs: int = 1,
        loss_func: Callable[..., torch.nn.Module] = torch.nn.MSELoss,
        log_grad_norm_step: int = 0,
        **kwargs,
    ):
        super().__init__(**kwargs)
        if self.__class__ is SequenceRegression:
            self.save_hyperparameters()
        self.backbone_fn = backbone
        self.adapter_fn = adapter
        self.num_outputs = num_outputs
        self.backbone = None
        self.adapter = None
        self.loss = loss_func()
        self.log_grad_norm_step = log_grad_norm_step
        for stage in ["train", "val", "test"]:
            self.metrics[f"{stage}_metrics"] = nn.ModuleDict(
                {
                    "pearson": PearsonCorrCoef(
                        num_outputs=num_outputs, multioutput="uniform_average"
                    ),
                    "spearman": SpearmanCorrCoef(
                        num_outputs=num_outputs, multioutput="uniform_average"
                    ),
                    "mae": MeanAbsoluteError(
                        num_outputs=num_outputs, multioutput="uniform_average"
                    ),
                    "r2": tm.R2Score(multioutput="uniform_average"),
                    "mse": MeanSquaredError(
                        num_outputs=num_outputs, multioutput="uniform_average"
                    ),
                }
            )
            if stage == "test" and self.num_outputs > 1:
                # calculate scores for each task
                label_wise_spearman = nn.ModuleDict(
                        {
                            "spearman_" + str(i): SpearmanCorrCoef(num_outputs=1)
                            for i in range(self.num_outputs)
                        }
                    )
                label_wise_pearson = nn.ModuleDict(
                        {
                            "pearson_" + str(i): PearsonCorrCoef(num_outputs=1)
                            for i in range(self.num_outputs)
                        }
                    )
                label_wise_r2 = nn.ModuleDict(
                        {
                            "r2_" + str(i): tm.R2Score()
                            for i in range(self.num_outputs)
                        }
                    )
                label_wise_mse = nn.ModuleDict(
                        {
                            "mse_" + str(i): MeanSquaredError(num_outputs=1)
                            for i in range(self.num_outputs)
                        }
                    )
                label_wise_mae = nn.ModuleDict(
                        {
                            "mae_" + str(i): MeanAbsoluteError(num_outputs=1)
                            for i in range(self.num_outputs)
                        }
                    )
                self.metrics[f"{stage}_metrics"].update(label_wise_spearman)
                self.metrics[f"{stage}_metrics"].update(label_wise_pearson)
                self.metrics[f"{stage}_metrics"].update(label_wise_r2)
                self.metrics[f"{stage}_metrics"].update(label_wise_mse)
                self.metrics[f"{stage}_metrics"].update(label_wise_mae)
        self.metrics_to_pbar = set(self.metrics["train_metrics"].keys())

    def configure_model(self) -> None:
        if self.backbone is not None:
            return
        if self.use_legacy_adapter:
            self.backbone = self.backbone_fn(
                LegacyAdapterType.SEQ_CLS,
                DefaultConfig(
                    config_overwrites={
                        "problem_type": "regression",
                        "num_labels": self.num_outputs,
                    }
                ),
            )
            self.adapter = self.backbone.get_decoder()
        else:
            self.backbone = self.backbone_fn(None, None)
            self.adapter = self.adapter_fn(
                self.backbone.get_embedding_size(), self.num_outputs
            )

    def transform(
        self, batch: dict[str, Union[list, Tensor]], batch_idx: Optional[int] = None
    ) -> dict[str, Union[list, Tensor]]:
        """Collates a batch of data into a format that can be passed to the forward and evaluate methods.

        Args:
            batch (dict[str, Union[list, Tensor]]): A batch of data containing sequences and labels
            batch_idx (int, optional): The index of the current batch in the DataLoader

        Returns:
            dict[str, Union[list, Tensor]]: The collated batch containing sequences, input_ids, attention_mask, and labels
        """

        sequences = batch.pop("sequences")
        tokenized_result = self.backbone.tokenize(sequences, **batch)
        input_ids = tokenized_result.pop("input_ids", None)
        attention_mask = tokenized_result.pop("attention_mask", None)
        special_tokens_mask = tokenized_result.pop("special_tokens_mask", None)

        input_ids = torch.tensor(input_ids, dtype=torch.long).to(self.device)
        if attention_mask is not None:
            attention_mask = torch.tensor(attention_mask, dtype=torch.long).to(self.device)
        labels = None
        if batch.get("labels") is not None:
            labels = batch["labels"].to(self.device, dtype=self.dtype)
        return {
            "sequences": sequences,
            "input_ids": input_ids,
            "attention_mask": attention_mask,
            "special_tokens_mask": special_tokens_mask,
            "labels": labels,
            **tokenized_result,
        }

    def forward(self, collated_batch: dict[str, Union[list, Tensor]]) -> Tensor:
        """Runs a forward pass of the model.

        Args:
            collated_batch (dict[str, Union[list, Tensor]]): A collated batch of data containing input_ids and attention_mask.

        Returns:
            Tensor: The regression predictions
        """

        hidden_states = self.backbone(**collated_batch)  # (bs, seq_len, dim)
        preds = self.adapter(hidden_states, collated_batch["attention_mask"])
        return preds

    def evaluate(
        self,
        preds: Tensor,
        collated_batch: dict[str, Union[list, Tensor]],
        stage: Optional[Literal["train", "val", "test"]] = None,
        loss_only: bool = False,
    ) -> dict[str, Union[Tensor, float]]:
        """Evaluates the model predictions against the ground truth labels.

        Args:
            preds (Tensor): The model predictions
            collated_batch (dict[str, Union[list, Tensor]]): The collated batch of data containing labels
            loss_only (bool, optional): Whether to only return the loss. Defaults to False.

        Returns:
            dict[str, Union[Tensor, float]]: A dictionary of metrics containing loss and mse
        """

        labels = collated_batch["labels"]
        loss = self.loss(preds, labels)
        if loss_only:
            return {"loss": loss}
        metrics = self.get_metrics_by_stage(stage)

        if self.num_outputs > 1 and stage == "test":
            for name, metric in metrics.items():
                if len(name.split("_")) == 1:
                    self.call_or_update_metric(stage, metric, preds, labels)
                else:
                    i = int(name.split("_")[-1])
                    self.call_or_update_metric(stage, metric, preds[:, i], labels[:, i])
        else:
            for metric in metrics.values():
                self.call_or_update_metric(stage, metric, preds, labels)

        return {"loss": loss}

    def log_grad_norm(self, optimizer):
        """
        Log the total_norm, adaptor_param_norm, and adaptor_grad_norm.

        Refer to
        https://github.com/Lightning-AI/pytorch-lightning/blob/master/src/lightning/pytorch/plugins/precision/precision.py
        https://github.com/Lightning-AI/pytorch-lightning/blob/master/src/lightning/pytorch/core/module.py
        for the calculation of the gradient norm
        """
        parameters = self.trainer.precision_plugin.main_params(optimizer)
        parameters = list(parameters)
        if len(parameters) > 0:
            assert all([p.requires_grad for p in parameters])
            if all([p.grad is not None for p in parameters]):
                total_norm = vector_norm(
                    torch.stack([vector_norm(p.grad, ord=2) for p in parameters]), ord=2
                )
                adaptor_param_norm = vector_norm(
                    torch.stack(
                        [vector_norm(p, ord=2) for p in self.adapter.parameters()]
                    ),
                    ord=2,
                )
                adaptor_grad_norm = vector_norm(
                    torch.stack(
                        [vector_norm(p.grad, ord=2) for p in self.adapter.parameters()]
                    ),
                    ord=2,
                )

                self.log("total_norm", total_norm, rank_zero_only=True)
                self.log("adaptor_param_norm", adaptor_param_norm, rank_zero_only=True)
                self.log("adaptor_grad_norm", adaptor_grad_norm, rank_zero_only=True)

    def on_before_optimizer_step(self, optimizer):
        """
        Log gradient norm of adaptor's parameters
        """
        if (
            self.log_grad_norm_step > 0
            and self.trainer.global_step % self.log_grad_norm_step == 0
        ):
            self.log_grad_norm(optimizer)

configure_model()

Source code in modelgenerator/tasks/tasks.py
def configure_model(self) -> None:
    if self.backbone is not None:
        return
    if self.use_legacy_adapter:
        self.backbone = self.backbone_fn(
            LegacyAdapterType.SEQ_CLS,
            DefaultConfig(
                config_overwrites={
                    "problem_type": "regression",
                    "num_labels": self.num_outputs,
                }
            ),
        )
        self.adapter = self.backbone.get_decoder()
    else:
        self.backbone = self.backbone_fn(None, None)
        self.adapter = self.adapter_fn(
            self.backbone.get_embedding_size(), self.num_outputs
        )

forward(collated_batch)

Runs a forward pass of the model.

Parameters:

- collated_batch (dict[str, Union[list, Tensor]]): A collated batch of data containing input_ids and attention_mask. Required.

Returns:

- Tensor: The regression predictions.

Source code in modelgenerator/tasks/tasks.py
def forward(self, collated_batch: dict[str, Union[list, Tensor]]) -> Tensor:
    """Runs a forward pass of the model.

    Args:
        collated_batch (dict[str, Union[list, Tensor]]): A collated batch of data containing input_ids and attention_mask.

    Returns:
        Tensor: The regression predictions
    """

    hidden_states = self.backbone(**collated_batch)  # (bs, seq_len, dim)
    preds = self.adapter(hidden_states, collated_batch["attention_mask"])
    return preds

evaluate(preds, collated_batch, stage=None, loss_only=False)

Evaluates the model predictions against the ground truth labels.

Parameters:

- preds (Tensor): The model predictions. Required.
- collated_batch (dict[str, Union[list, Tensor]]): The collated batch of data containing labels. Required.
- stage (str, optional): The stage of training (train, val, test). Defaults to None.
- loss_only (bool, optional): Whether to only return the loss. Defaults to False.

Returns:

- dict[str, Union[Tensor, float]]: A dictionary of metrics containing loss and mse.

Source code in modelgenerator/tasks/tasks.py
def evaluate(
    self,
    preds: Tensor,
    collated_batch: dict[str, Union[list, Tensor]],
    stage: Optional[Literal["train", "val", "test"]] = None,
    loss_only: bool = False,
) -> dict[str, Union[Tensor, float]]:
    """Evaluates the model predictions against the ground truth labels.

    Args:
        preds (Tensor): The model predictions
        collated_batch (dict[str, Union[list, Tensor]]): The collated batch of data containing labels
        loss_only (bool, optional): Whether to only return the loss. Defaults to False.

    Returns:
        dict[str, Union[Tensor, float]]: A dictionary of metrics containing loss and mse
    """

    labels = collated_batch["labels"]
    loss = self.loss(preds, labels)
    if loss_only:
        return {"loss": loss}
    metrics = self.get_metrics_by_stage(stage)

    if self.num_outputs > 1 and stage == "test":
        for name, metric in metrics.items():
            if len(name.split("_")) == 1:
                self.call_or_update_metric(stage, metric, preds, labels)
            else:
                i = int(name.split("_")[-1])
                self.call_or_update_metric(stage, metric, preds[:, i], labels[:, i])
    else:
        for metric in metrics.values():
            self.call_or_update_metric(stage, metric, preds, labels)

    return {"loss": loss}

modelgenerator.adapters.SequenceAdapter

Base class only for type hinting purposes. Used for Callable[[int, int], SequenceAdapter] types.

Source code in modelgenerator/adapters/base.py
class SequenceAdapter:
    """Base class only for type hinting purposes. Used for Callable[[int, int] SequenceAdapter] types."""

    pass

modelgenerator.adapters.LinearCLSAdapter

Bases: Module, SequenceAdapter

Simple linear adapter for a 1D embedding

Parameters:

- in_features (int): Number of input features. Required.
- out_features (int): Number of output features. Required.

Source code in modelgenerator/adapters/adapters.py
class LinearCLSAdapter(nn.Module, SequenceAdapter):
    """Simple linear adapter for a 1D embedding

    Args:
        in_features (int): Number of input features
        out_features (int): Number of output features
    """

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, hidden_states: Tensor, attention_mask: Tensor = None) -> Tensor:
        """Forward pass

        Args:
            hidden_states (torch.Tensor): of shape (n, seq_len, in_features)
            attention_mask (torch.Tensor): of shape (n, seq_len)

        Returns:
            torch.Tensor: predictions (n, out_features)
        """
        output = self.linear(hidden_states[:, 0])
        return output

forward(hidden_states, attention_mask=None)

Forward pass

Parameters:

- hidden_states (Tensor): of shape (n, seq_len, in_features). Required.
- attention_mask (Tensor, optional): of shape (n, seq_len). Defaults to None.

Returns:

- Tensor: predictions of shape (n, out_features).

Source code in modelgenerator/adapters/adapters.py
def forward(self, hidden_states: Tensor, attention_mask: Tensor = None) -> Tensor:
    """Forward pass

    Args:
        hidden_states (torch.Tensor): of shape (n, seq_len, in_features)
        attention_mask (torch.Tensor): of shape (n, seq_len)

    Returns:
        torch.Tensor: predictions (n, out_features)
    """
    output = self.linear(hidden_states[:, 0])
    return output
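
As a quick, self-contained check of the adapter's input and output shapes (the sizes below are arbitrary and chosen only for illustration):

import torch

from modelgenerator.adapters import LinearCLSAdapter

adapter = LinearCLSAdapter(in_features=768, out_features=1)
hidden_states = torch.randn(4, 128, 768)               # (n, seq_len, in_features)
attention_mask = torch.ones(4, 128, dtype=torch.long)  # accepted but unused by this adapter
preds = adapter(hidden_states, attention_mask)
print(preds.shape)  # torch.Size([4, 1])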