
Adding Tasks

Tasks are use-cases for pre-trained foundation models.

Pre-trained foundation models (FMs, backbones) improve performance across a wide range of ML tasks. However, tasks utilize FMs in very different ways, often requiring a unique reimplementation or adaptation for every backbone-task pair, a process that is time-consuming and error-prone. For FM-enabled research and development to be practical, modularity and reusability are essential.

AIDO.ModelGenerator tasks enable rapid prototyping and experimentation through hot-swappable backbone and adapter components that share standard interfaces. This is made possible by the PyTorch Lightning framework, which provides the LightningModule interface for hardware-agnostic training, evaluation, and prediction, as well as config-driven experiment management and extensive CLI support.
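
To make the hot-swapping concrete, here is a minimal sketch that builds a task directly in Python, changing nothing but constructor arguments. The backbone name is an assumption: aido_dna_dummy stands in for whichever backbone factory you have installed under modelgenerator.backbones; SequenceRegression and LinearCLSAdapter are documented further down this page.

# Minimal sketch of component hot-swapping (backbone name is an assumption; adjust to your install).
from modelgenerator.tasks import SequenceRegression
from modelgenerator.adapters import LinearCLSAdapter
from modelgenerator.backbones import aido_dna_dummy  # assumed small debugging backbone

# The same task works with any backbone/adapter pair that follows the standard interfaces.
task = SequenceRegression(
    backbone=aido_dna_dummy,   # any BackboneCallable
    adapter=LinearCLSAdapter,  # any Callable[[int, int], SequenceAdapter]
    num_outputs=1,
)
task.configure_model()  # lazily instantiates the backbone and adapter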

Available Tasks: Inference, MLM, SequenceClassification, TokenClassification, PairwiseTokenClassification, Diffusion, ConditionalDiffusion, SequenceRegression, Embed

Note: Adapters and Backbones are typed as Callables, since some arguments are reserved to automatically configure the adapter with the backbone. Create an AdapterCallable signature for a task to specify which arguments are configurable and which are reserved, as sketched below.
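
For example, SequenceRegression types its adapter as Callable[[int, int], SequenceAdapter]: the two integers (embedding size and number of outputs) are reserved and filled in by the task inside configure_model, while any other constructor arguments must be bound beforehand. A minimal sketch, where MyMLPAdapter and its hidden_size argument are hypothetical:

from functools import partial
from modelgenerator.adapters import LinearCLSAdapter

# LinearCLSAdapter needs only the two reserved arguments, so the class itself is a valid
# Callable[[int, int], SequenceAdapter] and can be passed to a task directly:
adapter_callable = LinearCLSAdapter

# An adapter with extra constructor arguments (hypothetical hidden_size here) is bound first,
# so that the remaining signature is still (in_features, out_features):
# adapter_callable = partial(MyMLPAdapter, hidden_size=256)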

Adding Adapters

Adapters serve as a linker between a backbone's output and a task's objective function.

They are simple nn.Module objects that use the backbone interface to configure their weights and forward pass. Their construction is handled within the task's configure_model method. Each task accepts only a specific adapter type, which all adapters for that task must subclass. See the SequenceAdapter type and the LinearCLSAdapter implementation used by SequenceRegression in the examples below.
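
As a concrete illustration, here is a hedged sketch of a custom adapter (MeanPoolingAdapter is a hypothetical name, not part of the library) that follows the same contract as LinearCLSAdapter shown at the bottom of this page: it is constructed from (in_features, out_features) and its forward takes (hidden_states, attention_mask).

import torch
import torch.nn as nn
from torch import Tensor
from modelgenerator.adapters import SequenceAdapter


class MeanPoolingAdapter(nn.Module, SequenceAdapter):
    """Hypothetical adapter: masked mean pooling over tokens, then a linear head."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, hidden_states: Tensor, attention_mask: Tensor = None) -> Tensor:
        # hidden_states: (n, seq_len, in_features); attention_mask: (n, seq_len)
        if attention_mask is None:
            pooled = hidden_states.mean(dim=1)
        else:
            mask = attention_mask.unsqueeze(-1).to(hidden_states.dtype)
            pooled = (hidden_states * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1)
        return self.linear(pooled)  # (n, out_features)

Because it keeps the (in_features, out_features) constructor and the (hidden_states, attention_mask) forward signature, it can be passed to SequenceRegression in place of LinearCLSAdapter.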

modelgenerator.tasks.TaskInterface

Bases: LightningModule

Interface class to ensure consistent implementation of essential methods for all tasks.

Note

Tasks will usually take a backbone and adapter as arguments, but these are not strictly required. See the SequenceRegression task for a succinct example implementation. The interface handles the boilerplate of setting up training, validation, and testing steps, as well as the optimizer and learning rate scheduler. Subclasses must implement the __init__, configure_model, transform, forward, and evaluate methods.

Parameters:

optimizer (OptimizerCallable, default: AdamW): The optimizer to use for training.
lr_scheduler (Optional[LRSchedulerCallable], default: None): The learning rate scheduler to use for training.
batch_size (Optional[int], default: None): The batch size to use for training.
use_legacy_adapter (bool, default: False): Whether to use the adapter from the backbone (HF head support). Warning: This is not supported for all tasks and will be deprecated in the future.
strict_loading (bool, default: True): Whether to strictly load the model from the checkpoint. If False, replaces missing weights with pretrained weights. Should be enabled when loading a checkpoint from a run with use_peft and save_peft_only.
reset_optimizer_states (bool, default: False): Whether to reset the optimizer states. Set it to True if you want to replace the adapter (e.g. for continued pretraining).
**kwargs (default: {}): Additional arguments passed to the parent class.
Source code in modelgenerator/tasks/base.py
class TaskInterface(pl.LightningModule, metaclass=GoogleDocstringInheritanceInitMeta):
    """Interface class to ensure consistent implementation of essential methods for all tasks.

    Note:
        Tasks will usually take a backbone and adapter as arguments, but these are not strictly required.
        See [SequenceRegression](./#modelgenerator.tasks.SequenceRegression) task for a succinct example implementation.
        Handles the boilerplate of setting up training, validation, and testing steps,
        as well as the optimizer and learning rate scheduler. Subclasses must implement
        the __init__, configure_model, collate, forward, and evaluate methods.

    Args:
        use_legacy_adapter: Whether to use the adapter from the backbone (HF head support). 
            **Warning**: This is not supported for all tasks and will be deprecated in the future.
        strict_loading: Whether to strictly load the model from the checkpoint. 
            If False, replaces missing weights with pretrained weights. 
            Should be enabled when loading a checkpoint from a run with `use_peft` and `save_peft_only`.
        batch_size: The batch size to use for training.
        optimizer: The optimizer to use for training.
        reset_optimizer_states: Whether to reset the optimizer states.
            Set it to True if you want to replace the adapter (e.g. for continued pretraining).
        lr_scheduler: The learning rate scheduler to use for training.
        **kwargs: Additional arguments passed to the parent class.
    """

    def __init__(
        self,
        optimizer: OptimizerCallable = torch.optim.AdamW,
        lr_scheduler: Optional[LRSchedulerCallable] = None,
        batch_size: Optional[int] = None,
        use_legacy_adapter: bool = False,
        strict_loading: bool = True,
        reset_optimizer_states: bool = False,
        **kwargs,
    ):
        super().__init__(**kwargs)
        # NOTE: A very explicit way of preventing unwanted hparams from being
        # saved due to inheritance. All subclasses should include the
        # following condition under super().__init__().
        # Converting it to a reusable method could work but it would rely
        # on the implementation detail of save_hyperparameters() walking up
        # the call stack, which can change at any time.
        if self.__class__ is TaskInterface:
            self.save_hyperparameters()
        self.optimizer = optimizer
        self.lr_scheduler = lr_scheduler
        self.batch_size = batch_size
        self.use_legacy_adapter = use_legacy_adapter
        self.metrics = nn.ModuleDict(
            {
                "train_metrics": nn.ModuleDict(),
                "val_metrics": nn.ModuleDict(),
                "test_metrics": nn.ModuleDict(),
            }
        )
        self.metrics_to_pbar: Set[str] = {}
        self.strict_loading = strict_loading
        self.reset_optimizer_states = reset_optimizer_states

    def configure_model(self) -> None:
        """Configures the model for training and interence. Subclasses must implement this method."""
        raise NotImplementedError

    def transform(
        self, batch: dict[str, Union[list, Tensor]], batch_idx: int
    ) -> dict[str, Union[list, Tensor]]:
        """Collates and tokenizes a batch of data into a format that can be passed to the forward and evaluate methods. Subclasses must implement this method.

        Note:
            Tokenization is handled here using the backbone interface.
            Tensor typing and device moving should be handled here.

        Args:
            batch (dict[str, Union[list, Tensor]]): A batch of data from the DataLoader
            batch_idx (int): The index of the current batch in the DataLoader

        Returns:
            dict[str, Union[list, Tensor]]: The collated batch
        """
        raise NotImplementedError

    def forward(self, collated_batch: dict[str, Union[list, Tensor]]) -> Tensor:
        """Runs a forward pass of the model on the collated batch of data. Subclasses must implement this method.

        Args:
            collated_batch (dict[str, Union[list, Tensor]]): The collated batch of data from collate.

        Returns:
            Tensor: The model predictions
        """
        raise NotImplementedError

    def evaluate(
        self,
        preds: Tensor,
        collated_batch: dict[str, Union[list, Tensor]],
        stage: Optional[Literal["train", "val", "test"]] = None,
        loss_only: bool = False,
    ) -> dict[str, Union[Tensor, float]]:
        """Calculate loss and update metrics states. Subclasses must implement this method.

        Args:
            preds (Tensor): The model predictions from forward.
            collated_batch (dict[str, Union[list, Tensor]]): The collated batch of data from collate.
            stage (str, optional): The stage of training (train, val, test). Defaults to None.
            loss_only (bool, optional): If true, only update loss metric. Defaults to False.

        Returns:
            dict[str, Union[Tensor, float]]: The loss and any additional metrics.
        """
        raise NotImplementedError

    def configure_optimizers(self):
        """Configures the optimizer and learning rate scheduler for training.

        Returns:
            list: A list of optimizers and learning rate schedulers
        """
        config = {
            "optimizer": self.optimizer(
                filter(lambda p: p.requires_grad, self.parameters())
            )
        }
        if self.lr_scheduler is not None:
            scheduler = self.lr_scheduler(config["optimizer"])
            if isinstance(scheduler, LazyLRScheduler):
                scheduler.initialize(self.trainer)
            config["lr_scheduler"] = {
                "scheduler": scheduler,
                "interval": "step",
                "monitor": "train_loss",  # Only used for torch.optim.lr_scheduler.ReduceLROnPlateau
            }
        return config

    def on_save_checkpoint(self, checkpoint: dict):
        if hasattr(self.backbone, "on_save_checkpoint"):
            self.backbone.on_save_checkpoint(checkpoint)

    def on_load_checkpoint(self, checkpoint: dict):
        if self.reset_optimizer_states:
            checkpoint["optimizer_states"] = {}
            checkpoint["lr_schedulers"] = {}

    def training_step(
        self, batch: dict[str, Union[list, Tensor]], batch_idx: Optional[int] = None
    ) -> Tensor:
        """Runs a training step on a batch of data. Calls collate, forward, and evaluate methods in order.

        Args:
            batch (dict[str, Union[list, Tensor]]): A batch of data from the DataLoader
            batch_idx (int, optional): The index of the current batch in the DataLoader

        Returns:
            Tensor: The loss from the training step
        """
        collated_batch = self.transform(batch, batch_idx)
        preds = self.forward(collated_batch)
        outputs = self.evaluate(preds, collated_batch, "train", loss_only=False)
        self.log_loss_and_metrics(outputs["loss"], "train")
        return outputs

    def validation_step(
        self, batch: dict[str, Union[list, Tensor]], batch_idx: Optional[int] = None
    ) -> Tensor:
        """Runs a validation step on a batch of data. Calls collate, forward, and evaluate methods in order.

        Args:
            batch (dict[str, Union[list, Tensor]]): A batch of data from the DataLoader
            batch_idx (int, optional): The index of the current batch in the DataLoader

        Returns:
            Tensor: The loss from the validation step
        """
        collated_batch = self.transform(batch, batch_idx)
        preds = self.forward(collated_batch)
        outputs = self.evaluate(preds, collated_batch, "val", loss_only=False)
        self.log_loss_and_metrics(outputs["loss"], "val")
        return {"predictions": preds, **batch, **collated_batch, **outputs}

    def test_step(
        self, batch: dict[str, Union[list, Tensor]], batch_idx: Optional[int] = None
    ) -> Tensor:
        """Runs a test step on a batch of data. Calls collate, forward, and evaluate methods in order.

        Args:
            batch (dict[str, Union[list, Tensor]]): A batch of data from the DataLoader
            batch_idx (int, optional): The index of the current batch in the DataLoader

        Returns:
            Tensor: The loss from the test step
        """
        collated_batch = self.transform(batch, batch_idx)
        preds = self.forward(collated_batch)
        outputs = self.evaluate(preds, collated_batch, "test", loss_only=False)
        self.log_loss_and_metrics(outputs["loss"], "test")
        return {"predictions": preds, **batch, **collated_batch, **outputs}

    def predict_step(
        self, batch: dict[str, Union[list, Tensor]], batch_idx: Optional[int] = None
    ) -> dict[str, Union[list, Tensor]]:
        """Infers predictions from a batch of data. Calls collate and forward methods in order.

        Args:
            batch (dict[str, Union[list, Tensor]]): A batch of data from the DataLoader
            batch_idx (int, optional): The index of the current batch in the DataLoader

        Returns:
            dict[str, Union[list, Tensor]]: The predictions from the model along with the collated batch.
        """
        collated_batch = self.transform(batch, batch_idx)
        preds = self.forward(collated_batch)
        return {"predictions": preds, **batch, **collated_batch}

    def get_metrics_by_stage(
        self, stage: Literal["train", "val", "test"]
    ) -> nn.ModuleDict:
        """Returns the metrics dict for a given stage.

        Args:
            stage (str): The stage of training (train, val, test)

        Returns:
            nn.ModuleDict: The metrics for the given stage
        """
        try:
            return self.metrics[f"{stage}_metrics"]
        except KeyError:
            raise ValueError(
                f"Stage must be one of 'train', 'val', or 'test'. Got {stage}"
            )

    def log_loss_and_metrics(
        self, loss: Tensor, stage: Literal["train", "val", "test"]
    ) -> None:
        """Logs the loss and metrics for a given stage.

        Args:
            loss (Tensor): The loss from the training, validation, or testing step
            stage (str): The stage of training (train, val, test)
        """
        self.log(f"{stage}_loss", loss, prog_bar=True, sync_dist=stage != "train")
        for k, v in self.metrics[f"{stage}_metrics"].items():
            self.log(f"{stage}_{k}", v, prog_bar=k in self.metrics_to_pbar)

    def call_or_update_metric(
        self, stage: Literal["train", "val", "test"], metric: tm.Metric, *args, **kwargs
    ):
        if stage == "train":
            # in addition to .update(), metric.__call__ also .compute() the metric
            # for the current batch. However, .compute() may fail if data is insufficient.
            try:
                metric(*args, **kwargs)
            except ValueError:
                metric.update(*args, **kwargs)
        else:
            # update only since per step metrics are not logged in val and test stages
            metric.update(*args, **kwargs)

    @classmethod
    def from_config(cls, config: dict) -> "TaskInterface":
        """Creates a task model from a configuration dictionary

        Args:
            config (Dict[str, Any]): Configuration dictionary

        Returns:
            TaskInterface: Task model
        """
        parser = ArgumentParser()
        parser.add_class_arguments(cls, "model")
        init = parser.instantiate_classes(parser.parse_object(config))
        init.model.configure_model()
        return init.model

configure_model()

Configures the model for training and inference. Subclasses must implement this method.

Source code in modelgenerator/tasks/base.py
def configure_model(self) -> None:
    """Configures the model for training and interence. Subclasses must implement this method."""
    raise NotImplementedError

forward(collated_batch)

Runs a forward pass of the model on the collated batch of data. Subclasses must implement this method.

Parameters:

collated_batch (dict[str, Union[list, Tensor]], required): The collated batch of data from collate.

Returns:

Tensor: The model predictions.

Source code in modelgenerator/tasks/base.py
def forward(self, collated_batch: dict[str, Union[list, Tensor]]) -> Tensor:
    """Runs a forward pass of the model on the collated batch of data. Subclasses must implement this method.

    Args:
        collated_batch (dict[str, Union[list, Tensor]]): The collated batch of data from collate.

    Returns:
        Tensor: The model predictions
    """
    raise NotImplementedError

evaluate(preds, collated_batch, stage=None, loss_only=False)

Calculate loss and update metrics states. Subclasses must implement this method.

Parameters:

preds (Tensor, required): The model predictions from forward.
collated_batch (dict[str, Union[list, Tensor]], required): The collated batch of data from collate.
stage (str, optional, default: None): The stage of training (train, val, test).
loss_only (bool, default: False): If true, only update loss metric.

Returns:

dict[str, Union[Tensor, float]]: The loss and any additional metrics.

Source code in modelgenerator/tasks/base.py
def evaluate(
    self,
    preds: Tensor,
    collated_batch: dict[str, Union[list, Tensor]],
    stage: Optional[Literal["train", "val", "test"]] = None,
    loss_only: bool = False,
) -> dict[str, Union[Tensor, float]]:
    """Calculate loss and update metrics states. Subclasses must implement this method.

    Args:
        preds (Tensor): The model predictions from forward.
        collated_batch (dict[str, Union[list, Tensor]]): The collated batch of data from collate.
        stage (str, optional): The stage of training (train, val, test). Defaults to None.
        loss_only (bool, optional): If true, only update loss metric. Defaults to False.

    Returns:
        dict[str, Union[Tensor, float]]: The loss and any additional metrics.
    """
    raise NotImplementedError
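
Putting the interface together, the skeleton below is a stripped-down, hypothetical task (MyScalarTask is not part of the library) showing which methods a subclass overrides and how the backbone and adapter Callables are used. It is a sketch only: exact backbone tokenize and forward signatures vary between backbones, so follow the SequenceRegression implementation in the Examples section for the canonical pattern.

import torch
from torch import nn
from modelgenerator.tasks import TaskInterface


class MyScalarTask(TaskInterface):
    """Hypothetical minimal task predicting one scalar per sequence."""

    def __init__(self, backbone, adapter, **kwargs):
        super().__init__(**kwargs)
        if self.__class__ is MyScalarTask:
            self.save_hyperparameters()  # mirror the convention noted in TaskInterface.__init__
        self.backbone_fn = backbone  # Callables; instantiated lazily in configure_model
        self.adapter_fn = adapter
        self.backbone = None
        self.adapter = None
        self.loss = nn.MSELoss()

    def configure_model(self) -> None:
        if self.backbone is not None:
            return
        self.backbone = self.backbone_fn(None, None)
        self.adapter = self.adapter_fn(self.backbone.get_embedding_size(), 1)

    def transform(self, batch, batch_idx=None):
        # Tokenize with the backbone interface; move tensors to the right device and dtype.
        tokenized = self.backbone.tokenize(batch["sequences"])
        return {
            "input_ids": torch.tensor(tokenized["input_ids"], dtype=torch.long, device=self.device),
            "attention_mask": torch.tensor(tokenized["attention_mask"], dtype=torch.long, device=self.device),
            "labels": batch["labels"].to(self.device, dtype=self.dtype),
        }

    def forward(self, collated_batch):
        hidden_states = self.backbone(
            input_ids=collated_batch["input_ids"],
            attention_mask=collated_batch["attention_mask"],
        )  # (bs, seq_len, dim)
        return self.adapter(hidden_states, collated_batch["attention_mask"])

    def evaluate(self, preds, collated_batch, stage=None, loss_only=False):
        # Update torchmetrics via self.call_or_update_metric here if the task tracks any.
        return {"loss": self.loss(preds.squeeze(-1), collated_batch["labels"])}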

Examples

modelgenerator.tasks.SequenceRegression

Bases: TaskInterface

Task for fine-tuning a sequence model for single-/multi-task regression. Evaluates in terms of mean absolute error, mean squared error, R2 score, Pearson correlation, and Spearman correlation.

Parameters:

backbone (BackboneCallable, required): A pretrained backbone from the modelgenerator library.
adapter (Optional[Callable[[int, int], SequenceAdapter]], default: LinearCLSAdapter): A SequenceAdapter for the model.
num_outputs (int, default: 1): The number of outputs for the regression task.
loss_func (Callable[..., Module], default: MSELoss): The loss function to use for training.
log_grad_norm_step (int, default: 0): The step interval for logging gradient norms.
**kwargs (default: {}): Additional arguments passed to the parent class.
Source code in modelgenerator/tasks/tasks.py
class SequenceRegression(TaskInterface):
    """Task for fine-tuning a sequence model for single-/multi-task regression.
    Evaluates in terms of mean absolute error, mean squared error, R2 score, Pearson correlation, and Spearman correlation.

    Args:
        backbone: A pretrained backbone from the modelgenerator library.
        adapter: A SequenceAdapter for the model.
        num_outputs: The number of outputs for the regression task.
        loss_func: The loss function to use for training.
        log_grad_norm_step: The step interval for logging gradient norms.
    """

    def __init__(
        self,
        backbone: BackboneCallable,
        adapter: Optional[Callable[[int, int], SequenceAdapter]] = LinearCLSAdapter,
        num_outputs: int = 1,
        loss_func: Callable[..., torch.nn.Module] = torch.nn.MSELoss,
        log_grad_norm_step: int = 0,
        **kwargs,
    ):
        super().__init__(**kwargs)
        if self.__class__ is SequenceRegression:
            self.save_hyperparameters()
        self.backbone_fn = backbone
        self.adapter_fn = adapter
        self.num_outputs = num_outputs
        self.backbone = None
        self.adapter = None
        self.loss = loss_func()
        self.log_grad_norm_step = log_grad_norm_step
        for stage in ["train", "val", "test"]:
            self.metrics[f"{stage}_metrics"] = nn.ModuleDict(
                {
                    "pearson": PearsonCorrCoef(
                        num_outputs=num_outputs, multioutput="uniform_average"
                    ),
                    "spearman": SpearmanCorrCoef(
                        num_outputs=num_outputs, multioutput="uniform_average"
                    ),
                    "mae": MeanAbsoluteError(
                        num_outputs=num_outputs, multioutput="uniform_average"
                    ),
                    "r2": tm.R2Score(multioutput="uniform_average"),
                    "mse": MeanSquaredError(
                        num_outputs=num_outputs, multioutput="uniform_average"
                    ),
                }
            )
            if stage == "test" and self.num_outputs > 1:
                # calculate scores for each task
                label_wise_spearman = nn.ModuleDict(
                        {
                            "spearman_" + str(i): SpearmanCorrCoef(num_outputs=1)
                            for i in range(self.num_outputs)
                        }
                    )
                label_wise_pearson = nn.ModuleDict(
                        {
                            "pearson_" + str(i): PearsonCorrCoef(num_outputs=1)
                            for i in range(self.num_outputs)
                        }
                    )
                label_wise_r2 = nn.ModuleDict(
                        {
                            "r2_" + str(i): tm.R2Score()
                            for i in range(self.num_outputs)
                        }
                    )
                label_wise_mse = nn.ModuleDict(
                        {
                            "mse_" + str(i): MeanSquaredError(num_outputs=1)
                            for i in range(self.num_outputs)
                        }
                    )
                label_wise_mae = nn.ModuleDict(
                        {
                            "mae_" + str(i): MeanAbsoluteError(num_outputs=1)
                            for i in range(self.num_outputs)
                        }
                    )
                self.metrics[f"{stage}_metrics"].update(label_wise_spearman)
                self.metrics[f"{stage}_metrics"].update(label_wise_pearson)
                self.metrics[f"{stage}_metrics"].update(label_wise_r2)
                self.metrics[f"{stage}_metrics"].update(label_wise_mse)
                self.metrics[f"{stage}_metrics"].update(label_wise_mae)
        self.metrics_to_pbar = set(self.metrics["train_metrics"].keys())

    def configure_model(self) -> None:
        if self.backbone is not None:
            return
        if self.use_legacy_adapter:
            self.backbone = self.backbone_fn(
                LegacyAdapterType.SEQ_CLS,
                DefaultConfig(
                    config_overwrites={
                        "problem_type": "regression",
                        "num_labels": self.num_outputs,
                    }
                ),
            )
            self.adapter = self.backbone.get_decoder()
        else:
            self.backbone = self.backbone_fn(None, None)
            self.adapter = self.adapter_fn(
                self.backbone.get_embedding_size(), self.num_outputs
            )

    def transform(
        self, batch: dict[str, Union[list, Tensor]], batch_idx: Optional[int] = None
    ) -> dict[str, Union[list, Tensor]]:
        """Collates a batch of data into a format that can be passed to the forward and evaluate methods.

        Args:
            batch (dict[str, Union[list, Tensor]]): A batch of data containing sequences and labels
            batch_idx (int, optional): The index of the current batch in the DataLoader

        Returns:
            dict[str, Union[list, Tensor]]: The collated batch containing sequences, input_ids, attention_mask, and labels
        """

        sequences = batch.pop("sequences")
        tokenized_result = self.backbone.tokenize(sequences, **batch)
        input_ids = tokenized_result.pop("input_ids", None)
        attention_mask = tokenized_result.pop("attention_mask", None)
        special_tokens_mask = tokenized_result.pop("special_tokens_mask", None)

        input_ids = torch.tensor(input_ids, dtype=torch.long).to(self.device)
        if attention_mask is not None:
            attention_mask = torch.tensor(attention_mask, dtype=torch.long).to(self.device)
        labels = None
        if batch.get("labels") is not None:
            labels = batch["labels"].to(self.device, dtype=self.dtype)
        return {
            "sequences": sequences,
            "input_ids": input_ids,
            "attention_mask": attention_mask,
            "special_tokens_mask": special_tokens_mask,
            "labels": labels,
            **tokenized_result,
        }

    def forward(self, collated_batch: dict[str, Union[list, Tensor]]) -> Tensor:
        """Runs a forward pass of the model.

        Args:
            collated_batch (dict[str, Union[list, Tensor]]): A collated batch of data containing input_ids and attention_mask.

        Returns:
            Tensor: The regression predictions
        """

        hidden_states = self.backbone(**collated_batch)  # (bs, seq_len, dim)
        preds = self.adapter(hidden_states, collated_batch["attention_mask"])
        return preds

    def evaluate(
        self,
        preds: Tensor,
        collated_batch: dict[str, Union[list, Tensor]],
        stage: Optional[Literal["train", "val", "test"]] = None,
        loss_only: bool = False,
    ) -> dict[str, Union[Tensor, float]]:
        """Evaluates the model predictions against the ground truth labels.

        Args:
            preds (Tensor): The model predictions
            collated_batch (dict[str, Union[list, Tensor]]): The collated batch of data containing labels
            loss_only (bool, optional): Whether to only return the loss. Defaults to False.

        Returns:
            dict[str, Union[Tensor, float]]: A dictionary of metrics containing loss and mse
        """

        labels = collated_batch["labels"]
        loss = self.loss(preds, labels)
        if loss_only:
            return {"loss": loss}
        metrics = self.get_metrics_by_stage(stage)

        if self.num_outputs > 1 and stage == "test":
            for name, metric in metrics.items():
                if len(name.split("_")) == 1:
                    self.call_or_update_metric(stage, metric, preds, labels)
                else:
                    i = int(name.split("_")[-1])
                    self.call_or_update_metric(stage, metric, preds[:, i], labels[:, i])
        else:
            for metric in metrics.values():
                self.call_or_update_metric(stage, metric, preds, labels)

        return {"loss": loss}

    def log_grad_norm(self, optimizer):
        """
        Log the total_norm, adaptor_param_norm and adaptor_grad_norm.

        Refer to
        https://github.com/Lightning-AI/pytorch-lightning/blob/master/src/lightning/pytorch/plugins/precision/precision.py
        https://github.com/Lightning-AI/pytorch-lightning/blob/master/src/lightning/pytorch/core/module.py
        for the calculation of the gradient norm
        """
        parameters = self.trainer.precision_plugin.main_params(optimizer)
        parameters = list(parameters)
        if len(parameters) > 0:
            assert all([p.requires_grad for p in parameters])
            if all([p.grad is not None for p in parameters]):
                total_norm = vector_norm(
                    torch.stack([vector_norm(p.grad, ord=2) for p in parameters]), ord=2
                )
                adaptor_param_norm = vector_norm(
                    torch.stack(
                        [vector_norm(p, ord=2) for p in self.adapter.parameters()]
                    ),
                    ord=2,
                )
                adaptor_grad_norm = vector_norm(
                    torch.stack(
                        [vector_norm(p.grad, ord=2) for p in self.adapter.parameters()]
                    ),
                    ord=2,
                )

                self.log("total_norm", total_norm, rank_zero_only=True)
                self.log("adaptor_param_norm", adaptor_param_norm, rank_zero_only=True)
                self.log("adaptor_grad_norm", adaptor_grad_norm, rank_zero_only=True)

    def on_before_optimizer_step(self, optimizer):
        """
        Log gradient norm of adaptor's parameters
        """
        if (
            self.log_grad_norm_step > 0
            and self.trainer.global_step % self.log_grad_norm_step == 0
        ):
            self.log_grad_norm(optimizer)

configure_model()

Configures the model for training and inference. Subclasses must implement this method.

Source code in modelgenerator/tasks/tasks.py
def configure_model(self) -> None:
    if self.backbone is not None:
        return
    if self.use_legacy_adapter:
        self.backbone = self.backbone_fn(
            LegacyAdapterType.SEQ_CLS,
            DefaultConfig(
                config_overwrites={
                    "problem_type": "regression",
                    "num_labels": self.num_outputs,
                }
            ),
        )
        self.adapter = self.backbone.get_decoder()
    else:
        self.backbone = self.backbone_fn(None, None)
        self.adapter = self.adapter_fn(
            self.backbone.get_embedding_size(), self.num_outputs
        )

forward(collated_batch)

Runs a forward pass of the model.

Parameters:

collated_batch (dict[str, Union[list, Tensor]], required): A collated batch of data containing input_ids and attention_mask.

Returns:

Tensor: The regression predictions.

Source code in modelgenerator/tasks/tasks.py
def forward(self, collated_batch: dict[str, Union[list, Tensor]]) -> Tensor:
    """Runs a forward pass of the model.

    Args:
        collated_batch (dict[str, Union[list, Tensor]]): A collated batch of data containing input_ids and attention_mask.

    Returns:
        Tensor: The regression predictions
    """

    hidden_states = self.backbone(**collated_batch)  # (bs, seq_len, dim)
    preds = self.adapter(hidden_states, collated_batch["attention_mask"])
    return preds

evaluate(preds, collated_batch, stage=None, loss_only=False)

Evaluates the model predictions against the ground truth labels.

Parameters:

preds (Tensor, required): The model predictions from forward.
collated_batch (dict[str, Union[list, Tensor]], required): The collated batch of data containing labels.
stage (str, optional, default: None): The stage of training (train, val, test).
loss_only (bool, default: False): Whether to only return the loss.

Returns:

dict[str, Union[Tensor, float]]: A dictionary of metrics containing loss and mse.

Source code in modelgenerator/tasks/tasks.py
def evaluate(
    self,
    preds: Tensor,
    collated_batch: dict[str, Union[list, Tensor]],
    stage: Optional[Literal["train", "val", "test"]] = None,
    loss_only: bool = False,
) -> dict[str, Union[Tensor, float]]:
    """Evaluates the model predictions against the ground truth labels.

    Args:
        preds (Tensor): The model predictions
        collated_batch (dict[str, Union[list, Tensor]]): The collated batch of data containing labels
        loss_only (bool, optional): Whether to only return the loss. Defaults to False.

    Returns:
        dict[str, Union[Tensor, float]]: A dictionary of metrics containing loss and mse
    """

    labels = collated_batch["labels"]
    loss = self.loss(preds, labels)
    if loss_only:
        return {"loss": loss}
    metrics = self.get_metrics_by_stage(stage)

    if self.num_outputs > 1 and stage == "test":
        for name, metric in metrics.items():
            if len(name.split("_")) == 1:
                self.call_or_update_metric(stage, metric, preds, labels)
            else:
                i = int(name.split("_")[-1])
                self.call_or_update_metric(stage, metric, preds[:, i], labels[:, i])
    else:
        for metric in metrics.values():
            self.call_or_update_metric(stage, metric, preds, labels)

    return {"loss": loss}

modelgenerator.adapters.SequenceAdapter

Base class only for type hinting purposes. Used for Callable[[int, int], SequenceAdapter] types.

Source code in modelgenerator/adapters/base.py
class SequenceAdapter:
    """Base class only for type hinting purposes. Used for Callable[[int, int] SequenceAdapter] types."""

    pass

modelgenerator.adapters.LinearCLSAdapter

Bases: Module, SequenceAdapter

Simple linear adapter for a 1D embedding

Parameters:

in_features (int, required): Number of input features.
out_features (int, required): Number of output features.
Source code in modelgenerator/adapters/adapters.py
class LinearCLSAdapter(nn.Module, SequenceAdapter):
    """Simple linear adapter for a 1D embedding

    Args:
        in_features (int): Number of input features
        out_features (int): Number of output features
    """

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, hidden_states: Tensor, attention_mask: Tensor = None) -> Tensor:
        """Forward pass

        Args:
            hidden_states (torch.Tensor): of shape (n, seq_len, in_features)
            attention_mask (torch.Tensor): of shape (n, seq_len)

        Returns:
            torch.Tensor: predictions (n, out_features)
        """
        output = self.linear(hidden_states[:, 0])
        return output
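
To close the loop, a hedged usage sketch: an adapter that follows the SequenceAdapter contract (LinearCLSAdapter here, or a custom one such as the MeanPoolingAdapter sketched under Adding Adapters) can be dropped into SequenceRegression and trained with a standard Lightning Trainer. The backbone name and the datamodule are assumptions; substitute the backbone factory and DataModule you actually use.

import lightning.pytorch as pl
from modelgenerator.tasks import SequenceRegression
from modelgenerator.adapters import LinearCLSAdapter
from modelgenerator.backbones import aido_dna_dummy  # assumed small debugging backbone

task = SequenceRegression(
    backbone=aido_dna_dummy,
    adapter=LinearCLSAdapter,  # or any other Callable[[int, int], SequenceAdapter]
    num_outputs=1,
)

trainer = pl.Trainer(max_steps=100)
# `datamodule` stands in for a modelgenerator DataModule yielding batches with
# "sequences" and "labels" keys, as expected by SequenceRegression.transform:
# trainer.fit(task, datamodule=datamodule)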