Appendix III: XGen built-in API documentation

CL (Co_Lib)

CL (Co_Lib) is a built-in optimization library in XGen. It exposes a set of class methods for model optimization. It includes the following functions:

init(self, args=None, model=None, optimizer=None, logger=None, data_loader=None, teacher_models={}, **kwargs)

This function initializes the CL library.

  • args: Arguments for initialization.
  • model: The model to be optimized.
  • optimizer: The optimizer to be used.
  • logger: The logger for recording logs.
  • data_loader: The data loader for loading data.
  • teacher_models: Dictionary of teacher models (if applicable).
  • kwargs: Additional keyword arguments.
  • return: If the model graph is modified, a new model is returned; otherwise, nothing is returned.

  • example

    # AImodel: torch.nn.Module, AIoptimizer: torch.optim.Optimizer,
    # dataLoader: torch.utils.data.DataLoader
    cl_model = CL.init(args=args_ai, model=AImodel, optimizer=AIoptimizer,
                       data_loader=dataLoader)
    if cl_model:  # a new model is returned only if the graph was modified
        AImodel = cl_model
    
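The objects passed to CL.init above are ordinary PyTorch training components. A minimal sketch of how they might be constructed, using a placeholder network and placeholder data (these specific choices are illustrative, not required by XGen):

    import torch
    import torchvision

    # Placeholder components; any torch.nn.Module, optimizer, and DataLoader will do.
    AImodel = torchvision.models.resnet18(num_classes=10)
    AIoptimizer = torch.optim.SGD(AImodel.parameters(), lr=0.1, momentum=0.9)
    dataset = torchvision.datasets.FakeData(size=256, num_classes=10,
                                            transform=torchvision.transforms.ToTensor())
    dataLoader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)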

after_scheduler_step(epoch)

This function is called after each scheduler step.

  • epoch: The current epoch.

  • example

    CL.after_scheduler_step(epoch)
    

before_each_train_epoch(epoch)

This function is called before each training epoch.

  • epoch: The current epoch.
  • example

    CL.before_each_train_epoch(epoch)
    

update_loss(loss)

This function updates the loss.

  • loss: The current loss value.
  • return: The new loss value.
  • example

    loss = CL.update_loss(loss)
    
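Taken together, the CL hooks above slot into an ordinary PyTorch training loop. The following sketch shows where each call sits; criterion, scheduler, and num_epochs stand in for the user's usual loss function, learning-rate scheduler, and epoch count:

    cl_model = CL.init(args=args_ai, model=AImodel, optimizer=AIoptimizer,
                       data_loader=dataLoader)
    if cl_model:
        AImodel = cl_model

    for epoch in range(num_epochs):
        CL.before_each_train_epoch(epoch)        # let CL prepare for the epoch
        for inputs, targets in dataLoader:
            AIoptimizer.zero_grad()
            outputs = AImodel(inputs)
            loss = criterion(outputs, targets)
            loss = CL.update_loss(loss)          # CL may adjust the loss
            loss.backward()
            AIoptimizer.step()
        scheduler.step()
        CL.after_scheduler_step(epoch)           # notify CL after the scheduler step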

xgen_tools

xgen_tools is a built-in library in XGen that provides utility functions for parameter merging, weight loading, and weight saving.

xgen_init(user_args: Union[argparse.Namespace, dict], args_ai: dict = None, map: dict = None) -> dict

This function initializes XGen. It returns the initialized user_args together with args_ai, the XGen config loaded from a file.

  • user_args: User-defined arguments.
  • args_ai: AI-related arguments (ignored in the future).
  • map: Arguments map (maps XGen-defined args to user-defined args).
  • example

    user_args, args_ai = xgen_init(originalArgs, map=COCOPIE_MAP)
    
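COCOPIE_MAP in the example above is a user-supplied dictionary that maps XGen-defined argument names to the names used in the training script. A minimal sketch of what such a map and call might look like (the specific key names are illustrative assumptions, and the import path is assumed):

    import argparse
    from xgen_tools import xgen_init  # assumed import path

    # Hypothetical mapping: XGen-defined argument name -> user-defined argument name.
    COCOPIE_MAP = {
        'epochs': 'num_epochs',
        'learning_rate': 'lr',
        'batch_size': 'batch_size',
    }

    parser = argparse.ArgumentParser()
    parser.add_argument('--num_epochs', type=int, default=90)
    parser.add_argument('--lr', type=float, default=0.1)
    parser.add_argument('--batch_size', type=int, default=128)
    originalArgs = parser.parse_args()

    # Merge XGen's config into the user arguments via the map.
    user_args, args_ai = xgen_init(originalArgs, map=COCOPIE_MAP)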

xgen_load(model, args_ai=None, path=None, optimizer=None)

This function loads weights for the model. It can load weights from different formats of .pth files.

  • model: The PyTorch model.
  • args_ai: The XGen config, including the weights path.
  • path: The path to the weights file; it can be omitted when args_ai is provided.
  • optimizer: If provided, the optimizer state dict may be loaded for resuming.

When using XGen, if you have prepared pretrained model weights, you can select the absolute path of your weights in the "Pretrained model absolute path" field. XGen only supports PyTorch weights and has certain requirements for the organization of your weights, as described below:

  • In the first supported format, the weights file holds the state dict directly, so the weights for each layer can be loaded as follows:

    weights = torch.load("your pretrained model weights path")
    model.load_state_dict(weights)
    
  • If your uploaded weights contain additional information, please ensure that the weights are saved within the "model" field:

    weights = torch.load("your pretrained model weights path")["model"]
    model.load_state_dict(weights)
    
  • example

    xgen_load(model, args_ai=args_ai)
    

Please make sure to follow these guidelines when organizing and loading your pretrained model weights in XGen.
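
For reference, a minimal sketch of saving a checkpoint in each of the two supported layouts (the file names are illustrative):

    import torch

    # Layout 1: the file holds the state dict directly.
    torch.save(model.state_dict(), "weights_plain.pth")

    # Layout 2: the file holds extra information; the state dict must be
    # stored under the "model" key.
    torch.save({"model": model.state_dict(), "epoch": epoch}, "weights_with_meta.pth")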

xgen_record(args_ai, origin_model, metric, epoch=None, optimizer=None, onnx_export=None, onnx_file_path=None)

This function saves and profiles the model and collects training data.

  • args_ai: The XGen config.
  • origin_model: The user's model.
  • metric: The evaluation metric.
  • epoch: The epoch of the training (-1 means the standard record).
  • optimizer: The optimizer parameters for resuming (optional).
  • onnx_export: A callable export function (may be removed in the future).
  • onnx_file_path: The path to the generated ONNX file.
  • example

    xgen_record(args_ai, model, accuracy, epoch=epochCount)
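
A sketch of how xgen_record can be called during and after training; train_one_epoch and evaluate stand in for the user's own training and evaluation routines, and the final epoch=-1 call follows the description of the standard record above:

    best_accuracy = 0.0
    for epoch in range(num_epochs):
        train_one_epoch(AImodel, dataLoader, AIoptimizer)   # user-defined training step
        accuracy = evaluate(AImodel, valLoader)             # user-defined evaluation
        best_accuracy = max(best_accuracy, accuracy)
        # Per-epoch record so XGen can track training progress.
        xgen_record(args_ai, AImodel, accuracy, epoch=epoch)

    # Standard (final) record after training completes (epoch=-1).
    xgen_record(args_ai, AImodel, best_accuracy, epoch=-1)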