neurai.finetune package#
Submodules#
neurai.finetune.finetune_config module#
- class neurai.finetune.finetune_config.LoRAConfig(base_model_name_or_path=None, revision=None, tuning_type=None, inference_mode=False, r=8, target_modules=None, lora_alpha=8)#
Bases: TuningConfig
This is the configuration class to store the configuration of a [LoraModel].
- Parameters:
r (int) – Lora attention dimension.
lora_alpha (int) – The alpha parameter for Lora scaling.
target_modules (Optional[Union[List[str], str]]) – List of module names, or a regex expression matching module names, to replace with Lora.
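A minimal usage sketch, assuming `LoRAConfig` accepts the keyword arguments shown in the signature above; the model name is a placeholder, and the module names follow the `target_modules` example documented under `TuningConfig` below:

```python
from neurai.finetune.finetune_config import LoRAConfig

# Minimal sketch: configure LoRA for the query/value projection modules.
# "my-base-model" is a placeholder, not a real checkpoint name.
lora_config = LoRAConfig(
    base_model_name_or_path="my-base-model",
    r=8,                        # LoRA attention dimension
    lora_alpha=8,               # LoRA scaling factor
    target_modules=["q", "v"],  # module names to replace with LoRA layers
)
```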
- class neurai.finetune.finetune_config.PrefixTuningConfig(base_model_name_or_path=None, revision=None, tuning_type=None, inference_mode=False, r=8, target_modules=None, num_virtual_tokens=None, token_dim=None, num_transformer_submodules=None, num_attention_heads=None, num_layers=None, encoder_hidden_size=None, prefix_projection=False)#
Bases: PromptLearningConfig
This is the configuration class to store the configuration of a [PrefixEncoder].
- Parameters:
encoder_hidden_size (int) – The hidden size of the prompt encoder.
prefix_projection (bool) – Whether to project the prefix embeddings.
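A sketch of constructing a `PrefixTuningConfig`, assuming the keyword arguments in the signature above; the dimensions are illustrative values for a small 12-layer transformer, not library defaults:

```python
from neurai.finetune.finetune_config import PrefixTuningConfig

# Sketch with illustrative dimensions for a 12-layer, 12-head model.
prefix_config = PrefixTuningConfig(
    num_virtual_tokens=20,    # virtual tokens prepended per layer
    token_dim=768,            # hidden embedding dim of the base model
    num_attention_heads=12,
    num_layers=12,
    encoder_hidden_size=768,  # hidden size of the prefix encoder
    prefix_projection=False,  # use embeddings directly, no projection MLP
)
```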
- class neurai.finetune.finetune_config.PromptLearningConfig(base_model_name_or_path=None, revision=None, tuning_type=None, inference_mode=False, r=8, target_modules=None, num_virtual_tokens=None, token_dim=None, num_transformer_submodules=None, num_attention_heads=None, num_layers=None)#
Bases: TuningConfig
This is the base configuration class to store the configuration of [PrefixTuning], [PromptEncoder], or [PromptTuning].
- Parameters:
num_virtual_tokens (int) – The number of virtual tokens to use.
token_dim (int) – The hidden embedding dimension of the base transformer model.
num_transformer_submodules (int) – The number of transformer submodules in the base transformer model.
num_attention_heads (int) – The number of attention heads in the base transformer model.
num_layers (int) – The number of layers in the base transformer model.
tuning_type (Union[str, TuningType, None]) – Peft type.
inference_mode (bool) – Whether to use inference mode.
r (int) – Lora attention dimension.
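`PromptLearningConfig` is normally used through subclasses such as `PrefixTuningConfig`, but a sketch of the shared fields may help; all values below are illustrative, not defaults:

```python
from neurai.finetune.finetune_config import PromptLearningConfig

# Sketch of the fields shared by prefix/prompt tuning configs.
base_prompt_config = PromptLearningConfig(
    num_virtual_tokens=10,
    token_dim=1024,
    num_transformer_submodules=1,  # e.g. 1 for decoder-only models
    num_attention_heads=16,
    num_layers=24,
)
```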
- class neurai.finetune.finetune_config.TuningConfig(base_model_name_or_path=None, revision=None, tuning_type=None, inference_mode=False, r=8, target_modules=None)#
Bases: object
This is the base configuration class to store the configuration of a [PeftModel].
- Parameters:
base_model_name_or_path (str) – The name of the base model to use.
revision (str) – The specific model version to use.
tuning_type (Union[str, TuningType]) – Peft type.
inference_mode (bool) – Whether to use inference mode.
r (int) – Lora attention dimension.
target_modules (Optional[Union[List[str], str]]) – List of module names, or a regex expression matching module names, to replace with Lora. For example, ['q', 'v'] or '.*decoder.*(SelfAttention|EncDecAttention).*(q|v)$'.
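A sketch showing the two documented forms of `target_modules`, a list of module names and the regex from the example above; the model name is a placeholder:

```python
from neurai.finetune.finetune_config import TuningConfig

# target_modules as an explicit list of module names ...
config_list = TuningConfig(
    base_model_name_or_path="my-base-model",  # placeholder name
    target_modules=["q", "v"],
)

# ... or as a regex matching module paths (example from the docstring).
config_regex = TuningConfig(
    base_model_name_or_path="my-base-model",
    target_modules=r".*decoder.*(SelfAttention|EncDecAttention).*(q|v)$",
)
```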