

LLM Hyperparameter Optimization

Automated hyperparameter optimization for Large Language Models using advanced search algorithms and neural architecture search techniques.

LLM Hyperparameter Optimization Overview

Optimize critical hyperparameters for LLM training and fine-tuning using state-of-the-art optimization algorithms:

Critical LLM Hyperparameters:
- Learning Rate & Schedule: Peak LR, warmup steps, decay strategy (cosine, linear, polynomial)
- Batch Configuration: Global batch size, micro-batch size, gradient accumulation steps
- Optimizer Parameters: β₁, β₂, ε, weight decay, gradient clipping threshold
- Architecture Choices: Hidden dimensions, attention heads, intermediate size, number of layers
- Regularization: Dropout rates, attention dropout, activation dropout, label smoothing
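
For example, the widely used warmup-plus-cosine schedule ties three of these knobs together (peak LR, warmup steps, total steps). A framework-agnostic sketch:

```python
import math

def lr_at_step(step, peak_lr=3e-4, warmup_steps=1000, total_steps=10000, min_lr=0.0):
    """Linear warmup to peak_lr, then cosine decay to min_lr."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps  # linear warmup
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * progress))
```

The warmup steps, peak LR, and minimum LR in this sketch are exactly the kind of coupled parameters an auto-tuner searches over jointly.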

Advanced Optimization Targets:
- LoRA Parameters: Rank (r), alpha scaling, target modules, dropout rate
- Quantization Settings: Bit precision, calibration data, quantization schemes
- Memory Optimization: Gradient checkpointing intervals, ZeRO stage selection
- Data Pipeline: Sequence length, packing strategy, data mixing ratios
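
The LoRA rank and alpha interact through the scaling factor alpha / r applied to the low-rank update, which is why they are usually tuned together. A minimal NumPy sketch of the adapted forward pass (all names here are illustrative, not any library's API):

```python
import numpy as np

def lora_forward(x, W, A, B, rank, alpha):
    """Forward pass with a LoRA adapter: y = W x + (alpha / rank) * B A x.

    W is the frozen pretrained weight; only A (rank x d_in) and
    B (d_out x rank) are trained. The alpha / rank factor keeps the
    update's initial magnitude comparable across different ranks.
    """
    return W @ x + (alpha / rank) * (B @ (A @ x))
```

B is conventionally initialized to zero, so the adapted model starts out identical to the base model.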

Multi-Objective Optimization:
- Performance vs Efficiency: Balance model quality against training time/cost
- Accuracy vs Safety: Optimize for task performance while minimizing harmful outputs
- Perplexity vs Downstream Tasks: Joint optimization across multiple evaluation metrics
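
Multi-objective search returns a Pareto front rather than a single winner. The dominance test behind that front is short (a generic sketch assuming every objective is minimized):

```python
def dominates(a, b):
    """True if candidate a is at least as good as b on every objective
    (all minimized) and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep only candidates not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]
```

Picking one configuration from the front is then a policy decision, e.g. weighting quality against cost.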

Advanced Search Algorithms

Modern hyperparameter optimization algorithms tailored for large-scale language model training:

Bayesian Optimization with Gaussian Processes:
- Acquisition Functions: Expected Improvement (EI), Upper Confidence Bound (UCB), Probability of Improvement
- Kernel Selection: RBF, Matérn kernels with automatic relevance determination
- Multi-fidelity Optimization: BOHB (Bayesian Optimization and HyperBand) for budget-aware search
- Transfer Learning: Leverage knowledge from previous optimization runs
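
At the heart of GP-based search is the acquisition function. Expected Improvement for a minimization objective has a closed form in the GP posterior mean and standard deviation; a standalone sketch using only the standard library:

```python
import math

def expected_improvement(mu, sigma, best, xi=0.01):
    """EI for minimization: expected amount by which a candidate with GP
    posterior mean mu and std sigma improves on the incumbent best.
    xi is a small exploration margin."""
    if sigma == 0.0:
        return 0.0
    z = (best - mu - xi) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))      # standard normal CDF
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal PDF
    return (best - mu - xi) * cdf + sigma * pdf
```

The optimizer proposes the candidate maximizing this quantity, trading off low predicted loss (mu) against high uncertainty (sigma).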

Population-Based Training (PBT):
- Evolutionary Strategy: Mutate and crossover hyperparameters of top performers
- Dynamic Resource Allocation: Reallocate compute from poor to promising configurations
- Online Hyperparameter Adaptation: Continuously adjust hyperparameters during training
- Truncation Selection: Periodically eliminate bottom percentile of population
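
One exploit/explore step of PBT can be sketched as follows (a conceptual toy, not any library's API): the bottom fraction of the population clones a top performer's state, then perturbs its hyperparameters.

```python
import random

def pbt_step(population, mutation_rate=0.2, truncation=0.2):
    """One PBT step. population: list of (score, hyperparams), higher is better.

    Exploit: each member in the bottom `truncation` fraction is replaced by a
    clone of a random top performer (carrying its score, standing in for its
    checkpoint). Explore: each cloned hyperparameter is perturbed with
    probability `mutation_rate`.
    """
    ranked = sorted(population, key=lambda p: p[0], reverse=True)
    cut = max(1, int(len(ranked) * truncation))
    top, survivors = ranked[:cut], ranked[:-cut]
    new_population = list(survivors)
    for _ in range(cut):
        score, hp = random.choice(top)
        mutated = {k: v * random.choice([0.8, 1.25])
                      if random.random() < mutation_rate else v
                   for k, v in hp.items()}
        new_population.append((score, mutated))
    return new_population
```

Run once per perturbation interval, this implements truncation selection with online hyperparameter adaptation.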

Multi-Armed Bandits & Successive Halving:
- Hyperband: Principled early stopping with successive halving
- ASHA (Asynchronous Successive Halving): Efficient parallel hyperparameter search
- BOHB: Combine Bayesian optimization with Hyperband for sample efficiency
- DEHB: Differential Evolution combined with Hyperband
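
The successive-halving core these methods share fits in a short loop. Here `evaluate(config, budget)` is a hypothetical stand-in for training a configuration for the given budget and returning a validation score:

```python
def successive_halving(configs, evaluate, min_budget=1, eta=3, rounds=3):
    """Train all configs at a small budget, keep the top 1/eta by score,
    then repeat with eta-times more budget for the survivors."""
    budget = min_budget
    survivors = list(configs)
    for _ in range(rounds):
        scored = sorted(survivors, key=lambda c: evaluate(c, budget), reverse=True)
        survivors = scored[:max(1, len(scored) // eta)]
        budget *= eta
    return survivors[0]
```

Hyperband wraps this loop in an outer search over (number of configs, starting budget) brackets; ASHA removes the synchronization barrier between rounds.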

Neural Architecture Search (NAS):
- DARTS: Differentiable architecture search for transformer components
- Progressive Search: Incrementally grow model complexity during search
- Hardware-Aware NAS: Optimize for specific accelerator architectures (TPU, GPU)
- Efficient Attention: Search optimal attention patterns and sparse attention mechanisms
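
The key idea in DARTS is a continuous relaxation: rather than picking one operation per edge, the output is a softmax-weighted mixture of all candidate operations, which makes the architecture parameters differentiable. A toy numeric sketch:

```python
import math

def mixed_op(x, ops, arch_logits):
    """DARTS-style mixed operation: softmax(arch_logits)-weighted sum of
    candidate ops. After search, the op with the largest logit is kept."""
    exps = [math.exp(a) for a in arch_logits]
    z = sum(exps)
    return sum((e / z) * op(x) for e, op in zip(exps, ops))
```

In a real search, `arch_logits` are trained by gradient descent on validation loss alongside the model weights.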

Configuration Options

Customize auto-tuning behavior:

Search Space Definition:
- Define ranges for each hyperparameter
- Specify distributions (uniform, log-uniform, categorical)
- Set constraints and dependencies
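
Sampling from such a space is straightforward. The sketch below (illustrative names, mirroring the uniform / log-uniform / categorical distributions above) draws one configuration:

```python
import math
import random

def sample(spec):
    """Draw one value from a search-space spec: ('uniform', lo, hi),
    ('loguniform', lo, hi), or ('choice', [options])."""
    kind = spec[0]
    if kind == 'uniform':
        return random.uniform(spec[1], spec[2])
    if kind == 'loguniform':  # uniform in log space, right for LR-like params
        return math.exp(random.uniform(math.log(spec[1]), math.log(spec[2])))
    if kind == 'choice':
        return random.choice(spec[1])
    raise ValueError(f"unknown distribution: {kind}")

space = {
    'learning_rate': ('loguniform', 1e-6, 1e-3),
    'batch_size': ('choice', [8, 16, 32, 64]),
    'dropout_rate': ('uniform', 0.1, 0.5),
}
config = {name: sample(spec) for name, spec in space.items()}
```

Log-uniform is the usual choice for scale parameters such as learning rate and weight decay, since plausible values span several orders of magnitude.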

Resource Allocation:
- Maximum training budget
- Number of parallel trials
- Early stopping criteria

Optimization Objectives:
- Single or multi-objective optimization
- Custom metric definitions
- Trade-offs between performance and efficiency

Best Practices

Maximize auto-tuning effectiveness:

Data Preparation:
- Ensure representative validation sets
- Handle data imbalance appropriately
- Use consistent evaluation metrics

Search Space Design:
- Start with reasonable ranges
- Include important hyperparameters
- Avoid overly large search spaces

Resource Management:
- Allocate sufficient compute budget
- Use early stopping for efficiency
- Monitor progress and adjust as needed

Full Examples

Basic Auto-tuning

python
import langtrain

# Create model with auto-tuning enabled
model = langtrain.Model.create(
    name="auto-tuned-classifier",
    architecture="bert-base-uncased",
    task="classification",
    auto_tune=True  # Enable auto-tuning
)

# Load your dataset
dataset = langtrain.Dataset.from_csv("data.csv")

# Start auto-tuning
tuner = langtrain.AutoTuner(
    model=model,
    dataset=dataset,
    max_trials=50,        # Number of configurations to try
    max_epochs=10,        # Maximum epochs per trial
    objective="f1_score"  # Metric to optimize
)

# Run optimization
best_config = tuner.optimize()
print(f"Best configuration: {best_config}")
print(f"Best score: {tuner.best_score}")

Custom Search Space

python
# Define custom hyperparameter search space
search_space = {
    'learning_rate': langtrain.hp.loguniform(1e-6, 1e-3),
    'batch_size': langtrain.hp.choice([8, 16, 32, 64]),
    'dropout_rate': langtrain.hp.uniform(0.1, 0.5),
    'weight_decay': langtrain.hp.loguniform(1e-6, 1e-2),
    'warmup_ratio': langtrain.hp.uniform(0.0, 0.2),
    'optimizer': langtrain.hp.choice(['adam', 'adamw', 'sgd'])
}

# Configure auto-tuner with custom search space
tuner = langtrain.AutoTuner(
    model=model,
    dataset=dataset,
    search_space=search_space,
    algorithm="bayesian",  # Optimization algorithm
    max_trials=100,
    timeout=3600  # 1 hour timeout
)

# Run with early stopping
best_config = tuner.optimize(
    early_stopping_patience=10,
    min_improvement=0.001
)

Multi-objective Optimization

python
# Optimize for multiple objectives
objectives = {
    'accuracy': 'maximize',
    'inference_time': 'minimize',
    'model_size': 'minimize'
}

tuner = langtrain.MultiObjectiveTuner(
    model=model,
    dataset=dataset,
    objectives=objectives,
    max_trials=200
)

# Get Pareto-optimal solutions
pareto_solutions = tuner.optimize()

# Select best trade-off based on your priorities
best_config = tuner.select_best(
    weights={'accuracy': 0.7, 'inference_time': 0.2, 'model_size': 0.1}
)

Population-Based Training

python
# Use population-based training for dynamic optimization
pbt_config = langtrain.PBTConfig(
    population_size=20,        # Number of parallel training runs
    perturbation_interval=5,   # Epochs between perturbations
    mutation_rate=0.2,         # Probability of parameter mutation
    truncation_percentage=0.2  # Bottom 20% get replaced
)

tuner = langtrain.PopulationBasedTuner(
    model=model,
    dataset=dataset,
    config=pbt_config,
    total_epochs=50
)

# This will train multiple models simultaneously
# and evolve their hyperparameters over time
results = tuner.train_population()

# Get the best performing model
best_model = results.best_model
best_hyperparams = results.best_hyperparams