
Features

Discover the features that make LangTrain a powerful platform for training and deploying language models.

Auto-Tuning

LangTrain's auto-tuning feature automatically optimizes your model's hyperparameters for best performance.

What Gets Optimized:
- Learning rate and scheduling
- Batch size and accumulation steps
- Architecture parameters (layers, attention heads)
- Regularization (dropout, weight decay)
- Optimization algorithms (Adam, AdamW, SGD)

How It Works:
1. Search Space Definition: Define parameter ranges
2. Smart Sampling: Use Bayesian optimization for efficient search
3. Early Stopping: Terminate poor configurations quickly
4. Resource Management: Balance exploration vs exploitation
5. Best Configuration: Select optimal parameters automatically
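LangTrain's internal tuner isn't reproduced here, but the loop above can be sketched in plain Python. Random sampling stands in for Bayesian optimization, and all names (`sample_config`, `evaluate`, `tune`) are illustrative, not part of the SDK:

```python
import random

def sample_config(search_space):
    """Sampling stand-in: draw one value per parameter at random."""
    return {name: random.choice(values) for name, values in search_space.items()}

def evaluate(config, epoch):
    """Toy objective: pretend the score improves with epochs and smaller batches."""
    return epoch * 0.1 + 1.0 / config["batch_size"]

def tune(search_space, max_trials=10, max_epochs=5, patience=2):
    """Run trials, early-stop stale configurations, keep the best one."""
    best_score, best_config = float("-inf"), None
    for _ in range(max_trials):
        config = sample_config(search_space)
        stale, last = 0, float("-inf")
        for epoch in range(1, max_epochs + 1):
            score = evaluate(config, epoch)
            if score <= last:
                # Early stopping: abandon configurations that stop improving
                stale += 1
                if stale >= patience:
                    break
            else:
                stale, last = 0, score
        if last > best_score:
            best_score, best_config = last, config
    return best_config, best_score
```

A real tuner would replace `evaluate` with an actual training run and replace random sampling with a model of the search space, but the trial/early-stop/select structure is the same.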

Benefits:
- Save weeks of manual hyperparameter tuning
- Achieve better performance than manual tuning
- Reduce computational costs through smart search
- Reproducible and explainable optimization process

Monitoring & Analytics

Comprehensive monitoring for training and production models.

Training Monitoring:
- Real-time loss and metric curves
- Resource utilization (GPU, memory, disk)
- Training speed and estimated completion time
- Data throughput and batch processing stats
- Error tracking and debugging information
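Completion-time and throughput estimates like the ones above typically come from simple wall-clock arithmetic. A minimal sketch (these helpers are illustrative, not part of the LangTrain SDK):

```python
def training_eta(steps_done, total_steps, elapsed_seconds):
    """Seconds remaining, assuming the average step time so far holds."""
    if steps_done == 0:
        return float("inf")
    return (total_steps - steps_done) * elapsed_seconds / steps_done

def samples_per_second(steps_done, batch_size, elapsed_seconds):
    """Data throughput: samples processed per second of wall-clock time."""
    return steps_done * batch_size / elapsed_seconds
```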

Production Monitoring:
- Request latency and throughput metrics
- Model accuracy and drift detection
- Cost tracking and budget alerts
- Error rates and failure analysis
- User feedback and satisfaction scores
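One common way drift detection is implemented is the Population Stability Index (PSI), which compares the score distribution in production against a reference sample. A minimal sketch, independent of LangTrain's own implementation (a PSI above roughly 0.2 is a common alarm threshold):

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference sample and a live sample."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def frac(sample, b):
        # Fraction of the sample falling in bin b; clamp to avoid log(0)
        left, right = lo + b * width, lo + (b + 1) * width
        count = sum(1 for x in sample if left <= x < right or (b == bins - 1 and x == hi))
        return max(count / len(sample), 1e-6)

    return sum(
        (frac(actual, b) - frac(expected, b)) * math.log(frac(actual, b) / frac(expected, b))
        for b in range(bins)
    )
```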

Advanced Analytics:
- A/B testing between model versions
- Cohort analysis for different user segments
- Performance trending and forecasting
- Custom dashboard creation
- Automated alerting and notifications
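An A/B test between two model versions usually reduces to comparing success rates. A minimal two-proportion z-test sketch (not a LangTrain API; |z| above about 1.96 corresponds to significance at the 95% level):

```python
import math

def ab_z_score(success_a, total_a, success_b, total_b):
    """Two-proportion z-test comparing version B against version A."""
    p_a, p_b = success_a / total_a, success_b / total_b
    pooled = (success_a + success_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    return (p_b - p_a) / se
```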

Scalable Infrastructure

Built-in scaling capabilities for any workload size.

Training Scaling:
- Single GPU: For small datasets and prototyping
- Multi-GPU: Distributed training on single machine
- Multi-Node: Scale across multiple machines
- Spot Instances: Cost-effective training with interruption handling
- Auto-Scaling: Dynamically adjust resources based on demand
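Interruption handling for spot instances generally comes down to checkpointing and resuming. A minimal sketch with a hypothetical per-step callback, unrelated to LangTrain's internals:

```python
import json
import os
import tempfile

def train_with_checkpoints(total_steps, ckpt_path, step_fn):
    """Run steps, persisting progress so an interrupted job resumes where it left off."""
    start = 0
    if os.path.exists(ckpt_path):
        with open(ckpt_path) as f:
            start = json.load(f)["step"] + 1  # resume after the last completed step
    for step in range(start, total_steps):
        step_fn(step)
        with open(ckpt_path, "w") as f:
            json.dump({"step": step}, f)
    return start
```

Restarting the same job after an interruption re-reads the checkpoint and skips completed work instead of training from scratch.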

Inference Scaling:
- Auto-Scaling: Adjust replicas based on traffic
- Load Balancing: Distribute requests efficiently
- Global Deployment: Deploy in multiple regions
- Edge Computing: Run models closer to users
- Serverless: Pay only for actual inference time
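The replica math behind traffic-based auto-scaling can be sketched as follows; the function and its parameters are illustrative, not SDK names:

```python
import math

def desired_replicas(current_rps, rps_per_replica, min_replicas=1, max_replicas=20):
    """Replicas needed to serve current traffic, clamped to configured limits."""
    needed = math.ceil(current_rps / rps_per_replica)
    return max(min_replicas, min(needed, max_replicas))
```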

Resource Management:
- Intelligent resource allocation
- Priority-based scheduling
- Resource quotas and limits
- Cost optimization recommendations

Full Examples

Enable Auto-Tuning

python
import langtrain

client = langtrain.LangTrain()

# Create model with auto-tuning
model = client.models.create(
    name="optimized-classifier",
    type="text-classification"
)

# Train with auto-tuning enabled
training_job = model.train(
    dataset_id="your-dataset-id",
    auto_tune=True,
    auto_tune_config={
        "max_trials": 50,
        "optimization_metric": "f1_score",
        "search_space": {
            "learning_rate": [1e-5, 5e-4],
            "batch_size": [16, 32, 64],
            "num_epochs": [3, 5, 10]
        }
    }
)

# Get best configuration
best_config = training_job.get_best_config()
print(f"Best learning rate: {best_config['learning_rate']}")
print(f"Best batch size: {best_config['batch_size']}")