Installation

Install LangTrain across different platforms and environments.

Supported environments: Windows · Linux · macOS · Python 3.10+ · Docker · CUDA

System Requirements

Minimum Requirements:
• Python 3.8+
• 8GB RAM
• 10GB storage

Recommended:
• Python 3.10+
• 16GB+ RAM
• NVIDIA GPU with 8GB+ VRAM
• 50GB+ storage for models
```bash
# Check system compatibility
python -c "
import sys
print(f'Python version: {sys.version}')

import platform
print(f'OS: {platform.system()} {platform.release()}')

try:
    import torch
    print(f'PyTorch: {torch.__version__}')
    print(f'CUDA available: {torch.cuda.is_available()}')
    if torch.cuda.is_available():
        print(f'GPU: {torch.cuda.get_device_name()}')
        print(f'VRAM: {torch.cuda.get_device_properties(0).total_memory / 1e9:.1f}GB')
except ImportError:
    print('PyTorch not installed')
"
```

Quick Installation

Install LangTrain using pip. We recommend creating a virtual environment first.
Basic Installation:
```bash
pip install langtrain-ai
```
With GPU Support:
```bash
pip install langtrain-ai[gpu]
```
```bash
# Create and activate virtual environment
python -m venv langtrain-env
source langtrain-env/bin/activate  # On Windows: langtrain-env\Scripts\activate

# Install LangTrain
pip install langtrain-ai

# Verify installation
python -c "import langtrain; print(f'LangTrain {langtrain.__version__} installed successfully!')"

# Install with GPU support (recommended)
pip install langtrain-ai[gpu]

# Check GPU support
python -c "import torch; print(f'CUDA: {torch.cuda.is_available()}, Device: {torch.cuda.get_device_name() if torch.cuda.is_available() else \"N/A\"}')"
```

Docker Installation

For production environments, use our official Docker image with all dependencies pre-configured.
Available Images:
• langtrain/langtrain:latest - CPU only
• langtrain/langtrain:gpu - NVIDIA GPU support
• langtrain/langtrain:cuda12 - CUDA 12.x specific
```bash
# Pull the official image
docker pull langtrain/langtrain:gpu

# Run with GPU support
docker run --gpus all -it langtrain/langtrain:gpu

# Run with mounted volume for data persistence
docker run --gpus all -v $(pwd)/data:/data -p 8000:8000 langtrain/langtrain:gpu
```

Docker Compose example (docker-compose.yml):
```yaml
version: '3.8'
services:
  langtrain:
    image: langtrain/langtrain:gpu
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    volumes:
      - ./data:/data
      - ./models:/models
    ports:
      - "8000:8000"
    environment:
      - LANGTRAIN_API_KEY=${LANGTRAIN_API_KEY}
```
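
Once the container is running, you can confirm it is reachable from the host. The short sketch below assumes the published port 8000 from the compose file; the /health path is only an illustrative placeholder, not a documented LangTrain route, so substitute whichever endpoint your deployment actually exposes.
```python
# Minimal reachability check for the running container (assumptions noted above).
import urllib.request

try:
    # Port 8000 matches the compose file; /health is a placeholder route.
    with urllib.request.urlopen("http://localhost:8000/health", timeout=5) as resp:
        print(f"Server responded with HTTP {resp.status}")
except OSError as exc:
    # Connection errors and timeouts from urllib subclass OSError.
    print(f"Could not reach the container: {exc}")
```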

Troubleshooting

Common Issues:

CUDA Not Found:
Ensure the NVIDIA drivers and CUDA toolkit are installed.

Out of Memory:
Reduce the batch size or switch to QLoRA for memory-efficient training; a short QLoRA loading sketch follows this list.

Import Errors:
Reinstall with pip install --force-reinstall langtrain-ai
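
For the out-of-memory case, the sketch below shows the general QLoRA idea: load the base model with 4-bit quantized weights and train only small LoRA adapters. It is a minimal illustration using the transformers, peft, and bitsandbytes packages checked by the diagnostic script below; the model id and hyperparameters are placeholders and none of this is LangTrain-specific API.
```python
# Minimal QLoRA-style setup (illustrative; model id and hyperparameters are placeholders).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Quantize the frozen base weights to 4-bit to cut VRAM usage.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",   # placeholder base model
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach small trainable LoRA adapters; only these are updated during fine-tuning.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```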
```bash
# Diagnose installation issues
python -c "
import sys
print('=== System Info ===')
print(f'Python: {sys.version}')

print('\n=== Package Versions ===')
packages = ['langtrain', 'torch', 'transformers', 'peft', 'bitsandbytes']
for pkg in packages:
    try:
        mod = __import__(pkg)
        print(f'{pkg}: {getattr(mod, \"__version__\", \"installed\")}')
    except ImportError:
        print(f'{pkg}: NOT INSTALLED')

print('\n=== GPU Status ===')
try:
    import torch
    print(f'CUDA available: {torch.cuda.is_available()}')
    if torch.cuda.is_available():
        print(f'GPU: {torch.cuda.get_device_name()}')
        print(f'VRAM: {torch.cuda.get_device_properties(0).total_memory / 1e9:.1f}GB')
        print(f'CUDA version: {torch.version.cuda}')
except Exception as e:
    print(f'Error: {e}')
"

# Fix common issues
pip install --upgrade langtrain-ai
pip install torch --index-url https://download.pytorch.org/whl/cu121  # CUDA 12.1
```