Python SDK

The official Python SDK for training, deploying, and managing AI models with Langtrain.

Python 3.9+
Type-safe
Async Support

Installation

Install the SDK with pip. Python 3.9+ is required.
pip install langtrain-ai

# With GPU support (quote the extra so shells like zsh don't expand the brackets)
pip install "langtrain-ai[gpu]"

# Verify installation
python -c "import langtrain; print(langtrain.__version__)"
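
If you want to check the installed version from Python code rather than the one-liner above, the standard library's importlib.metadata can read it from package metadata. This is a minimal sketch; the only Langtrain-specific detail is the distribution name langtrain-ai taken from the install command above.

from importlib.metadata import version, PackageNotFoundError

try:
    # Distribution name matches the pip package installed above
    print("langtrain-ai", version("langtrain-ai"))
except PackageNotFoundError:
    print("langtrain-ai is not installed in this environment")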

Authentication

Configure your API key to authenticate with Langtrain.
import langtrain

# Option 1: Environment variable (recommended)
# export LANGTRAIN_API_KEY=your-api-key

# Option 2: Direct configuration
langtrain.api_key = "your-api-key"

# Option 3: Client initialization
client = langtrain.Client(api_key="your-api-key")
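
A common pattern is to keep the key out of source code entirely and fail fast when it is missing. This sketch only combines the LANGTRAIN_API_KEY variable and the langtrain.Client initialization shown above.

import os
import langtrain

# Read the key from the environment (Option 1 above) and fail early if it is absent
api_key = os.environ.get("LANGTRAIN_API_KEY")
if not api_key:
    raise RuntimeError("Set LANGTRAIN_API_KEY before creating a client")

client = langtrain.Client(api_key=api_key)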

LoRA Training

Fine-tune models with LoRA (Low-Rank Adaptation) for efficient, memory-friendly training.
from langtrain import LoRATrainer

# Initialize trainer with base model
trainer = LoRATrainer(
    model="meta-llama/Llama-3.3-8B",
    output_dir="./my-model"
)

# Train on your data
trainer.train("training_data.jsonl")

# Save the trained model
trainer.save()
trainer.push("my-custom-model")  # Upload to Langtrain Cloud
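
The trainer above reads training_data.jsonl. As a rough sketch using only the standard library, the snippet below writes one JSON record per line; the prompt/completion field names are an assumption here, so check Langtrain's dataset format documentation before training on a file like this.

import json

# Hypothetical prompt/completion records; the exact schema Langtrain expects is an assumption
examples = [
    {"prompt": "What is LoRA?", "completion": "A parameter-efficient fine-tuning method."},
    {"prompt": "What does JSONL mean?", "completion": "JSON Lines: one JSON object per line."},
]

with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for record in examples:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")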

Dataset Management

Upload and manage training datasets programmatically.
from langtrain import Dataset

# Upload a dataset
dataset = Dataset.upload(
    file_path="data.jsonl",
    name="customer-support-v1"
)

print(f"Dataset ID: {dataset.id}")
print(f"Rows: {dataset.row_count}")

# List all datasets
datasets = Dataset.list()
for ds in datasets:
    print(f"- {ds.name} ({ds.status})")
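
Building on Dataset.list() above, a small helper can look up an existing dataset by name before starting a job. This sketch uses only the name, id, and status attributes shown in the listing example.

from langtrain import Dataset

def find_dataset(name: str):
    # Return the first dataset whose name matches, or None if there is no match
    for ds in Dataset.list():
        if ds.name == name:
            return ds
    return None

dataset = find_dataset("customer-support-v1")
if dataset is not None:
    print(f"Found {dataset.name}: id={dataset.id}, status={dataset.status}")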

Training Jobs

Create and monitor fine-tuning jobs on Langtrain Cloud.
from langtrain import TrainingJob
import time

# Create a training job
job = TrainingJob.create(
    model_id="llama-3.3-8b",
    dataset_id=dataset.id,
    config={
        "method": "qlora",
        "epochs": 3,
        "learning_rate": 2e-4,
        "batch_size": 4
    }
)

# Monitor progress
while job.status in ["pending", "running"]:
    job.refresh()
    print(f"Status: {job.status}, Progress: {job.progress}%")
    time.sleep(30)

print(f"Training completed: {job.model_id}")
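
For longer runs you may want a timeout around the polling loop above. The helper below is a sketch that wraps the same refresh()/status calls in a function that gives up after a configurable number of seconds; everything except the timeout logic comes from the example above.

import time

def wait_for_job(job, timeout_s=3600, poll_s=30):
    """Poll a training job until it leaves pending/running or the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while job.status in ["pending", "running"]:
        if time.monotonic() > deadline:
            raise TimeoutError(f"Job still {job.status} after {timeout_s}s")
        time.sleep(poll_s)
        job.refresh()
        print(f"Status: {job.status}, Progress: {job.progress}%")
    return job

job = wait_for_job(job)
print(f"Final status: {job.status}")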

Inference

Generate text with your trained models.
from langtrain import Model

# Load your model
model = Model.load("my-custom-model")

# Generate text
response = model.generate(
    prompt="Explain machine learning",
    max_tokens=200,
    temperature=0.7
)
print(response)

# Chat interface
messages = [{"role": "user", "content": "Hello!"}]
response = model.chat(messages)
print(response["content"])

# Streaming
for chunk in model.stream("Tell me a story"):
    print(chunk, end="", flush=True)
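
Because the chat interface above takes a list of messages, keeping a running history gives you multi-turn conversations. This sketch assumes the assistant reply can be appended back into the history as a {"role": "assistant", "content": ...} message, mirroring the response["content"] access shown above.

from langtrain import Model

model = Model.load("my-custom-model")
messages = []

for user_turn in ["Hello!", "What can you help me with?"]:
    messages.append({"role": "user", "content": user_turn})
    response = model.chat(messages)
    # Feed the assistant reply back into the history for the next turn (assumed message format)
    messages.append({"role": "assistant", "content": response["content"]})
    print(response["content"])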

Async Support

Use async/await for non-blocking operations.
import asyncio
from langtrain import AsyncClient

async def main():
    client = AsyncClient()

    # Async generation
    response = await client.generate(
        model="my-model",
        prompt="Explain async programming"
    )
    print(response)

    # Async streaming
    async for chunk in client.stream("Tell me about Python"):
        print(chunk, end="")

asyncio.run(main())
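
Because the client is async, several generations can run concurrently with asyncio.gather. This sketch reuses the client.generate call from the example above and assumes the same "my-model" model name.

import asyncio
from langtrain import AsyncClient

async def main():
    client = AsyncClient()
    prompts = ["Explain LoRA", "Explain QLoRA", "Explain JSONL"]

    # Launch all requests at once and wait for every response
    responses = await asyncio.gather(
        *(client.generate(model="my-model", prompt=p) for p in prompts)
    )
    for prompt, response in zip(prompts, responses):
        print(f"{prompt} -> {response}")

asyncio.run(main())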

Error Handling

Handle errors gracefully with specific exception types.
from langtrain import TrainingJob
from langtrain.exceptions import (
    AuthenticationError,
    RateLimitError,
    ValidationError,
    NotFoundError
)

try:
    job = TrainingJob.create(...)
except AuthenticationError:
    print("Invalid API key")
except RateLimitError as e:
    print(f"Rate limited, retry in {e.retry_after}s")
except ValidationError as e:
    print(f"Invalid config: {e.message}")
except NotFoundError:
    print("Model or dataset not found")