
Python SDK

Complete guide to using LangTrain's Python SDK for model training and deployment.

Installation

Install the LangTrain Python SDK using pip:

Requirements: Python 3.8 or higher

The SDK includes all necessary dependencies for model training and inference.
```bash
pip install langtrain-ai

# Or install with optional dependencies
pip install "langtrain-ai[gpu]"  # For GPU support
pip install "langtrain-ai[dev]"  # For development tools
```

Quick Start

Get started with LangTrain in just a few lines of Python code:

Authentication: Use your API key from the dashboard.
```python
import langtrain

# Initialize client
client = langtrain.Client(api_key="your-api-key")

# Start a fine-tuning job
job = client.fine_tune.create(
    model="llama-2-7b",
    dataset="your-dataset-id",
    config={
        "learning_rate": 2e-5,
        "batch_size": 4,
        "epochs": 3
    }
)

print(f"Fine-tuning job started: {job.id}")
```
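Hard-coding the API key is fine for quick experiments, but for shared code it is safer to read it from the environment. A minimal sketch; the `LANGTRAIN_API_KEY` variable name and `load_api_key` helper are our own conventions, not part of the SDK:

```python
import os

def load_api_key(env_var: str = "LANGTRAIN_API_KEY") -> str:
    # Read the key from the environment so it never lands in source control.
    # The variable name is an assumed convention, not a LangTrain requirement.
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set {env_var} before creating the client")
    return key

# client = langtrain.Client(api_key=load_api_key())
```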

Fine-tuning Models

Fine-tune models with custom datasets and configurations:

Supported Models: LLaMA, Mistral, CodeLlama, and more

LoRA Support: Efficient fine-tuning with Low-Rank Adaptation
```python
import time  # needed for the polling loop below

# Upload dataset
dataset = client.datasets.upload(
    file_path="training_data.jsonl",
    name="my-dataset"
)

# Create fine-tuning job with LoRA
job = client.fine_tune.create(
    model="mistral-7b",
    dataset=dataset.id,
    config={
        "method": "lora",
        "rank": 16,
        "alpha": 32,
        "learning_rate": 1e-4,
        "max_steps": 1000
    }
)

# Monitor progress
while job.status == "running":
    job = client.fine_tune.get(job.id)
    print(f"Progress: {job.progress}%")
    time.sleep(30)
```
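To see why LoRA is efficient, compare trainable parameter counts. Instead of updating a full weight matrix W of shape (d_out, d_in), LoRA trains two low-rank factors B (d_out, r) and A (r, d_in). A back-of-the-envelope check using the rank from the config above; the 4096x4096 layer shape is an illustrative assumption, not a LangTrain detail:

```python
# Trainable parameters for one weight matrix: full fine-tuning vs. LoRA.
d_out, d_in = 4096, 4096   # illustrative layer shape, not an SDK value
rank = 16                  # matches the "rank" in the config above

full_params = d_out * d_in            # update every weight
lora_params = rank * (d_out + d_in)   # factors B (d_out x r) and A (r x d_in)

print(full_params)                    # 16777216
print(lora_params)                    # 131072
print(full_params // lora_params)     # 128x fewer trainable parameters
```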

Model Inference

Use your fine-tuned models for inference:

Streaming: Support for real-time streaming responses

Batch Processing: Efficient batch inference for large datasets
```python
# Load fine-tuned model
model = client.models.get("your-model-id")

# Single inference
response = model.generate(
    prompt="What is the capital of France?",
    max_tokens=100,
    temperature=0.7
)

print(response.text)

# Streaming inference
for chunk in model.stream(prompt="Tell me a story"):
    print(chunk.text, end="", flush=True)
```
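For batch processing, a common pattern is to split prompts into fixed-size chunks and send each chunk to the model in a single call. A generic sketch; the `chunked` helper is our own, not an SDK function:

```python
from typing import Iterator, List

def chunked(prompts: List[str], size: int) -> Iterator[List[str]]:
    # Yield successive fixed-size batches; the last batch may be smaller.
    for i in range(0, len(prompts), size):
        yield prompts[i:i + size]

batches = list(chunked(["p1", "p2", "p3", "p4", "p5"], size=2))
# Each batch can then be passed to the model in one request.
```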

Error Handling

The SDK provides robust error handling and retry mechanisms:

Automatic Retries: Built-in retry logic for transient failures

Custom Exceptions: Specific exceptions for different error types
```python
from langtrain.exceptions import (
    AuthenticationError,
    RateLimitError,
    ModelNotFoundError
)

try:
    job = client.fine_tune.create(...)
except AuthenticationError:
    print("Invalid API key")
except RateLimitError as e:
    print(f"Rate limited. Retry after {e.retry_after} seconds")
except ModelNotFoundError:
    print("Model not found")
except Exception as e:
    print(f"Unexpected error: {e}")
```
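If you need retry behavior beyond what the SDK's built-in logic covers, a generic exponential-backoff wrapper looks like this. This is a sketch, not part of the SDK; it retries on `ConnectionError` for illustration, whereas a real setup might retry `RateLimitError` using its `retry_after` hint instead:

```python
import random
import time

def with_retries(fn, max_attempts=4, base_delay=1.0, retryable=(ConnectionError,)):
    # Call fn(), retrying on the given exception types with exponential
    # backoff plus jitter; re-raise once max_attempts is exhausted.
    for attempt in range(max_attempts):
        try:
            return fn()
        except retryable:
            if attempt == max_attempts - 1:
                raise
            delay = base_delay * (2 ** attempt) * random.uniform(0.5, 1.0)
            time.sleep(delay)

# result = with_retries(lambda: client.fine_tune.create(...))
```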
