
Introduction

Langtrain is a platform for fine-tuning, deploying, and scaling custom AI models.

  • 100+ Models
  • Simple API
  • Enterprise Ready
  • Cloud Deploy

What is Langtrain?

Langtrain lets you customize Large Language Models (LLMs) with your own data. Train models like Llama 3.3, Mistral, Qwen, and DeepSeek in minutes, then deploy them to production with a single command.
No ML expertise required—just Python and your data.

Key Capabilities

  • Fine-tuning: Train LoRA adapters on 100+ open-source models
  • Model Hub: Browse and select from pre-trained base models
  • Datasets: Upload and manage training data in JSONL, CSV, or Parquet
  • Training Jobs: Monitor progress, metrics, and logs in real time
  • Inference: Deploy models with OpenAI-compatible API endpoints
  • Agents: Build and deploy AI agents with custom workflows
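For the JSONL dataset format, each line is one training example. A minimal sketch of writing and validating a chat-format file, assuming the widely used OpenAI-style "messages" schema (Langtrain's exact field names may differ):

```python
import json

# One training example per line (JSONL), using the common chat
# "messages" schema: a list of role/content turns per example.
examples = [
    {
        "messages": [
            {"role": "user", "content": "What is LoRA?"},
            {"role": "assistant", "content": "LoRA is a parameter-efficient fine-tuning method."},
        ]
    },
]

# Serialize to JSONL: one compact JSON object per line, no trailing commas.
jsonl = "\n".join(json.dumps(ex) for ex in examples)

# Validate before upload: every line must parse back to a dict
# containing a non-empty "messages" list.
for line in jsonl.splitlines():
    record = json.loads(line)
    assert isinstance(record["messages"], list) and record["messages"]
```

The same pattern applies to instruction-format data; only the per-line schema changes.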

How It Works

1. Choose a model from our hub (Llama, Mistral, Qwen, etc.)
2. Upload your data in chat or instruction format
3. Train with LoRA for fast, memory-efficient fine-tuning
4. Deploy to the cloud with one click for production inference
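Because a deployed model exposes an OpenAI-compatible endpoint (step 4), any OpenAI-style client can call it. A minimal sketch building a chat-completions request with only the standard library; the base URL, API key, and model name below are placeholders, not real Langtrain values:

```python
import json
import urllib.request

# Placeholder endpoint and key: substitute your deployment's values.
BASE_URL = "https://api.example.com/v1"  # hypothetical deployment URL
API_KEY = "YOUR_API_KEY"

# Standard OpenAI-style chat-completions payload.
payload = {
    "model": "my-finetuned-llama",  # hypothetical deployed model name
    "messages": [{"role": "user", "content": "Hello!"}],
}

request = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(request) would send it; omitted here so the
# sketch stays runnable without a live deployment.
```

Swapping in the official `openai` client works the same way: point its `base_url` at the deployment and pass your API key.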

Platform Features

The Langtrain dashboard provides:
  • Training Dashboard: Start and monitor fine-tuning jobs
  • Model Hub: Explore 100+ models with benchmarks
  • Agents Builder: Create AI agents with custom tools
  • Analytics: Track usage, costs, and performance
  • API Keys: Manage authentication for your applications

Supported Models

We support the latest open-source models:
  • Llama 3.3 by Meta (8B, 70B, 405B)
  • Mistral/Mixtral by Mistral AI
  • Qwen 2.5 by Alibaba
  • DeepSeek V3 (70B, 236B)
  • Phi-4 by Microsoft
  • Gemma 2 by Google
All models are optimized for LoRA/QLoRA fine-tuning.
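LoRA's memory efficiency comes from training a small low-rank update instead of the full weight matrix. A back-of-the-envelope sketch for a single 4096x4096 projection layer (rank 16 is an illustrative choice, not a Langtrain default):

```python
# LoRA replaces the full weight update (d_out x d_in trainable
# parameters) with two low-rank factors: B (d_out x r) and A (r x d_in).
d_out, d_in = 4096, 4096   # one projection matrix in a ~7-8B model
r = 16                     # illustrative LoRA rank

full_params = d_out * d_in        # full fine-tune: 16,777,216 params
lora_params = r * (d_out + d_in)  # LoRA factors:        131,072 params

print(f"full: {full_params:,}  lora: {lora_params:,}  "
      f"reduction: {full_params // lora_params}x")
```

For this single matrix that is a 128x reduction in trainable parameters, which is why LoRA (and its quantized variant QLoRA) fits on much smaller GPUs than full fine-tuning.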

Next Steps

Ready to get started?
  • Quick Start - Train your first model in 5 minutes
  • Installation - Set up the SDK
  • API Reference - REST API documentation