Docs

Getting Started

  • Introduction
  • Quick Start
  • Installation

Fine-tuning

  • LoRA & QLoRA
  • Full Fine-tuning

API & SDK

  • REST API
  • Python SDK

Deployment

  • Cloud Deployment
  • Security

Resources

  • FAQ
  • Changelog

Quick Start

Get your first fine-tuned AI model running in under 5 minutes. No deep ML knowledge required.

  • ⚡ 5 Minute Setup - From zero to fine-tuned model in minutes
  • 💻 No GPU Required - Works on CPU, faster with an NVIDIA GPU
  • 🎯 Beginner Friendly - Clear explanations at every step
  • 🚀 Production Ready - The same code works in production

What You'll Need

Before we start, make sure you have these basics:

Python 3.8 or newer - Check with python --version

8GB RAM minimum - 16GB recommended for larger models

10GB free disk space - For downloading models

Optional but recommended: NVIDIA GPU for 10-50x faster training

Don't worry if you don't have a GPU - LangTrain works great on CPU too, just slower.
```bash
# Quick system check - run this to see if you're ready
python --version  # Should show 3.8 or higher

# Optional: Check if you have GPU support
python -c "import torch; print('GPU ready!' if torch.cuda.is_available() else 'CPU mode - still works!')"

# If you don't have torch yet, that's fine - we'll install it next
```
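The checklist above also asks for 10GB of free disk space. If you want to verify that too, a quick check using only the Python standard library (this helper is not part of LangTrain):

```python
import shutil

# Check free disk space in the current directory (model downloads need ~10GB)
total, used, free = shutil.disk_usage(".")
print(f"Free disk space: {free / 1e9:.1f} GB")
if free < 10e9:
    print("Warning: less than 10GB free - model downloads may fail")
```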

Install LangTrain

Installing LangTrain is just one command. We recommend a virtual environment to keep things clean, but it's optional.

What gets installed: LangTrain automatically installs PyTorch, Transformers, and everything else you need.
```bash
# Step 1: Create a clean environment (recommended but optional)
python -m venv langtrain-env
source langtrain-env/bin/activate  # Windows: langtrain-env\Scripts\activate

# Step 2: Install LangTrain
pip install langtrain-ai

# Step 3: Verify it worked
python -c "import langtrain; print('✅ LangTrain installed!')"

# That's it! You're ready to train your first model.
```

Train Your First Model

Let's fine-tune a chatbot! This example creates a simple conversational AI that you can customize with your own data.

What's happening: We're taking a pre-trained model and teaching it to respond in a specific way using LoRA (a technique that makes training fast and efficient).

Time required: About 5-10 minutes on GPU, 30-60 minutes on CPU.
```python
from langtrain import LoRATrainer

# Step 1: Define your training data
# This is what you want your AI to learn
training_data = [
    {"user": "Hello!", "assistant": "Hi there! How can I help you today?"},
    {"user": "What can you do?", "assistant": "I can answer questions, have conversations, and help with various tasks!"},
    {"user": "Thanks!", "assistant": "You're welcome! Feel free to ask anything else."},
]

# Step 2: Create the trainer
# This sets up everything for you automatically
trainer = LoRATrainer(
    model_name="microsoft/DialoGPT-medium",  # The base model to fine-tune
    output_dir="./my_first_chatbot",         # Where to save your model
)

# Step 3: Train!
trainer.train(training_data)

# Step 4: Test your model
response = trainer.chat("Hello!")
print(f"Your AI says: {response}")
```

Use Your Trained Model

Once training is complete, you can use your model anywhere. Here's how to load and use it.

Your model is saved in ./my_first_chatbot - you can share this folder or deploy it.
```python
from langtrain import ChatModel

# Load your trained model
model = ChatModel.load("./my_first_chatbot")

# Have a conversation
print(model.chat("Hello!"))
print(model.chat("What can you do?"))
print(model.chat("Thanks for the help!"))

# Your AI will respond based on what you trained it on!
```

Using Your Own Data

Want to train on your own conversations? LangTrain accepts data in multiple formats.

JSONL format is recommended - one JSON object per line.

Structure: Each line should have a "user" message and "assistant" response.
```python
# Method 1: Load from a JSONL file
# Your file should look like:
# {"user": "Hello", "assistant": "Hi there!"}
# {"user": "How are you?", "assistant": "I'm doing great!"}

from langtrain import LoRATrainer

trainer = LoRATrainer(
    model_name="microsoft/DialoGPT-medium",
    output_dir="./custom_chatbot",
)

# Train from your file
trainer.train_from_file("my_conversations.jsonl")

# Method 2: Load from a Hugging Face dataset
trainer.train_from_hub("your_username/your_dataset")
```
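If your conversations start out as Python objects, the JSONL file described above can be produced with just the standard library. This sketch uses plain `json` (not a LangTrain API) to write one object per line, then re-reads the file to confirm each line has the expected "user" and "assistant" keys:

```python
import json

conversations = [
    {"user": "Hello", "assistant": "Hi there!"},
    {"user": "How are you?", "assistant": "I'm doing great!"},
]

# Write one JSON object per line - the JSONL layout LangTrain expects
with open("my_conversations.jsonl", "w", encoding="utf-8") as f:
    for turn in conversations:
        f.write(json.dumps(turn) + "\n")

# Sanity-check the file before training
with open("my_conversations.jsonl", encoding="utf-8") as f:
    for line_num, line in enumerate(f, 1):
        record = json.loads(line)
        assert "user" in record and "assistant" in record, f"Bad line {line_num}"
```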

Next Steps

Congratulations! 🎉 You've trained your first AI model with LangTrain.

What to explore next:

- LoRA & QLoRA Guide - Train larger models like Llama 3 on consumer hardware
- API Reference - Use LangTrain's cloud API for production
- Deployment Guide - Deploy your model as an API endpoint

Need help? Join our Discord community or check the FAQ.
```python
# Ready for more? Try these next:

# 1. Train a larger, more capable model with QLoRA
from langtrain import QLoRATrainer
trainer = QLoRATrainer(
    model_name="meta-llama/Llama-3.1-8B",
    load_in_4bit=True,  # Uses only 6GB VRAM!
)

# 2. Use the cloud API for instant inference
import langtrain
client = langtrain.Client(api_key="your-key")
response = client.chat("Hello!")

# 3. Deploy your model as an API
from langtrain import deploy
deploy("./my_first_chatbot", port=8000)

# Visit http://localhost:8000 to use your model!
```
