Frequently Asked Questions

Find answers to common questions about LangTrain's features, pricing, and usage.

What models does LangTrain support?

LangTrain supports 50+ open-source models including Llama 3.x, Mistral, Gemma, Phi 4, Qwen 2.5, DeepSeek, Falcon, and more. You can also bring your own model checkpoints from HuggingFace.
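As an illustration, a custom checkpoint can be pulled from the Hugging Face Hub with the transformers library before handing it to a fine-tuning job. This is a minimal sketch; the repository ID below is a placeholder, not a real checkpoint.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repository ID; substitute the checkpoint you want to fine-tune.
model_id = "your-org/your-custom-checkpoint"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
print(f"Loaded {model.num_parameters():,} parameters from {model_id}")
```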

How long does fine-tuning take?

Fine-tuning time depends on model size, dataset size, and method:
• QLoRA (7B model): 30-60 minutes
• LoRA (13B model): 1-2 hours
• Full fine-tuning (70B model): 6-24 hours

We use H100 GPUs for maximum performance.

Can I use my own data?

Yes! You can upload your own datasets in JSONL, CSV, or Parquet format. We support the following record layouts (illustrated below):
• Instruction-following format
• Chat format (messages)
• Completion format (text only)
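A rough sketch of what records in each layout can look like is below, written as Python dicts serialized to JSONL. The field names for the instruction-following layout are an assumption for illustration; in practice a single dataset file uses one layout throughout.

```python
import json

# Illustrative records for the three dataset layouts (one layout per file in practice).
instruction_record = {  # instruction-following format (field names are assumptions)
    "instruction": "Summarize the passage.",
    "input": "LangTrain fine-tunes open-source LLMs.",
    "output": "It is a fine-tuning platform for open models.",
}
chat_record = {  # chat format: a list of role/content messages
    "messages": [
        {"role": "user", "content": "What is LoRA?"},
        {"role": "assistant", "content": "A parameter-efficient fine-tuning method."},
    ]
}
completion_record = {"text": "LangTrain supports JSONL, CSV, and Parquet uploads."}  # completion format

# Write records as JSONL: one JSON object per line.
with open("dataset.jsonl", "w") as f:
    for record in (instruction_record, chat_record, completion_record):
        f.write(json.dumps(record) + "\n")
```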

Is my data secure?

Absolutely. We use:
• End-to-end encryption for all data
• SOC 2 Type II compliance
• Data isolation per workspace
• No training on your data for other models
• GDPR-compliant data handling

What's the difference between LoRA and full fine-tuning?

LoRA (Low-Rank Adaptation):
• Trains only 0.1-1% of parameters
• 10x faster, 10x less memory
• Great for most use cases

Full Fine-tuning:
• Updates all parameters
• Maximum performance
• Requires more compute
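To make the "trains only 0.1-1% of parameters" point concrete, here is a minimal LoRA sketch using the Hugging Face peft library rather than LangTrain's own API; the base model ID, rank, and target modules are illustrative choices.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder base model; any causal LM checkpoint works the same way.
base_model = AutoModelForCausalLM.from_pretrained("your-org/your-base-model")

# Low-rank adapters on the attention projections; all other weights stay frozen.
lora_config = LoraConfig(
    r=16,                                 # rank of the low-rank update matrices
    lora_alpha=32,                        # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # which linear layers receive adapters
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```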

Can I deploy to my own cloud?

Yes! You can:
• Export models to HuggingFace format
• Deploy to AWS, GCP, or Azure
• Use our containerized inference servers
• Self-host with Docker

Do you offer API access?

Yes, we provide:
• REST API for all features
• Python SDK (pip install langtrain-ai)
• OpenAI-compatible inference endpoints (see the example below)
• Comprehensive API documentation
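Because the inference endpoints are OpenAI-compatible, the standard openai Python client can be pointed at them. In the sketch below, the base URL, model name, and environment variable are placeholders, not confirmed LangTrain values.

```python
import os
from openai import OpenAI

# Point the stock OpenAI client at an OpenAI-compatible endpoint (placeholder URL).
client = OpenAI(
    base_url="https://your-langtrain-endpoint.example.com/v1",
    api_key=os.environ["LANGTRAIN_API_KEY"],  # placeholder env var name
)

response = client.chat.completions.create(
    model="my-finetuned-model",  # name of the deployed fine-tuned model
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```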

What are the pricing options?

We offer flexible pricing:
• Starter: Free tier for experimentation
• Pro: $49/month with more compute
• Enterprise: Custom pricing for large teams

All plans include pay-as-you-go GPU compute.