Find answers to common questions about LangTrain's features, pricing, and usage.
What models does LangTrain support?
LangTrain supports 50+ open-source models, including Llama 3.x, Mistral, Gemma, Phi-4, Qwen 2.5, DeepSeek, and Falcon. You can also bring your own model checkpoints from Hugging Face.
How long does fine-tuning take?
Fine-tuning time depends on model size, dataset size, and method:
• QLoRA, 7B model: 30-60 minutes
• LoRA, 13B model: 1-2 hours
• Full fine-tuning, 70B model: 6-24 hours
We use H100 GPUs for maximum performance.
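For intuition on why these times scale with model and dataset size, training compute is often approximated as 6 × parameters × tokens (one forward plus backward pass). The sketch below turns that into a wall-clock estimate; the throughput default is an illustrative assumption for a single H100 at realistic utilization, not a measured LangTrain benchmark.

```python
def estimate_hours(params: float, tokens: float,
                   tflops_per_gpu: float = 400.0, num_gpus: int = 1) -> float:
    """Rough wall-clock estimate for one epoch of full fine-tuning.

    Uses the ~6 * N * D FLOPs approximation (forward + backward).
    The 400 TFLOPs default is an assumed sustained bf16 throughput
    for one H100 -- illustrative only. Parameter-efficient methods
    (LoRA/QLoRA) typically finish faster than this estimate.
    """
    total_flops = 6.0 * params * tokens
    throughput = tflops_per_gpu * 1e12 * num_gpus  # FLOPs per second
    return total_flops / throughput / 3600.0

# e.g. a 7B model over a 10M-token dataset on a single GPU:
print(f"{estimate_hours(7e9, 1e7):.2f} hours")
```

Actual times also depend on sequence length, batch size, and I/O, so treat this as an order-of-magnitude guide.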
Can I use my own data?
Yes! You can upload your own datasets in JSONL, CSV, or Parquet format. We support:
• Instruction-following format
• Chat format (messages)
• Completion format (text only)
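For illustration, here is one JSONL record in each of the three formats. The field names ("instruction", "messages", "text", and so on) follow common open-source conventions and are assumptions here, not LangTrain's confirmed schema; check the dataset documentation for the authoritative field names.

```python
import json

# One example record per supported format. Field names follow common
# open-source conventions (Alpaca-style instructions, chat "messages")
# and are illustrative assumptions, not LangTrain's confirmed schema.
records = {
    "instruction": {
        "instruction": "Summarize the text.",
        "input": "LangTrain fine-tunes open-source models.",
        "output": "LangTrain is a fine-tuning service.",
    },
    "chat": {
        "messages": [
            {"role": "user", "content": "What is LoRA?"},
            {"role": "assistant",
             "content": "A parameter-efficient fine-tuning method."},
        ]
    },
    "completion": {
        "text": "LoRA adds small trainable adapter matrices to a frozen model."
    },
}

# In a JSONL file, each record is serialized as one JSON object per line.
for name, record in records.items():
    print(name, "->", json.dumps(record))
```

A CSV or Parquet upload would carry the same fields as columns instead of JSON keys.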
Is my data secure?
Absolutely. We use:
• End-to-end encryption for all data
• SOC 2 Type II compliance
• Data isolation per workspace
• Your data is never used to train models for other customers
• GDPR-compliant data handling
What's the difference between LoRA and full fine-tuning?