Find answers to common questions about LangTrain's features, pricing, and usage.
What models does LangTrain support?
LangTrain supports 50+ open-source models including Llama 3.x, Mistral, Gemma, Phi 4, Qwen 2.5, DeepSeek, Falcon, and more. You can also bring your own model checkpoints from HuggingFace.
How long does fine-tuning take?
Fine-tuning time depends on model size, dataset size, and method; smaller models and LoRA-based runs finish fastest.
Can I use my own data?
Yes! You can upload your own datasets in JSONL, CSV, or Parquet format. We support:
- Instruction-following format
- Chat format (messages)
- Completion format (text only)
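A JSONL dataset is simply one JSON object per line. As a sketch of the three supported formats, the field names below (`instruction`/`input`/`output`, OpenAI-style `messages`, plain `text`) follow common conventions and are assumptions, not a confirmed LangTrain schema; check the docs for the exact fields your project expects:

```python
import json

records = [
    # Instruction-following format (hypothetical field names)
    {"instruction": "Summarize the text.",
     "input": "LangTrain fine-tunes LLMs.",
     "output": "A fine-tuning platform."},
    # Chat format (messages)
    {"messages": [
        {"role": "user", "content": "What is LoRA?"},
        {"role": "assistant", "content": "A parameter-efficient fine-tuning method."},
    ]},
    # Completion format (text only)
    {"text": "LangTrain supports 50+ open-source models."},
]

# JSONL: serialize one record per line
with open("dataset.jsonl", "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")
```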
Is my data secure?
Absolutely. We use:
- End-to-end encryption for all data
- SOC 2 Type II compliance
- Data isolation per workspace
- No training on your data for other models
- GDPR-compliant data handling
What's the difference between LoRA and full fine-tuning?
LoRA (Low-Rank Adaptation):
- Trains only 0.1-1% of parameters
- 10x faster, 10x less memory
- Great for most use cases
Full Fine-tuning:
- Updates all parameters
- Maximum performance
- Requires more compute
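The "0.1-1% of parameters" figure can be sanity-checked with back-of-the-envelope arithmetic: for each adapted weight matrix of shape (d_out, d_in), LoRA trains two low-rank factors totaling r * (d_in + d_out) parameters. The model shape below (a hypothetical 7B-class model with 32 layers and hidden size 4096, rank-8 adapters on the four attention projections) is an illustrative assumption:

```python
def lora_params(d_in, d_out, rank):
    # A is (rank x d_in), B is (d_out x rank)
    return rank * (d_in + d_out)

# Hypothetical 7B-class model: 32 layers, hidden size 4096,
# LoRA rank 8 on the q, k, v, and o projections of each layer.
hidden, layers, rank, matrices_per_layer = 4096, 32, 8, 4
trainable = layers * matrices_per_layer * lora_params(hidden, hidden, rank)
total = 7_000_000_000

print(f"trainable: {trainable:,}")           # 8,388,608
print(f"fraction: {trainable / total:.4%}")  # 0.1198%
```

Roughly 0.12% of the weights are trained, which is why LoRA needs far less memory: optimizer state and gradients are only kept for the adapter parameters.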
Can I deploy to my own cloud?
Yes! You can:
- Export models to HuggingFace format
- Deploy to AWS, GCP, or Azure
- Use our containerized inference servers
- Self-host with Docker
Do you offer API access?
Yes, we provide:
- REST API for all features
- Python SDK (`pip install langtrain-ai`)
- OpenAI-compatible inference endpoints
- Comprehensive API documentation
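Because the inference endpoints are OpenAI-compatible, requests use the standard chat-completions shape. A minimal sketch of the request body; the base URL and model name are placeholders, not real LangTrain values:

```python
import json

BASE_URL = "https://api.example.com/v1"  # placeholder; use your workspace's endpoint
payload = {
    "model": "my-finetuned-llama",       # placeholder deployed model name
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "temperature": 0.7,
}

# OpenAI compatibility means the official client also works, e.g.:
#   from openai import OpenAI
#   client = OpenAI(base_url=BASE_URL, api_key="YOUR_KEY")
#   resp = client.chat.completions.create(**payload)
body = json.dumps(payload)
```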
What are the pricing options?
We offer flexible pricing:
- Starter: Free tier for experimentation
- Pro: $49/month with more compute
- Enterprise: Custom pricing for large teams