Get your first fine-tuned AI model running in under 5 minutes. No deep ML knowledge required.
- From zero to fine-tuned model in minutes
- Works on CPU, faster with an NVIDIA GPU
- Clear explanations at every step
- The same code works in production
```bash
# Quick system check - run this to see if you're ready
python --version  # Should show Python 3.8 or higher

# Optional: check if you have GPU support
python -c "import torch; print('GPU ready!' if torch.cuda.is_available() else 'CPU mode - still works!')"

# If you don't have torch yet, that's fine - we'll install it next
```
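If you'd rather run the check from Python than the shell, here is a stdlib-only sketch. The `check_environment` helper is hypothetical (not part of LangTrain), and the torch probe is guarded so it won't crash when torch isn't installed yet:

```python
import sys
import importlib.util

def check_environment(min_version=(3, 8)):
    """Report whether the interpreter (and optionally the GPU) is ready."""
    version_ok = sys.version_info >= min_version
    status = "OK" if version_ok else "please upgrade"
    print(f"Python {sys.version_info.major}.{sys.version_info.minor}: {status}")

    # Probe for torch without importing it blindly
    if importlib.util.find_spec("torch") is None:
        print("torch not installed yet - that's fine, we'll install it next")
    else:
        import torch
        print("GPU ready!" if torch.cuda.is_available() else "CPU mode - still works!")
    return version_ok

check_environment()
```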
```bash
# Step 1: Create a clean environment (recommended but optional)
python -m venv langtrain-env
source langtrain-env/bin/activate  # Windows: langtrain-env\Scripts\activate

# Step 2: Install LangTrain
pip install langtrain-ai

# Step 3: Verify it worked
python -c "import langtrain; print('✅ LangTrain installed!')"

# That's it! You're ready to train your first model.
```
```python
from langtrain import LoRATrainer

# Step 1: Define your training data
# This is what you want your AI to learn
training_data = [
    {"user": "Hello!", "assistant": "Hi there! How can I help you today?"},
    {"user": "What can you do?", "assistant": "I can answer questions, have conversations, and help with various tasks!"},
    {"user": "Thanks!", "assistant": "You're welcome! Feel free to ask anything else."}
]

# Step 2: Create the trainer
# This sets up everything for you automatically
trainer = LoRATrainer(
    model_name="microsoft/DialoGPT-medium",  # The base model to fine-tune
    output_dir="./my_first_chatbot",         # Where to save your model
)

# Step 3: Train!
trainer.train(training_data)

# Step 4: Test your model
response = trainer.chat("Hello!")
print(f"Your AI says: {response}")
```
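Malformed examples are the most common cause of confusing training errors, so it can help to sanity-check that every entry follows the `{"user": ..., "assistant": ...}` shape shown above before calling `train()`. The validator below is a plain-Python sketch, not part of LangTrain:

```python
def validate_training_data(examples):
    """Return a list of human-readable problems; an empty list means OK."""
    problems = []
    for i, ex in enumerate(examples):
        if not isinstance(ex, dict):
            problems.append(f"example {i}: expected a dict, got {type(ex).__name__}")
            continue
        for key in ("user", "assistant"):
            value = ex.get(key)
            if not isinstance(value, str) or not value.strip():
                problems.append(f"example {i}: missing or empty '{key}' field")
    return problems

good = [{"user": "Hello!", "assistant": "Hi there!"}]
bad = [{"user": "Hello!"}, "not a dict"]
print(validate_training_data(good))  # -> []
print(validate_training_data(bad))   # -> two problems reported
```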
Your trained model is saved in `./my_first_chatbot` - you can share this folder or deploy it.

```python
from langtrain import ChatModel

# Load your trained model
model = ChatModel.load("./my_first_chatbot")

# Have a conversation
print(model.chat("Hello!"))
print(model.chat("What can you do?"))
print(model.chat("Thanks for the help!"))

# Your AI will respond based on what you trained it on!
```
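The exact files inside the output folder depend on the library version, but you can inspect what was saved with nothing but the standard library. This `list_model_files` helper is a hypothetical convenience, not a LangTrain API:

```python
from pathlib import Path

def list_model_files(path="./my_first_chatbot"):
    """List the files saved in the model output folder, if it exists."""
    model_dir = Path(path)
    if not model_dir.exists():
        print("Train a model first - the folder is created during training")
        return []
    return sorted(str(p.relative_to(model_dir)) for p in model_dir.rglob("*"))

for name in list_model_files():
    print(name)
```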
```python
# Method 1: Load from a JSONL file
# Your file should look like:
# {"user": "Hello", "assistant": "Hi there!"}
# {"user": "How are you?", "assistant": "I'm doing great!"}

from langtrain import LoRATrainer

trainer = LoRATrainer(
    model_name="microsoft/DialoGPT-medium",
    output_dir="./custom_chatbot",
)

# Train from your file
trainer.train_from_file("my_conversations.jsonl")

# Method 2: Load from Hugging Face datasets
trainer.train_from_hub("your_username/your_dataset")
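If you're building the JSONL file yourself, remember that each line must be a standalone JSON object (that is all "JSONL" means). A quick stdlib-only sketch that writes the format and reads it back to verify:

```python
import json

conversations = [
    {"user": "Hello", "assistant": "Hi there!"},
    {"user": "How are you?", "assistant": "I'm doing great!"},
]

# Write one JSON object per line
with open("my_conversations.jsonl", "w", encoding="utf-8") as f:
    for example in conversations:
        f.write(json.dumps(example) + "\n")

# Read it back the same way: one json.loads per line
with open("my_conversations.jsonl", encoding="utf-8") as f:
    loaded = [json.loads(line) for line in f]

assert loaded == conversations
print(f"Wrote and verified {len(loaded)} examples")
```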
```python
# Ready for more? Try these next:

# 1. Train a larger, more capable model with QLoRA
from langtrain import QLoRATrainer
trainer = QLoRATrainer(
    model_name="meta-llama/Llama-3.1-8B",
    load_in_4bit=True,  # Uses only 6GB VRAM!
)

# 2. Use the cloud API for instant inference
import langtrain
client = langtrain.Client(api_key="your-key")
response = client.chat("Hello!")

# 3. Deploy your model as an API
from langtrain import deploy
deploy("./my_first_chatbot", port=8000)

# Visit http://localhost:8000 to use your model!
```