
Datasets

Learn how to load, prepare, and manage datasets in LangTrain for effective model training.

Dataset Management

LangTrain provides comprehensive tools for loading, preprocessing, and managing your training datasets. Support for multiple formats ensures you can work with your existing data seamlessly.

Supported Formats:

• CSV - Comma-separated values
• JSON/JSONL - JavaScript Object Notation, including line-delimited JSON Lines
• Parquet - Columnar storage format
• HuggingFace Datasets - Direct integration
• Custom formats - Via preprocessing pipelines (a preprocessor sketch follows the loading example below)
# Load dataset from various sources
from langtrain import Dataset

# From CSV
dataset = Dataset.from_csv('data.csv',
                           text_column='text',
                           label_column='label')

# From JSON Lines
dataset = Dataset.from_json('data.jsonl')

# From HuggingFace
dataset = Dataset.from_huggingface('imdb')

# Custom preprocessing (custom_preprocessor is defined below)
dataset = Dataset.from_custom(
    path='custom_data/',
    preprocessor=custom_preprocessor
)
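The from_custom call above passes a user-defined custom_preprocessor. Its exact contract isn't documented on this page; the following is a minimal sketch, assuming the preprocessor receives the data directory path and returns records as dicts with 'text' and 'label' keys (the 'body' and 'category' fields of the raw files are likewise illustrative):

# Hypothetical custom preprocessor; the signature and record format
# are assumptions, not LangTrain's documented contract.
import json
from pathlib import Path

def custom_preprocessor(path):
    records = []
    for file in sorted(Path(path).glob('*.json')):
        raw = json.loads(file.read_text())
        # Normalize whatever the raw files contain into text/label records
        records.append({'text': raw['body'].strip(),
                        'label': raw['category']})
    return records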

Data Preprocessing

Apply transformations, tokenization, and augmentation to optimize your data for training.
# Data preprocessing pipeline
dataset = dataset.preprocess([
    # Text cleaning
    dataset.clean_text(remove_urls=True, remove_special=True),

    # Tokenization
    dataset.tokenize(tokenizer='bert-base-uncased', max_length=512),

    # Data augmentation
    dataset.augment(techniques=['synonym_replacement', 'back_translation']),

    # Train/validation split
    dataset.split(train_size=0.8, stratify=True)
])

# Custom preprocessing function
def custom_preprocess(batch):
    batch['text'] = [text.lower().strip() for text in batch['text']]
    return batch

dataset = dataset.map(custom_preprocess, batched=True)
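The built-in clean_text step covers common cases; when you need cleaning logic it doesn't offer, the same map interface shown above accepts any plain Python function. A minimal sketch using only the standard library (the regex and the 'text' column name are illustrative):

# Illustrative custom cleaner built on the documented map() API
import re

URL_PATTERN = re.compile(r'https?://\S+')

def strip_urls(batch):
    # Batched map: batch maps column names to lists of values
    batch['text'] = [URL_PATTERN.sub('', text) for text in batch['text']]
    return batch

dataset = dataset.map(strip_urls, batched=True)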

Data Quality & Validation

Verify data quality with built-in validation checks and automatic cleaning.
# Data quality analysis
quality_report = dataset.analyze_quality()
print(quality_report.summary())

# Validation checks
dataset.validate([
    'check_missing_values',
    'check_label_distribution',
    'check_text_length',
    'check_duplicates'
])

# Automatic data cleaning
dataset = dataset.clean(
    remove_duplicates=True,
    handle_missing='drop',
    min_text_length=10,
    max_text_length=1000
)
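As a rough illustration of what a check like check_duplicates involves, here is a plain-Python equivalent over a list of raw strings; this is a sketch, not LangTrain's internal implementation:

# Plain-Python sketch of a duplicate check (illustrative only)
from collections import Counter

def duplicate_report(texts):
    # Normalize lightly so trivial variants count as duplicates
    counts = Counter(t.strip().lower() for t in texts)
    dupes = {t: n for t, n in counts.items() if n > 1}
    extra = sum(n - 1 for n in dupes.values())
    return {'duplicate_ratio': extra / max(len(texts), 1),
            'examples': list(dupes)[:5]}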

Dataset Versioning

Track dataset versions and maintain reproducible experiments.
# Version your datasets
dataset.save_version('v1.0', description='Initial dataset')

# Load a specific version
dataset = Dataset.load_version('my_dataset', version='v1.0')

# Compare versions
comparison = Dataset.compare_versions('my_dataset', 'v1.0', 'v1.1')
print(comparison.statistics())
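A common pattern is to pin the exact dataset version in each experiment's configuration so any run can be reproduced later. A small sketch using only the versioning calls shown above (the config dict and run name are illustrative):

# Record the dataset version alongside other run metadata (illustrative)
experiment = {
    'run_name': 'bert-base-lr3e5',
    'dataset': 'my_dataset',
    'dataset_version': 'v1.0',
}

# Re-running the experiment resolves exactly the same data
dataset = Dataset.load_version(experiment['dataset'],
                               version=experiment['dataset_version'])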
