This guide walks you through how to fine-tune Gemma on a custom text-to-SQL dataset using Hugging Face Transformers and TRL. You will learn:
- What is Quantized Low-Rank Adaptation (QLoRA)
- Setup development environment
- Create and prepare the fine-tuning dataset
- Fine-tune Gemma using TRL and the SFTTrainer
- Test Model Inference and generate SQL queries
What is Quantized Low-Rank Adaptation (QLoRA)
This guide demonstrates the use of Quantized Low-Rank Adaptation (QLoRA), which has emerged as a popular method to efficiently fine-tune LLMs because it reduces computational resource requirements while maintaining high performance. In QLoRA, the pretrained model is quantized to 4-bit and its weights are frozen. Then trainable adapter layers (LoRA) are attached and only the adapter layers are trained. Afterwards, the adapter weights can be merged with the base model or kept as a separate adapter.
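To make that pattern concrete, the following is a minimal sketch of the two steps using transformers and peft. It is illustrative only; the guide builds the full configuration step by step in the training section, and running it requires the Gemma license and Hugging Face login described below.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Step 1: load the pretrained model quantized to 4-bit; these weights stay frozen
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16)
base_model = AutoModelForCausalLM.from_pretrained("google/gemma-3-1b-pt", quantization_config=bnb_config, device_map="auto")

# Step 2: attach small trainable LoRA adapter layers on top of the frozen base
lora_config = LoraConfig(r=16, lora_alpha=16, target_modules="all-linear", task_type="CAUSAL_LM")
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the adapter parameters are trainable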
Setup development environment
The first step is to install the Hugging Face libraries, including TRL and datasets, to fine-tune open models, including support for different RLHF and alignment techniques.
# Install PyTorch & other libraries
%pip install "torch>=2.4.0" tensorboard
# Install Gemma release branch from Hugging Face
%pip install git+https://github.com/huggingface/transformers@v4.49.0-Gemma-3
# Install Hugging Face libraries
%pip install --upgrade \
"datasets==3.3.2" \
"accelerate==1.4.0" \
"evaluate==0.4.3" \
"bitsandbytes==0.45.3" \
"trl==0.15.2" \
"peft==0.14.0" \
protobuf \
sentencepiece
# COMMENT IN: if you are running on a GPU that supports the BF16 data type and Flash Attention, such as NVIDIA L4 or NVIDIA A100
# %pip install flash-attn
Note: If you are using a GPU with Ampere architecture (such as NVIDIA L4) or newer, you can use Flash Attention. Flash Attention is a method that significantly speeds up computation and reduces memory usage from quadratic to linear in sequence length, accelerating training by up to 3x. Learn more at FlashAttention.
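If your GPU qualifies, enabling Flash Attention is a one-line change when loading the model. The following is a minimal sketch; the model-loading code used later in this guide sets attn_implementation="eager" so it also runs on older GPUs, and you still need to accept the Gemma license and log in first, as described below.
import torch
from transformers import AutoModelForCausalLM

# Requires an Ampere-or-newer GPU and the flash-attn package from the cell above
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-3-1b-pt",
    attn_implementation="flash_attention_2",  # instead of "eager" used later in this guide
    torch_dtype=torch.bfloat16,               # Flash Attention requires fp16 or bf16 weights
    device_map="auto",
)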
Before you can start training, you have to make sure that you have accepted the terms of use for Gemma. You can accept the license on Hugging Face by clicking on the Agree and access repository button on the model page at: http://huggingface.co/google/gemma-3-1b-pt
After you have accepted the license, you need a valid Hugging Face Token to access the model. If you are running inside Google Colab, you can securely use your Hugging Face Token via Colab secrets; otherwise, you can pass the token directly to the login method. Make sure your token also has write access, as you push your model to the Hub during training.
from google.colab import userdata
from huggingface_hub import login
# Login into Hugging Face Hub
hf_token = userdata.get('HF_TOKEN') # If you are running inside a Google Colab
login(hf_token)
Create and prepare the fine-tuning dataset
When fine-tuning LLMs, it is important to know your use case and the task you want to solve. This helps you create a dataset to fine-tune your model. If you haven't defined your use case yet, you might want to go back to the drawing board.
As an example, this guide focuses on the following use case:
- Fine-tune a natural language to SQL model for seamless integration into a data analysis tool. The objective is to significantly reduce the time and expertise required for SQL query generation, enabling even non-technical users to extract meaningful insights from data.
Text-to-SQL can be a good use case for fine-tuning LLMs, as it is a complex task that requires a lot of (internal) knowledge about the data and the SQL language.
Once you have determined that fine-tuning is the right solution, you need a dataset to fine-tune. The dataset should be a diverse set of demonstrations of the task(s) you want to solve. There are several ways to create such a dataset, including:
- Using existing open-source datasets, such as Spider
- Using synthetic datasets created by LLMs, such as Alpaca
- Using datasets created by humans, such as Dolly
- Using a combination of the methods, such as Orca
Each of the methods has its own advantages and disadvantages and depends on the budget, time, and quality requirements. For example, using an existing dataset is the easiest but might not be tailored to your specific use case, while using domain experts might be the most accurate but can be time-consuming and expensive. It is also possible to combine several methods to create an instruction dataset, as shown in Orca: Progressive Learning from Complex Explanation Traces of GPT-4.
This guide uses an existing dataset (philschmid/gretel-synthetic-text-to-sql), a high-quality synthetic text-to-SQL dataset that includes natural language instructions, schema definitions, reasoning, and the corresponding SQL query.
Hugging Face TRL supports automatic templating of conversational dataset formats. This means you only need to convert your dataset into the right JSON objects, and trl takes care of templating and putting it into the right format.
{"messages": [{"role": "system", "content": "You are..."}, {"role": "user", "content": "..."}, {"role": "assistant", "content": "..."}]}
{"messages": [{"role": "system", "content": "You are..."}, {"role": "user", "content": "..."}, {"role": "assistant", "content": "..."}]}
{"messages": [{"role": "system", "content": "You are..."}, {"role": "user", "content": "..."}, {"role": "assistant", "content": "..."}]}
The philschmid/gretel-synthetic-text-to-sql dataset contains over 100,000 samples. To keep the guide small, it is downsampled to 12,500 samples, which are then split into 10,000 training and 2,500 test samples.
You can now use the Hugging Face Datasets library to load the dataset and create a prompt template that combines the natural language instruction with the schema definition, plus a system message for your assistant.
from datasets import load_dataset
# System message for the assistant
system_message = """You are a text to SQL query translator. Users will ask you questions in English and you will generate a SQL query based on the provided SCHEMA."""
# User prompt that combines the user query and the schema
user_prompt = """Given the <USER_QUERY> and the <SCHEMA>, generate the corresponding SQL command to retrieve the desired data, considering the query's syntax, semantics, and schema constraints.
<SCHEMA>
{context}
</SCHEMA>
<USER_QUERY>
{question}
</USER_QUERY>
"""
def create_conversation(sample):
    return {
        "messages": [
            # {"role": "system", "content": system_message},
            {"role": "user", "content": user_prompt.format(question=sample["sql_prompt"], context=sample["sql_context"])},
            {"role": "assistant", "content": sample["sql"]}
        ]
    }
# Load dataset from the hub
dataset = load_dataset("philschmid/gretel-synthetic-text-to-sql", split="train")
dataset = dataset.shuffle().select(range(12500))
# Convert dataset to OAI messages
dataset = dataset.map(create_conversation, remove_columns=dataset.features, batched=False)
# split dataset into 10,000 training samples and 2,500 test samples
dataset = dataset.train_test_split(test_size=2500/12500)
# Print formatted user prompt
print(dataset["train"][345]["messages"][1]["content"])
Fine-tune Gemma using TRL and the SFTTrainer
You are now ready to fine-tune your model. The Hugging Face TRL SFTTrainer makes it straightforward to run supervised fine-tuning of open LLMs. The SFTTrainer is a subclass of the Trainer from the transformers library and supports all of the same features, including logging, evaluation, and checkpointing, but adds additional quality-of-life features, including:
- Dataset formatting, including conversational and instruction formats
- Training on completions only, ignoring prompts (see the sketch after this list)
- Packing datasets for more efficient training
- Parameter-efficient fine-tuning (PEFT) support including QLoRA
- Preparing the model and tokenizer for conversational fine-tuning (such as adding special tokens)
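As a quick illustration of the completion-only option, TRL provides a DataCollatorForCompletionOnlyLM that masks the prompt tokens out of the loss. The sketch below is optional and not used in this guide, since the configuration later uses packing, which is not compatible with this collator; the "<start_of_turn>model" marker is an assumption based on Gemma's chat format.
from transformers import AutoTokenizer
from trl import DataCollatorForCompletionOnlyLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-1b-it")

# Compute the loss only on the assistant's answer: every token before the
# "<start_of_turn>model" marker (the prompt) is ignored during training
collator = DataCollatorForCompletionOnlyLM("<start_of_turn>model", tokenizer=tokenizer)

# You would then pass data_collator=collator to the SFTTrainer and set packing=False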
The following code loads the Gemma model and tokenizer from Hugging Face and initializes the quantization configuration.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, AutoModelForImageTextToText, BitsAndBytesConfig
# Hugging Face model id
model_id = "google/gemma-3-1b-pt" # or `google/gemma-3-4b-pt`, `google/gemma-3-12b-pt`, `google/gemma-3-27b-pt`
# Select model class based on id
if model_id == "google/gemma-3-1b-pt":
    model_class = AutoModelForCausalLM
else:
    model_class = AutoModelForImageTextToText
# Check if GPU benefits from bfloat16
if torch.cuda.get_device_capability()[0] >= 8:
    torch_dtype = torch.bfloat16
else:
    torch_dtype = torch.float16
# Define model init arguments
model_kwargs = dict(
    attn_implementation="eager", # Use "flash_attention_2" when running on Ampere or newer GPU
    torch_dtype=torch_dtype, # What torch dtype to use, defaults to auto
    device_map="auto", # Let torch decide how to load the model
)
# BitsAndBytesConfig: Enables 4-bit quantization to reduce model size/memory usage
model_kwargs["quantization_config"] = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type='nf4',
    bnb_4bit_compute_dtype=model_kwargs['torch_dtype'],
    bnb_4bit_quant_storage=model_kwargs['torch_dtype'],
)
# Load model and tokenizer
model = model_class.from_pretrained(model_id, **model_kwargs)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-1b-it") # Load the Instruction Tokenizer to use the official Gemma template
The SFTTrainer supports a native integration with peft, which makes it straightforward to efficiently tune LLMs using QLoRA. You only need to create a LoraConfig and provide it to the trainer.
from peft import LoraConfig
peft_config = LoraConfig(
    lora_alpha=16,
    lora_dropout=0.05,
    r=16,
    bias="none",
    target_modules="all-linear",
    task_type="CAUSAL_LM",
    modules_to_save=["lm_head", "embed_tokens"] # make sure to save the lm_head and embed_tokens as you train the special tokens
)
Before you can start your training, you need to define the hyperparameters you want to use in an SFTConfig instance.
from trl import SFTConfig
args = SFTConfig(
    output_dir="gemma-text-to-sql",         # directory to save and repository id
    max_seq_length=512,                     # max sequence length for model and packing of the dataset
    packing=True,                           # Groups multiple samples in the dataset into a single sequence
    num_train_epochs=3,                     # number of training epochs
    per_device_train_batch_size=1,          # batch size per device during training
    gradient_accumulation_steps=4,          # number of steps before performing a backward/update pass
    gradient_checkpointing=True,            # use gradient checkpointing to save memory
    optim="adamw_torch_fused",              # use fused adamw optimizer
    logging_steps=10,                       # log every 10 steps
    save_strategy="epoch",                  # save checkpoint every epoch
    learning_rate=2e-4,                     # learning rate, based on QLoRA paper
    fp16=True if torch_dtype == torch.float16 else False,   # use float16 precision
    bf16=True if torch_dtype == torch.bfloat16 else False,  # use bfloat16 precision
    max_grad_norm=0.3,                      # max gradient norm based on QLoRA paper
    warmup_ratio=0.03,                      # warmup ratio based on QLoRA paper
    lr_scheduler_type="constant",           # use constant learning rate scheduler
    push_to_hub=True,                       # push model to hub
    report_to="tensorboard",                # report metrics to tensorboard
    dataset_kwargs={
        "add_special_tokens": False,        # We template with special tokens
        "append_concat_token": True,        # Add EOS token as separator token between examples
    }
)
You now have every building block you need to create your SFTTrainer and start the training of your model.
from trl import SFTTrainer
# Create Trainer object
trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    peft_config=peft_config,
    processing_class=tokenizer
)
Start training by calling the train() method.
# Start training, the model will be automatically saved to the Hub and the output directory
trainer.train()
# Save the final model again to the Hugging Face Hub
trainer.save_model()
Before you can test your model, make sure to free the memory.
# free the memory again
del model
del trainer
torch.cuda.empty_cache()
When using QLoRA, you only train adapters and not the full model. This means that when saving the model during training, you only save the adapter weights and not the full model. If you want to save the full model, which makes it easier to use with serving stacks like vLLM or TGI, you can merge the adapter weights into the model weights using the merge_and_unload method and then save the model with the save_pretrained method. This saves a standard model that can be used for inference.
from peft import PeftModel
# Load the base model
model = model_class.from_pretrained(model_id, low_cpu_mem_usage=True)
# Merge LoRA and base model and save
peft_model = PeftModel.from_pretrained(model, args.output_dir)
merged_model = peft_model.merge_and_unload()
merged_model.save_pretrained("merged_model", safe_serialization=True, max_shard_size="2GB")
processor = AutoTokenizer.from_pretrained(args.output_dir)
processor.save_pretrained("merged_model")
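The merged folder now behaves like a regular Transformers checkpoint, so it can be loaded without peft. Here is a minimal sketch to verify the merge worked; the next section instead loads the adapter checkpoint directly.
from transformers import AutoTokenizer

# Load the merged weights like any standard checkpoint; no adapter files are needed
reloaded_model = model_class.from_pretrained("merged_model", device_map="auto", torch_dtype=torch_dtype)
reloaded_tokenizer = AutoTokenizer.from_pretrained("merged_model")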
Test Model Inference and generate SQL queries
After the training is done, you'll want to evaluate and test your model. You can load different samples from the test dataset and evaluate the model on those samples.
import torch
from transformers import pipeline
model_id = "gemma-text-to-sql"
# Load Model with PEFT adapter
model = model_class.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch_dtype,
    attn_implementation="eager",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
Let's load a random sample from the test dataset and generate a SQL command.
from random import randint
import re
# Load the model and tokenizer into the pipeline
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
# Load a random sample from the test dataset
rand_idx = randint(0, len(dataset["test"]) - 1)
test_sample = dataset["test"][rand_idx]
# Convert the test example into a prompt with the Gemma template
stop_token_ids = [tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<end_of_turn>")]
prompt = pipe.tokenizer.apply_chat_template(test_sample["messages"][:1], tokenize=False, add_generation_prompt=True)
# Generate our SQL query.
outputs = pipe(prompt, max_new_tokens=256, do_sample=False, temperature=0.1, top_k=50, top_p=0.1, eos_token_id=stop_token_ids, disable_compile=True)
# Extract the user query and original answer
print(f"Context:\n", re.search(r'<SCHEMA>\n(.*?)\n</SCHEMA>', test_sample['messages'][0]['content'], re.DOTALL).group(1).strip())
print(f"Query:\n", re.search(r'<USER_QUERY>\n(.*?)\n</USER_QUERY>', test_sample['messages'][0]['content'], re.DOTALL).group(1).strip())
print(f"Original Answer:\n{test_sample['messages'][1]['content']}")
print(f"Generated Answer:\n{outputs[0]['generated_text'][len(prompt):].strip()}")
Summary and next steps
This tutorial covered how to fine-tune a Gemma model using TRL and QLoRA. Check out the following docs next:
- Learn how to generate text with a Gemma model.
- Learn how to fine-tune Gemma for vision tasks using Hugging Face Transformers.
- Learn how to perform distributed fine-tuning and inference on a Gemma model.
- Learn how to use Gemma open models with Vertex AI.
- Learn how to fine-tune Gemma using KerasNLP and deploy to Vertex AI.