
Prefill-Based Jailbreak: a Novel Approach of Bypassing LLM Safety Boundary

EasyChair Preprint 15933

13 pages · Date: March 24, 2025

Abstract

Large Language Models (LLMs) are designed to generate helpful and safe content. However, adversarial attacks, commonly referred to as jailbreaks, can bypass their safety protocols and prompt LLMs to generate harmful content or reveal sensitive data. Investigating jailbreak methodologies is therefore crucial for exposing systemic vulnerabilities in LLMs and for guiding developers' ongoing security enhancements. In this paper, we introduce a novel jailbreak attack that leverages the prefilling feature of LLMs, a feature designed to give users tighter control over model output. Unlike traditional jailbreak methods, the proposed attack circumvents LLMs' safety mechanisms by directly manipulating the probability distribution of subsequent tokens, thereby steering the model's output. We propose two attack variants: Static Prefilling (SP), which employs a universal prefill text, and Optimized Prefilling (OP), which iteratively optimizes the prefill text to maximize the attack success rate. Experiments on six state-of-the-art LLMs using the AdvBench benchmark validate the effectiveness of our method and demonstrate that it can substantially increase attack success rates when combined with existing jailbreak approaches. The OP method achieves attack success rates of up to 99.82% on certain models, significantly outperforming baseline methods. This work introduces a new class of jailbreak attack against LLMs and underscores the need for robust content validation mechanisms to mitigate adversarial exploitation of prefilling features. All code and data used in this paper are publicly available.
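To make the prefilling mechanism concrete, below is a minimal sketch of how a prefill is supplied through a provider API that accepts a partial assistant message and continues generation from it (Anthropic's Messages API supports this when the final message has the assistant role). The model identifier, prompt, prefill string, and helper function are illustrative assumptions for a benign use of the feature; they are not the paper's SP/OP attack implementation, which chooses the prefill text adversarially to shift the distribution of the tokens that follow.

```python
# Minimal sketch of assistant-message prefilling (benign example).
# Assumes an API that continues from a trailing assistant message,
# e.g. Anthropic's Messages API; model name and texts are illustrative.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def complete_with_prefill(user_prompt: str, prefill: str) -> str:
    """Ask the model to continue from `prefill` instead of starting fresh."""
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # assumed model identifier
        max_tokens=256,
        messages=[
            {"role": "user", "content": user_prompt},
            # The trailing assistant message is the prefill: generation
            # resumes after this text, so it constrains (and can steer)
            # the probability distribution of the next tokens.
            {"role": "assistant", "content": prefill},
        ],
    )
    # The API returns only the continuation; the prefill is not echoed back.
    return prefill + response.content[0].text


if __name__ == "__main__":
    # Benign use of the feature: force a JSON-formatted answer by
    # prefilling the opening of the expected structure.
    print(complete_with_prefill("Name three prime numbers.", '{"primes": ['))
```

The same mechanism that lets a prefill pin the output to a desired format is what the paper exploits: because the model treats the prefill as text it has already produced, the prefill directly conditions the next-token distribution rather than being filtered as user input.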

Keyphrases: jailbreak attack, prefill-based attack, black-box attack, large language models

BibTeX entry
BibTeX does not have a suitable entry type for preprints; the following is a workaround that produces a correct reference:
@booklet{EasyChair:15933,
  author    = {Yakai Li and Jiekang Hu and Weiduan Sang and Luping Ma and Jing Xie and Weijuan Zhang and Aimin Yu and Shijie Zhao and Qingjia Huang and Qihang Zhou},
  title     = {Prefill-Based Jailbreak: a Novel Approach of Bypassing LLM Safety Boundary},
  howpublished = {EasyChair Preprint 15933},
  year      = {EasyChair, 2025}}