diff --git "a/LUHmWDydue/LUHmWDydue.md" "b/LUHmWDydue/LUHmWDydue.md" new file mode 100644--- /dev/null +++ "b/LUHmWDydue/LUHmWDydue.md" @@ -0,0 +1,1129 @@ +# Generative Models Are Self-Watermarked: Declaring Model Authentication Through Re-Generation + +Anonymous authors Paper under double-blind review + +## Abstract + +As machine- and AI-generated content proliferates, protecting the intellectual property of generative models has become imperative, yet verifying data ownership poses formidable challenges, particularly in cases of unauthorized reuse of generated data. Confirming the ownership of the data is challenging, as the data generation process is opaque to those verifying the authenticity. Our work is dedicated to detecting data reuse from a single sample. + +While watermarking has been the traditional method to detect AI-generated content by embedding specific information within models or their outputs, which could compromise the quality of outputs, our approach instead identifies inherent fingerprints in the outputs without altering models. The verification is achieved by requiring the (authentic) models to re-generate the data. Furthermore, we propose a method that iteratively re-generates the data to enhance these fingerprints in the generation stage. The strategy is both theoretically sound and empirically proven effective with recent advanced text and image generative models. Our approach is significant because it avoids extra operations or measures, such as (1) modifying model parameters, (2) altering the generated outputs, or (3) employing additional classification models for verification. This enhancement broadens the applicability of authorship verification (1) to track the IP violation in generative models published without explicitly designed watermark mechanisms and (2) to produce outputs without compromising their quality. + +## 1 Introduction + +In recent years, the emergence of Artificial Intelligence Generated Content (AIGC), including tools like ChatGPT, Claude, DALL-E, Stable Diffusion, Copilot, has marked a significant advancement in the quality of machine-generated content. These generative models are increasingly being offered by companies as part of pay-as-you-use services on cloud platforms. While such development has undeniably accelerated the advancement and dissemination of AI technology, it has simultaneously raised substantial concerns regarding the misuse of these models. A key challenge lies in authenticating the author (or source) of the content generated by these models, which encompasses two primary aspects: (1) protecting the Intellectual Property (IP) of the authentic generator when content is misused, and (2) tracing the responsibility of the information source to ensure accountability. Traditionally, the primary approach for safeguarding IP of the contents generated by AI generator has involved embedding subtle but verifiable watermarks into their outputs, such as text (He et al., 2022a;b; Kirchenbauer et al., 2023a), images (Zear et al., 2018; Zhao et al., 2023b) and code (Lee et al., 2023). These watermarking techniques typically involve adding supplementary information to the deep learning model's parameters and architectures or direct post-processing alterations to the generated outputs. However, these alterations could potentially degrade the quality of the generated content. 
An alternative strategy has been the classification of data produced by a specific model to distinguish it from content generated by other models or humans (Solaiman et al., 2019b; Ippolito et al., 2020). Nonetheless, this often requires training additional classifiers to verify authorship, raising concerns about their ability to generalize and remain robust across evolving generative models, especially with few training samples.

![1_image_0.png](1_image_0.png)

Figure 1: The two-stage framework leveraging fingerprints in generative models. In the (I) *Generation Stage*, models generate output in the traditional way and optionally re-generate the output $k \in [1..K]$ times to enhance the fingerprints. In the (II) *Verification Stage*, the authentication of data ownership is established by assessing the distance between the suspected data (left) and its re-generated version (right). There is a distinguishable margin between the distances produced by the authentic generator ($\mathcal{G}_a$, at the bottom) and a contrasting benign generator ($\mathcal{G}_c$, at the top), exemplified by models from OpenAI and Stability AI, respectively.

In response to the challenges of authorship authentication and IP protection, our approach is strategically designed to exploit the inherent characteristics of generative models. Firstly, we recognize that generative models possess unique attributes, akin to model fingerprints, such as specific styles and embedded knowledge. In the *Verification Stage* of our framework, we utilize these implicit fingerprints by measuring the distance $\mathbb{D}$ between the genuine data samples and content re-generated by the authentic and contrasting models. Secondly, to enhance the distinctive nature of these fingerprints, our approach in the *Generation Stage* uses the original model to iteratively re-generate outputs from previous iterations. This process is grounded in fixed-point theory (Granas & Dugundji, 2003) and its practical applications. Through this iterative re-generation, the model's inherent fingerprints become more pronounced, enhancing the effectiveness of our verification process.

In Figure 1, we present a conceptual framework for authorship verification through re-generation.

Stage I: Generation. The authentic generator aims to produce outputs that carry stealthy but significant signatures distinguishable from those of other generative models or humans. We consider two distinct approaches: *(i) 'Traditional Generation'* produces the authentic output $\mathbf{x}_a$ from a given text input as a prompt, *i.e.,* $\mathbf{x}_a = \mathcal{G}_a(\mathbf{x}_p)$, where $\mathbf{x}_p$ = "Storm on Sea of Galilee"; and *(ii) 'Iterative Re-Generation'* enhances the model's unique signature by re-generating the data multiple times using a 're-painting' or 'paraphrasing' mode, *i.e.,* $\mathbf{x}_a^{\langle k+1 \rangle} = \mathcal{G}_a(\mathbf{x}_a^{\langle k \rangle})$. Here $\mathcal{G}_a$ is the authentic generative model, which is DALL·E (Ramesh et al., 2021) in this example.

Stage II: Verification. In this stage, the authentic model $\mathcal{G}_a$ verifies the origin of its artefact $\mathbf{x}_a$ by comparing the 'distance' $\mathbb{D}$ between $\mathbf{x}_a$ and its re-generation by the authentic model, $\mathcal{G}_a(\mathbf{x}_a)$, or by other contrasting models, $\mathcal{G}_c(\mathbf{x}_a)$. Intuitively, the one-step re-generation distance of an image originally generated by the authentic model, such as DALL·E by OpenAI, is expected to be smaller under that model than under a contrasting model not involved in its initial generation, *i.e.,* $\mathbb{D}(\mathbf{x}_a, \mathcal{G}_a(\mathbf{x}_a)) < \mathbb{D}(\mathbf{x}_a, \mathcal{G}_c(\mathbf{x}_a))$. Furthermore, the more re-generations an image undergoes during the *Generation Stage*, the lower its one-step re-generation distance under the authentic model becomes at the *Verification Stage*, *i.e.,* $\mathbb{D}(\mathbf{x}_a^{\langle i \rangle}, \mathcal{G}_a(\mathbf{x}_a^{\langle i \rangle})) < \mathbb{D}(\mathbf{x}_a^{\langle j \rangle}, \mathcal{G}_a(\mathbf{x}_a^{\langle j \rangle}))$ when $i > j$.
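As a toy illustration of the *Verification Stage* in the text domain, the sketch below compares one-step re-generation distances under two black-box paraphrasers. The `paraphrase_a` and `paraphrase_c` callables are hypothetical wrappers around API calls to $\mathcal{G}_a$ and $\mathcal{G}_c$, and a normalized character-level distance from Python's standard library stands in for $\mathbb{D}$; the actual metrics used in our experiments may differ.

```python
import difflib

def regen_distance(x: str, paraphrase) -> float:
    """One-step re-generation distance d(x, G) = D(G(x), x).

    `paraphrase` is a black-box call to a generator's paraphrasing mode;
    a normalized character-level distance stands in for the metric D.
    """
    y = paraphrase(x)
    return 1.0 - difflib.SequenceMatcher(None, x, y).ratio()

def is_likely_authentic(x_a: str, paraphrase_a, paraphrase_c) -> bool:
    """Test the intuition D(x_a, G_a(x_a)) < D(x_a, G_c(x_a))."""
    return regen_distance(x_a, paraphrase_a) < regen_distance(x_a, paraphrase_c)
```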
We summarize the key advantages and contributions of our work as follows:

- We validate the effectiveness of using re-generated data as a key indicator for authorship verification. This approach is designed to function in black-box settings and applies across various generative applications, such as Natural Language Generation (NLG) and Image Generation (IG).

- We introduce an iterative re-generation technique to enhance the inherent fingerprints of generative models. We use fixed-point theory to demonstrate that modifications achieved through re-generation converge to minimal edit distances. This ensures a distinct separation between the outputs of authentic models and those generated by other models.

- We develop a practical verification protocol that streamlines the process of data ownership validation in generative models. This protocol is especially useful in legal settings, as it eliminates the need for generators to disclose their model parameters or watermarking strategies, thereby preserving confidentiality and proprietary integrity.

- A notable advantage of our approach is its reliance solely on standard generative models, without resorting to additional interventions, including (1) manipulating or fine-tuning generative model parameters, (2) post-editing the outputs, or (3) employing additional independent classification models for verification. This simplicity in design not only preserves the original quality of the generated content but also enhances the feasibility and accessibility of our verification method.

## 2 Related Work

Recent years have seen remarkable advancements in generative modeling, exemplified by innovations such as DALL·E (Ramesh et al., 2021), Stable Diffusion (Rombach et al., 2022), ChatGPT, Claude, and Gemini. However, this proliferation of synthetic media has simultaneously raised ethical concerns. These concerns include the potential for misuse in impersonation (Hernandez, 2023; Verma, 2023), dissemination of misinformation (Pantserev, 2020; Hazell, 2023; Mozes et al., 2023; Sjouwerman, 2023), academic dishonesty (Lund et al., 2023), and copyright infringement (Brundage et al., 2018; Rostamzadeh et al., 2021; He et al., 2022a; Xu & He, 2023).

In response, there is an increasing focus on the need to trace and authenticate the origins of such content to prevent the illegitimate use of Artificial Intelligence-Generated Content (AIGC). Considering the distinct nature of images and text, we review authorship identification for image and text generation models separately.

## Authorship Identification For Image Generation Models

Image watermarking, recognized as a standard approach to verify ownership and safeguard the copyright of a model, involves imprinting a unique watermark onto generated images. Conventional methods encompass direct alterations to pixel values, for instance in the spatial domain, or the incorporation of watermarks into transformed representations of the image, such as in the frequency domain (Cox et al., 2008).
With the advancement of deep learning techniques, multiple works have suggested leveraging neural networks to seamlessly encode concealed information within images in a fully trainable manner (Zhu et al., 2018; Yang et al., 2019; Ahmadi et al., 2020; You et al., 2020). Inspired by this idea, Fernandez et al. (2022) incorporate watermarks into the latent space formulated by a self-supervised network such as DINO (Caron et al., 2021). This approach modulates the features of the image within a specific region of the latent space, ensuring that subsequent transformations applied to watermarked images preserve the integrity of the embedded information. Subsequently, watermark detection can be conducted in this same latent space.

Similarly, Fernandez et al. (2023) introduce a binary signature directly into the decoder of a diffusion model, resulting in images that contain an imperceptibly embedded binary signature. This signature can be accurately extracted using a pre-trained watermark extractor during verification.

Given the escalating concerns regarding the misuse of deep fakes, as highlighted in the literature (Brundage et al., 2018; Harris, 2019), several studies have proposed methodologies for attributing the origin of an image, specifically discerning between machine-generated and authentic images. This task is rendered feasible through the identification of subtle, yet machine-detectable, patterns unique to images generated by Generative Adversarial Networks (GANs), as evidenced in recent research (Marra et al., 2019; Afchar et al., 2018; Güera & Delp, 2018; Yu et al., 2019). Furthermore, the detection of deep fakes is enhanced by analyzing inconsistencies in the frequency domain or texture representation between authentic and fabricated images (Zhang et al., 2019; Durall et al., 2020; Liu et al., 2020).

## Authorship Identification For Natural Language Generation Models

Likewise, content generated by text generation models is increasingly vulnerable to various forms of misuse, including the spread of misinformation and the training of surrogate models (Wallace et al., 2020; Xu et al., 2022). Consequently, there has been growing interest in protecting the authorship (or IP) of text generation models and in detecting machine-generated text.

A straightforward solution is to incorporate watermarks into the generated text. However, unlike images, textual information is composed of discrete tokens, making watermarking text a difficult endeavor due to the potential for inadvertent alterations that change its semantic meaning (Katzenbeisser & Petitcolas, 2000). One solution to preserve semantic integrity during watermarking involves synonym substitution (Topkara et al., 2006; Chang & Clark, 2014; He et al., 2022a). Nevertheless, simplistic approaches to synonym substitution are vulnerable to detection through statistical analyses. In response, He et al. (2022b) propose a conditional synonym substitution method to enhance both the stealthiness and robustness of substitution-based watermarks. Moreover, Venugopal et al. (2011) adopted bit representations to encode semantically similar sentences, enabling the selection of watermarked sentences through bit manipulation.

The previously discussed methods center on applying watermarks through post-editing.
However, with the emergence of LLMs, there has been a significant shift towards developing watermarks tailored for LLMs to identify machine-generated text. A notable approach in this area employs biased sampling strategies that alter the token distribution at each generation step, favoring tokens from specific pre-defined categories (Kirchenbauer et al., 2023a;b; Zhao et al., 2023a). Despite their innovation, these methods are vulnerable to "rewriting" attacks, in which watermarked sentences are paraphrased either automatically or manually, thus challenging the identification of original authorship (Christ et al., 2023). To address this issue, Kuditipudi et al. (2023) propose a robust approach that maps a sequence of random numbers generated from a randomized watermark key to the outputs of a language model. This technique maintains the watermark's detectability despite text alterations such as substitutions, insertions, or deletions.

Interest in post-hoc detection has surged as a complementary measure to watermarking. This trend is driven by the fact that developers often keep the details of their watermarking algorithms confidential to prevent them from being compromised if leaked. Depending on whether machine-generated or human-authored text samples are available, one can utilize either zero-shot or training-based detection methods. Zero-shot detection relies on the premise that texts generated by language models inherently contain detectable markers, identifiable through analysis techniques such as perplexity (Idnay et al., 2022; Tian, 2023) or average per-token log probability (Solaiman et al., 2019a; Mitchell et al., 2023). Training-based approaches leverage features from pre-trained language models to distinguish between machine-generated and human-authored texts (Solaiman et al., 2019a; Bakhtin et al., 2019; Jawahar et al., 2020; Chen et al., 2023). However, these approaches yield a binary outcome, classifying texts as machine- or human-generated. In contrast, our approach can determine the origin of text generated by any LLM, moving beyond binary outcomes.

## 3 Background

The most recent advanced large generative models usually support two modes of generating outputs:

**Prompt-based Generation:** The authentic generator $\mathcal{G}$ produces outputs $\mathbf{x}_a$ conditioned on the given prompts $\mathbf{x}_p$, expressed as

$$\mathbf{x}_a = \mathcal{G}(\mathbf{x}_p). \tag{1}$$

The selection of prompt inputs $\mathbf{x}_p$ varies widely, ranging from textual descriptions to images.

**Paraphrasing Content:** The generators can also "paraphrase" the intended outputs, including texts or images, by reconstructing the content:

$$\mathbf{x}_a^{\langle\mathrm{new}\rangle} = \mathcal{G}(\mathbf{x}_a^{\langle\mathrm{old}\rangle}). \tag{2}$$

As a Natural Language Generation (NLG) example, we can use round-trip translation (Gaspari, 2006) or prompt the Large Language Model (LLM) to "paraphrase" a sentence. As an Image Generation (IG) example, given a generated image $\mathbf{x}_a$, the "paraphrasing" process (1) rebuilds partial images of the randomly masked regions $M[t]$ in the original image $\mathbf{x}_a^{\langle\mathrm{old}\rangle}$, *i.e.,* $\mathbf{x}_a^{\langle\mathrm{new}\rangle}[t] = \mathcal{G}(\mathbf{x}_a^{\langle\mathrm{old}\rangle}, M[t])$ (von Platen et al., 2022), and (2) merges these rebuilt portions into a whole image as the "paraphrased" output,

$$\mathbf{x}_a^{\langle\mathrm{new}\rangle} = \mathrm{Merge}(\{\mathbf{x}_a^{\langle\mathrm{new}\rangle}[t]\}_{t=1}^{T}). \tag{3}$$

This iterative re-generation uses prior outputs as inputs, where $\mathbf{x}_a^{\langle 0 \rangle}$ is the initial output from the prompt-based conditional generation. Our method uses the *Prompt-based Generation* mode for initial output generation and the *Paraphrasing Content* mode for both (1) iteratively polishing the generated content and (2) authorship verification.
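A minimal sketch of the image "paraphrasing" step in Eqs. (2)-(3) is given below. It assumes a black-box `repaint(image, mask)` callable, e.g., a diffusion in-painting pipeline, that returns an image in which only the masked pixels have been re-generated; the random partition into $T$ masks and the pixel-wise merge are illustrative choices rather than the exact masking scheme used in our experiments.

```python
import numpy as np

def paraphrase_image(image: np.ndarray, repaint, num_masks: int = 4, rng=None) -> np.ndarray:
    """'Paraphrase' an image: re-paint randomly masked regions and merge them.

    `repaint(image, mask)` stands in for the generator's in-painting mode
    G(x_old, M[t]); it must return an array of the same shape as `image`.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    # Randomly partition the pixels into T disjoint masks M[1..T].
    assignment = rng.integers(0, num_masks, size=(h, w))
    merged = image.copy()
    for t in range(num_masks):
        mask = assignment == t                 # M[t]
        repainted = repaint(image, mask)       # x_new[t] = G(x_old, M[t])
        merged[mask] = repainted[mask]         # Merge({x_new[t]})
    return merged                              # x_new, Eq. (3)
```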
## 4 Methodology

Our research primarily focuses on the threat posed by malicious users who misuse generated content without authorization. Specifically, we consider cases where the owner of an authentic generative model, referred to as $\mathcal{G}_a$, provides access to the model in a black-box fashion, permitting external queries to its API for content generation. This black-box setting is consistent with the current application practice of Large Foundation Models. Unscrupulous users could take advantage of the generated content $\mathbf{x}_a$ for their own profit while disregarding the legitimate creator's license. To address this issue, API providers can actively monitor the characteristics of publicly available data to identify potential cases of plagiarism or misuse. This can be accomplished by applying re-generation and measuring the corresponding edit distance in the verification stage, as described in Section 4.2. To further improve verification accuracy, the authentic model $\mathcal{G}_a$ can employ an iterative re-generation approach to bolster the fingerprinting signal, as introduced and proved in Section 4.3.

If there are suspicions of plagiarism, the company can initiate legal proceedings against the alleged plagiarist through third-party arbitration, following the verification protocol (see Algorithm 2) on outputs produced with iterative re-generation (see Algorithm 1), as demonstrated in Section 4.1. The motivation and intuition behind the approach proposed in Section 4.1 are explained in Sections 4.2 and 4.3.

## 4.1 Data Generation And Verification Protocol

The defense framework comprises two key components, corresponding to the *Generation* and *Verification* Stages in Figure 1: (1) Iterative Generation (Algorithm 1), which progressively enhances the fingerprint signal in the generated outputs; and (2) Verification (Algorithm 2), which confirms the authorship of a data sample through a one-step re-generation process using both the authentic model $\mathcal{G}_a$ and the suspected contrasting model $\mathcal{G}_c$, with a confidence margin $\delta > 0$.

Our verification protocol takes inspiration from human artists' ability to reproduce their work in a form that closely resembles the original. Similarly, our process of iterative re-generation mirrors the way artists refine their unique writing or painting styles. Moreover, the distinctive 'style' of the generative model becomes increasingly pronounced with each re-generation cycle, as it converges to a more 'stable' output. This process is formulated as an iterated function, and its tendency to converge to a fixed point is proved in Theorem 2.

Algorithm 1: *Generation* algorithm for *Stage I*.

Input: Prompt input $\mathbf{x}_p$ for generation and number of iterations $K$.
Output: Image, text, etc. content with implicit fingerprints.
1: $\mathbf{x}_a^{\langle 0 \rangle} \leftarrow \mathcal{G}(\mathbf{x}_p)$ ▷ Initial generation from the prompt.
2: for $k \leftarrow 1$ to $K$ do ▷ Iterate $K$ steps.
3:  $\mathbf{x}_a^{\langle k \rangle} \leftarrow \mathcal{G}(\mathbf{x}_a^{\langle k-1 \rangle})$ ▷ Re-generation using "paraphrasing".
4: end for
5: return $\mathbf{x}_a^{\langle K \rangle}$
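A minimal Python sketch of Algorithm 1 follows. The `generate` and `paraphrase` callables are hypothetical wrappers around the authentic generator's prompt-based and paraphrasing modes (Eqs. 1 and 2); no particular API is assumed.

```python
def generate_with_fingerprint(generate, paraphrase, x_p, K: int):
    """Algorithm 1 (Stage I): initial generation followed by K re-generations.

    generate(prompt)  -- prompt-based generation, Eq. (1)
    paraphrase(x)     -- content "paraphrasing", Eq. (2)
    """
    x = generate(x_p)        # x^<0>: initial output from the prompt
    for _ in range(K):       # iterate K steps
        x = paraphrase(x)    # x^<k> = G(x^<k-1>) strengthens the fingerprint
    return x                 # x^<K>
```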
Algorithm 2: *Verification* algorithm for *Stage II*.

Input: Data sample $\mathbf{x}_a^{\langle K \rangle}$ generated by $\mathcal{G}_a$ and misused by $\mathcal{G}_c$; the threshold $\delta$ for confident authentication.
Output: Whether the usage is unauthorized, judged against a contrasting generator.
1: $\mathbf{y}_a \leftarrow \mathcal{G}_a(\mathbf{x}_a^{\langle K \rangle})$ ▷ Re-generate the data with model $\mathcal{G}_a$.
2: $\mathbf{y}_c \leftarrow \mathcal{G}_c(\mathbf{x}_a^{\langle K \rangle})$ ▷ Re-generate the data with model $\mathcal{G}_c$.
3: $r \leftarrow \mathbb{D}(\mathbf{y}_c, \mathbf{x}_a^{\langle K \rangle}) / \mathbb{D}(\mathbf{y}_a, \mathbf{x}_a^{\langle K \rangle})$ ▷ Calculate the exceeding distance ratio.
4: return $r > 1 + \delta$

## 4.2 Authorship Verification Through Re-Generation Distance

Consider an authentic generative model $\mathcal{G}_a$ that aims to distinguish the data samples it generated, denoted $\mathbf{x}_a$, from benign samples $\mathbf{x}_c$ generated by other, contrasting models $\mathcal{G}_c$. To verify the data, the authentic model (1) re-generates the data, $\mathcal{G}_a(\mathbf{x})$, and (2) evaluates the distance between the original sample and the re-generated sample, defined as $d(\mathbf{x}, \mathcal{G}) \triangleq \mathbb{D}(\mathcal{G}(\mathbf{x}), \mathbf{x})$. In essence, samples produced by the authentic model are expected to exhibit a lower 'self-edit' distance, as they share the same generative model $\mathcal{G}_a$, which uses identical internal knowledge, such as writing or painting styles, effectively serving as a model fingerprint. In mathematical terms, we have

$$\mathbb{D}(\mathcal{G}_a(\mathbf{x}_a), \mathbf{x}_a) < \mathbb{D}(\mathcal{G}_a(\mathbf{x}_c), \mathbf{x}_c), \quad \textit{i.e.,}\; d(\mathbf{x}_a, \mathcal{G}_a) < d(\mathbf{x}_c, \mathcal{G}_a).$$
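The verification logic of Algorithm 2, combined with the re-generation distance $d(\mathbf{x}, \mathcal{G})$ defined above, can be sketched as follows. The `regen_a` and `regen_c` callables and the `distance` function are placeholders for the one-step re-generation calls of $\mathcal{G}_a$ and $\mathcal{G}_c$ and for the metric $\mathbb{D}$ (e.g., an edit or perceptual distance); the margin $\delta$ is chosen by the verifier.

```python
def verify_authorship(x_suspect, regen_a, regen_c, distance, delta: float) -> bool:
    """Algorithm 2 (Stage II): True if the sample is attributed to G_a.

    regen_a / regen_c -- one-step re-generation by the authentic and
                         contrasting generators
    distance          -- the metric D between a sample and its re-generation
    delta             -- confidence margin (delta > 0)
    """
    y_a = regen_a(x_suspect)                                  # G_a(x^<K>)
    y_c = regen_c(x_suspect)                                  # G_c(x^<K>)
    r = distance(y_c, x_suspect) / distance(y_a, x_suspect)   # exceeding distance ratio
    return r > 1.0 + delta
```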