Will there be a free, public way to generate LLM text that evades the Jan 2024 LLM detector 'Binoculars' by the end of 2024?
91% chance · 9 traders · Ṁ290 · closes Jan 1

https://huggingface.co/spaces/tomg-group-umd/Binoculars

Over a wide range of document types, Binoculars detects over 90% of generated samples from ChatGPT (and other LLMs) at a false positive rate of 0.01%, despite not being trained on any ChatGPT data.

Is there a correlation between Binoculars score and sequence length? Such correlations may create a bias towards incorrect results for certain lengths. In Figure 12, we show the joint distribution of token sequence length and Binoculars score. Sequence length offers little information about class membership.

I ran my own test here and here and it was very effective. Will there be a way for the general public to evade it? Quality must be similar to GPT-3.5/Gemini Pro; it can be a fine-tuned model, something you run GPT-3.5/Gemini Pro text through, etc. This applies to the current version of Binoculars, not just future improved versions.
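For anyone who wants to reproduce a score locally rather than through the Space: per the paper, the score being thresholded is just the ratio of a text's perplexity under one model to its cross-perplexity between two closely related models (the released implementation pairs Falcon-7B with Falcon-7B-Instruct). Below is a minimal sketch of that formula using small stand-in models so it runs on modest hardware; it illustrates the scoring idea and is not the released implementation.

```python
# Minimal sketch of the Binoculars score from the paper:
# score = log-perplexity(performer) / cross-perplexity(observer, performer).
# gpt2 / distilgpt2 are stand-ins (they share a tokenizer); the paper uses
# Falcon-7B (observer) and Falcon-7B-Instruct (performer).
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

OBSERVER = "gpt2"          # placeholder observer model
PERFORMER = "distilgpt2"   # placeholder performer model

tokenizer = AutoTokenizer.from_pretrained(OBSERVER)
observer = AutoModelForCausalLM.from_pretrained(OBSERVER).eval()
performer = AutoModelForCausalLM.from_pretrained(PERFORMER).eval()

@torch.no_grad()
def binoculars_score(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    obs_logits = observer(ids).logits[0, :-1]    # observer's next-token predictions
    perf_logits = performer(ids).logits[0, :-1]  # performer's next-token predictions
    targets = ids[0, 1:]

    # log-perplexity of the text under the performer
    log_ppl = F.cross_entropy(perf_logits, targets).item()

    # cross-perplexity: cross-entropy between the two models' next-token distributions
    obs_probs = F.softmax(obs_logits, dim=-1)
    perf_logprobs = F.log_softmax(perf_logits, dim=-1)
    log_xppl = -(obs_probs * perf_logprobs).sum(dim=-1).mean().item()

    # lower scores lean "AI-generated", higher lean "human"; the threshold is tuned in the paper
    return log_ppl / log_xppl

print(binoculars_score("Paste a passage here to get its Binoculars-style score."))
```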


Word on the street is that there are simple ways to break it:

As an example, one method I found that works extremely well is to simply rewrite the article section by section, with instructions that require it to mimic the writing style of an arbitrary block of human-written text.

This works a lot better than (as an example) asking it to write in a specific style. If I just say something along the lines of "write in a casual style that conveys lightheartedness towards the topic", it does not work as well as simply saying "rewrite this, mimicking the style in which the following text block is written: X" (where X is a block of human-written text).

This works pretty reliably for me when prompting the free ChatGPT 3.5 this way.
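A rough sketch of that rewrite loop against the OpenAI API is below. The comment describes doing this in the free ChatGPT web UI, so the model name, prompt wording, and section splitting here are all illustrative choices, not the commenter's exact procedure.

```python
# Sketch of "rewrite section by section, mimicking an arbitrary human-written
# text block" via the OpenAI chat completions API. Prompt wording and model
# are illustrative; requires OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

HUMAN_STYLE_SAMPLE = """Paste an arbitrary block of human-written text here;
its specific style is what the model is asked to mimic."""

def rewrite_mimicking_style(section: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "user",
                "content": (
                    "Rewrite the following section, mimicking the style in which "
                    f"this text block is written:\n\n{HUMAN_STYLE_SAMPLE}\n\n"
                    f"Section to rewrite:\n\n{section}"
                ),
            }
        ],
    )
    return response.choices[0].message.content

# Rewrite a generated article one section at a time, as the comment suggests.
article_sections = ["First section of the generated article...", "Second section..."]
rewritten = "\n\n".join(rewrite_mimicking_style(s) for s in article_sections)
print(rewritten)
```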