I’m finding it harder and harder to tell whether an image has been generated or not (the main giveaways are disappearing). This is probably going to become a big problem in like half a year’s time. Does anyone know of any proof of legitimacy projects that are gaining traction? I can imagine news orgs being the first to be hit by this problem. Are they working on anything?
Being able to “prove” that something is AI generated usually means that:
A) The model that generated it leaves a watermark, either visible or hidden (a toy sketch of the hidden kind is below this list).
B) The model is well known enough that you can deduce a pattern and compare what you’re checking against that pattern.
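To make the “hidden watermark” idea concrete, here’s a toy least-significant-bit sketch in Python (numpy + Pillow), just to show what “hiding a detectable signal in the pixels” means. The file names and payload are made up, and real provenance watermarks are far more robust (they have to survive resizing and compression), so treat this as an illustration, not how any vendor actually does it.

```python
# Toy LSB watermark: hides a short byte string in the lowest bit of each
# pixel channel. Real watermarking schemes are much more robust; this is
# only meant to illustrate the concept of an invisible embedded signal.
import numpy as np
from PIL import Image

def embed(img_path: str, payload: bytes, out_path: str) -> None:
    pixels = np.array(Image.open(img_path).convert("RGB"))
    flat = pixels.flatten()
    # Turn the payload into a stream of bits.
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    if bits.size > flat.size:
        raise ValueError("payload too large for this image")
    # Overwrite the least significant bit of the first len(bits) channel values.
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    # Save losslessly (e.g. PNG); lossy compression would destroy the bits.
    Image.fromarray(flat.reshape(pixels.shape)).save(out_path)

def extract(img_path: str, n_bytes: int) -> bytes:
    flat = np.array(Image.open(img_path).convert("RGB")).flatten()
    bits = flat[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

# embed("photo.png", b"made-by-model-X", "marked.png")
# print(extract("marked.png", 15))  # b"made-by-model-X"
```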
The problem with the former is that you are trusting these corporations (or individuals training their own models, or malicious actors) to actually do so.
There are also problems with the latter: the models are constantly being iterated on and patched to fix issues that people notice (favoring certain words, not being able to draw glasses of liquid filled to the brim, etc.).
Also, if the image or work was made with a niche or poorly documented AI, its pattern probably isn’t one you’re checking against in the first place.
Also also, there’s a high false positive rate, because it’s mostly just pattern matching.