

Being able to “prove” that something is AI-generated usually means one of two things:
A) The model that generated it leaves a watermark, either visible or hidden
B) The model is well known enough that you can deduce a pattern and compare what you’re checking against that pattern.
The problem with the former is that you’re trusting these corporations (or individuals training their own models, or malicious actors) to actually do so.
There are also problems with the latter: the models are constantly iterating and being patched to fix issues that people notice (favoring certain words, not being able to draw glasses of liquid filled to the brim, etc.).
Also, if the image or work was made with a niche or poorly documented model, it probably won’t match any pattern you’re checking against.
Also also, there’s a high false-positive rate, because it’s mostly just pattern matching.
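To make the “pattern matching” point concrete, here’s a rough Python sketch of how statistical watermark detection works in principle (loosely in the spirit of the “green list” token-bias idea from Kirchenbauer et al. 2023). The hash trick, the 0.5 fraction, and the threshold are all made up for illustration, not anyone’s actual detector:

```python
import hashlib
import math

# Toy sketch: a generator is assumed to have nudged its sampling toward a
# keyed "green" subset of tokens; the detector counts how many green tokens
# it sees and asks how surprising that count would be for ordinary text.

GREEN_FRACTION = 0.5  # assumed fraction of the vocabulary marked "green"

def is_green(prev_token: str, token: str) -> bool:
    """Deterministically mark a token 'green' from a hash of (prev, current).
    Stand-in for the real keyed hash a vendor would use."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def detect(text: str) -> float:
    """Return a z-score: how far the observed green-token count sits above
    what unwatermarked text would produce by pure chance."""
    tokens = text.split()
    if len(tokens) < 2:
        return 0.0
    greens = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    expected = n * GREEN_FRACTION
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (greens - expected) / std

# High z-score => probably watermarked. But short or just-unlucky human text
# can cross whatever threshold you pick, which is where the false positives
# come from.
print(detect("some sample text to score with the toy watermark check"))
```

The whole thing is a statistical test, so there’s always a tradeoff between missing AI text and flagging human text, and it only works at all if the generator cooperated by embedding the bias in the first place.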
I feel like I would treat my Togruta wife very well ;-;
Real talk though, humans will eventually reach the stars. Being negative/nihilistic about it and saying it would be better if it doesn’t happen is dangerous, because people like Elon/Donald will definitely do horrible things if people with remorse and morals aren’t involved, already established there, or the ones initiating it.
Not saying you’re a nihilist, but I go to uni in SF and everyone is so anti-imperialism that they think any form of colonization (even of a dead planet like Mars) is bad, and it gets pretty grating.
Elon should not be the one who decides how the land/living conditions are set up