It’s a plausible trap. Depending on the architecture, the image encoder (the part that “sees”) is bolted onto the main model as a discrete component, and the image generator can be a totally different model. So internally, unless the “response” image is fed back through the encoder, the model has no clue the two images are the same — roughly as in the sketch below.
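For the curious, here’s a minimal sketch (all names hypothetical, not OpenAI’s actual code) of what that kind of split pipeline could look like. Uploads pass through a vision encoder into the text model’s context, while generated images go straight to the user and never re-enter that context:

```python
# Hypothetical sketch of a split multimodal pipeline.
# Not OpenAI's architecture; all names are made up for illustration.

class VisionEncoder:
    def encode(self, image_bytes: bytes) -> list[float]:
        # Stand-in for a real vision encoder producing embeddings.
        return [float(b) for b in image_bytes[:4]]

class ImageGenerator:
    def generate(self, prompt: str) -> bytes:
        # Stand-in for a separate image-generation model.
        return prompt.encode()

def handle_turn(prompt: str, upload: bytes | None = None) -> bytes | str:
    context: list = [prompt]
    if upload is not None:
        # Uploaded images ARE ingested: the text model can "see" them.
        context.append(VisionEncoder().encode(upload))
    if "draw" in prompt:
        # The generated image goes straight to the user. It is never
        # encoded back into `context`, so on the next turn the text
        # model has no internal record of what was drawn.
        return ImageGenerator().generate(prompt)
    return f"model reply conditioned on {len(context)} context items"
```

Under those assumptions, asking “is this the same picture you just made?” only works if the UI re-uploads the generated image, because nothing else ever routes it back through the encoder.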
Of course, we have no idea, because OpenAI is super closed :/
Did you actually get it to generate a picture, then ask it whether the generated picture was the same one, and it said no?
“Open” AI. Yeah, open my ass.
This is LemmyShitpost, so, yes, absolutely.
Indeed.