• brucethemoose@lemmy.world

    It’s a plausible trap. Depending on the architecture, the image decoder (the part that “sees”) is bolted onto the main model as a more or less discrete component, and the image generator can be an entirely different model. So unless the generated “response” image is fed back through that decoder, the model internally has no clue the two images are the same.
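
    To make that concrete, here’s a minimal sketch (hypothetical names, not OpenAI’s actual architecture) of a pipeline where the vision encoder and the image generator are separate modules: uploaded images get encoded into the model’s context, but generated images go straight out to the user and are never re-encoded.

```python
# Hypothetical sketch of a multimodal assistant with a separate vision
# encoder and image generator. Not any vendor's real architecture.
from dataclasses import dataclass, field


@dataclass
class Image:
    pixels: bytes  # stand-in for real image data


class VisionEncoder:
    """Turns an input image into tokens the language model can read."""

    def encode(self, image: Image) -> list:
        return ["<img_tok_1>", "<img_tok_2>"]  # placeholder tokens


class ImageGenerator:
    """A separate model (e.g. a diffusion model) that renders images from text."""

    def generate(self, prompt: str) -> Image:
        return Image(pixels=b"...")  # placeholder pixels


@dataclass
class Assistant:
    vision: VisionEncoder = field(default_factory=VisionEncoder)
    generator: ImageGenerator = field(default_factory=ImageGenerator)
    context: list = field(default_factory=list)  # what the main model "knows"

    def see(self, image: Image) -> None:
        # Uploaded images DO flow through the encoder into the context.
        self.context += self.vision.encode(image)

    def draw(self, prompt: str) -> Image:
        # The generated image goes straight to the user; it is NOT passed
        # back through self.vision, so the main model never "sees" it.
        return self.generator.generate(prompt)


assistant = Assistant()
uploaded = Image(pixels=b"...")
assistant.see(uploaded)                    # model ingests the uploaded image
generated = assistant.draw("same image")   # but never ingests this one
# Internally, nothing links `generated` to `uploaded`.
```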

    Of course, we have no idea, because OpenAI is super closed :/