• taladar@sh.itjust.works
    7 days ago

    To be fair, for a Gish gallop style of bad-faith argument, the way religious people like to use LLMs is probably a good match. If all you want is a high number of arguments, it is probably easy to produce those with an LLM. Not to mention that most of their arguments have been repeated countless times anyway, so the training data probably contains them in large numbers. It is not as if they ever cared whether their arguments were any good anyway.