• stickly@lemmy.world
    17 days ago

    I apologize if my phrasing is combative; I have experience with this topic and get a knee-jerk reaction to supporting AI as a literacy tool.

    Your argument is flawed because it implicitly assumes that critical thinking can be offloaded to a tool. One of my favorite quotes on that:

    The point of cognitive automation is NOT to enhance thinking. The point of it is to avoid thinking in the first place.

    (coincidentally from an article on the topic of LLM use for propaganda)

    You can’t “open source” a model in a meaningful and verifiable way. Datasets are massive, and even if you had the compute to audit them, poisoning can be far more subtle than explicitly trashing the dataset.

    For example, did you know you can control bias just by changing the ordering of the dataset? There’s an interesting article from the same author that covers well-known poisoning vectors, and it’s already a few years old.
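    To see why ordering matters at all, here’s a deliberately tiny toy sketch (not how LLM training actually works, but SGD-style online updates have an analogous recency effect): an exponential-moving-average “learner” fed the exact same 50/50 labels in two different orders ends up with two very different beliefs. The function name and alpha value are illustrative assumptions.

    ```python
    def ema_fit(labels, alpha=0.1):
        # Toy online learner: exponential moving average of the labels.
        # Each update pulls the estimate toward the newest example, so
        # recent data dominates older data -- a crude stand-in for the
        # recency effects of sequential gradient updates.
        est = 0.5
        for y in labels:
            est += alpha * (y - est)
        return est

    balanced = [0, 1] * 50               # positives and negatives interleaved
    front_loaded = [1] * 50 + [0] * 50   # same 50/50 data, positives first

    # Identical data, different order, different learned "belief":
    # ema_fit(balanced) stays near 0.5, while ema_fit(front_loaded)
    # collapses toward 0 because the final block of zeros overwrites
    # almost everything learned from the earlier ones.
    ```

    A real training pipeline is vastly more complicated, but the core point survives: an attacker who controls only the *order* of otherwise-clean data can still steer the outcome, and no audit of the dataset’s contents would catch it.
    
    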

    These problems are baked into any AI at this scale, regardless of implementation. The idea that we can invent our way out of a misinformation hell of our own design is a mirage. The solution will always be to limit exposure and make media literacy a priority.