• untorquer@lemmy.world
    4 days ago

    That’s a fair point when these LLMs are restricted to areas where they function well. They have use cases that make sense when isolated from the ethics of training and compute. But the people who made them are applying them wildly outside those use cases.

    They’re being pushed as a solution to every problem for the sake of profit, with intentional ignorance of these issues. If a few errors harm someone, that’s just a casualty on the way to making them profitable. That can’t be disentangled from them unless you limit your argument to open source, locally run models.