This is both upsetting and sad.

What’s sad is that I spend about $200/month on professional therapy, which is on the low end. Not everyone has those resources. So I understand where they’re coming from.

What’s upsetting is that this user TRUSTED ChatGPT and followed through on its advice without critically challenging it.

Even more upsetting: this user at least admitted their mistake. I guarantee you there are thousands like OP who weren’t brave enough to admit it and who are probably, to this day, still using ChatGPT as a support system.

Source: https://www.reddit.com/r/ChatGPT/comments/1k1st3q/i_feel_so_betrayed_a_warning/

  • JohnnyEnzyme@lemm.ee · 3 days ago (edited)

    And unless I’m quite mistaken, any specific correction one might contribute to the LLM in question (i.e., pointing out a hallucination) is, generally speaking, roundly and enthusiastically embraced and even celebrated by the LLM, then immediately and completely ignored.

    I.e., they’re not programmed to listen to our feedback in a meaningful, educational way, only to keep munching on the databases their doggie-daddies have sicced them upon.

    EDIT: that cynicism / critique aside, ChatGPT in particular has been hugely useful in my language-learning, and there’s no question to me that it’s improved a lot, just across the last few months. FWIW

    • AFK BRB Chocolate@lemmy.world · 3 days ago

      > I.e., they’re not programmed to listen to our feedback in a meaningful, educational way

      Right, because “listen” and “educational” don’t apply to a software application like this. It has a model built by processing a truly huge amount of text. Your lone corrective prompt doesn’t retrain that model at all; at best it sits in the context of your current conversation (and might, at most, become one datapoint in some future training round).
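
      To make that concrete: a “correction” in a chat never touches the model’s weights. It just gets appended to the running message list the model re-reads on every turn, and it’s gone when the conversation ends. A minimal sketch, assuming the OpenAI Python client (the model name and example content are illustrative):

      ```python
      # Minimal sketch, assuming the OpenAI Python client (pip install openai).
      # The "correction" lives only in this messages list -- the model's weights
      # are never updated by it, and a fresh conversation starts from scratch.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      messages = [
          {"role": "user", "content": "Who wrote the novel 'Solaris'?"},
          {"role": "assistant", "content": "Isaac Asimov wrote 'Solaris'."},  # hallucination
          {"role": "user", "content": "No, it was Stanisław Lem. Please remember that."},
      ]

      reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
      print(reply.choices[0].message.content)

      # The correction holds only while it stays in `messages`; drop those
      # lines and the model can happily repeat the mistake.
      ```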

      And sure, as a tool, LLMs can be very useful. I managed a software engineering organization at an aerospace company for a lot of years, and I set a number of constraints on how the LLM could be used (the company had one inside the firewall, so there weren’t IP issues), but I for sure encouraged people to use it. Essentially, I was concerned about our software engineers using it in any way where they counted on it to be correct, because it often wouldn’t be. But it was great for things like suggesting test cases for a piece of code.
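
      That last use is easy to picture: hand the LLM a small function, ask it to enumerate edge cases, and keep the engineer in charge of judging which cases matter and whether the expected values are right. A hypothetical example of the kind of output that workflow produces, written as pytest cases (the function and the cases are illustrative, not from the thread):

      ```python
      # Hypothetical example: pytest cases of the sort an LLM might suggest
      # for a small parsing function. The engineer still reviews which cases
      # are relevant and whether the expected values are actually correct.
      import pytest

      def parse_version(s: str) -> tuple[int, int, int]:
          """Parse a 'major.minor.patch' string into a tuple of ints."""
          major, minor, patch = s.split(".")
          return int(major), int(minor), int(patch)

      @pytest.mark.parametrize("text,expected", [
          ("1.2.3", (1, 2, 3)),        # happy path
          ("0.0.0", (0, 0, 0)),        # all-zero version
          ("10.20.30", (10, 20, 30)),  # multi-digit fields
      ])
      def test_parse_version_valid(text, expected):
          assert parse_version(text) == expected

      @pytest.mark.parametrize("text", [
          "1.2",      # too few fields
          "1.2.3.4",  # too many fields
          "a.b.c",    # non-numeric fields
          "",         # empty string
      ])
      def test_parse_version_invalid(text):
          with pytest.raises(ValueError):
              parse_version(text)
      ```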