• JackbyDev@programming.dev
    17 hours ago

    I think about this a lot. Let’s assume for a second that all reviews are legitimate (I know they’re not, but bot detection isn’t what I’m talking about). I hate that when you sort by rating you get 5.0 (1 review) at the top. Why isn’t there some semi-objective, agreed-upon way to do this? It doesn’t need to be perfect. Search engines aren’t perfect, but we use them all the time. Something like 4.9 (10 reviews) should be above 5.0 (1 review).
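
    There are known ways to do this; one common one is a Bayesian average, which shrinks each item’s mean rating toward a prior so that items with few reviews can’t dominate the top of the list. This is a minimal sketch; the prior mean and prior weight here are illustrative assumptions, not values from any particular site.

    ```python
    # Bayesian average: blend an item's observed ratings with a prior,
    # so low-review-count items stay near the prior instead of topping the sort.
    # prior_mean and prior_weight are assumed tuning values, not a standard.

    def bayesian_average(ratings, prior_mean=3.5, prior_weight=5):
        """Return the mean of `ratings` shrunk toward prior_mean.

        With few reviews the result stays near prior_mean; with many
        reviews it approaches the item's true average."""
        n = len(ratings)
        return (prior_weight * prior_mean + sum(ratings)) / (prior_weight + n)

    one_perfect = bayesian_average([5.0])              # 5.0 average, 1 review
    many_great = bayesian_average([5.0] * 9 + [4.0])   # 4.9 average, 10 reviews

    # Sorting by this score puts 4.9 (10 reviews) above 5.0 (1 review).
    assert many_great > one_perfect
    ```

    Another option with similar behavior is the Wilson score lower bound, which ranks by a confidence-interval floor on the rating rather than the raw mean.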

    • funkless_eck@sh.itjust.works
      16 hours ago

      In NPS (net promoter score) tracking - you know, the survey “would you recommend this to your friends and colleagues” -

      it’s worded and scored 0-10 so that you can assume everyone who scores 7+ is happy and anyone below isn’t.

      It’s a psychological thing that wouldn’t work if it were thumbs up or thumbs down.

      In 5-star systems, 4 and 5 should be considered promoters and 3 and below should be considered detractors, but again you should dichotomize the output so you see how many scored >=4 and how many scored <=3.
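
      The dichotomized tally described above can be sketched like this; the 4-5/<=3 cutoffs are from the comment, and the promoters-minus-detractors summary score is the usual NPS-style formula:

      ```python
      # Dichotomize 5-star ratings: 4-5 stars = promoter, 3 and below = detractor.
      # The summary score is (promoters - detractors) as a percentage of all
      # reviews, mirroring how NPS is computed from a 0-10 scale.

      def star_nps(stars):
          promoters = sum(1 for s in stars if s >= 4)
          detractors = sum(1 for s in stars if s <= 3)
          return 100 * (promoters - detractors) / len(stars)

      reviews = [5, 4, 4, 3, 5, 2, 4, 1]
      print(star_nps(reviews))  # 5 promoters, 3 detractors of 8 -> 25.0
      ```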