But just as Glaze’s userbase is spiking, a bigger priority has emerged for the Glaze Project: protecting users from attacks that disable Glaze’s protections—including attack methods exposed in June by online security researchers in Zurich, Switzerland. In a paper published on arXiv.org without peer review, the Zurich researchers, including Google DeepMind research scientist Nicholas Carlini, claimed that Glaze’s protections could be “easily bypassed, leaving artists vulnerable to style mimicry.”

  • just another dev@lemmy.my-box.dev
    10 months ago

    Agreed. It was fun as a thought exercise, but this failure was inevitable from the start. Ironically, the existence and usage of such tools will only hasten their obsolescence.

    The only thing that would really help is GDPR-like fines (calculated as a percentage of income, not profits) for any company that trains, or willingly uses, models that have been trained on data without explicit consent from its creators.

    • FaceDeer@fedia.io
      10 months ago

      That would “help” by effectively introducing the concept of copyright for styles and ideas, which I think would likely have more devastating consequences for art than anything AI could possibly inflict.