• 0 Posts
  • 61 Comments
Joined 2 years ago
Cake day: July 29th, 2023

  • I might be the only person here who thinks the upcoming quantum bubble has the potential to deliver useful things (boring useful things, though, and so harder to build hype on), but stuff like this particularly irritates me:

    https://quantumai.google/

    Quantum fucking ai? Motherfucker,

    • You don’t have ai, you have a chatbot
    • You don’t have a quantum computer, you have a tech demo for a single chip
    • Even if you had both of those things, you wouldn’t have “quantum ai”
    • If you have a very specialist and probably wallet-vaporisingly expensive quantum computer, why the hell would you want to glue an idiot chatbot to it, instead of putting it in the hands of competent experts who could actually do useful stuff with it?

    Best case scenario here is that this is how one department of Google gets money out of the other bits of Google, because the internal bean counters cannot control their fiscal sphincters when someone says “ai” to them.




  • I was reading a post by someone trying to make shell scripts with an llm, and at one point the system suggested making a directory called ~ (which is shorthand for your home directory in a bunch of unix-alikes). When the user pointed out this was bad, the llm recommended remediating it with rm -r ~, which would of course delete all your stuff.

    So, yeah, don’t let the approximately-correct machine do things by itself, when a single-character substitution can destroy all your stuff.
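
    A minimal shell sketch (mine, not from the original post) of why that suggestion is so dangerous: the only difference between removing one stray directory and removing your entire home directory is whether the ~ is quoted.

        mkdir '~'       # quoted: creates a directory literally named "~" in the current dir
        rm -r './~'     # safe remediation: removes only that literal directory
        # rm -r ~       # the llm’s suggestion: unquoted ~ expands to $HOME,
        #               # and would recursively delete everything you own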

    And JFC, being surprised that something called “YOLO” might be bad? What were people expecting? --all-the-red-flags




  • For those of you who haven’t already seen it, r/accelerate is banning users who think they’ve talked to an AI god.

    https://www.404media.co/pro-ai-subreddit-bans-uptick-of-users-who-suffer-from-ai-delusions/

    There’s some optimism from the redditors that the LLM folk will patch the problem out (“you must be prompting it wrong”); they just assume that the companies somehow don’t know about the issue yet:

    “As soon as the companies realise this, red team it and patch the LLMs it should stop being a problem. But it’s clear that they’re not aware of the issue enough right now.”

    There’s some dubious self-published analysis that coined the term “neural howlround” for some sort of undesirable recursive behaviour in LLMs. I haven’t read it yet (and might not, because it sounds like cultspeak), and it may not actually be relevant to the issue.

    It wraps up with a surprisingly sensible response from the subreddit staff.

    “Our policy is to quietly ban those users and not engage with them, because we’re not qualified and it never goes well.”

    AI boosters neither claiming expertise in something nor offloading the task to an LLM? Good news, though surprising.



  • Interesting (in a depressing way) thread by author Alex de Campi about the fuckery at Unbound/Boundless (a crowdfunding-for-publishing outfit that segued into financial incompetence and royalty theft), whose latest incarnation might be trying to AI their way out of the hole they’ve dug for themselves.

    From the liquidator’s proposals:

    “We are also undertaking new areas of business that require no funds to implement, such as starting to increase our rights income from book to videogaming by leveraging our contacts in the gaming industry and *potentially creating new content based on our intellectual property utilizing inexpensive artificial intelligence platforms*.”

    (emphasis mine)

    They don’t appear to actually own any intellectual property anymore (due to defaulting on contracts), so I can’t see this ending well.

    Original thread, for those of you with bluesky accounts: https://bsky.app/profile/alexdecampi.bsky.social/post/3lqfmpme2722w





  • When confronted with a problem like “your search engine imagined a case and cited it”, the next step is to wonder what else it might be making up, not to just quickly slap a bit of tape over the obvious immediate problem and declare everything to be great.

    The other thing to be concerned about is how lazy and credulous your legal team must be if they cannot be bothered to verify anything. Fixing that requires a significant improvement in professional ethics, which isn’t something that’s really amenable to technological fixes.