• blarghly@lemmy.world · 1 month ago

    When people talk about AI taking off exponentially, usually they are talking about the AI using its intelligence to make intelligence-enhancing modifications to itself. We are very much not there yet, and need human coaching most of the way.

    At the same time, no technology ever really follows a particular trend line. It advances in starts and stops with the ebbs and flows of interest, funding, novel ideas, and the discovered limits of nature. We can try to make projections - but these are very often very wrong, because the thing about the future is that it hasn’t happened yet.

    • Clinicallydepressedpoochie@lemmy.world (OP) · edited · 1 month ago

      I do expect advancement to hit a period of exponential growth that quickly surpasses human intelligence, given it develops the drive to autonomously advance. Whether that is possible is yet to be seen, and that’s kinda my point.

        • Zexks@lemmy.world · 1 month ago

          Here are all 27 U.S. states whose names contain the letter “o”:

          Arizona, California, Colorado, Connecticut, Florida, Georgia,
          Idaho, Illinois, Iowa, Louisiana, Minnesota, Missouri, Montana,
          New Mexico, New York, North Carolina, North Dakota, Ohio,
          Oklahoma, Oregon, Rhode Island, South Carolina, South Dakota,
          Vermont, Washington, Wisconsin, Wyoming

          (That’s 27 states in total.)

          What’s missing?
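For the record, the model’s count is easy to check mechanically; a quick Python sanity check (not part of the original exchange) over all 50 state names:

```python
# Verify the claim: how many U.S. state names contain the letter "o"?
STATES = [
    "Alabama", "Alaska", "Arizona", "Arkansas", "California", "Colorado",
    "Connecticut", "Delaware", "Florida", "Georgia", "Hawaii", "Idaho",
    "Illinois", "Indiana", "Iowa", "Kansas", "Kentucky", "Louisiana",
    "Maine", "Maryland", "Massachusetts", "Michigan", "Minnesota",
    "Mississippi", "Missouri", "Montana", "Nebraska", "Nevada",
    "New Hampshire", "New Jersey", "New Mexico", "New York",
    "North Carolina", "North Dakota", "Ohio", "Oklahoma", "Oregon",
    "Pennsylvania", "Rhode Island", "South Carolina", "South Dakota",
    "Tennessee", "Texas", "Utah", "Vermont", "Virginia", "Washington",
    "West Virginia", "Wisconsin", "Wyoming",
]

with_o = [s for s in STATES if "o" in s.lower()]
print(len(with_o))  # 27 -- the model's count checks out
```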

          • kescusay@lemmy.world · 1 month ago

            Ah, did they finally fix it? I guess a lot of people were seeing it fail and they updated the model. Which version of ChatGPT was it?

        • Zexks@lemmy.world · edited · 1 month ago

          No, “they” haven’t, unless you can cite your source. ChatGPT was only released 2.5 years ago, and even OpenAI was saying 5-10 years, with most outside watchers saying 10-15 and real naysayers going out to 25 or more.

    • haui@lemmy.giftedmc.com · 1 month ago

      Although I agree with the general idea, AI (as in LLMs) is a pipe dream. It’s a non-product: another digital product that hypes investors up and produces “value” instead of value.

      • kescusay@lemmy.world · 1 month ago

        Not true. Not entirely false, but not true.

        Large language models have their legitimate uses. I’m currently in the middle of a project I’m building with assistance from Copilot for VS Code, for example.

        The problem is that people think LLMs are actual AI. They’re not.

        My favorite example - and the reason I often cite for why companies that try to fire all their developers are run by idiots - is the capacity for joined up thinking.

        Consider these two facts:

        1. Humans are mammals.
        2. Humans build dams.

        Those two facts are unrelated except insofar as both involve humans, but if I were to say “Can you list all the dam-building mammals for me,” you would first think of beavers, then - given a moment’s thought - could accurately answer that humans do as well.

        Here’s how it goes with Gemini right now:

        [screenshot: Gemini’s reply lists beavers but leaves humans out]

        Now Gemini clearly has the information that humans are mammals somewhere in its model. It also clearly has the information that humans build dams somewhere in its model. But it has no means of joining those two tidbits together.

        Some LLMs do better on this simple test of joined-up thinking, and worse on other similar tests. It’s kind of a crapshoot, and doesn’t instill confidence that LLMs are up for the task of complex thought.

        And of course, the information-scraping bots that feed LLMs like Gemini and ChatGPT will find conversations like this one, and update their models accordingly. In a few months, Gemini will probably include humans in its list. But that’s not a sign of being able to engage in novel joined-up thinking, it’s just an increase in the size and complexity of the dataset.
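The “joined-up thinking” being asked for here is just a relational join over two stored facts, which is trivial when knowledge is represented explicitly. A sketch (illustrative fact triples, not any real system’s data):

```python
# "Joined-up thinking" as an explicit relational join: trivial when facts
# are stored symbolically, but an LLM has no such lookup-and-join step.
facts = {
    ("human", "is_a", "mammal"),
    ("beaver", "is_a", "mammal"),
    ("salmon", "is_a", "fish"),
    ("human", "builds", "dams"),
    ("beaver", "builds", "dams"),
}

mammals = {s for (s, r, o) in facts if r == "is_a" and o == "mammal"}
dam_builders = {s for (s, r, o) in facts if r == "builds" and o == "dams"}

print(sorted(mammals & dam_builders))  # ['beaver', 'human']
```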

        • haui@lemmy.giftedmc.com · 1 month ago

          We’ll have to agree to disagree then. My hype argument perfectly matches your point about people wrongly perceiving LLMs as AI, but my point goes further.

          AI is a search engine on steroids, with all the drawbacks. It produces no more accurate results, has no more information, and does nothing but take away the research effort, which is proven to make people dumber. More importantly, LLMs gobble up energy like crazy and need rare resources which are taken from exploited countries. In addition, they are a privacy nightmare and proven to systematically harm small creators through breach of intellectual property, which is especially brutal for them.

          So no, there are no redeeming qualities in LLMs in their current form. They should be outlawed immediately and, at most, used locally in specific cases.

  • CrayonDevourer@lemmy.world · 1 month ago

    Only the uneducated don’t see AI “taking off” right now.

    Every idiot who says this thinks that ChatGPT encompasses all of “AI”.

    • Jesus_666@lemmy.world · 1 month ago

      AI isn’t taking off because it took off in the 60s. Heck, they were even working on neural nets back then. Same as in the 90s when they actually got them to be useful in a production environment.

      We got a deep learning craze in the 2010s and then bolted that onto neural nets to get the current wave of “transformers/diffusion models will solve all problems”. They’re really just today’s LISP machines; expected to take over everything but unlikely to actually succeed.

      Notably, deep learning assumes that better results come from a bigger dataset but we already trained our existing models on the sum total of all of humanity’s writings. In fact, current training is hampered by the fact that a substantial amount of all new content is already AI-generated.

      Despite how much the current approach is hyped by the tech companies, I can’t see it delivering further substantial improvements by just throwing more data (which doesn’t exist) or processing power at the problem.
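The diminishing-returns point can be made concrete with the published Chinchilla scaling fit (Hoffmann et al., 2022). The constants below are that paper’s reported values, used purely as an illustration:

```python
# Chinchilla-style scaling fit (Hoffmann et al., 2022):
#   L(N, D) = E + A / N**alpha + B / D**beta
# where N is parameter count and D is training tokens.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss for a model of n_params parameters
    trained on n_tokens tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

N = 70e9  # hold model size fixed at 70B parameters
prev = None
for D in (1e12, 2e12, 4e12, 8e12):  # keep doubling the dataset
    cur = loss(N, D)
    gain = "" if prev is None else f"  (improvement: {prev - cur:.4f})"
    print(f"{D:.0e} tokens -> loss {cur:.4f}{gain}")
    prev = cur
# each doubling of data buys a smaller improvement than the last
```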

      We need a systemically different approach and while it seems like there’s all the money in the world to fund the necessary research, the same seemed true in the 50s, the 60s, the 80s, the 90s, the 10s… In the end, a new AI winter will come as people realize that the current approach won’t live up to their unrealistic expectations. Ten to fifteen years later some new approach will come out of underfunded basic research.

      And it’s all just a little bit of history repeating.

      • MangoCats@feddit.it · 1 month ago

        in the 60s. Heck, they were even working on neural nets back then

        I remember playing with neural nets in the late 1980s. They had optical character recognition going even back then. The thing was, their idea of “big networks” was nowhere near the scale needed to do anything as impressive as categorizing images: cats vs. birds.

        We’ve hit the point where supercomputers in your pocket are…

        The Cray-1, a pioneering supercomputer from the 1970s, achieved a peak performance of around 160 MFLOPS. It cost $8 million ($48 million in today’s dollars) and weighed 5 tons.

        Modern smartphones, even mid-range models, are significantly faster than the Cray-1. For example, a 2019 Google Pixel 3 achieved 19 GFLOPS.

        19,000 / 160 = nearly 120x as powerful as a Cray from the 1970s.

        I just started using a $110 HAILO-8 for image classification. It can perform 26 TOPS: over 160,000x a 1970s Cray. (Granted, the image processor works with 8-bit ints while the Cray worked with 64-bit floats, but even discounting 8x for precision that’s roughly 20,000x the operational power, for 1/436,000th the cost and 1/100,000th the weight.)

        There were around 60 Crays delivered by 1983; HAILO alone is selling on the order of a million chips a year…

        Things have sped up significantly in the last 50 years.
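The ratios above check out; as a quick sketch:

```python
# The comparisons above, recomputed
cray_flops = 160e6    # Cray-1 peak: ~160 MFLOPS (64-bit floats)
pixel3_flops = 19e9   # 2019 Pixel 3: ~19 GFLOPS
hailo_ops = 26e12     # HAILO-8: 26 TOPS (8-bit ints)

print(pixel3_flops / cray_flops)   # ~119x: phone vs. Cray-1
print(hailo_ops / cray_flops)      # ~162,500x raw operations
print(hailo_ops / 8 / cray_flops)  # ~20,300x after discounting 8x for precision
print(48e6 / 110)                  # ~436,000x cheaper (inflation-adjusted)
```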

    • Tiger@sh.itjust.works · 1 month ago

      There are AI experts (I could air quote that, but really, people who work in the industry) who are legitimately skeptical about the speed, power and real impact AI will have in the short term, so it isn’t a case of everyone who “really knows” thinks we’re getting doomsday AGI tomorrow.

  • NegentropicBoy@lemmy.world · 1 month ago

    In the spirit of showerthoughts: I feel the typical LLM is reaching a plateau. The “reasoning” type was a big advance though.

    Companies are putting a lot of effort into handling the big influx of AI requests.

    With the huge resources, both academic and operational, going into AI, we should expect unexpected jumps in power :)

  • Showroom7561@lemmy.ca · 1 month ago

    AI LLMs have been pretty shit, but the advancement in voice, image generation, and video generation in the last two years has been unbelievable.

    We went from the infamous Will Smith eating spaghetti to videos that are convincing enough to fool most people… and it only took 2-3 years to get there.

    But LLMs still have a long way to go because of how they create content. It’s very easy to poison LLM datasets, and they get worse when trained on other models’ generated content.

    • MiyamotoKnows@lemmy.world · 1 month ago

      Poisoning LLM datasets is fun and easy! Especially when our online intellectual property is scraped (read: stolen) during training and no one is being accountable for it. Fight back! It’s as easy as typing false stuff at the end of your comments. As an 88 year old ex-pitcher for the Yankees who just set the new world record for catfish noodling you can take it from me!

  • moseschrute@lemmy.world · 1 month ago

    It has taken off exponentially. It’s exponentially annoying that it’s being added to literally everything.

          • LostXOR@fedia.io · 1 month ago

            People have been cheating on their homework as long as homework has existed. AI is just the latest method to do so. It’s easier to cheat with than previous methods, but that’s been true for every new method of cheating.

            • UnderpantsWeevil@lemmy.world · 1 month ago

              not essays

              Yes, essays. Pre-written essays on subjects that you could plagiarize line for line.

              not free help like this

              Yes, free help like this, on message boards and blogs and YouTube channels and chat groups. Very likely better help, too, since you’re getting the information from subject matter experts rather than some random amalgamation of text shoved into a language model output template.

              • Aatube@kbin.melroy.org · 1 month ago

                Pre-written essays on subjects that you could plagiarize line for line.

                In my time these were absolutely awful: vacuous circumlocution. Not to mention TurnItIn.

                Yes, free help like this, on message boards and blogs and YouTube channels and chat groups.

                You have to pay for the message boards that are actually useful and already have your question, like Chegg. The only comparable free platform I used only worked for Chinese homework. Anywhere else, you’d have to post your question and wait about an hour; ChatGPT takes one minute. And its quality today is way better than you think: you can just take a photo of homework and it’ll give you the right answers 90% of the time, with explanations.

  • CheeseNoodle@lemmy.world · 1 month ago

    IIRC there are mathematical reasons why AI can’t actually become exponentially more intelligent. There are hard limits on how much work (in the sense of information processing) can be done by a given piece of hardware, and we’re already pretty close to that theoretical limit. For an AI to go singularity, we would have to build it with enough initial intelligence that it could acquire both the resources and the information with which to improve itself and start the exponential cycle.
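One such hard limit is Landauer’s principle: irreversibly erasing a bit can never cost less than kT·ln 2 of energy. A quick sketch of the ceiling it implies:

```python
import math

# Landauer's principle: irreversibly erasing one bit at temperature T
# dissipates at least k_B * T * ln(2) joules, regardless of hardware.
k_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)
T = 300.0           # room temperature, K

e_bit = k_B * T * math.log(2)  # ~2.87e-21 J per bit erased
ops_per_joule = 1.0 / e_bit    # hard ceiling on bit erasures per joule

print(f"{e_bit:.3e} J/bit, {ops_per_joule:.3e} erasures/J")
```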

  • conditional_soup@lemm.ee · 1 month ago

    Well, the thing is that we’re hitting diminishing returns with current approaches. There’s a growing suspicion that LLMs simply won’t be able to bring us to AGI, though they could be a part of it, or a stepping stone to it.

    The quality of the outputs is pretty good for AI, and sometimes even just pretty good without the qualifier. But the only reason it’s being used so aggressively right now is that it’s being subsidized with investor money, in the hopes that it will be too heavily adopted and too hard to walk away from by the time it’s time to start charging full price. I’m not seeing that.

    I work in comp sci. I use AI coding assistants, and so do my co-workers. The general consensus is that they’re good for boilerplate and tests, but even that needs to be double-checked, and the AI gets it wrong a decent amount. If satisfying the requirements involves real reasoning, the AI’s going to shit its pants. If we were paying the real cost of these coding assistants, there is NO WAY leadership would agree to pay for those licenses.

    • thru_dangers_untold@lemmy.world · 1 month ago

      Yeah, I don’t think AGI = an advanced LLM. But I think it’s very likely that a transformer style LLM will be part of some future AGI. Just like human brains have different regions that can do different tasks, an LLM is probably the language part of the “AGI brain”.

    • Korhaka@sopuli.xyz · edited · 1 month ago

      What are the “real costs” though? It’s free to run a half decent LLM locally on a mid tier gaming PC.

      Perhaps that’s a bigger problem for the big AI companies than for the open source approach.
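As a back-of-envelope sketch of why local inference works: the weights of a quantized model fit in consumer GPU memory. (The `weight_gb` helper below is illustrative, not any library’s API, and real runtimes also need room for the KV cache and activations.)

```python
# Back-of-envelope VRAM for a locally run, quantized LLM.
def weight_gb(params_billion: float, bits_per_weight: int) -> float:
    """Memory for the weights alone; KV cache and activations are extra."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for params, bits in [(7, 4), (13, 4), (7, 16)]:
    print(f"{params}B model @ {bits}-bit -> ~{weight_gb(params, bits):.1f} GB")
# a 7B model at 4-bit is ~3.5 GB of weights: mid-tier gaming GPU territory
```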

      • conditional_soup@lemm.ee · 1 month ago

        Sure, but ChatGPT costs MONEY. Money to run, and MONEY to train, and then they still have to make money back for their investors after everything’s said and done. More than likely, the final tally is going to look like whole cents per token once those investor subsidies run out, and a lot of businesses are going to be looking to hire humans back quick and in a hurry.

  • Xaphanos@lemmy.world · 1 month ago

    A major bottleneck is power capacity. It is very difficult to find 50+ MW (sometimes hundreds) of capacity available at any site; it has to be built out. That involves a lot of red tape, government contracts, large transformers, contractors, etc. The current backlog on new transformers at that scale is years. Even Google and Microsoft can’t build fast enough, so they come to my company for infrastructure, as we already have 400 MW in use and triple that already on contract. Further, Nvidia only makes so many chips a month. You can’t install them faster than they make them.

  • nucleative@lemmy.world · 1 month ago

    What do you consider having “taken off”?

    It’s been integrated with just about everything or is in the works. A lot of people still don’t like it, but that’s not an unusual phase of tech adoption.

    From where I sit I’m seeing it everywhere I look compared to last year or the year before where pretty much only the early adopters were actually using it.

    • capybara@lemm.ee · 1 month ago

      What do you mean when you say AI has been integrated with everything? Very broad statement that’s obviously not literally true.

      • nucleative@lemmy.world · 1 month ago

        True. I tried to qualify it with “just about” or “on the way.”

        From the perspective of my desk, my core business apps have AI auto-suggest in key fields (software IDEs, ad buying tools, marketing content preparation such as Canva). My WhatsApp and Facebook Messenger apps now have an “Ask Meta AI” feature front and center. Making a post on Instagram, it asks if I want AI assistance to write the caption.

        I use an app to track my sleeping rhythm and it has an AI sleep analysis feature built in. The photo gallery on my phone includes AI photo editing like background removal, editing things out (or in).

        That’s what I mean when I say it’s in just about everything, at least relative to where we were just a short bit of time ago.

        You’re definitely right that it’s not literally in everything.

        • kadup@lemmy.world · 1 month ago

          To be fair, smart background removal was a feature from Picasa over a decade ago. We just didn’t call everything “AI” to make shareholders happy.