I think one of the biggest mistakes we have made as an industry is conflating the words “AI” and “LLMs.” The irony is right there on the surface. Naming is one of the hardest things to do in software, and we’ve done it poorly for the primary tool of software.

  • arctanthrope@lemmy.world · 16 days ago

    it’s not a mistake, it’s marketing. the people in charge of tech companies don’t want people to understand the difference between AI and LLMs, because it makes their product seem more impressive and valuable than it really is

    • Iconoclast@feddit.uk · 16 days ago

      AI is a term used in computer science to refer to any system that’s able to perform a task normally requiring human intelligence - like playing chess or generating natural-sounding language. LLMs are AI systems in the actual meaning of that term.

      • arctanthrope@lemmy.world · 16 days ago

        the point is public perception. when the general public hears “AI” they don’t think about all the lesser systems that are technically AI, they think of AGI

        • Iconoclast@feddit.uk · 16 days ago

          Then the issue is with the uneducated public - not the use of correct terminology. Changing the definition of terms will just create more confusion, not less.

          • arctanthrope@lemmy.world · 16 days ago

            yep, that’s what I’ve been saying. the public is uneducated because the people selling AI want them to be. using the technically correct but often misunderstood umbrella term “AI” instead of the more specific term “LLM” is one of the ways that ignorance is maintained

            • Iconoclast@feddit.uk · 15 days ago

              Referring to a multimodal AI as an LLM isn’t technically accurate either, because it’s not just an LLM. There are also the vision/audio encoders, the diffusion model, and the multimodal projectors.

          • TheV2@programming.dev · 15 days ago

          AGI is just a type of AI. It’s a term coined long after “AI” that doesn’t define any goals or capabilities that weren’t already part of AI research - except that those goals had largely been abandoned. If anything, “AGI” was the marketing term used to revitalize and refocus on those ideals. So there is nothing wrong with the general public associating “AI” with what you would specifically describe as “AGI”.

          Furthermore, I don’t understand what this public perception has to do with your claim. If they thought of only narrow AI, then would it not make even more sense to call their LLM-based products AI…

            • Iconoclast@feddit.uk · 15 days ago

            AGI is a subcategory of AI, just like LLMs and diffusion models are. The term has existed since 1997, when it was coined by Mark Avrum Gubrud in an article titled ‘Nanotechnology and international security’:

            By advanced artificial general intelligence, I mean AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed. Such systems may be modeled on the human brain, but they do not necessarily have to be, and they do not have to be “conscious” or possess any other competence that is not strictly relevant to their application. What matters is that such systems can be used to replace human brains in tasks ranging from organizing and running a mine or a factory to piloting an airplane, analyzing intelligence data or planning a battle.

  • Iconoclast@feddit.uk · 16 days ago

    AI isn’t any one thing. It’s an extremely broad term. It simply refers to any system designed to perform a cognitive task that would normally require human intelligence. The chess opponent on an old Atari console is an AI. It’s an intelligent system - but only narrowly so. Narrow AI can have superhuman cognitive abilities, but only within the specific task it was built for, like playing chess.

    A large language model like ChatGPT is also a narrow AI. It’s exceptionally good at what it was designed to do: generate natural-sounding language. It often gets things right - not because it knows anything, but because its training data contains a lot of correct information. That accuracy is an emergent byproduct of how it works, not its intended function.

    What people expect from it, though, isn’t narrow intelligence - it’s general intelligence: the ability to apply cognitive ability across a wide range of domains, like a human can. That’s something LLMs simply can’t do - at least not yet. Artificial General Intelligence is the end goal for many AI companies, but AGI and LLMs are not the same thing, even though both fall under the umbrella of AI.

    • staircase@programming.dev · 16 days ago

      What is a “cognitive task”? At what point does fitting a straight line to data stop being a computer procedure and become cognitive? Is everything a computer does AI?

  • troed@fedia.io · 16 days ago

    It’s Pattern Matching — Not Understanding

    … anyone who falls into this trap is welcome to study the very latest we know about human consciousness.

    The awakening is brutal. There’s not a single thing in science pointing to us being anything more than pattern matching machines ourselves …

    • Iconoclast@feddit.uk · 16 days ago

      Yeah, well, at least we humans don’t ever hallucinate and just make stuff up, engage in sycophancy, act overly confident, have bias, or suffer from a short context window.

      • pinball_wizard@lemmy.zip · 15 days ago

        Let me rebut your points one by one… After I re-read your comment…

        (I gave up due to my short context window.)

    • staircase@programming.dev · 16 days ago

      … anyone who falls into this trap is welcome to study the very latest we know about human consciousness.

      Isn’t this a statement about humans, rather than machines? Moreover, “It’s pattern matching, not understanding” is essentially the message I got from this leading AI professor at Oxford: https://www.youtube.com/watch?v=CyyL0yDhr7I

      There’s not a single thing in science pointing to us being anything more than pattern matching machines ourselves …

      I’m almost certain this is completely wrong. Iain McGilchrist’s work on brain hemispheres points, as I understand it, to a left hemisphere that manipulates the world in ways he compares with modern AI, while the right is capable of the implicit: art, nuance, and so on. Nothing in what I’ve heard of his work suggests the right hemisphere operates like a machine. Indeed, I think he is very explicit that the opposite is true.

      • troed@fedia.io · 15 days ago

        Douglas Hofstadter’s “I am a strange loop” is great reading on the subject, as well as Susan Blackmore’s “Consciousness: An Introduction”.

        You have to start from physics & chemistry - our neurons aren’t doing “art & nuance”. They’re just mapping inputs to outputs.

        • staircase@programming.dev · 15 days ago

          You have to start from physics & chemistry

          This is the view of scientific materialism. Scientific materialism is itself a view, and one that runs into what I think are irreconcilable problems even within quantum physics, never mind philosophy. It is also a view that says you have to start from the small and build up, and that assumption has problems of its own.

          Thank you for bringing these books to my attention. I notice that the first two reviews on Goodreads for “I am a strange loop” go to some lengths to disagree with its message. Indeed, one reviewer says:

          I did not find that Hofstadter compellingly demonstrates that this strange loop is the entirety of consciousness

          I won’t put more weight on a reviewer than an author, but I do find the reviews interesting.

          I have not yet looked up Blackmore’s book.

          I am also not claiming we do not pattern match. I am saying there is very compelling literature that says we do more.

          But maybe I’m entirely wrong.

          • Corbin@programming.dev · 15 days ago

            Materialism and QM have no conflict. What QM shows, via the Kochen-Specker (KS) theorem, is that reality cannot be objective; it must be contextual, arising from the participatory interactions between objects and subjects. Carroll 2021 is a fairly hard metaphysical barrier which prevents spurious anti-materialist claims by fully shifting the burden of proof to claimants; if you genuinely think that there are irreconcilable problems with materialism, then you must give a physics experiment which violates the Standard Model.

            The reviewers for Hofstadter don’t understand the book, which I have on my shelf and highly recommend. The point of Strange Loop is that the caged-bird metaphor, that there is exactly one mind per one brain, is wrong in both directions: sometimes there’s more than one mind in a brain, and sometimes a mind is not wholly contained in a single brain. The only reason to skip Strange Loop is if you’re a computer scientist or mathematician, in which case you should definitely read GEB first and Strange Loop second.

            • staircase@programming.dev · 14 days ago

              Thank you for the Carroll paper. I’m actually looking for stuff like that atm.

              In the paper, he caveats

              Everything we have said presumes from the start that the world is ultimately physical, consisting of some kind of physical stuff obeying physical laws. There is a long tradition of presuming otherwise, and if so, all bets are off. The well-known issue is then how non-physical substances or properties could interact with the physical stuff.

              so I’m very unclear how this paper can present a hard barrier against anti-materialism when the author makes it clear that the paper presumes physicalism from the start. I’ve only read half of it so far, will continue …

              • Corbin@programming.dev · 14 days ago

                Well, the burden of proof doesn’t lie with Carroll. Instead, the entire point is that the non-materialist has the burden of evidence:

                Given a quantum state of the relevant fields, it accurately predicts how that state will evolve. Skeptics of the claim defended here have the burden of specifying precisely how that equation is to be modified. This would necessarily raise a host of tricky issues, such as conservation of energy and unitary evolution of the wave function.

                Otherwise I can rely upon Newton’s flaming laser sword; every time you ask about the possibility of non-materialism, I can ask you for the corresponding experiment which opens that possibility. Note that sometimes this is scientifically fruitful, as in the discovery of infrasound leading to many debunkings of hauntings as well as unlocking the secrets of elephant communication. (The more radical position of anti-materialism was conclusively refuted during the colonial era, so we cannot assume that the material world is only hypothetical.)

                This is all made stark in Figure 4, p15, which shows that any possible physical force not in the Standard Model would be so weak and subtle as to be undetectable by humans; when a human claims that they are sensitive to such a force, they have incorrectly implicitly assumed that their body is physically capable of interacting with such a force in a perceptible way. The argument goes much like the argument against electrosensitivity: if you really could sense the weak experimental force then you would be constantly sensing the much stronger ambient forces from the outside environment which we can’t mute.

                A common retort is that quantum states are merely our epistemic knowledge as humans about a fundamentally-unknowable micro-reality below our scale of perception. However, the PBR theorem rules that out by insisting that the quantum wavefunction is ontic. Leifer spent about two years struggling against this result in vain and eventually published Leifer 2014, which both serves as a great overview of the no-go theorems in ontological models and also as an example of how difficult it can be to unlearn previously-accepted beliefs.

                • bunchberry@lemmy.world · 14 days ago

                  A common retort is that quantum states are merely our epistemic knowledge as humans about a fundamentally-unknowable micro-reality below our scale of perception. However, the PBR theorem rules that out by insisting that the quantum wavefunction is ontic.

                  The PBR theorem assumes preparation independence, which is a local assumption.

                • staircase@programming.dev · 14 days ago

                  Well, the burden of proof doesn’t lie with Carroll. Instead, the entire point is that the non-materialist has the burden of evidence

                  How does the burden lie with the reader rather than with the author, who has explicitly stated they are assuming physicalism? Why must we assume physicalism?

                  every time you ask about the possibility of non-materialism, I can ask you for the corresponding experiment which opens that possibility

                  You’re welcome to ask, but not all truths are experimentally verifiable. I read Newton’s flaming laser sword to mean that only science or logic can reveal truths, which isn’t at all the case.

                  I’ve enjoyed discussing this with you - you’ve been clear, and added some interesting references. I’m not sure this medium really lends itself to in-depth discussion. I think we both need more space to understand where the other is coming from, and I don’t see us progressing in that direction.

              • Corbin@programming.dev · 14 days ago

                By the way, I really hope that you consider synthesizing concepts. As an exercise, Carroll concludes from his premises that:

                There is no life after death, as the information in a person’s mind is encoded in the physical configuration of atoms in their body, and there is no physical mechanism for that information to be carried away after death.

                But consider the following quote from Strange Loop at the end of Chapter 18, “The Blurry Glow of Human Identity”. Remember, Hofstadter is a physicist, arguably as influential as Carroll in quantum theory, and no less of an anti-dualist or materialist. So, as an exercise, synthesize for yourself an understanding of why Hofstadter says:

                In the wake of a human being’s death, what survives is a set of afterglows, some brighter and some dimmer, in the collective brains of all those who were dearest to them. And when those people in turn pass on, the afterglow becomes extremely faint. And when that outer layer in turn passes into oblivion, then the afterglow is feebler still, and after a while there is nothing left. This slow process of extinction I’ve just described, though gloomy, is a little less gloomy than the standard view. Because bodily death is so clear, so sharp, and so dramatic, and because we tend to cling to the caged-bird view, death strikes us as instantaneous and absolute, as sharp as a guillotine blade. Our instinct is to believe that the light has all at once gone out altogether. I suggest that this is not the case for human souls, because the essence of a human being — truly unlike the essence of a mosquito or a snake or a bird or a pig — is distributed over many a brain. It takes a couple of generations for a soul to subside, for the flickering to cease, for all the embers to burn out. Although “ashes to ashes, dust to dust” may in the end be true, the transition it describes is not so sharp as we tend to think.

                • staircase@programming.dev · 14 days ago

                  By synthesizing concepts, do you mean combining them? I hope you’re not suggesting what that sounds like.

                  I will return to Carroll’s paper, but I still don’t see how it can prove anything, due to the paragraph I quoted.