13 Comments
Gerard Hundman:

In today's world I'm most worried about the divide between the world of high-end AGI and the dominant role stupidity will still play in how the world functions. I say stupidity because there are two tiers to distinguish: first, the high-speed (short-term gains) progression in the field of AI; and second, the growing group of people for whom the first is impossible to keep up with, and whom its gains will not benefit.

With an administration that is actively and strategically widening this divide by controlling and censoring both education and the media, the importance of truth and knowledge in general has never been greater.

Cathie Campbell:

It is intimidating for so many to operate smart machines these days, and machine sophistication has outpaced the consumer comfort level.

Operational “stupidity” is rejection of the instruction booklet and a desire to “keep it simple, stupid”: a retro longing for old-school streamlined access versus the variety imposed by “start engine with these multiple steps…” From cars to coffee machines, strollers to car seats, ovens to online navigation, the hurdles for humanity are many and much higher.

This is a very well-written summary, with terms such as “wishful mnemonics” and “stochastic parrot”.

William Lees:

Here's some hopefully useful constructive criticism.

Your article is a wonderful piece of thought scaffolding. It really helps the discussion. Please continue to help us to think through this!

Considering: Is the term "fallacy" a loaded term? Could it be used more carefully?

Using the word 'fallacy' in this piece is a bit tricky.

At this point, the 'fallacies' are claims she has made. They're so-called fallacies. Are they fallacies? Are her claims valid? Sound? Or do we agree now that what Mitchell talks about are "gimmickries" (marketing promises) and consider why we use them even though we know they are such?

And you've chosen a framework that itself espouses a point of view. Have you cherry-picked your scaffold to reach a foregone conclusion?

Considering: What is the larger process at work?

Is the larger process at work, the boom-and-bust cycle, the same as the product lifecycle of a much-loved product category?

Let's use two examples: the typewriter and the video game console.

Considering: When is the hype about narrow AI reaching true general intelligence dangerous?

I think the hype and the over-promise and the quick-to-market can be dangerous. What are some fair and reasonable protections?

Considering: Where did you hit the mark?

I think you really hit the nail on the head with terms like: messy middle, productive tension, fuels the market.

I think your cautious embrace of the term 'alchemy' really puts your finger on what we hope for and fear at the same time.

What are these pointing at?

Considering: What are the counter-points?

If you were to write an article based on a counter-point framework to Mitchell - what counterpoint framework would you use?

In what ways is overpromising on AI a useful way to move the populace forward? What disingenuous marketing should we encourage?

In what ways is over-reliance on Mitchell a bad thing? Does it create fear, uncertainty and doubt about the future? Does Mitchell hold us back?

Considering: As pragmatists, how could the next article help more?

How do we accomplish this point: "infuse alchemy with principles of science" - examples? Tools?

What tooling do we have or do we need to "port" our intelligences (intellectual properties) from the world of Cognitive paradigm to the world of Computationalism paradigm?

Bong Ripper:

One of the most thoughtful and well-balanced pieces I've read on this divisive subject. However, you need to get a copy editor who can advise on replacing some commas with dashes or full stops (beware the comma splice!), and not starting sentences with conjunctions.

Alejandro Piad Morffis:

Noted, thanks!

jklowden:

What is intelligence, really? LLMs evince Turing Test intelligence: they fool us into thinking the machine is human. They fail utterly at Hemingway or Einstein intelligence. They never produce a new thought. They never add to human knowledge.

Alejandro Piad Morffis:

Indeed, part of the problem according to Melanie Mitchell (and I agree) may be our intuition that there is but one kind of intelligence and that it's something that can be measured with a single quantity on a linear scale. That mindset forces us to grapple with the seeming contradiction that LLMs are so smart at some things and so stupid at other, almost equivalent, things.

David Hsing:

"At a certain level of complexity, a system's emergent behavior..." Complexity emergentism is just another fallacy. This has been shown in various ways:

- Bad metrics leading to emergence claims https://arxiv.org/abs/2304.15004

- 20/20 hindsight plus zero predictive utility equals garbage non-observation https://ykulbashian.medium.com/emergence-isnt-an-explanation-it-s-a-prayer-ef239d3687bf

- Common sense observations already indicate otherwise (see section "Emergentism via machine complexity" https://towardsdatascience.com/artificial-consciousness-is-impossible-c1b2ab0bdc46/ )
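The first bullet's argument (arXiv:2304.15004) is that apparent "emergence" can be an artifact of the metric: if per-token accuracy improves smoothly with scale, an all-or-nothing exact-match score over a long sequence still appears to jump abruptly. A minimal sketch of that arithmetic (the accuracy values and sequence length here are illustrative assumptions, not data from the paper, and per-token errors are assumed independent):

```python
# Smooth per-token accuracy vs. a sharp-looking exact-match metric.

def exact_match_rate(token_acc: float, seq_len: int) -> float:
    """Probability that all seq_len tokens are correct, assuming
    independent per-token errors with accuracy token_acc each."""
    return token_acc ** seq_len

# Hypothetical smooth improvement in per-token accuracy with model scale.
token_accs = [0.80, 0.85, 0.90, 0.95, 0.99]

for acc in token_accs:
    print(f"token acc {acc:.2f} -> exact match over 50 tokens: "
          f"{exact_match_rate(acc, 50):.4f}")
```

The per-token numbers climb gradually, but the 50-token exact-match rate stays near zero until the last step and then leaps, which is exactly the shape that gets reported as an emergent ability.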

"if we should choose the path of scaling or the path of cognitive science, but how we can weave them together" Can I be the only person in the entire darned world who realizes that those two "paths" are just two different sides of the same anthropocentric coin?

Scaling: scaling what, "neural" networks? NNs are just imitations of anthropomorphic form.

Cognitive science: ah yes, functionalism and the project of "reverse engineering the mind." GOOD LUCK (spoiler: it'll NEVER succeed).

The actual solution is to step away from the anthropomorphic paradigm of endless imitation ALTOGETHER. Surely there are others who have seen this obvious path? OF COURSE you end up with endless "scaling" when you "scale" with imitative algorithms that scale like shit, and when you lock in your paradigm with imitation, OF COURSE you're stuck reverse-engineering underdetermined entities forever... All the while, the real field that could lead to performance leaps is utterly ignored (huge list at the end of the following post: https://davidhsing.substack.com/p/what-the-world-needs-isnt-artificial )

Dakara:

That was an excellent piece focusing on the boundaries we observe between intelligence and machines, and how framing can change the perspective.

I think this is a key statement:

> At a certain level of complexity, a system's emergent behavior becomes functionally indistinguishable from "understanding,"

Similarly:

“Any sufficiently advanced technology is indistinguishable from magic”

— Arthur C. Clarke

And likewise: “Any sufficiently advanced pattern-matching is indistinguishable from intelligence.” Importantly, it isn't intelligence, just as technology isn't magic. The difference is important because it tells us how the behavior will diverge.

FYI, here is something I've written on the topic that I think aligns with your piece, focusing on how to perceive the difference: https://www.mindprison.cc/p/intelligence-is-not-pattern-matching-perceiving-the-difference-llm-ai-probability-heuristics-human

Peter Gaffney:

This is well said and exactly on the mark.

Wolstencroft on consciousness:

Yet another insightful article that is germane to the situation.

I concur completely about the need for new words.

Wyrd Smythe:

Good read. With regard to the third fallacy, I think we're stuck with human language in all its complexity and ambiguity (and metaphor), but as your post illustrates, I also think we can use that language constructively and usefully.

As an aside, because I started doing assembly-level programming back when the 6502 and Z80 were modern CPUs, I've never suffered from Moravec's Paradox. I know exactly how powerful, and how limited, von Neumann architecture is. That general computational platform, as opposed to the architectural platform of living brains, may be the ultimate limit for deep-learning computation. (Or not, though I tend to align with the Cognitive view.)

Devansh:

Do you want to guest post this?
