In today's world I'm most worried about the divide between the world of high-end AGI and the dominant role stupidity will still play in how the world functions. I say stupidity because there are two tiers to distinguish: first, the high-speed (short-term-gains) progression of the field of AI, and second, the growing group of people who cannot keep up with the first and will not benefit from its gains.
With an administration that is actively and strategically widening this divide by controlling and censoring both education and the media, the importance of truth and knowledge in general has never been greater.
It is intimidating for so many to operate smart machines these days, and machine sophistication has outpaced the consumer comfort level.
Operational “stupidity” is a rejection of the instruction booklet and a desire to “keep it simple, stupid”: a retro longing for old-school streamlined access versus the imposed variety of “start engine with these multiple steps…” From cars to coffee machines, strollers to car seats, ovens to online navigation, the hurdles for humanity are many and much higher.
This is a very well-written summary, with terms such as “wishful mnemonics” and “stochastic parrot”.
That was an excellent piece focusing on the boundaries we observe between intelligence and machines, and on how framing can change the perspective.
I think this is a key statement:
> At a certain level of complexity, a system's emergent behavior becomes functionally indistinguishable from "understanding,"
Similarly:
“Any sufficiently advanced technology is indistinguishable from magic”
— Arthur C. Clarke
And likewise, “Any sufficiently advanced pattern-matching is indistinguishable from intelligence.” Importantly, it isn't intelligence, just as technology isn't magic. The difference matters because it tells us how the behavior will diverge.
FYI, something I've written on the topic that I think aligns with your piece, focusing on how to perceive the difference: https://www.mindprison.cc/p/intelligence-is-not-pattern-matching-perceiving-the-difference-llm-ai-probability-heuristics-human
This is well said and exactly on the mark.
Yet another insightful article that is germane to the situation.
I concur completely about the need for new words.
Good read. With regard to the third fallacy, I think we're stuck with human language in all its complexity and ambiguity (and metaphor), but as your post illustrates, I also think we can use that language constructively and usefully.
As an aside, because I started doing assembly-level programming back when the 6502 and Z80 were modern CPUs, I've never suffered Moravec's Paradox. I know exactly how powerful, and how limited, the von Neumann architecture is. That general computational platform, as opposed to the architectural platform of living brains, may be the ultimate limit for deep-learning computation. (Or not, though I tend to align with the Cognitive view.)