3 Comments
A.I. Fabler:

A very lucid piece. Thanks for that. I have a real concern relating to semantics and how machine learning can cope with the deliberate alteration of meaning, as for instance in the Orwellian approach taken in Critical Theory. How do you think that can be factored in?

Alejandro Piad Morffis:

Thanks! Interesting question. I'm not a linguist but a computer scientist, so my understanding of linguistics is limited to the computational viewpoint, which is probably insufficient for a full account of how human language works. One thing I can say, though, is that computers learn semantics (in the limited sense we can use that word) entirely from usage, at least in the current paradigm. So if you manipulate the training data, accidentally or purposefully, the ML model will learn whatever semantics are reflected in that data. We can hope to detect concept drift, for example by analyzing how models trained on past data behave on new data, so maybe that's part of the answer to your question. But it is a fascinating question.
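
As a rough illustration of the concept-drift idea mentioned above, here is a minimal sketch (not from the article) of training a text model on an "old" corpus and checking how it performs on a "new" one; a sharp drop in performance is one crude signal that the semantics reflected in the data have shifted. The toy sentences, labels, and threshold are hypothetical placeholders, and this is just one of many ways drift can be monitored.

```python
# Minimal concept-drift sketch: train on old-period text, evaluate on new-period text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Hypothetical labeled snippets from an earlier time period (1 = positive, 0 = negative).
old_texts = [
    "great service, very happy",
    "terrible product, waste of money",
    "loved the experience",
    "awful support, never again",
]
old_labels = [1, 0, 1, 0]

# Hypothetical snippets from a later period, where word usage may have shifted.
new_texts = ["sick product, totally loved it", "mid experience, would not repeat"]
new_labels = [1, 0]

# Fit the vocabulary on the old period only, then reuse it on the new period.
vectorizer = TfidfVectorizer()
X_old = vectorizer.fit_transform(old_texts)
X_new = vectorizer.transform(new_texts)

model = LogisticRegression().fit(X_old, old_labels)

old_acc = accuracy_score(old_labels, model.predict(X_old))
new_acc = accuracy_score(new_labels, model.predict(X_new))
print(f"accuracy on old data: {old_acc:.2f}, on new data: {new_acc:.2f}")

# A large gap between the two scores suggests the model's learned "semantics"
# no longer match current usage. The 0.2 threshold is arbitrary, for illustration.
if old_acc - new_acc > 0.2:
    print("possible concept drift: consider auditing the data or retraining")
```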

Tezka Eudora Abhyayarshini:

Hi!
