Universidad de la República
(UDELAR)
Montevideo, Uruguay.
Non-Neutral by Design: Why Generative Models Cannot Escape Linguistic Training
Agustin V. Startari
In Dimuro, Juan Jose, Grammars of Power: How Syntactic Structures Shape Authority. Nassau (Bahamas): LeFortune.

Abstract
This article investigates the structural impossibility of semantic neutrality in large language models (LLMs), using GPT as a test subject. It argues that even under strictly formal prompting conditions—such as invented symbolic systems or syntactic proto-languages—GPT reactivates latent semantic structures drawn from its training corpus. The analysis builds upon prior work on syntactic authority, post-referential logic, and algorithmic discourse (Startari, 2025), and introduces empirical tests designed to isolate the model from known linguistic content. These tests demonstrate GPT’s consistent failure to interpret or generate structure without semantic interference. The study proposes a falsifiable framework to define and detect semantic contamination in generative systems, asserting that such contamination is not incidental but intrinsic to the architecture of probabilistic language models. The findings challenge prevailing narratives of user-driven interactivity and formal control, establishing that GPT—and similar systems—are non-neutral by design.
Full text
External link:

This work is licensed under a Creative Commons license.
To view a copy of this license, visit https://creativecommons.org/licenses/by-nc-nd/4.0/deed.es.