ChatGPT and the End of Intelligence in Academia
If the academic production of ChatGPT is indistinguishable from academic work by students and scholars, it is because academia has become a chatbot factory.
Academic administrators can look forward to the days when ChatGPT writes papers for students and then grades them for their instructors. Without any academic readers or writers involved, the quality of academic work is almost guaranteed to improve.
Students are turning in papers written by AI, and there are reasons to be concerned, but these pale in comparison to the possibility that faculty and scholars across the board are doing the same in equal proportion.
Given the intellectual anonymity of students in the lecture hall and the anodyne presence of faculty who have built academic careers on the condemnation of technical and disciplinary merits and who, more often than not, are not reading academic material at all, one can imagine that the magnitude of the problem is vast.
Of course, this same dereliction of power and elective ineptitude, one must also assume, is finding a powerful handmaid in AI, which can be used to fashion syllabi, write lectures, design assignments and correct and grade them. But one need not exercise one's imagination much to suppose that AI is being used to write and referee articles, edit submissions, and so on. We know that the technology is already widely used to summarise scholarly literature, and we can only imagine that this will further reduce scholarly reading.
In the most benign account of the pedagogic deployment of AI, BuzzFeed reports on the use of an AI tool to detect papers built using AI. But given the sorry state of academic pedagogy, I will submit to the concerned reader that the problem this tool addresses is trivial, indeed nothing more than the symptom of a much more profound and possibly cataclysmic crisis: academia, and this is especially true of the Anglo-American sphere of influence, is in a state of advanced putrefaction.
If academia has arrived at the stage in which the patent possibility of doing away entirely not only with scholarly knowledge but also with rational deliberation, critical articulation and deliberative rumination seems like a clear and present opportunity, certainly in matters of scholarly pedagogy, it is because academic institutions have spent decades working at their eradication in favour of the tranquil pastures of client satisfaction, a land reform of sorts, if you will.
The problem is particularly acute in the social sciences and humanities, where academic exchange has, by and large, been dominated for at least two decades by ideological considerations, placed not only above almost all else but, most critically, to the detriment of disciplinary rigour. There is hardly an area of academic life unaffected by the primacy of moral obligations construed politically. All difficult questions that admit of a simple moral intuition have been resolved in advance into set language formulas that can be deployed with slight modification in all sorts of disciplinary contexts. The moral language of propriety and impropriety, to which disciplines like literature, sociology, politology or philosophy were always naturally susceptible, has made its way out of myriad studies, sparing no academic field. Anointed by academic administrators with full disciplinary honours, anything and everything from Women's Studies to Fat Studies, with every other study in between, has been elevated to the Parnassus of the Socratic project, and their repetition of jingles and bywords has been allowed to reside on the same blackboards where Newton's equations reside.
Much like ideological frameworks, ChatGPT is only a language model.
While the assault on the hard sciences has been less successful, or at least less immediate, it is noteworthy that even there curricula have been subjected to the beatific power of the moral flames of feminism, anti-colonialism, anti-racism and, of course, queering.
But while these repetitions have the capacity to fool the immune systems of academia, so as to penetrate and infect the scholarly economies of reason precisely as powerful iterations of moral intuitions, the fundamental claims that now dictate the moral and political identity of academic life, by way of the ideological depuration of curricula, research and lecturing, are axiomatic, and to that extent they change the fundamental mechanics of scholarly praxis, which one could arguably describe as optimisation through reason.
If, indeed, vast portions of the output of the past 25 years of academic life look like a repetition of propositions with slight syntactical and grammatical permutations to feign variety and diversity, it is because that is precisely what they have become. Faculty and students who have been trained to transport and dispense pre-recorded ideological intuitions, and who are assessed at all levels of academic life by the accuracy of their replication, operate as no more and no less than chatbots in a vast network of decreasing informational quality. Increasingly resistant to correction and emendation, the individual statements have lost any function other than to establish, by repetition, the putative truth of the axiom. They do this by passing for parts of a deduction in which the axiom is often presented as their necessary logical conclusion when, in fact, they are no more than just-so stories.
Large Language Models and not Artificial Intelligence
A large language model is an artificial intelligence system that uses deep learning techniques to process and analyze vast amounts of natural language data. It is designed to learn the patterns and structures of language from the data it is trained on, and then generate responses that are similar in style and content to human language.
Both large language models and predictive text use machine learning algorithms to generate text that is relevant and coherent based on the context of previous input. While predictive text is usually limited to suggesting the next word or phrase, large language models are capable of generating longer and more complex text based on the input they receive.
On the other hand, intelligence refers to a broad range of cognitive abilities that enable humans and other animals to adapt to their environment, solve problems, reason, learn, and communicate effectively. It encompasses a complex set of skills and processes that involve perception, memory, attention, language, decision-making, and many other functions.
While large language models can simulate certain aspects of human language, they are still far from being truly intelligent. They lack the ability to understand the meaning behind the words they generate, or to apply their knowledge to real-world situations. They are essentially advanced tools that can assist humans in various tasks, but they are not capable of independent thought or creativity.
In conclusion, while large language models represent a significant advancement in natural language processing, they are still far from achieving true intelligence. Intelligence is a complex and multifaceted phenomenon that encompasses many different cognitive abilities and cannot be reduced to language processing alone.
Language models as a litmus test
The entire last section was written by ChatGPT, and it is clearly not presented in my voice. The most obvious hint should have been the prepositional phrase that opens the last paragraph, generic and aesthetically offensive. But if you have read me for a while, or if you know me and have been in conversation with me, the entire section should have immediately raised some suspicions. If it did not, it is because my voice to you, dear reader, is fungible. And it is here that much of the issue at hand resides.
I do not intend here to offer an encompassing theory of AI and its various forms and expressions. There is, however, an important intuition that springs from the positive academic assessment of Large Language Model papers, one that in its most mundane form can be put like this: academia has been devoted to the production of platitudes that are indistinguishable from algorithmic iterations of linguistic patterns, and this should be a profound source of shame.
The fact that an entire community and its intellectual commerce can be replaced by a chatbot capable of simultaneously mimicking various styles, tones and functions can be conceived as a reverse Turing test. It is not that chatbots have become so sophisticated that they can produce a form of language indistinguishable from natural human language. It is rather that the speech of students and faculty, in form and substance, has deteriorated to the point that Large Language Models can convincingly impersonate them.
It ought to be recognised that the problem at hand has little to do with students cheating. Outsmarting institutional supervisors is one of the most important and traditional activities of students across the world. It is only the tools that have become more sophisticated; ultimately, the task of the cheater is to cheat, and that has not changed.
What has indeed changed is the dereliction of power by academic institutions and their members, who have not only enabled students to outsmart the assessment and optimisation mechanisms of education but have replaced the toils of reason with the comfort of the hearth and the motherly embrace. As a student put it to Professor Nicholas Christakis at the top of her lungs, in response to his defence of students' freedom to dress in the costumes of their choosing: "It is not about creating an intellectual space! It is not! Do you understand that? It's about creating a home here!"
The home, its safety, the warm comfort, the quaint, the familiar, the safe space… the space of domesticity. The repetition of the old habits of love, of care and of thought. The ones that find the foreign, the other, dangerous, and suppress it by declaring, without giving it a second thought, the supremacy of kith and kin. This song, as has been pointed out an endless number of times in the last few years, is a very old one. But this is, of course, a digression.
Perhaps the more relevant point to be made is that there is an imperious need to restore subversive thought and its practice to academia, and to save it from the confession of ideological and axiomatic faith. In this sense, Large Language Models, at least for now, can be useful for showing in glaring colours the shape and substance of academic and scholarly failure, including intellectual ineptitude, pedagogic dereliction of power, and administrative malfeasance.