This is a post I am writing to help clarify some issues related to AI, machine learning, and especially the ChatGPT models currently being deployed. The results of such programming are impressive: the generation of intelligible and (often) sound full text, drawing on common and not-so-common knowledge in multiple languages, makes ChatGPT a wonderful and unbelievably helpful tool. At the same time, it has caused a lot of noise, first regarding its capacity for consciousness, and also, in academia, regarding its effects on students and faculty. I will not deal here with the consciousness of such a program, because it has already been established that conscious it is not.
All neural nets produce some sort of intelligence, or better said, smartness. Such programs exhibit a real-life level of smartness and knowledge about all sorts of disciplines, wherever they are trained accordingly and purposefully. However, as has been written abundantly, they are nothing more than a sort of “idiot savant,” a bullshitter that speaks without a clue of what it is saying. Which is extraordinarily similar to what we humans often do, and certainly what students do when composing essays or other academic work. That brings me to a curious question: isn’t it the case that intelligence is overrated? I mean, that being aware of one’s own utterances and their meaning is but a narrative, something that I build bit by bit every day? But let us leave that to better philosophers.
Instead, I will talk here about the impact of this family of programs on academia. I am especially worried because the recent emphasis has been one of fear. Instead of welcoming such tools for what they are, useful aids for students, teachers, and researchers alike, some in academia have been fearful that they may help students plagiarize and submit texts that are not of their own making. This is, once again, our habit of forcing a wide-open question into a limited and stereotyped frame.
So we have recently been talking about how to limit the dangers such tools create in academia. But in academia we ought to flip the coin and notice that such tools can be useful rather than opposed to academic discourse. Consider, for instance, a student who, together with her professor, wants to generate a plan for a course. That is a fascinating exercise. Together they may review the bot’s output, check whether it generated errors, and see where fake information may have been introduced. At the same time, the instructor could use some of the bot’s information, as many have found useful. I think this very example shows that tools like these may be truly helpful for teachers and students alike.

The dangers of copying, or of submitting homework that is low quality or even plagiarized, already exist with text and images and have always been present in schools and universities. Of course, the Internet’s encyclopedias generated an explosion of such problems, leaving faculty unable to distinguish work created by the student’s own mind from work that was not. Again, if we flip the coin, we see that a student can get an excellent exercise out of checking facts in those encyclopedias! Moreover, at least with Wikipedia, the student can suggest edits and enhancements to an article, or create a new one, perhaps aided by ChatGPT. I am not going to deal with grading here, because grades are, in the case of a plagiarizing student, essentially another problem common to all forms of evaluation; in fact, many are proposing to do without grades, at least in evaluating plagiarism-susceptible work. The movement called ungrading is an example of that.
But let’s go back to the theme at hand. I just want to make sure that we have a say, and that a sound and rational debate, free from fear and irrationality, takes place widely in academia. I believe, from a moral standpoint, that censorship, policing, and the forbidding of tools must have no place in academia. Hence, once again, our community must find a way to understand, integrate, and operationalize such tools. Let’s not forget that this is but a preview of what is to come, and that we are called, together with our students, to give this future the shape we want.
OpenAI’s CEO Sam Altman recently said:
I understand why educators feel this way. There are ways we can help you detect if something is ChatGPT. But I don’t think institutions should rely on that. It’s impossible to make it perfect: people will know how much text to change [to avoid a detector]. We are in a new world. Generated text is something to live with. We will all have to adapt. And it’s OK. We have adapted to calculators. This is a more extreme version but the benefits will also be more extreme. We hear from teachers who are understandably very nervous about the impact of this on homework. Though others see it as an amazing personal tutor for each student. I have used it to learn things. There are things I’d rather learn with ChatGPT than from a textbook. We will adapt, we will be better and we will not want to go back. [Translated into Spanish by El País, then back into English with Google Translate.]
Finally, I have compiled a list of popular press references that should help in understanding this phenomenon (in English and Spanish).
Vídeo | ChatGPT: La inteligencia artificial que inventa cuentos e imágenes imposibles [Video: The artificial intelligence that invents impossible stories and images] (El País)
Lo bueno y lo preocupante de ChatGPT, la herramienta de inteligencia artificial accesible a todos [The good and the worrying about ChatGPT, the artificial intelligence tool accessible to everyone] (ENDI)
ChatGPT: la inteligencia artificial que no reemplazará a los humanos [ChatGPT: the artificial intelligence that will not replace humans] (El País)
ChatGPT: no todo lo que rima es verdadero [ChatGPT: not everything that rhymes is true] (El País)
What are We Doing About AI Essays? (Faculty Focus)
Teaching Experts Are Worried About ChatGPT, but Not for the Reasons You Think (Chronicle – paywall, but open with Sagrado email)
Teaching: Will ChatGPT Change the Way You Teach? (Chronicle – paywall, but open with Sagrado email)
Why Banning ChatGPT in Class Is a Mistake (Campus Technology)
How Smart Are the Robots Getting? (NY Times) “The Turing test used to be the gold standard for proving machine intelligence. This generation of bots is racing past it.”
Alarmed by A.I. Chatbots, Universities Start Revamping How They Teach (NY Times) “With the rise of the popular new chatbot ChatGPT, colleges are restructuring some courses and taking preventive measures.”
A.I. Is Not Sentient. Why Do People Say It Is? (NY Times) “Robots can’t think or feel, despite what the researchers who build them want to believe.”
[Featured image: “MIT Museum: Kismet the AI robot smiles at you” by Chris Devers is licensed under CC BY-NC-ND 2.0.]