Editor’s note: This post was originally drafted in February, shortly after discussion around ChatGPT boomed in the academic space. The views of the author have generally remained unchanged.

I have, in the past, been accused of writing much like a robot. At the time, this felt like a major offense and I was rather hurt by the remark. In retrospect, they weren’t necessarily wrong. I was on an internship at the time, and part of my role involved writing emails in response to client inquiries. In writing these emails, I wanted to acknowledge their questions, confirm their accounts, and either provide the answer to their query or politely inform them that their concerns were on hold while I forwarded the inquiry to the appropriate team. The method was very formulaic, and lacked personality due to the professional tone I used. Several of the questions had very straightforward answers, where we could copy-paste the response from the FAQs or follow a standard resolution procedure. The language I used was neutral, with as few strongly positive or negative words as possible. Several of the clients I interacted with were from different countries and time zones, so I used simplified terms and sometimes elaborated on key points or more technical phrases. I would end with a pleasantry and an open invitation for further follow-up as needed. Indeed, this sounds exactly like the kind of chatbot I would want handling customer service inquiries. I would be happy to interact with such a bot.
Speaking of bots, recent discussion on the interwebs has been quite concentrated on the preview of ChatGPT (and various similar offerings from Google and Microsoft). I have also seen concerns in the academic world about increased rates of cheating, with students using ChatGPT to answer thinking questions or to write long-form responses. While these concerns are legitimate (there is no doubt that some individuals out there are directly submitting the outputs for this purpose), they do not necessitate an institution-wide email banning the use of AI shortly thereafter.
This (hypothetical) email, sent to all students, might state that the use of AI tools, in part or in entirety, is not permissible for assignments, academic work, and coursework – unless students are explicitly asked to use them. Teaching Assistants would receive no guidance beyond what was sent to the students. In this situation, if a TA meeting for a writing assignment were to pass with no further discussion of ChatGPT, I would assume the instructors received no additional advice on how to detect this form of cheating. Highlighting ChatGPT as the primary offender while omitting other commonly used sites and resources for cheating would be an interesting choice. Naming a specific tool in a less-than-polished email might suggest a knee-jerk reaction from the administration.
Such a response did occur at my institution. For a variety of reasons, I found this rather disappointing. Part of it is in the name: AI tools. Banning the use of new tools at an academic institution is an interesting choice, even if those tools do enable more cheating, or more efficient cheating. After all,
- it is safe to state that some portion of students will always cheat, and some of them will get away with it.
- and, the use of an AI tool does not necessitate the act of cheating.
- therefore, we should ban AI tools?
That is an interesting chain of logic. While these tools make cheating easier and more accessible, I am not convinced that banning their use will greatly decrease the frequency of cheating. Rather, I feel it might have been more helpful to reiterate the goals and values one is meant to take away from doing the work without cheating. Then there is some reward for choosing not to cheat, rather than only the hint of a punishment for doing so. This requires meaningful course content, and consequences for failing to attain the minimum requirements.
I also found it disappointing that there was so little acknowledgement of these new developments as interesting, and as potentially serving a good use in the near future. The first response I saw from the academic community was sarcasm and fear (though perhaps reddit is not the most ideal source for anything else). This seems rather contrary to the regular newsletters publicizing recent developments and relevant global news to students. While it makes sense that academic integrity is a high priority in this instance, I’m not sure of the last time flat-out banning something worked without enforcement. Some examples: we are currently required to mask on campus, yet the vast majority of individuals remain unmasked. Smoking is not allowed near entrances, yet I often see security smoking there.
Now, to be frank, I don’t think this is all that novel an AI tool, and its popularity for cheating will fall off as something “better” comes along. I imagine that if the developers are aware of the integrity concerns, they may eventually develop a method by which academics can request a comparison of submissions against responses tied to specific accounts (update: OpenAI has indicated they will integrate fingerprinting into their service). Or perhaps not, until some law comes into force for these companies – which, judging by the speed at which lawmakers have been addressing companies leveraging personal data, may be a long way off.
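To make the comparison idea concrete, here is a minimal sketch of how such a check might work, assuming a (hypothetical) store of responses tied to accounts. It uses plain word-shingle overlap – the same heuristic classic plagiarism checkers use – and is emphatically not whatever fingerprinting OpenAI actually ends up shipping.

```python
# Toy sketch of "compare a submission against stored model responses".
# Purely illustrative: the stored_responses dict is hypothetical, and the
# n-gram overlap check is a classic plagiarism heuristic, not OpenAI's method.

def shingles(text: str, n: int = 5) -> set[tuple[str, ...]]:
    """Break text into overlapping word n-grams ("shingles")."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission: str, response: str) -> float:
    """Jaccard similarity between the two texts' shingle sets."""
    a, b = shingles(submission), shingles(response)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical store of responses tied to specific accounts.
stored_responses = {"account_123": "the martian atmosphere is thin and dusty"}

submission = "The Martian atmosphere is thin and dusty"
for account, response in stored_responses.items():
    score = overlap_score(submission, response)
    if score > 0.6:  # arbitrary threshold for this sketch
        print(f"{account}: suspiciously similar (score={score:.2f})")
```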
Perhaps the “newness” and “groundbreaking” level didn’t merit a whole news spread, but I think there is a fair amount of potential application for ChatGPT in various service areas in the future. As for the benefits to academics, and other folk who make their living based on what comes out of their brains, see the points coming up.
So why the ban? Cheating has never been allowed, but it has never been easy to prevent either. Thus far, my experience with reporting plagiarism has been that the reporting procedure is incredibly long and draining. I don’t even see most of the work that goes into pursuing a report after I explain to the course supervisor why I think a student has cheated. Minor incidents are not always pursued in full, and the typical workaround is to give the student very low marks. This method does not leave a mark on the transcript, and a student can often still withdraw at that point, unless the instructor is particularly vindictive and saves the reveal for the end of the course.
I wholeheartedly believe that cheating needs to be addressed more seriously, with more resources put into making sure that students with more privilege do not “get away” with it by being better cheaters. Cheating is a skill of sorts, just not one that is meant to be fostered in the academic environment. The entire purpose of doing very fundamental things without cheating is to develop an appreciation for the rigour involved in research and effective communication. Using other people’s work while attributing the individuals who did it is another important skill, one that is hopefully developed prior to post-secondary and comes as second nature. [There is a separate conversation to be had about the pressure some individuals may feel to take ownership of various ideas, something that can be apparent in the workplace.] Feelings on cheating aside, here is an argument for why entirely “banning” AI is not the answer. Note that this assumes the students who were planning on cheating will do so using any resources available to them.
- there is a lot to be learned from these AI tools. In a recent lab meeting, we discussed the potential for inspiration (specifically from art generation), feedback and editing (suggestions from text-based tools), and code revision. Naturally, this is a dangerous path to go down if you never learn the fundamentals, but assuming you do have the skills to produce the final work, I see AI tools as a potentially faster way to get feedback and break through art-blocks. Or as one that can open paths you might not have considered (despite being trained on the output of a number of like-minded individuals, there is a possibility of some deviant ideas buried in there, or of different takes emerging from the aggregated answers).
Have you ever stared at a document you had written until your eyes turned red and your head was swimming? You know there are minor improvements you can make, but all your co-workers are busy and the Writing Centre has been booked up due to poor planning. There is a free tool staring you in the face that can rewrite your sentences in several different styles. How many steps beyond a thesaurus and a word processor is it? No doubt someone also panicked when students started using a wider vocabulary than they previously possessed. Did anyone notice when their emails started getting written for them? Rarely does my gmail say precisely what I want it to, but when it does, it’s rather nice to not type up everything. Microsoft Word often wants to cut down on my excessively long-winded phrases too. I even accept the suggested changes sometimes.
1b. Turns out, ChatGPT is pretty decent at suggesting basic scripts and packages for processing, without making you wade through an entire blog post (a sketch follows this list).
- a number of random generation tools exist on the internet that are not labelled as AI. For example, I have a few complex spreadsheets, shared on the internet, that can fully generate villages and towns of various sizes, complete with the name of each individual, where they work, what their major belongings are, and what kind of personality they have (a toy version follows this list). Just about anything can be used for inspiration, so why should we discount AI? One argument I see is that it can start spouting fairly stale information unless it is continually trained, with newer versions being released; the “inspiration” it provides soon becomes bog-standard, no different from a friend with strong opinions, since an AI is trained to have “right” answers. But the interpretative value of the results is quite dependent on the user. Used directly, without any changes, the baseline response is “cheating”. With subtle modifications and revisions to adapt it, it becomes more of an iterative process. Say I am interested in creating an alternate timeline where two major events in our current timeline have slightly different results. I could plot the entire progression of these changes all the way to the modern world. Or I could ask for a well-rounded answer from a resource that has more knowledge of those events and their impacts, the subsequent changes, and the potential results on the current day. Even if the answers aren’t correct or plausible, they provide a framework to build upon.
- pretending new technology doesn’t exist does us no favours in academia. While it’s not necessary to immediately jump on the newest, shiniest instrument or model, completely avoiding and shutting it out is not usually a good approach either. The typical intermediate approach is to acknowledge it, and to cautiously integrate it. Jumping in head first tends to produce work whose results are not meaningful or well understood (as is evident in the machine learning space, or in the use of statistics in the sciences). Ignoring it entirely seems somewhat antithetical to the values of innovation and of integrating all relevant aspects of human life into research.
- it cannot be ignored. Cheating or not, AI is around, and we might as well familiarize ourselves with it if we anticipate having to interact with it in the future. At present, the main concern is cheating on written assignments (again, why image-based AI tools were specifically mentioned, I’m not sure, since digital art assignments can be evaluated for cheating in a very straightforward manner). The workarounds have been reasonably straightforward, such as adding cheating-detection tools, handwritten assignments, or “scaffolded” assignments where the students integrate feedback and respond to the evaluations. That said, the amount of time needed to integrate these changes is non-trivial, and it can result in more work. I suspect this is part of the reason why academic institutions attempt to ban ChatGPT altogether instead of providing resources and support for the teachers and TAs who will have to mark these assignments on the assumption that the tools will be used regardless. I vaguely remember the dark times when students were forced to install invasive software on their computers to catch twitchy eyes or opened browsers, rather than professors being given the support and time to redesign their curricula to be harder to cheat on, or to focus more heavily on interactive demonstrations of knowledge. Students still cheated.
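To give a flavour of point 1b above, here is roughly the kind of basic processing script these requests produce. This is a sketch only – the file and column names are invented for the example, and pandas is just one of the packages such answers tend to reach for.

```python
# The kind of basic processing snippet ChatGPT tends to suggest when asked
# "how do I summarize a CSV in Python?" -- the file and column names are
# invented for this example.
import pandas as pd

df = pd.read_csv("observations.csv")            # hypothetical data file
df = df.dropna(subset=["site", "temperature"])  # drop incomplete rows
summary = df.groupby("site")["temperature"].agg(["mean", "min", "max"])
print(summary)
```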
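And as a toy version of the random-generation tools mentioned above: those village spreadsheets boil down to sampling traits independently and combining them. Every list here is a placeholder of my own invention, but the mechanism is the whole trick.

```python
# A toy version of the "village generator" spreadsheets: the lists are
# placeholders, but combining independently sampled traits is the mechanism.
import random

NAMES = ["Mira", "Tobin", "Ansel", "Greta", "Pell"]
OCCUPATIONS = ["miller", "smith", "herbalist", "scribe", "innkeeper"]
BELONGINGS = ["a dented lantern", "a lame mule", "a locked chest", "an heirloom knife"]
PERSONALITIES = ["suspicious of strangers", "cheerfully nosy", "quietly ambitious"]

def generate_villager(rng: random.Random) -> dict:
    """Sample one villager by combining independently chosen traits."""
    return {
        "name": rng.choice(NAMES),
        "occupation": rng.choice(OCCUPATIONS),
        "belonging": rng.choice(BELONGINGS),
        "personality": rng.choice(PERSONALITIES),
    }

rng = random.Random(42)  # seed so the same village can be regenerated
village = [generate_villager(rng) for _ in range(5)]
for v in village:
    print(f'{v["name"]} the {v["occupation"]}, {v["personality"]}, owns {v["belonging"]}.')
```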
Meanwhile, ChatGPT could make for an interesting instructional tool were it integrated into course content. For example, in a first-year course, students could submit a prompt, identify the correct and incorrect aspects of the response, and supplement it with specific aspects relevant to the course. A literature course could break down the stylistic writing choices produced by various prompt triggers. A machine learning course could go ahead and train it on specific sets of data.

Prior to this email rolling out, we were encouraged to think of ways that ChatGPT could benefit, rather than hinder, us as graduate students and researchers. I was personally hoping that it would be fairly good at parsing cryptic, uncommented code, perhaps by analyzing the structure and tracing variables. It can certainly read my spaghetti code and format it into a slightly more human-friendly version, though I would caution against running any code that gets reformatted in this manner without testing it. The non-academic way in which I might use this bot is to see if it will retain memory about some world-building I want to do based on some real-world information (that is preferably not quite accurate)! Seems like the perfect job for an AI, to be honest.

I also noticed, while trying to slowly tease a proposal out of the bot, that… it does indeed write the way I do. Some of the specific aspects of my project were very easily identified, as well as some methods to apply them. I asked the following questions, modifying each one based on the response and on which aspects of it I wanted elaborated.

The Proposal
elisa
what does a research proposal look like?
can you give me an example research proposal on the martian atmosphere and cloud interactions with topography?
how does this change if we consider exclusively cloud interactions near craters?
what would an expanded methodology look like?
how would the proposal on craters change if this were in preparation for a phd dissertation?
Some observations I made:
- references were not real, though the authors often were.
- ChatGPT was unable to generate real URLs when providing suggestions.
- ChatGPT was, to some extent, able to parse code from GitHub repositories.
- ChatGPT knows more about plants than I would have expected, but is entirely wrong in its stance that calatheas are an easy indoor plant to take care of when my apartment is subject to someone else’s whims for the temperature (and thus, humidity).
- when discussing pets, ChatGPT had a more “humane” slant and often attached disclaimers to the advice it was issuing.
- the stylistic language is quite flexible: ChatGPT was able to provide purple prose, scientific report, and various other styles for its responses.
- ChatGPT in its free form is very confident, and left-leaning based on the questions I asked of it.
- the history of each conversation may influence the subsequent responses.
I have not had the chance to interact with versions of ChatGPT where the “confidence” can be dialed up or down. I think lower confidence (less safe answers) could provide some interesting discussion points when thinking about what is common in research and what the next steps should be.
The major downfall I noticed was the necessity to create an account, and likely hand over a bunch of other information. Nothing is truly free, after all.
