Artificial Intelligence & LLMs
A Slightly Opinionated Discussion about Education and Technology
You can find the formal AI policy for this course in the “Policies” section of the syllabus/website. This discussion is focused on helping you understand and use AI most productively on your work in this course, not on the formal policy.
AI and Learning
Artificial intelligence tools – in particular, large language models (LLMs) like ChatGPT – are incredibly useful. To get the mundane observation out of the way: they are likely to radically change the way we all work over the next several years, just like a variety of other technologies before them (see: the internet, the smartphone, the personal computer, electricity, the steam engine, the printing press, etc.). AI already has a ton of valuable use cases, including in education, data analysis, and research, and the possibilities are only growing.1 There are also lots of concerns about AI’s destructive potential, especially for higher education.2
I do not think AI is the end of higher education. However, I do think that, for you as a student to make the most effective use of AI, it’s important to understand exactly what AI is and how to use it in a way that is both (1) effective and (2) does not hamstring your learning. Current AI models are what are known as Large Language Models, or LLMs. An LLM, at its core, is a probabilistic model whose primary function is to produce the next token – which you can think of as a word – that is most likely to make sense, based upon some prompt. It is not an “intelligence” like a human is, and it does not care whether that token represents truth, reality, or a functional line of code – it only cares whether the next token seems to make sense. Frequently, what it produces will be incidentally true and/or functional. That is what makes it useful! Occasionally, it will also produce nonsense (what we know as a “hallucination”). LLMs have advised people to eat rocks and to hold the cheese on their pizza with glue, failed to perform simple arithmetic with decimals, and invented bibliographic references, statistical libraries, and analysis results out of whole cloth.
Furthermore, this incidental relationship to the real world is by design – it is inherent to the way LLMs work. They don’t verify what they say against base reality; they just spit out what sounds good to probabilistic algorithms.3
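To make “produce the most likely next token” concrete, here is a deliberately tiny sketch in Python. The probability distribution below is invented purely for illustration – a real LLM learns billions of such probabilities from text – but the core move is the same: sample whatever token is statistically plausible, with no step anywhere that checks the answer against reality.

```python
import random

# A toy "language model": a made-up probability distribution over
# plausible next tokens for a single context. (These numbers are
# invented for illustration; a real LLM learns its probabilities
# from vast amounts of text.)
next_token_probs = {
    "the capital of France is": {"Paris": 0.90, "beautiful": 0.08, "Lyon": 0.02},
}

def next_token(context: str) -> str:
    """Sample a next token in proportion to its probability.

    Note what is absent: no lookup against reality, no fact-checking.
    The only question asked is "what token usually follows this context?"
    """
    dist = next_token_probs[context]
    return random.choices(list(dist), weights=list(dist.values()), k=1)[0]

print(next_token("the capital of France is"))
# Usually "Paris" -- incidentally true, and therefore useful!
# But nothing prevents the occasional "Lyon" -- a fluent, confident falsehood.
```

Scale that idea up by many orders of magnitude of learned probabilities and, to a first approximation, you have an LLM. Notice that truth never enters the objective anywhere.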
LLMs are, in a philosophical sense, bullshit of the first order. And yes: “bullshit” here is a technical academic term, meaning “language produced without regard for truth.”4
“The problem here isn’t that large language models hallucinate, lie, or misrepresent the world in some way. It’s that they are not designed to represent the world at all; instead, they are designed to convey convincing lines of text.” (Hicks, Humphries, and Slater, 2024)
And yes, for those interested, the fact that this is from an academic research paper is what makes your normally proper and somewhat profanity-averse professor feel gleefully authorized to use the term “bullshit” in a course syllabus.5 There is a more diplomatic and professional-sounding word that some use for the flood of misguided AI output: slop, which most use to mean the explosion of low-quality AI material. In short, they mean exactly what philosophy so artfully frames as bullshit. But with such a pithy and expressive alternative available to us in the English language, why limit ourselves to the banal-sounding “slop”?
This incidental relationship LLMs have with truth is also what makes relying on an LLM dangerous. You can’t reliably evaluate LLM outputs until you already know enough about a subject to tell good use of language from bad. However, in the very beginning phases of learning how to write code, conduct data analysis, and interpret it, the problem is precisely that you don’t yet know enough about the subject to reliably evaluate the quality of AI outputs.
Furthermore, using LLMs early on in your learning can hamper your ability to understand the fundamental conceptual elements of programming and statistics.6 Many of your assignments for this class, for instance, are intentionally designed to give you practice with beginning concepts – concepts that ChatGPT can handle in a trivially easy fashion. Practice is what gives you familiarity, comfort, and working knowledge, but practice doesn’t happen if you just copy language or code that an AI gives you.
Even more problematically, it is these beginning concepts which, when well understood, will eventually enable your most effective use of this new class of tools for statistical and data work! Accordingly, if you elect to use generative AI to help you with assignments, I strongly encourage you to use it intelligently, rather than carelessly. Use it to help you learn, not to replace your learning; think of it like an interlocutor or teaching assistant, not like an answer bot. Even if plugging an assignment prompt directly into ChatGPT gives you an immediate shortcut, it is likely to put you at a long-term disadvantage, even when using AI, compared to students who learned the old-school way.7
Remember: whenever you use AI tools for a project you attach your name to, you are still taking implicit responsibility for that project, product, or document. This means that it is incumbent upon you to do the follow-up work necessary to validate, verify, and confirm that the language and work produced by the machine that works without regard for truth is accurate and functional. AI is not a coauthor,8 and I promise you that OpenAI, Google, Anthropic, Mistral, etc. won’t take responsibility for whatever you produce with their tools – that’s on you.9
Just like any other tool or source you use, all use of AI in anything you turn in for this class should be documented and identified in your work. If you used AI to help you write code, note that you used AI to help you write code. If you used it to help you write analysis, note how you used it to help you write analysis.
Suggestions and Guidelines: AI usage in our course
For the reasons above, I would strongly encourage you to be skeptical and not overuse AI as you learn. My job, goal, and interest in this class is to help you, my human student, learn how to do valuable things: write code; wrangle and visualize data; critically interpret information; and become a more effective citizen and human engaging in data analysis, management, and policy study. My interest is not in correcting AI slop,10 nor does my interest lie in trying to figure out whether you are attempting to pass unadulterated AI slop11 off as your own work.12 After all, you are allowed to use AI on assignments, if you wish, and I encourage you to do so in productive and valuable ways that enhance your learning, rather than as a way to cheat. Your education, as always, is only as valuable as what you put into it.
With all of that said, poor uses of AI are often easy for me, your professor, to spot, and they earn bad grades regardless.13 Some examples:
- Students who turn in papers and problem sets that reference methodological tools or techniques far beyond the understanding of the material they demonstrate in person or in in-class tests and assignments. (In an introductory research methods class, for instance, I’ve seen many students propose methods they wouldn’t encounter until their third or fourth semester of statistics.)
- Students who successfully complete coding assignments, but do so in ways that are overly verbose and complicated relative to simpler and more straightforward methods and techniques covered in class.
- Students who authoritatively reference scholars who don’t exist, or who authoritatively cite real scholars as the authors of articles or books those scholars never wrote.
- Students who turn in assignments that match the style of documents well represented in an LLM’s training set (like a five-paragraph essay, for instance) but do so for an assignment where the directions clearly called for something else (like, say, a Socratic dialogue).
If you do such things in your work, you will find that I take points off – not necessarily because you used AI, but because you’ve made poor use of AI, and the work doesn’t meet the standards of the class.
I would also encourage you not to make the mistake of confusing assessment with learning.14 Assessment is an obligation of your professor to measure how well you know class material, but assessment is not the end goal of education any more than taking your blood pressure is the end goal of going to a physician’s office. The goal of an education is for you to learn and enhance your critical thinking in ways that will enrich your personal and professional lives. Don’t mistake the means for the end: I don’t care about assessing what ChatGPT “thinks” about something any more than your physician cares about measuring ChatGPT’s blood pressure, and I would suggest that you probably don’t care much about that, either.
Use AI like a tool, keep your thinking your own, and remember: your education is about your own enrichment, not about passing some tests.
Further Resources and References
If you’ve read this far, first: well done! I both appreciate and applaud your interest. Second, you might be interested in some further readings and resources on AI, its use, its societal implications, and how we might think about public and private policy surrounding AI. The first place I might point you is to courses available to you at Maxwell and Syracuse more broadly: I teach a course in AI Policy, Professor Himmelreich teaches a class on Data Ethics, and Professor Zhang teaches a class on AI Governance and Politics. There is also a broader set of AI-related things happening at Syracuse, including events through the Autonomous Systems Policy Institute (ASPI), resources organized by ITS, and many more.
A few other things I’d recommend:
- Melanie Mitchell, professor at the Santa Fe Institute and one of the most nuanced thinkers about AI I know, has a series of articles that are well worth reading as well as an excellent podcast she hosted on AI. She also was interviewed for the excellently titled “How Your Cheese-Powered Baby Trounces AI”.
- Andrew Heiss, a professor in public management and policy at Georgia State’s Andrew Young School, has a digression on AI and education that’s worth looking through, including the sources and footnotes.
- Gary Marcus’ substack is generally critical of current AI technology, but a good place to keep up on current AI trends and hype. Luiza Jarovsky’s substack is a good place to read about AI governance.
- If you’re more interested in the business and tech industry sides of AI, Benedict Evans and Ben Thompson are worth adding to your bookmarks-slash-RSS reader-slash-email inbox-slash-future mechanism for keeping track of internet news that hasn’t yet been invented.
Finally, as I’m always eager to know how far students actually read syllabi: if you’ve made it to this point and are still reading about AI and my thoughts on it, send me an email with a recommendation for your favorite artificial-intelligence-related novel, movie, or TV show. I’ll give you a bonus point on your next test.
Footnotes
I, for instance, used AI tools to help me build this course website, and I would guess that I was able to do it about twice as quickly as I would have been able to without AI.↩︎
For just a small sampling, see here, here, here, and here.↩︎
Generally speaking, at least. If you’re paying attention to some of the details with AI, you know about something called an LRM – or Large Reasoning Model. To a first approximation, these try to introduce better and more complex reasoning into the base prediction process, sometimes going so far as to do things like generate Python code to do math analysis. These “reasoning” models, however, still often fall short on relatively simple reasoning tasks (see Apple, 2025, The Illusion of Thinking).↩︎
Frankfurt, 2005, On Bullshit; Hicks, Humphries, and Slater, 2024, ChatGPT is Bullshit. For those interested in a full taxonomy, Frankfurt helpfully distinguishes between bullshit and any number of additional not-quite-truthisms, including humbug, hot air, bluff, bravado, and lying.↩︎
For another excellent example of profanity being used to elegantly make forceful points in academic settings, see Healy, 2017, “Fuck Nuance”, which genuinely contains very good advice on writing. (Although to be fair, you can also get most of the point of that article just from the title . . . which is also the point.)↩︎
See, for instance, Lehmann, Cornelius, and Sting, 2024.↩︎
Insert proper “value of doing the tough work” metaphor here: eating your vegetables, walking uphill both ways to school in a snowstorm, learning to program in assembly before C, etc.↩︎
Many publications, for instance, have explicit and clear instructions that prevent you from cheekily listing ChatGPT or Claude as a coauthor – even if they allow for the use of ChatGPT in the production of the manuscript. See, for instance, this discussion in regards to journals published by Springer-Nature.↩︎
A good parallel here is Microsoft Office: if you write some nonsensible hot garbage in Word, Microsoft is never going to take responsibility for any of that business. They’ll (correctly) note that it was just some dumb human that wrote it using their tool.↩︎
An arduous, unending, and unenviable task, like Sisyphus with his rock.↩︎
Accordingly, I will generally not spend time trying to guess whether your assignments are improperly AI generated, nor will I typically engage with technology that purports to do this for me. The technical tools that try to detect LLM writing are imperfect anyway, resulting in a lot of uncertainty and false positives.↩︎
Good, and effective, AI usage often has the property of being largely undetectable, even to a trained eye – that’s often part of what makes it effective use of the tool, actually!↩︎
An all too common mistake made today by many people, for the record. Much of the pearl-clutching around AI in education is really focused on how we can know or detect whether or not students are “cheating” on tests and assignments, and less about how we can actually integrate AI meaningfully into the learning process.↩︎