Homer’s Iliad dramatically opens with Achilles’ anger and its destructive wake: “Sing, O goddess, the anger of Achilles son of Peleus, that brought countless ills upon the Achaeans. Many a brave soul did it send hurrying down to Hades, and many a hero did it yield a prey to dogs and vultures…”
In light of its destructive outcomes, Stoic philosophers, such as Seneca, urge us to mitigate and avoid anger. Seneca emphasizes the harm to oneself and others, comparing an angry person to a “falling rock which breaks itself to pieces upon the very thing which it crushes” (On Anger, Part 1). Martha Nussbaum emphasizes the relative superiority of other emotions (generosity and love) as responses to injustice and wrongdoing, warning of the moral pitfalls that anger might lead us to. In further considering the moral implications of anger, philosopher Agnes Callard observes that “there are two problems with anger: it is morally corrupting, and it is completely correct” (here). Myisha Cherry offers some suggestions about how to categorize and understand different kinds of anger and rage, and identifies instances in which rage is not only appropriate, but also instructive and important. Please listen to this episode of Philosophy Bites where Cherry highlights some of the different internal and external implications of anger and rage (if you would prefer an alternative to listening, here is a New Yorker piece by Cherry).
As you listen, here are some questions to consider:
What kinds of mental states are emotions, and what are they for?
Some philosophers treat emotions as ways to understand and relate to the world and our evidence about it, and, further, as a means of instigating actions that advance our interests. On this framework, emotions can be understood as appropriate and fitting (and not just as predictable and inevitable given certain conditions). How does that cohere with your own experiences of emotions and the emotions of others?
Do you think of anger and rage as something to be managed, mitigated, and avoided? What kinds of things do you think it is appropriate to be angry about? Or angry at? When has anger served you poorly? When has it served you well? How does anger help or harm humans more generally?
Finally, Myisha Cherry makes the case for the importance of Lordean rage, a response to oppression that spurs productive and effective action and is rooted in community solidarity. In outlining this type of rage, she focuses on four important aspects of an emotion: its target, its action tendency, its aims, and the perspective informing it. How can using this framework help us understand rage (specifically), or other emotions more generally?
Please join us Monday, September 26, at 4:30 p.m. in the Strange Lounge of Main Hall for the second installment of the Strange Philosophy Thing of the 22-23 academic year.
The author of this article considers a couple of reasons one might think that linguistics and/or universal grammar isn’t a science. One concerns falsifiability. Karl Popper famously argued that a necessary condition for a hypothesis to be scientific is that it is falsifiable, i.e., it must generate predictions that can be refuted by observation. The challenge for linguistics concerning falsifiability is particularly acute for Noam Chomsky’s universal grammar, according to which human beings have a common innate linguistic capacity which guarantees that all human languages will share certain traits, most notably, recursion, i.e., the ability to form arbitrarily long expressions by repeatedly applying one or more rules of grammar.
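To make the idea of recursion concrete, here is a toy sketch in Python. It is an illustration only, not the sort of formal grammar linguists actually work with: one rule supplies a base sentence, and a second rule embeds any sentence inside a longer one, so repeated application yields expressions of arbitrary length.

```python
# Toy recursive grammar:
#   S -> "the cat slept"
#   S -> "Mary said that " + S
# Applying the second rule repeatedly produces arbitrarily long,
# well-formed expressions; this is the sense of "recursion" at issue.

def sentence(depth: int) -> str:
    """Build a sentence with `depth` levels of clausal embedding."""
    if depth == 0:
        return "the cat slept"
    return "Mary said that " + sentence(depth - 1)

for d in range(3):
    print(sentence(d))
# the cat slept
# Mary said that the cat slept
# Mary said that Mary said that the cat slept
```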
Chomsky’s view faces a challenge due to the existence of languages like Pirahã, which do not appear to feature recursion. The author notes that, in response to being questioned about the falsifiability of universal grammar, Chomsky himself has claimed that universal grammar shouldn’t be understood as a scientific hypothesis but instead as a field of study, like physics, with the claim of innate universality being a guiding assumption, akin perhaps to the presumption in physics that physical objects exist. So, if asked about the implications a language like Pirahã has for universal grammar, rather than trying to argue that it does in fact contain recursion, or to explain it away as irrelevant data, Chomsky would presumably reject the assumption implicit in the question that universal grammar is a scientific hypothesis.
The author of the article points out another reason one might think that linguistics isn’t a science: the peculiarity of the data from which linguists draw many of their conclusions. In particular, the sorts of sentences linguists use to draw certain conclusions are heavily contrived. In many cases, neither the grammatically incorrect nor the grammatically correct sentences relied upon are sentences one would hear in normal discourse. The author connects the contrivance of these sentences to idealization in other scientific domains we might be familiar with, e.g., in the formulation of various laws, like v = 9.8 m/s² × t (the velocity of a body falling from rest after t seconds), in which we abstract away from the friction produced by air resistance.
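To see the parallel the author has in mind, here is a small Python sketch comparing the idealized law with a toy model that includes air resistance. The linear-drag model below is an assumption introduced for illustration, not anything from the article:

```python
import math

G = 9.8  # gravitational acceleration, m/s^2

def v_ideal(t: float) -> float:
    """Idealized law: free-fall velocity, ignoring air resistance."""
    return G * t

def v_drag(t: float, m: float = 1.0, k: float = 0.5) -> float:
    """Toy linear-drag model (an assumed illustration): solving
    m*dv/dt = m*G - k*v gives v(t) = (m*G/k) * (1 - exp(-k*t/m))."""
    return (m * G / k) * (1.0 - math.exp(-k * t / m))

for t in (1.0, 5.0, 20.0):
    print(f"t = {t:4.1f} s   ideal: {v_ideal(t):6.1f} m/s   drag: {v_drag(t):5.1f} m/s")
# The idealization tracks the drag model at first but diverges as the
# body approaches terminal velocity (m*G/k); abstracting away friction
# trades accuracy for a simpler, more general law.
```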
The author’s discussion of these issues raises a few clusters of questions:
1. Must a science be falsifiable? Is universal grammar falsifiable, given the approach notable adherents use to deal with Pirahã? Is linguistics in general falsifiable?
2. Should universal grammar be understood as a field of study rather than a scientific hypothesis? Should it be taken as a methodological assumption or does it need to be confirmed empirically? If the former, then what should one make of the existence of Pirahã?
3. What is the role of idealization in science? Is the way in which idealization figures into linguistic data, as the author describes it, the same as (or at least relevantly similar to) the way in which it figures elsewhere, e.g., in the abstraction away from air resistance in the law v = 9.8 m/s² × t?
See you on Monday for discussion of these questions and refreshments!
Please join us Monday, September 19, at 4:30 p.m. in the newly renovated Strange Lounge of Main Hall for the return of the Strange Philosophy Thing.
Language models are computational systems designed to generate text by, in effect, predicting the next word in a sentence. Think of the text-completion function on your cell phone. Large language models (LLMs) are language models of staggering complexity and capacity. Consider, for example, OpenAI’s GPT-3, which, Tamkin and Ganguli note, “has 175 billion parameters and was trained on 570 gigabytes of text”. This computational might gives GPT-3 a still-unknown number of capabilities; unknown, because they are uncovered only as users type requests for it to do things. Among its known capabilities are summarizing lengthy blocks of text, designing an advertisement from a description of the product, writing code to accomplish a desired effect, participating in a text chat, and writing a term paper or a horror story. Early users report that GPT-3’s results are passably human. And LLMs are only going to improve. Indeed, artificial intelligence researchers expect LLMs to serve a central role in attempts to create artificial general intelligence. Our discussion on Monday (9/19) will focus on two aspects of this research, taken up under the two headings below.
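Before turning to those, it may help to make “predicting the next word” concrete. The sketch below uses a hand-built bigram table and greedy selection; the table and its probabilities are invented for illustration and share only the shape of the task with a real LLM like GPT-3:

```python
# A minimal, hypothetical sketch of next-word prediction. Real LLMs
# learn billions of parameters over subword tokens; this toy just
# shows the task: given the words so far, pick a likely next word.
BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "slept": 0.3},
    "sat": {"down": 1.0},
}

def next_word(prev):
    """Greedily return the most probable continuation, if any."""
    options = BIGRAMS.get(prev)
    return max(options, key=options.get) if options else None

def complete(prompt, max_words=5):
    """Extend a prompt one predicted word at a time."""
    words = list(prompt)
    for _ in range(max_words):
        nxt = next_word(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return words

print(" ".join(complete(["the"])))  # the cat sat down
```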
Are LLMs genuinely intelligent?
The issues we will discuss this week are alluded to in John Symons’ recent article in the Return. To frame our discussion about the general intelligence of LLMs, we might consider the following thought experiment, as discussed by Symons:
Alan Turing had originally conceived of a text-based imitation game as a way of thinking about our criteria for assigning intelligence to candidate machines. If something can pass what we now call the Turing Test; if it can consistently and sustainably convince us that we are texting with an intelligent being, then we have no good reason to deny that it counts as intelligent. It shouldn’t matter that it doesn’t have a body like ours or that it wasn’t born of a human mother. If it passes, it is entitled to the kind of value and moral consideration that we would assign to any other intelligent being.
LLMs either already pass the Turing Test or, it is plausible to suppose, soon will. Are we comfortable with the conclusions Symons derives from this fact: that LLMs “count as intelligent” and are “entitled to…moral consideration”? Perhaps we should instead reappraise the Turing Test itself, as a recent computer science paper suggests we do. If LLMs can pass the test merely by reflecting the intelligence of the tester, then perhaps the true test is to have LLMs converse with one another and see whether we judge them intelligent from outside the conversation.
Are LLMs making us less intelligent?
A second, and more important, theme in Symons’ article is the role that LLMs might play in making us less intelligent. Symons’ claims here are built on his own observations about LLMs. As he writes:
I tried giving some of these systems standard topics that one might assign in an introductory ethics course and the results were similar to the kind of work that I would expect from first-year college students. Written in grammatical English, with (mostly) appropriate word-choice and some convincing development of arguments. Generally, the system accurately represented the philosophical positions under consideration. What it said about Kant’s categorical imperative or Mill’s utilitarianism, for example, was accurate. And a discussion of weaknesses in Rawlsian liberalism generated by GPT-3 was stunningly good. Running a small sample of the outputs through plagiarism detection software produced no red flags for me.
Symons notes, rightly it seems, that as the technology progresses, students will be tempted to use LLMs to turn in assignments without any effort. Instructors, including college professors, will be unable to detect that LLMs, rather than students, generated the fraudulent assignments. But while this might seem a convenient way to acquire a degree, Symons argues that it would undermine the worth of the education the degree is meant to convey. For, or so at least Symons maintains, learning to write is learning to think, and deep thought is possible only when scaffolded by organized prose. Students completing writing assignments, then, are learning to think, and when they rely on LLMs (or other means) to avoid writing assignments, they are cheating themselves of future competent thought. Focusing on the discipline of philosophy in particular, Symons writes:
…most contemporary philosophers aim to help their students to learn the craft of producing thoughtful and rationally persuasive essays. In some sense, the ultimate goal of the creators of LLMs is to imitate someone who has mastered this craft. Like most of my colleagues who teach in the humanities, philosophers are generally convinced that writing and thinking are connected. In some sense, the creators of GPT-3 share that view. However, the first difference is that most teachers would not regard the student in their classroom as a very large weighted network whose nodes are pieces of text. Instead, the student is regarded as using the text as a vehicle for articulating and testing their ideas. Students are not being trained to produce passable text as an output. Instead, the process of writing is intended to be an aid to thinking. We teach students to write because unaided thinking is limited and writing is a way of educating a student in the inner conversation that is the heart of thoughtful reflection.
These passages might constitute a jumping-off point for our second topic of discussion. Ancillary questions here might include: (1) What is the purpose of writing assignments in college? (2) To what extent is complex thought possible without language? (3) How can we design education in such a way as to produce the best possible thinkers and writers?
I look forward to seeing you all on Monday at 4:30 for a fun, informal discussion, open to everyone! Philosophy professors and snacks provided.