Creativity: What is it? Can we improve it?

Join us for the Strange Philosophy Thing today (11/14), at 4:30 p.m. for refreshments and some fun, informal philosophy talk. Today’s topic: Creativity.

Creativity is among the most valued attributes in a person. Corporations rank creativity as the most sought-after attribute in potential employees. And each of us values conversation with a witty, creative interlocutor. But what is creativity? How is it realized by the mind? How can we develop it? And what aspects of contemporary life impede it?

In a short, recent article, the philosopher of mind Peter Carruthers distinguishes two kinds of creativity: online and offline. “Online creativity,” Carruthers writes, “happens ‘in the moment,’ under time pressure.” Importantly, online creativity is the kind of creativity exhibited in a task with which one is currently engaged. It is exhibited, for example, when the person with whom you’re talking says something witty or unexpected. It is also exhibited when the jazz musician offers a brilliant and unforeseen elaboration on her bandmate’s musical motif. Offline creativity, on the other hand, “happens while one’s mind is engaged with something else”. Here is a memorable example of offline creativity from the 20th-century physicist and public intellectual Carl Sagan:

I can remember one occasion, taking a shower with my wife while high, in which I had an idea on the origins and invalidities of racism in terms of Gaussian distribution curves. It was a point obvious in a way, but rarely talked about. I drew curves in soap on the shower wall and went to write the idea down. One idea led to another, and at the end of about an hour of extremely hard work I had found I had written eleven short essays on a wide range of social, political, philosophical, and human biological topics…I have used them in university commencement addresses, public lectures, and in my books.

Carruthers’ primary focus is on offline creativity of the sort Sagan describes, which, Carruthers writes, “…correlates with real-world success in most forms of art, science, and business innovation”. Carruthers claims that this sort of creativity is realized by an interplay between two distinct mental capacities for attention. One of these capacities is described by psychologists as a top-down system. Sometimes metaphorically characterized as a spotlight, top-down attention is the active, deliberate attending by an individual to her occurring mental states. Such attention is sufficient to bring mental states to consciousness. Think about deliberately attending to the distant, percussive sound of a jackhammer that you’d been successfully ignoring—suddenly the sound, which had been heretofore unconscious, rises to consciousness. The other capacity is a bottom-up process, which Carruthers characterizes as the ‘relevance system’. In addition to our own willful direction of attention, it seems that processes within our minds constantly assess the relevance of non-conscious mental states to “current goals and standing values”. This bottom-up system puts forward those mental states it deems relevant as candidates for top-down attentional focus. So, for example, you may have a standing value to be polite to people you know. Attuned to this value, the bottom-up system promotes your perceptions of familiar faces in a crowd for top-down attentional focus.

Carruthers contends that this interplay between bottom-up and top-down attention is essential to offline creativity, writing that:

If you also have a background puzzle or problem of some sort that you haven’t yet been able to solve, chances are the goal of solving it will still be accessible, and will be one of the standards against which unconsciously-activated ideas are evaluated for relevance. As a result, the solution (or at least, a candidate solution) may suddenly occur to you out of the blue. Creativity doesn’t so much result from a Muse whispering in your ear as it does from the relevance system shouting, “Hey, try this!”.

But Carruthers contends that such offline creativity is impeded by activities that focus our attention. The best route to offline creativity, he contends, is to let your mind be offline sometimes. But contemporary life—encompassing, for example, the ever-presence of smartphones and social media—makes this difficult. Carruthers characterizes the situation as follows: “Since there is no room for minds to wander if attention is dominated by attention-grabbing social-media posts, that means there is less scope for creativity, too.”

What do we think? Are there really two types of creativity? Are they “anti-correlated,” as Carruthers suggests? Has creativity really been declining over recent centuries? How is creativity impeded, and can we devise ways to make it flourish? Are there any trade-offs to a more creative society?

Ethics, Aesthetics, and Ontology in Video Games

Join us for the Strange Philosophy Thing tomorrow (11/07), at 4:30 p.m. for refreshments and some fun, informal philosophy talk. We’ll be discussing three aspects of video games: the ethics of virtual actions (do ethical codes differ in games?), the aesthetics of video games (can a video game be interactive art?), and the ontology of video games (Heraclitus might have said that you can’t play the same video game twice; we’ll consider the identity conditions for games and the status of their objects).

Here is a 1000-word essay that tackles each of these questions: https://1000wordphilosophy.com/2021/09/26/videogames-and-philosophy/

The Paradox of Horror

Join us for a SpOoKy edition of the Strange Philosophy Thing tomorrow, Halloween (10/31), at 4:30 p.m. for refreshments and some fun, informal philosophy talk. Grab a treat from our candy stash and get ready to discuss:

Topic: The Paradox of Horror


Consider the following three claims:
1. Horror stories and movies cause negative emotions in the audience (e.g., fear, disgust).
2. We want to avoid things that cause negative emotions.
3. We do not want to avoid horror stories and movies.

Though each of these claims seems intuitively plausible, they cannot all be true. For example, if claim 1 is true and horror causes negative emotions, and if claim 2 is true and we want to avoid things that cause negative emotions, then it seems to follow that we would want to avoid horror. But that’s inconsistent with claim 3. On the other hand, if, as claim 2 has it, we want to avoid things that cause negative emotions, and if, as claim 3 has it, we do not want to avoid horror, then, presumably, horror doesn’t cause negative emotions. But that’s inconsistent with claim 1. So, it seems that, in order to avoid inconsistency, we must reject (or modify) one of these claims. Which should we reject?

You need not prepare in any way in order to attend the Strange Thing discussion. However, if you’re interested, there’s a wealth of info on the paradox of horror available from across the web. Here’s a short video on the topic by, perhaps, the premier philosopher of horror, Noël Carroll. We discussed this topic back in 2017 and Professor Armstrong provided a great write-up here. And, finally, for a look at the neuroscience of horror, check out this recent audio interview with Nina Nesseth, author of the book Nightmare Fuel: The Science of Horror Films.

See you tomorrow! And, why not bring a friend, to kick off All Hallows’ Eve in style, with philosophy!

Post-Theory Science

Join the philosophy faculty and other interested parties for the next installment of the 22–23 Strange Philosophy Thing. The Strange Thing meets in the Strange Lounge (room 103) of Main Hall. We gather there most Mondays of the fall and winter terms from 4:30–5:30 for refreshments and informal conversation around a given topic. It’s a lot of fun!

I take my inspiration for this week’s Strange Thing from this article from The Guardian, by Laura Spinney. Reading it is optional. I’ll hit the most important points, and highlight the most important questions the article raises, quoting some useful passages along the way.

A traditional and familiar strategy for carrying out scientific research is to explain a type of phenomenon by formulating a theory—a hypothesis—that predicts some particulars of that type of phenomenon, and then running experiments to confirm that those predictions are accurate. Newton, for example, hypothesized that gravity operates, in the absence of air resistance, according to his famous inverse-square law. And sure enough, he was able to derive Kepler’s empirically well-confirmed laws of planetary motion from it. It’s easy to miss the fact that a lot of scientific practice—especially recently, with the use of AI—differs drastically from this picture. As Spinney notes,

“Facebook’s machine learning tools predict your preferences better than any psychologist [could]. AlphaFold, a program built by DeepMind, has produced the most accurate predictions yet of protein structures based on the amino acids they contain. Both are completely silent on why they work: why you prefer this or that information; why this sequence generates that structure.”

They are silent on why they work because of the opacity of what goes on inside the neural nets that generate the predictions from ever-growing datasets.

Spinney grants that some science is, at this stage at least, merely assisted by AI. She notes that Tom Griffiths, a psychologist at Princeton, is improving upon Daniel Kahneman and Amos Tversky’s prospect theory of human economic behavior by training a neural net on “a vast dataset of decisions people took in 10,000 risky choice scenarios, then [comparing it to] how accurately it predicted further decisions with respect to prospect theory.” What they found was that people use heuristics when there are too many options for the human brain to compute and compare the probabilities of. But people use different heuristics depending on their different experiences (e.g., a stockbroker and a teenage bitcoin trader). What is ultimately generated via this process doesn’t look much like a theory but, as Spinney describes, ‘a branching tree of “if… then”-type rules, which is difficult to describe mathematically’. She adds,

“What the Princeton psychologists are discovering is still just about explainable, by extension from existing theories. But as they reveal more and more complexity, it will become less so – the logical culmination of that process being the theory-free predictive engines embodied by Facebook or AlphaFold.”
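The ‘branching tree of “if… then”-type rules’ Spinney describes can be pictured with a toy sketch (purely illustrative; the thresholds, rules, and gambles below are invented, and bear no relation to the Princeton group’s actual model):

```python
# Toy sketch of heuristic "if ... then" decision rules of the kind
# Spinney describes: when there are too many gambles to compare
# exactly, people fall back on simple shortcuts.
# All thresholds and rules here are invented for illustration.

def choose(gambles):
    """Each gamble is a (probability, payoff) pair; return the pick."""
    if len(gambles) <= 2:
        # Few options: compare expected values exactly.
        return max(gambles, key=lambda g: g[0] * g[1])
    elif any(p > 0.9 for p, _ in gambles):
        # Many options, but one is near-certain: take the safest bet.
        return max(gambles, key=lambda g: g[0])
    else:
        # Otherwise: grab the biggest payoff and ignore probabilities.
        return max(gambles, key=lambda g: g[1])

print(choose([(0.5, 100), (0.9, 40)]))              # expected-value branch
print(choose([(0.2, 500), (0.95, 10), (0.5, 80)]))  # safest-bet branch
```

Even this tiny tree is awkward to express as a single equation, which gestures at why, as the complexity grows, such rule systems become “difficult to describe mathematically.”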

Of course, there is a concern about bias in AI, particularly in systems fed small or biased datasets. But can’t we expect the relevant datasets eventually to become so large that bias isn’t an issue, and thus that the accuracy of their predictions will only grow? Will science look more and more like this in the future? What does that mean for the role of human beings in science?

This last question can be addressed with the help of the notion of interpretability. Theories are interpretable. They involve the postulation of entities and relations between them, which can be understood by human beings and which explain and predict the phenomena under investigation. But AI-based methods of prediction preclude interpretability due to the opacity of the AI processes by which predictions are generated. Humans don’t observe the weights of the nodes of the neural net changing as new data is fed in. AI-based science thus appears to prevent us from having an explanation of why scientific predictions are accurate, and so presumably precludes us from providing one another with explanations of the phenomena under investigation. There are at least two sorts of reasons to be leery of non-interpretability. One is practical, the other theoretical.

On the practical side, Bingni Brunton and Michael Beyeler, neuroscientists at the University of Washington, Seattle, noted in 2019 that “it is imperative that computational models yield insights that are explainable to, and trusted by, clinicians, end-users and industry”. Spinney notes a good example of this:

‘Sumit Chopra, an AI scientist who thinks about the application of machine learning to healthcare at New York University, gives the example of an MRI image. It takes a lot of raw data – and hence scanning time – to produce such an image, which isn’t necessarily the best use of that data if your goal is to accurately detect, say, cancer. You could train an AI to identify what smaller portion of the raw data is sufficient to produce an accurate diagnosis, as validated by other methods, and indeed Chopra’s group has done so. But radiologists and patients remain wedded to the image. “We humans are more comfortable with a 2D image that our eyes can interpret,” he says.’

On the theoretical side, is such understanding important for its own sake, in addition to or solely because of practical considerations like the one mentioned above? Spinney suggests that AI-based science circumvents the need for human creativity and intuition.

“One reason we consider Newton brilliant is that in order to come up with his second law he had to ignore some data. He had to imagine, for example, that things were falling in a vacuum, free of the interfering effects of air resistance.”

She adds,

‘In Nature last month, mathematician Christian Stump, of Ruhr University Bochum in Germany, called this intuitive step “the core of the creative process”. But the reason he was writing about it was to say that for the first time, an AI had pulled it off. DeepMind had built a machine-learning program that had prompted mathematicians towards new insights – new generalisations – in the mathematics of knots.’

Is this reason in itself to be leery of AI-based science? Because it will reduce the opportunity for creative expression and the exercise of human intuition? Would it do this?

Hope to see you Monday for discussion with the philosophy department and refreshments!

Fetal Personhood

Join the Philosophy Department tomorrow, Monday (10/17), in the Strange Lounge of Main Hall (103), 4:30–5:30 p.m. for refreshments and informal conversation. This Monday we will discuss debates surrounding fetal personhood laws.

A Person Seed

For this Strange Philosophy Thing, we’ll be discussing the recent trend of fetal personhood laws. Specifically, we’ll look at possible legal and policy implications, and discuss the ethics of those policies. Here are a few examples of the kind of thing I want to discuss:

Georgia’s state department of revenue has extended the state’s tax exemptions for dependents to include gestating fetuses.  

A Texas woman claimed the fetus she was gestating counted as a second passenger, allowing her to drive in the carpool lane.

Women are charged under child abuse or endangerment statutes, for behaviors including drug use while pregnant. (This specific example, however, does not depend on a fetal personhood law.)


Evaluating the ethics of law and policy is complicated, and doesn’t easily connect to the ethics of the behavior being regulated. Breaking promises is typically wrong, but it would be a mistake to legally require everyone to keep their promises. We only do this in special cases, such as contracts. Driving on the left side of the road is not in itself wrong, but legally prohibiting it is completely appropriate.

Sometimes the ethical status of a law or policy depends on its relation to other things in the hierarchy of laws, such as compatibility with a state’s or the federal Constitution. But if this is all there is to it, then the problem may not be the otherwise sensible subordinate law, but the superior law that it conflicts with. Certain gun control regulations are (arguably) an example of this.

Sometimes a law or policy is intrinsically wrong, such as those that allow partisan gerrymandering. This is wrong by its very nature, since its nature is to disenfranchise equal citizens because of their political viewpoints.

Sometimes a law or policy is instrumentally or contingently wrong. This is when there is nothing intrinsically wrong with the policy, but the effects it has in the world are unacceptable. An example of this is voter ID laws. There is nothing wrong in principle with requiring identification to vote. Voter fraud is worth preventing because each case of voter fraud functionally removes one valid vote. But in practice, these laws have several negative effects. First, they prevent more eligible voters from voting than they prevent cases of voter fraud. So when we look at the rationale for such laws (the ethically acceptable rationale, whether or not it’s the actual motivating one), the laws are self-defeating: it’s worse to remove many valid votes through ID requirements than to neutralize fewer through fraud. Second, the discouraging or preventing effects are concentrated on already marginalized groups of people (racial and ethnic minorities, lower economic classes).

Sometimes a policy that would be otherwise problematic is actually fine because of the outcomes it has. One example might be the pandemic relief funds that were sent out to individuals (oversimplified to serve the example). It’s problematic (regressive economic policy) to send the same amount of money to people at all points on the economic spectrum, because a large portion of that money is not needed and so is wasted. It would be better to target the relief to those who would benefit from it the most, since this would help them more. But this targeting would be costly, difficult, and slow, and the people we want to shield from harm would be harmed while we figured it out. So, it is better overall to give the economic help faster but less efficiently targeted.

Or, it might not be a single law or policy that is problematic, but a cluster of them that are together problematic. Consider: while the right to abortion was constitutionally protected, Louisiana passed a law requiring abortion providers to have admitting (patient transfer) privileges at a hospital within 30 miles. While that’s a single law (deemed unconstitutional at the time for lack of medical benefit, violating the undue burden requirement on abortion restrictions), we could imagine passing a second law that prohibited hospitals from giving admitting privileges to abortion providers. Even if we didn’t find either law objectionable in isolation, we might find the conjunction objectionable. (This isn’t the cleanest example, but it was the easiest (realistic) one for me to think of at this point.)

I want to consider implications of fetal personhood laws in this light (or, the last four lights). Or, the implications of other laws and policies in light of fetal personhood laws.

What policies might have intrinsically problematic effects?
What are the points and purposes of these laws? How do the effects of these policies (or the ones we should expect them to have) relate to those rationales?
Are there any effects that are problematic, independent of how they relate to the policy’s rationale?

Take the suggestion that driving pregnant makes you eligible for carpool lanes. The point of these laws is to reduce traffic congestion and auto emissions by encouraging people going to and from the same or close places to travel in fewer vehicles. They do this by giving access to a special lane that is (supposed to be) less crowded, allowing you to get where you’re going faster. And so they specify “high occupancy” thresholds, and (from my quick research) these typically specify the number of seats filled rather than the number of persons in the vehicle.

This excludes fetal persons from counting towards HOV requirements for carpool lanes. Is there something intrinsically problematic about excluding this class of person? It’s hard to see what that would be. Is there something contingently problematic about excluding them? By looking at the point of carpool lanes, it seems like there would instead be something problematic about including them. Pregnant people cannot travel separately from the fetuses they gestate, and so having them travel together does not reduce the number of cars on the road (congestion, emissions). This adds a car to the carpool lane (slowing it down), without removing an extra car from the other lanes. So counting them towards HOV eligibility would work against the purpose of the law.

We also have to think about similar cases. What about family or friends who would be driving together even without carpool incentives? It makes sense to allow these passengers because if we did not, sorting out the good cases from the bad cases would be unduly invasive for the purposes of the law. Similarly, if we did count fetuses as passengers for HOV purposes, having HOV levels above two might be problematic because sorting out which people have single pregnancies and which have twins (or more!) would be unduly invasive. (I think this further counts against considering fetuses as separate passengers, rather than counting in favor of capping HOV requirements at two.)

So, I want to talk about policies like the dependent tax exemption and gestational child abuse examples above, but also “safe surrender” (“infant relinquishment”) allowances, and adoption and surrogate pregnancy. The article on the Georgia law also raises questions about child support policies. We might also discuss in vitro fertilization and unused zygotes.

If there are any policies that you think might have interesting (problematic or just unexpected) implications given a fetal personhood law, we can talk about that as well.

“Is a Fetus a Person? The Next Big Abortion Fight Centers on Fetal Rights,” Kelsey Butler and Patricia Hurtado, Bloomberg (linked at Washington Post)

“Georgia says ‘unborn child’ counts as dependent on taxes after 6 weeks,” María Luisa Paúl, Washington Post

“Pregnant woman given HOV ticket argues fetus is passenger, post-Roe,” Timothy Bella, Washington Post

“Pregnant women were jailed over drug use to protect fetuses, county says,” Marisa Iati, Washington Post

“‘Safe Surrender’ or ‘Infant Abandonment’ Legislation,” ACLU

Indigenous People’s Day

The philosophy department will be taking a break from the Strange Thing next week in honor of LU’s Indigenous People’s Day! Please join the celebration at Warch Campus Center 324 (Somerset Room) between 5-7 p.m. on Monday, October 10th. The event will include song, dance, food, and local Native American guest speakers and leaders. It’s open to the entire Fox Valley community, so bring your friends!

(Str)ANGER philosophy thing?

Homer’s Iliad dramatically opens with Achilles’ anger and its destructive wake: “Sing, O goddess, the anger of Achilles son of Peleus, that brought countless ills upon the Achaeans. Many a brave soul did it send hurrying down to Hades, and many a hero did it yield a prey to dogs and vultures…”

In light of its destructive outcomes, Stoic philosophers, such as Seneca, urge us to mitigate and avoid anger.  Seneca emphasizes the harm to oneself and others, comparing an angry person to a “falling rock which breaks itself to pieces upon the very thing which it crushes” (On Anger, Part 1).  Martha Nussbaum emphasizes the relative superiority of other emotions (generosity and love) as responses to injustice and wrongdoing, warning of the moral pitfalls that anger might lead us to.  In further considering the moral implications of anger, philosopher Agnes Callard observes that “there are two problems with anger: it is morally corrupting, and it is completely correct” (here).  Myisha Cherry offers some suggestions about how to categorize and understand different kinds of anger and rage, and recommends instances where rage is not only appropriate, but also instructive and important.  Please listen to this episode of Philosophy Bites where Cherry highlights some of the different internal and external implications of anger and rage (if you would prefer an alternative to listening, here is a New Yorker piece by Cherry). 

As you listen, here are some questions to consider:

  • What kinds of mental states are emotions, and what are they for?

Some philosophers treat emotions as ways to understand and relate to the world and our evidence about it, and further as a means to instigate actions that further our interests.  In this framework emotions can be understood as appropriate and fitting (and not just predictable and inevitable given certain conditions).  How does that cohere with your own experiences of emotions and the emotions of others?

  • Do you think of anger and rage as something to be managed, mitigated, and avoided?  What kinds of things do you think it is appropriate to be angry about? Or angry at?  When has anger served you poorly?  When has it served you well?  How does anger help or harm humans more generally?

  • Myisha Cherry makes the case for the importance of Lordean rage, a response to oppression that spurs productive and effective action and is rooted in community solidarity.  In outlining this type of rage, she focuses on four important aspects of emotion: the target, the action tendency, the aims, and the perspective informing the emotion. How can using this framework help us understand rage (specifically), or other emotions more generally?

Is linguistics a science?

Please join us Monday, September 26, at 4:30 p.m. in the Strange Lounge of Main Hall for the second installment of the Strange Philosophy Thing of the 22-23 academic year.

The author of this article considers a couple of reasons one might think that linguistics and/or universal grammar isn’t a science. One concerns falsifiability. Karl Popper famously argued that a necessary condition for a hypothesis to be scientific is that it is falsifiable, i.e., it must generate predictions that can be refuted by observation. The challenge for linguistics concerning falsifiability is particularly acute for Noam Chomsky’s universal grammar, according to which human beings have a common innate linguistic capacity which guarantees that all human languages will share certain traits, most notably, recursion, i.e., the ability to form arbitrarily long expressions by repeatedly applying one or more rules of grammar.

A Syntax Tree Featuring Recursion
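Recursion in this sense—arbitrarily long expressions generated by repeatedly applying a rule—can be illustrated with a toy sketch (the grammar rule and vocabulary here are invented for illustration, not drawn from Chomsky’s work):

```python
# Toy illustration of grammatical recursion: a rule like
#   NP -> NP "that" VP
# can be applied to its own output, yielding arbitrarily long
# noun phrases from a finite grammar.

def noun_phrase(depth):
    """Embed a relative clause `depth` times inside a noun phrase."""
    if depth == 0:
        return "the cat"
    return noun_phrase(depth - 1) + " that chased the mouse"

print(noun_phrase(0))  # base case: no embedding
print(noun_phrase(2))  # two levels of embedding
```

Each extra application of the rule produces a longer, still-grammatical phrase, which is exactly the property Pirahã is claimed to lack.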

Chomsky’s view faces a challenge due to the existence of languages like Pirahã, which do not appear to feature recursion. The author notes that, in response to being questioned about the falsifiability of universal grammar, Chomsky himself has claimed that universal grammar shouldn’t be understood as a scientific hypothesis but instead as a field of study, like physics, with the claim of innate universality being a guiding assumption, akin perhaps to the presumption in physics that physical objects exist. So, if asked about the implications a language like Pirahã has for universal grammar, rather than trying to argue that it does in fact contain recursion, or to explain it away as irrelevant data, Chomsky would presumably reject the assumption implicit in the question that universal grammar is a scientific hypothesis.

The author of the article points out another reason one might think that linguistics isn’t a science: the peculiarity of the data from which linguists draw many of their conclusions. In particular, the sorts of sentences linguists use to draw certain conclusions are heavily contrived. In many cases, neither the grammatically incorrect nor grammatically correct sentences relied upon are sentences one would hear in normal discourse. The author makes connections between the contrivance of these sentences and idealization in other scientific domains we might be familiar with, e.g., in the formulation of various laws, like the free-fall law v = 9.8 m/s² × t, in which we abstract away from the friction produced by air resistance.

The author’s discussion of these issues raises a few clusters of questions:

1. Must a science be falsifiable? Is universal grammar falsifiable, given the approach notable adherents use to deal with Pirahã? Is linguistics in general falsifiable?

2. Should universal grammar be understood as a field of study rather than a scientific hypothesis? Should it be taken as a methodological assumption or does it need to be confirmed empirically? If the former, then what should one make of the existence of Pirahã?

3. What is the role of idealization in science? Is the way in which idealization figures into linguistic data, as the author describes it, the same as (or at least relevantly similar to) the way it figures in elsewhere, e.g., in the abstraction away from friction in the free-fall law v = 9.8 m/s² × t?

See you on Monday for discussion of these questions and refreshments!

Large language models: Genuinely intelligent or just making us less so?

Please join us Monday, September 19, at 4:30 p.m. in the newly renovated Strange Lounge of Main Hall for the return of the Strange Philosophy Thing. 

Language models are computational systems designed to generate text by, in effect, predicting the next word in a sentence. Think of the text-completion function on your cell phone. Large language models (LLMs) are language models of staggering complexity and capacity. Consider, for example, OpenAI’s GPT-3, which, Tamkin and Ganguli note, “has 175 billion parameters and was trained on 570 gigabytes of text”. This computational might gives GPT-3 an as-yet-unknown number of capabilities—unknown, because they are uncovered only when users type requests for it to do things. Among its known capabilities are summarizing lengthy blocks of text, designing an advertisement based on a description of the product, writing code to accomplish a desired program effect, participating in a text chat, and writing a term paper or a horror story. Early users report that GPT-3’s results are passably human. And LLMs are only destined to improve. Indeed, artificial intelligence researchers expect LLMs to serve a central role in attempts to create artificial general intelligence. Our discussion on Monday (9/19) will focus on two aspects of this research:

  • Are LLMs genuinely intelligent?

The issues we will discuss this week are alluded to in John Symons’ recent article in the Return. To frame our discussion about the general intelligence of LLMs, we might consider the following thought experiment, as discussed by Symons:

Alan Turing had originally conceived of a text-based imitation game as a way of thinking about our criteria for assigning intelligence to candidate machines. If something can pass what we now call the Turing Test; if it can consistently and sustainably convince us that we are texting with an intelligent being, then we have no good reason to deny that it counts as intelligent. It shouldn’t matter that it doesn’t have a body like ours or that it wasn’t born of a human mother.  If it passes, it is entitled to the kind of value and moral consideration that we would assign to any other intelligent being.

LLMs either already pass the Turing Test or, it is plausible to suppose, soon will. Are we comfortable with the conclusions Symons derives from this—that LLMs “count as intelligent” and are “entitled to…moral consideration”? Perhaps we should instead reappraise the Turing Test itself, as a recent computer science paper suggests we should do. If LLMs can pass the test by merely reflecting the intelligence of the tester, then perhaps the true test is to have LLMs converse with one another and see if we judge them as intelligent from outside of the conversation.
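To make the contrast vivid, the next-word prediction described at the outset can be sketched in miniature. This toy bigram model bears no resemblance to GPT-3’s scale or architecture (the corpus is invented); it just counts which word tends to follow which:

```python
# Minimal sketch of what a language model does: predict the next word.
# Real LLMs use billions of learned parameters; this toy version just
# tallies, in a tiny invented corpus, which word follows which.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count each observed (previous word -> next word) pair.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often here
```

Whether scaling this predict-the-next-word recipe up by many orders of magnitude yields genuine intelligence is, of course, precisely the question on the table.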

  • Are LLMs making us less intelligent?

A second—and the more important—theme in Symons’ article is the role that LLMs might play in making us less intelligent. Symons’ claims here are built on his own observations about LLMs. As he writes:

I tried giving some of these systems standard topics that one might assign in an introductory ethics course and the results were similar to the kind of work that I would expect from first-year college students. Written in grammatical English, with (mostly) appropriate word-choice and some convincing development of arguments. Generally, the system accurately represented the philosophical positions under consideration. What it said about Kant’s categorical imperative or Mill’s utilitarianism, for example, was accurate. And a discussion of weaknesses in Rawlsian liberalism generated by GPT-3 was stunningly good. Running a small sample of the outputs through plagiarism detection software produced no red flags for me.

Symons notes—rightly, it seems—that as the technology progresses, students will be tempted to use LLMs to turn in assignments without any effort. Instructors, including college professors, will be unable to detect that LLMs, rather than students, generated the fraudulent assignments. But, while this might seem a convenient way to acquire a degree, Symons argues that it would undermine the worth of the education that the degree is meant to convey. For, at least as Symons maintains, learning to write is learning to think, and deep thought is possible only when scaffolded by organized prose. Students completing writing assignments, then, are learning to think, and when they rely on LLMs (or other means) to avoid writing assignments, they are cheating themselves of future competent thought. Focusing on the discipline of philosophy in particular, Symons writes:

…most contemporary philosophers aim to help their students to learn the craft of producing thoughtful and rationally persuasive essays. In some sense, the ultimate goal of the creators of LLMs is to imitate someone who has mastered this craft. Like most of my colleagues who teach in the humanities, philosophers are generally convinced that writing and thinking are connected. In some sense, the creators of GPT-3 share that view. However, the first difference is that most teachers would not regard the student in their classroom as a very large weighted network whose nodes are pieces of text. Instead, the student is regarded as using the text as a vehicle for articulating and testing their ideas. Students are not being trained to produce passable text as an output. Instead, the process of writing is intended to be an aid to thinking. We teach students to write because unaided thinking is limited and writing is a way of educating a student in the inner conversation that is the heart of thoughtful reflection. 

These passages might constitute a jumping-off point for our second topic of discussion. Ancillary questions here might include: 1) What is the purpose of writing assignments in college? 2) To what extent is complex thought possible without language? 3) How can we design education in such a way as to produce the best possible thinkers and writers?

I look forward to seeing you all on Monday at 4:30 for a fun, informal discussion, open to everyone! Philosophy professors and snacks provided.

The (Real) Good Place Season 4 Episode 12 “Patty”

Please join us for the final installment of the Strange Thing this Wednesday March 9th at 4:30 in the Strange Lounge of Main Hall.

In Season 4 Episode 12, “Patty,” the gang finally arrives in the Good Place.  However, there is a problem: everyone’s eternal happiness makes them complacent, bored happiness zombies.  Eleanor has a suggestion, though: give people an exit from the Good Place, and let them decide when to leave.  I picked this episode because I wanted to think more about what the Good Place would be like if carefully designed.  Do you think that eternal happiness is a problem for humans?  What do you think of Eleanor’s fix?  With the new set-up, deceased humans can test into the Good Place—why is it important to be morally good before you get into the Good Place?  What’s the value of eternal reward if not to motivate good behavior?

We can also discuss the two-episode finale—if you decide to watch it. Please spread the word and invite fans of either The Good Place or philosophy!