The Paradox of Horror

Join us for a SpOoKy edition of the Strange Philosophy Thing tomorrow, Halloween (10/31), at 4:30 p.m. for refreshments and some fun, informal philosophy talk. Grab a treat from our candy stash and get ready to discuss:

Topic: The Paradox of Horror


Consider the following three claims:
1. Horror stories and movies cause negative emotions in the audience (e.g., fear, disgust).
2. We want to avoid things that cause negative emotions.
3. We do not want to avoid horror stories and movies.

Though each of these claims seems intuitively plausible, they cannot all be true. For example, if claim 1 is true and horror causes negative emotions, and if claim 2 is true and we want to avoid things that cause negative emotions, then it seems to follow that we would want to avoid horror. But that’s inconsistent with claim 3. On the other hand, if, as claim 2 has it, we want to avoid things that cause negative emotions, and if, as claim 3 has it, we do not want to avoid horror, then, presumably, horror doesn’t cause negative emotions. But that’s inconsistent with claim 1. So, it seems that, in order to avoid inconsistency, we must reject (or modify) one of these claims. Which should we reject?

You need not prepare in any way in order to attend the Strange Thing discussion. However, if you’re interested, there’s a wealth of info on the paradox of horror available from across the web. Here’s a short video on the topic by, perhaps, the premier philosopher of horror, Noël Carroll. We discussed this topic back in 2017, and Professor Armstrong provided a great write-up here. And, finally, for a look at the neuroscience of horror, check out this recent audio interview with Nina Nesseth, author of the book Nightmare Fuel: The Science of Horror Films.

See you tomorrow! And why not bring a friend to kick off All Hallows’ Eve in style, with philosophy!

Post-Theory Science

Join the philosophy faculty and other interested parties for the next installment of the 22–23 Strange Philosophy Thing. The Strange Thing meets in the Strange Lounge (room 103) of Main Hall. We gather there most Mondays of the fall and winter terms from 4:30–5:30 for refreshments and informal conversation around a given topic. It’s a lot of fun!

I take my inspiration for this week’s Strange Thing from this article from The Guardian, by Laura Spinney. Reading it is optional. I’ll hit the most important points, and highlight the most important questions the article raises, quoting some useful passages along the way.

A traditional and familiar strategy for carrying out scientific research is to explain a type of phenomenon by formulating a theory—a hypothesis—which predicts some particulars of that type of phenomenon, and then running experiments to confirm that those predictions are accurate. Newton, for example, hypothesized that gravity operated in the absence of air resistance according to his famous inverse square law. And sure enough, he was able to derive Kepler’s empirically well-confirmed laws of planetary motion from it. It’s easy to miss the fact that a lot of scientific practice—especially recently, with the use of AI—differs drastically from this picture. As Spinney notes,

“Facebook’s machine learning tools predict your preferences better than any psychologist [could]. AlphaFold, a program built by DeepMind, has produced the most accurate predictions yet of protein structures based on the amino acids they contain. Both are completely silent on why they work: why you prefer this or that information; why this sequence generates that structure.”

They are silent on why they work because of the opacity of what goes on inside the neural nets that generate the predictions from ever-growing datasets.

Spinney grants that some science is, at this stage at least, merely assisted by AI. She notes that Tom Griffiths, a psychologist at Princeton, is improving upon Daniel Kahneman and Amos Tversky’s prospect theory of human economic behavior by training a neural net on “a vast dataset of decisions people took in 10,000 risky choice scenarios, then [comparing it to] how accurately it predicted further decisions with respect to prospect theory.” What they found was that people use heuristics when there are too many options for the human brain to compute and compare the probabilities of. But people use different heuristics depending on their different experiences (e.g., a stockbroker and a teenage bitcoin trader). What is ultimately generated via this process doesn’t look much like a theory but, as Spinney describes, ‘a branching tree of “if… then”-type rules, which is difficult to describe mathematically’. She adds,

“What the Princeton psychologists are discovering is still just about explainable, by extension from existing theories. But as they reveal more and more complexity, it will become less so – the logical culmination of that process being the theory-free predictive engines embodied by Facebook or AlphaFold.”

Of course, there is a concern to be noted about bias in AI, particularly in systems fed small or biased datasets. But can’t we expect the relevant datasets eventually to become so large that bias isn’t an issue, and thus that their predictions will be accurate? Will science look more and more like this in the future? What does that mean for the role of human beings in science?

This last question can be addressed with the help of the notion of interpretability. Theories are interpretable. They posit entities and relations between them that can be understood by human beings, and that explain and predict the phenomena under investigation. But AI-based methods of prediction preclude interpretability, owing to the opacity of the AI processes by which the predictions are generated. Humans don’t observe the weights of the nodes of the neural net changing as new data is fed in. AI-based science thus appears to prevent us from having an explanation of why scientific predictions are accurate, and so presumably precludes us from providing one another with explanations of the phenomena under investigation. There are at least two sorts of reasons to be leery of non-interpretability. One is practical, the other theoretical.

On the practical side, Bingni Brunton and Michael Beyeler, neuroscientists at the University of Washington, Seattle, noted in 2019 that “it is imperative that computational models yield insights that are explainable to, and trusted by, clinicians, end-users and industry”. Spinney notes a good example of this:

‘Sumit Chopra, an AI scientist who thinks about the application of machine learning to healthcare at New York University, gives the example of an MRI image. It takes a lot of raw data – and hence scanning time – to produce such an image, which isn’t necessarily the best use of that data if your goal is to accurately detect, say, cancer. You could train an AI to identify what smaller portion of the raw data is sufficient to produce an accurate diagnosis, as validated by other methods, and indeed Chopra’s group has done so. But radiologists and patients remain wedded to the image. “We humans are more comfortable with a 2D image that our eyes can interpret,” he says.’

On the theoretical side, is such understanding important for its own sake, in addition to or solely because of practical considerations like the one mentioned above? Spinney suggests that AI-based science circumvents the need for human creativity and intuition.

“One reason we consider Newton brilliant is that in order to come up with his second law he had to ignore some data. He had to imagine, for example, that things were falling in a vacuum, free of the interfering effects of air resistance.”

She adds,

‘In Nature last month, mathematician Christian Stump, of Ruhr University Bochum in Germany, called this intuitive step “the core of the creative process”. But the reason he was writing about it was to say that for the first time, an AI had pulled it off. DeepMind had built a machine-learning program that had prompted mathematicians towards new insights – new generalisations – in the mathematics of knots.’

Is this reason in itself to be leery of AI-based science? Because it will reduce the opportunity for creative expression and the exercise of human intuition? Would it do this?

Hope to see you Monday for discussion with the philosophy department and refreshments!

Fetal Personhood

Join the Philosophy Department tomorrow, Monday (10/17), in the Strange Lounge of Main Hall (103), 4:30–5:30 p.m., for refreshments and informal conversation. This Monday we will discuss debates surrounding fetal personhood laws.

A Person Seed

For this Strange Philosophy Thing, we’ll be discussing the recent trend of fetal personhood laws. Specifically, we’ll look at possible legal and policy implications, and discuss the ethics of those policies. Here are a few examples of the kind of thing I want to discuss:

Georgia’s state department of revenue has extended the state’s tax exemptions for dependents to include gestating fetuses.  

A Texas woman claimed the fetus she was gestating counted as a second passenger, allowing her to drive in the carpool lane.

Women have been charged under child abuse or endangerment statutes for behaviors including drug use while pregnant. (This specific example, however, does not depend on a fetal personhood law.)


Evaluating the ethics of law and policy is complicated, and doesn’t connect straightforwardly to the ethics of the behavior being regulated. Breaking promises is typically wrong, but it would be a mistake to legally require everyone to keep their promises. We only do this in special cases, such as contracts. Driving on the left side of the road is not in itself wrong, but legally prohibiting it is completely appropriate.

Sometimes the ethical status of a law or policy depends on its relation to other things in the hierarchy of laws, such as its compatibility with a state’s or the federal Constitution. But if this is all there is to it, then the problem may not be the otherwise sensible subordinate law, but the superior law that it conflicts with. Certain gun control regulations are (arguably) an example of this.

Sometimes a law or policy is intrinsically wrong, such as those that allow partisan gerrymandering. This is wrong by its very nature, since its nature is to disenfranchise equal citizens because of their political viewpoints.

Sometimes a law or policy is instrumentally or contingently wrong. This is when there is nothing intrinsically wrong with the policy, but the effects it has in the world are unacceptable. An example of this is voter ID laws. There is nothing wrong in principle with requiring identification to vote. Voter fraud is worth preventing because each case of voter fraud functionally removes one valid vote. But in practice, these laws have several negative effects. First, they prevent more eligible voters from voting than they prevent cases of voter fraud. So when we look at the rationale for such laws (the ethically acceptable rationale, whether or not it’s the actual motivating one), they turn out to be self-defeating: it is worse for more valid votes to be removed through ID laws than for fewer to be neutralized through fraud. Second, the discouraging or preventing effects are concentrated on already marginalized groups of people (racial and ethnic minorities, lower economic classes).

Sometimes a policy that would otherwise be problematic is actually fine because of the outcomes it has. One example might be the pandemic relief funds that were sent out to individuals (oversimplified to serve the example). It’s problematic (regressive economic policy) to send the same amount of money to people at all points on the economic spectrum, because a large portion of that money is not needed and so is wasted. It would be better to target the relief to those who would benefit from it the most, since this would help them more. But this targeting would be costly, difficult, and slow, so the people we want to protect from harm would be harmed while we worked out the targeting. So it is better overall to give the economic help faster but less efficiently targeted.

Or, it might not be a single law or policy that is problematic, but a cluster of them that are together problematic. Consider: while the right to abortion was constitutionally protected, Louisiana passed a law requiring abortion providers to have admitting (patient transfer) privileges at a hospital within 30 miles. While that’s a single law (deemed unconstitutional at the time for lack of medical benefit, violating the undue burden requirement on abortion restrictions), we could imagine passing a second law that prohibited hospitals from giving admitting privileges to abortion providers. Even if we didn’t find either law objectionable in isolation, we might find the conjunction objectionable. (This isn’t the cleanest example, but it was the easiest (realistic) one for me to think of at this point.)

I want to consider implications of fetal personhood laws in this light (or, the last four lights). Or, the implications of other laws and policies in light of fetal personhood laws.

What policies might have intrinsically problematic effects?
What are the points and purposes of these laws? How do the effects of these policies (or the ones we should expect them to have) relate to those rationales?
Are there any effects that are problematic, independent of how they relate to the policy’s rationale?

Take the suggestion that driving pregnant makes you eligible for carpool lanes. The point of these laws is to reduce traffic congestion and auto emissions by encouraging people going to and from the same or close places to travel in fewer vehicles. They do this by giving access to a special lane that is (supposed to be) less crowded, allowing you to get where you’re going faster. And so they specify “high occupancy” thresholds, which (from my quick research) typically count the number of seats filled rather than the number of persons in the vehicle.

This excludes fetal persons from counting towards HOV requirements for carpool lanes. Is there something intrinsically problematic about excluding this class of person? It’s hard to see what that would be. Is there something contingently problematic about excluding them? By looking at the point of carpool lanes, it seems like there would instead be something problematic about including them. Pregnant people cannot travel separately from the fetuses they gestate, and so having them travel together does not reduce the number of cars on the road (congestion, emissions). This adds a car to the carpool lane (slowing it down), without removing an extra car from the other lanes. So counting them towards HOV eligibility would work against the purpose of the law.

We also have to think about similar cases. What about family or friends who would be driving together even without carpool incentives? It makes sense to allow these passengers because if we did not, sorting out the good cases from the bad cases would be unduly invasive for the purposes of the law. Similarly, if we did count fetuses as passengers for HOV purposes, having HOV levels above two might be problematic because sorting out which people have single pregnancies and which have twins (or more!) would be unduly invasive. (I think this further counts against considering fetuses as separate passengers, rather than counting in favor of capping HOV requirements at two.)

So, I want to talk about policies like the dependent tax exemption and gestational child abuse examples above, but also “safe surrender” (“infant relinquishment”) allowances, and adoption and surrogate pregnancy. The article on the Georgia law also raises questions about child support policies. We might also discuss in vitro fertilization and unused zygotes.

If there are any policies that you think might have interesting (problematic or just unexpected) implications given a fetal personhood law, we can talk about that as well.

“Is a Fetus a Person? The Next Big Abortion Fight Centers on Fetal Rights,” Kelsey Butler and Patricia Hurtado, Bloomberg (linked at Washington Post)

“Georgia says ‘unborn child’ counts as dependent on taxes after 6 weeks,” María Luisa Paúl, Washington Post

“Pregnant woman given HOV ticket argues fetus is passenger, post-Roe,” Timothy Bella, Washington Post

“Pregnant women were jailed over drug use to protect fetuses, county says,” Marisa Iati, Washington Post

“‘Safe Surrender’ or ‘Infant Abandonment’ Legislation,” ACLU

Indigenous People’s Day

The philosophy department will be taking a break from the Strange Thing next week in honor of LU’s Indigenous People’s Day! Please join the celebration at Warch Campus Center 324 (Somerset Room) between 5 and 7 p.m. on Monday, October 10th. The event will include song, dance, food, and local Native American guest speakers and leaders. It’s open to the entire Fox Valley community, so bring your friends!