A philosophical puzzle of rational artificial intelligence | MIT News

To what extent can an artificial system be rational?
A new MIT course, 6.S044/24.S00 (AI and Rationality), does not seek to answer this question. Rather, it challenges students to explore this and other philosophical issues through the lens of AI research. For the next generation of scholars, the concepts of rationality and agency may prove important to AI decision-making, especially as it is shaped by how people understand their own mental limits and their implicit, subjective views of what is reasonable or wrong.
This investigation builds on the deep relationship between computer science and philosophy, two fields that have long collaborated in formalizing what it is to form rational beliefs, learn from experience, and make rational decisions in pursuit of one’s goals.
“You might think that computer science and philosophy are very far apart, but they have always intersected. The technical aspects of philosophy really overlap with AI, especially early AI,” says Leslie Kaelbling, the Panasonic Professor of Computer Science and Engineering at MIT, recalling Alan Turing, who was both a computer scientist and a philosopher. Kaelbling herself has an undergraduate degree in philosophy from Stanford University; she notes that computer science was not available as a major at the time.
Brian Hedden, a professor in the Department of Linguistics and Philosophy who holds a shared position in the MIT Schwarzman College of Computing and the Department of Electrical Engineering and Computer Science (EECS), co-teaches the class with Kaelbling. He notes that the two disciplines overlap more than people might think, adding that “the difference is emphasis and perspective.”
Tools for more theoretical thinking
Offered for the first time in fall 2025, AI and Rationality was developed by Kaelbling and Hedden as part of Common Ground for Computing Education, a cross-departmental initiative of the MIT Schwarzman College of Computing to develop and teach new courses and launch new programs that integrate computing with other disciplines.
With more than a dozen students enrolled, AI and Rationality is one of two regularly offered philosophy-based classes, the other being 6.C40/24.C40 (Ethics of Computing).
While Ethics of Computing examines concerns about the social implications of rapidly developing technologies, AI and Rationality examines the contested definition of rationality itself, considering the nature of rational agency, the concept of a fully autonomous and intelligent agent, and the interpretation of beliefs and desires in such systems.
Because AI is so broad in its applications, and each use case raises different problems, Kaelbling and Hedden cover topics that promise fruitful discussion and interplay between the two perspectives of computer science and philosophy.
“It’s important when I’m working with students who are learning machine learning or robotics that they take a step back and examine the ideas they’re working with,” says Kaelbling. “Thinking about things from a philosophical perspective helps people back up and better understand how their work fits into the world.”
Both instructors emphasize that this is not a course that provides concrete answers to the question of what it means to act as a rational agent.
Hedden says, “I view the course as building their foundations. We don’t give them a body of knowledge to learn and memorize and then apply. We equip them with the tools to think about things in a critical way as they go into their chosen careers, whether they are in research or industry or government.”
The rapid progress of AI also presents a new set of challenges in education. Predicting what students will need to know five years from now is something Kaelbling sees as an impossible task. “What we have to do is give them tools at a higher level: mental habits, ways of thinking, that will help them approach things we cannot anticipate now,” he said.
Integrating disciplines and modes of inquiry
So far, the class has drawn students from a variety of fields — from those squarely in computer science to others interested in exploring how AI intersects with their own areas of study.
Through the semester’s readings and discussions, students were confronted with different definitions of rationality and with how those definitions diverge from thinking in their own fields.
Reflecting on what surprised her about the class, Amanda Paredes Rioboo, a senior in EECS, says, “We were taught that mathematics and logic are the gold standard of truth. This class showed us various examples where people don’t act in accordance with these mathematical and logical frameworks. Is the problem with the people, or with the mathematics and logic itself?”
Junior Okoroafor, a PhD student in the Department of Brain and Cognitive Sciences, is excited by the challenges of the class and by the ways the definition of a rational agent can shift depending on behavior. “Representing what each field is saying in a formal framework makes it very clear which views are shared, and which differ, across fields.”
The course’s co-taught, collaborative format, shared by all Common Ground offerings, gave students and instructors alike opportunities to hear different perspectives in real time.
For Paredes Rioboo, this is her third Common Ground course. She says, “I really like the interdisciplinary aspect. They always feel like a good mix of theory and practice, and they take advantage of the need to cross fields.”
According to Okoroafor, Kaelbling and Hedden modeled genuine collaboration between disciplines; it felt, he says, as if they were interacting and learning along with the class. Seeing how computer science and philosophy can inform each other helped him appreciate their similarities and their useful perspectives on cross-cutting issues.
He adds, “Philosophy also has a way of surprising you.”