One morning as you walk through town, you see a limping woman bite the arm of a passer-by. He falls, screams and passes out. Wondering what has happened, you come closer. The limping figure is now lurching towards other individuals, snapping at them, trying to bite. Suddenly, you see her milky eyes, the flaky, peeling skin and the teeth marks on her arms. ‘This is a Zombie’, you think. You run back home, seeking shelter whilst trying to stay away from limping figures. If one of these limping figures bites you, you too might develop a craving for brains…
Luckily, the Zombie apocalypse has not happened, yet. However, your ability to imagine this scenario and infer its consequences is interesting to many psychologists. Bonan Zhao, a PhD student in the psychology department at the University of Edinburgh, studies causal generalisation. While there is no disease that turns people into brain-eating entities, you can already imagine the consequences of a zombie biting you. In this example, we can see how we generalise the causes and effects of diseases: Covid-19 is transmitted through respiratory particles, and the Zombie disease, possibly, through direct contact. Even though the two diseases are quite different, cause and effect are inferred easily because we already know how diseases work. Causal generalisation is all around us, and all of us are very fast at making probabilistic inferences about what causes what.
Bonan is interested in building computational models of human cognition, especially of causal generalisation. In her work, she takes a hybrid approach combining symbolic representations, Bayesian approximate inference, and online interactive experimentation. With this, Bonan tries to answer some of the fundamental questions around human intelligence. When do we choose to generalise causal relations? What factors do we pay most attention to when constructing meaning in interactions between different entities? Can we build a model of human cognition in order to understand how people make causal inferences?
To put it in more formal terms: if you observe an interaction between A and B, can you predict an interaction between A and C? Or even between B and C? We constantly make predictions across these sorts of relations. However, it seems we are not very aware of which properties of these interactions we pay attention to. Is it the properties of A that determine the interaction between A and B? Or is it the form of the interaction between A and B itself that causes their relation?
Causal Model Theory (CMT) is a dominant framework Bonan works with. It tries to explain human causal reasoning, including categorisation and inference. Previous work has shown how people reason about probabilistic events in terms of causal models presented verbally: you tell participants a story and ask them about the consequences of the relationship you were describing. Bonan, by contrast, makes use of visual storytelling in order to see how people respond with causal inferences to different scenarios.
In some of her recent work, Bonan uses geometric shapes to explore how people generalise causal relations across objects. An agent object (in this case a blue square) moves towards a recipient object (a red diamond). The diamond then turns blue as a consequence of the two touching.
The individuals taking part in these experiments were exposed to either one or several different scenarios. Afterwards, they were asked to guess the hidden property of one of the shapes shown and to make their own predictions about future interactions. The results showed that people paid more attention to the features of the agent object than to those of the recipient. To explain these findings, Bonan and her colleagues developed a computational model. Participants' generalisations could then be explained by a hypothesis generation process that favours simple rules relating the objects to a small number of categories. People seem to focus on the essentials of the objects and on their own first perceptions; that is, people take their first judgement into account when forming new opinions about other causal interactions.
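To make the idea of a hypothesis generation process with a simplicity preference concrete, here is a minimal sketch in Python. Everything in it (the feature set, the noise level, the halving prior) is an invented illustration, not Bonan's actual model: after observing a single blue square turning a diamond blue, the sketch weighs candidate rules by their simplicity and predicts whether a blue circle would cause the same effect.

```python
# Illustrative sketch only: Bayesian hypothesis generation with a
# simplicity-favouring prior. All features and numbers are invented.
from itertools import combinations

# Each observation: (agent_features, recipient_features, effect_occurred)
observations = [
    ({"shape": "square", "colour": "blue"},
     {"shape": "diamond", "colour": "red"}, True),
]

# Candidate conditions a rule may place on the agent object.
feature_values = [("shape", "square"), ("colour", "blue"),
                  ("shape", "circle"), ("colour", "red")]

def make_hypotheses():
    """Enumerate conjunctive rules over agent features, incl. the empty rule 'always'."""
    hyps = [()]
    for r in (1, 2):
        for combo in combinations(feature_values, r):
            keys = [k for k, _ in combo]
            if len(set(keys)) == len(keys):  # skip contradictory conditions
                hyps.append(combo)
    return hyps

def predicts(hyp, agent):
    # A rule predicts the effect iff the agent satisfies all its conditions.
    return all(agent.get(k) == v for k, v in hyp)

def prior(hyp):
    # Simplicity prior: each extra condition halves the (unnormalised) weight.
    return 2.0 ** -len(hyp)

def likelihood(hyp, obs):
    agent, _, effect = obs
    return 1.0 if predicts(hyp, agent) == effect else 0.01  # small noise

def posterior(hyps, data):
    scores = []
    for h in hyps:
        p = prior(h)
        for obs in data:
            p *= likelihood(h, obs)
        scores.append(p)
    z = sum(scores)
    return [s / z for s in scores]

hyps = make_hypotheses()
post = posterior(hyps, observations)

# Generalisation question: would a blue circle agent also cause the effect?
new_agent = {"shape": "circle", "colour": "blue"}
p_effect = sum(p for h, p in zip(hyps, post) if predicts(h, new_agent))
print(f"P(effect | blue circle agent) = {p_effect:.2f}")
```

The key design choice is the prior `2 ** -len(hyp)`: rules with fewer conditions get exponentially more weight, so a single observation already pushes the model towards broad generalisations such as "blue agents cause the colour change" rather than highly specific ones.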
Read more about Bonan’s work here – https://zhaobn.github.io/