Thick Description #3 - The Storytelling Methodology

  • Writer: Henry Mulhall
  • Jan 29
  • 8 min read
Detail taken from The Meaningful Measurement Playbook. Original illustration by Zuhura Plummer

Part of the reason thick description is interesting to me is its connection to themes found in ordinary language philosophy, that is to say, interpreting words and actions through their contextual manifestations - the moments and cultures within which they occur - rather than using overlaid, heavy theories to interpret meaning. This philosophical approach to analysis and interpretation allows for what I think of as bottom-up ethics. We can build ethics from the ground up, not by thinking ethics is something abstract and beyond us, but by paying attention to how people actually speak about the things they care about. With this conception of ethics, we can “better grasp political problems tied to our public discourses and ideology, discrete acts of speech, and the gendered aspects of our embodied language” (Laugier et al., 2022, p.4). Political and public issues become practical problems as much as theoretical ones. For someone working in public art and culture as an evaluator, these concerns translate into a practical problem: understanding cultural value from the ground up. Thick description can be a frame through which we understand what’s valuable to the people delivering, practising, and participating in culture, rather than to those who fund and create policy around culture.


I work with a group that makes Cards on the Table, which was included in the Disrupt Toolkit. At an event at the Barbican to launch and celebrate aspects of the toolkit, I came across the Old Fire Station (OFS) and something they developed called the Storytelling Evaluation Method (SEM). This was the first time (but not the last) I saw the director of OFS, Jeremy Spafford, and Sarah Cassidy eloquently describe a method for evaluating cultural work in a way that has significant crossovers with thick description. For me, although the method itself is great, I’m equally interested in how Jeremy, Sarah and other OFS staff use the method to thickly describe endemic issues in how culture and value are measured and judged by funders, and how this affects the practice of evaluation.


I want to use SEM as a point of departure to explore issues related to value and the practices of valuing, particularly in relation to J. K. Gibson-Graham and Community Economies’ conception of thick description, for whom “to rethink the economy as a site of ethical interdependence is to abandon the structural imperatives and market machinations of capitalocentric discourses of economy” (Gibson-Graham, 2014, p.152). Instead of orthodox economic justifications filtering which cultural activities are of value (framing how they are valued), SEM might allow the voices and perspectives stemming from culture to influence economic thinking. More interestingly for me, through reflection on their context and their invention of a methodology, OFS have also engaged in a nuanced and detailed description of what it is like to work with people, culture and funders. The method has become a way to take action against (or with) the current cultural policy climate.


OFS does a great job of making the process of creating its method public - take a look at the evaluation of the storytelling method and the Meaningful Measurement Playbook. They tell the story of how OFS and many others working in the social impact sector had become frustrated with conventional evaluation demands placed on them by funders. The demands for a metrics-based evaluation felt “impersonal, disempowering, and insensitive to lived experience” (Spafford et al., 2025, p.5). This is a position commonly shared by many working practically and academically in cultural analysis - see, for example, Failspace or the work of Kirsty Sedgman. OFS’s experience of a metrics-based form of value speaks to a certain kind of orthodox economic rationality, the kind of strong economic theory J. K. Gibson-Graham and Community Economies try to counter. In their article on thick description, they say such methods can resist the “gravitational pull toward strong theories of economic behaviour and unidirectional change” (Gibson-Graham, 2014, p.148). With the storytelling methodology, OFS are implicitly working along similar lines. These frustrations led them to develop a different approach, one that places the experiences of those closest to the activities at the centre of any evaluation. They wanted to thickly describe what happened during the delivery of their projects, to construct a narrative built around participant experience, in the words of those participants. As they say,


The central premise of meaningful measurement is that in order to learn ‘what works’, an organisation’s leadership and management need to go to the people closest to the work, capture their experiences, and then share those experiences with a broad set of stakeholders, including funders. (Spafford et al., 2025, p.5)


This is a perspective that does not take value as given, or assume that the impact of culture comes down to some kind of delivery-versus-money-spent equation, where the number of participants per pound offers a picture of success. Rather, it offers an ever-evolving view of cultural participation, evaluation and delivery. It’s not worth going into the finer details of the method here because, as I say, it’s very well documented. The six key stages are as follows:


1. Recruit story collectors

2. Identify storytellers 

3. Storytellers and story collectors have a conversation 

4. Transcribe and edit the conversations 

5. Hold a facilitated discussion

6. Share learnings


In 2024, I did a training with OFS. It became clear very quickly that the strength of this method is its simplicity, but implementing these simple steps isn’t necessarily easy. At times, it was strange to be part of a workshop designed both to train its attendees and to advocate for the methodology’s value. I was already deeply invested in what the method was trying to do, so listening to recordings of voice actors telling participants’ stories - recordings that had been played to funders - almost felt like an imposition, like I was listening in on something that wasn’t for me. Of course, it was for anyone who wanted (or had) to listen, but I got frustrated by the idea of stories being inspiring, which was a common reaction in the group. It seemed that for some, there’s a pendulum with metrics at one end and inspirational individuals at the other (I do not think this is the intention of the method). The concept of success being based on inspirational stories makes me queasy because it suggests a saviour complex. I accept that this concern probably says more about my own cynicism than about the other people on the course.


Stages 4 and 5 of the method are the most interesting because of their emphasis on collective analysis. They construct a feedback loop between the participant, story collector and editor, which has the potential to disrupt the likely power dynamic between those delivering, those taking part in, and those assessing culture. This also highlights a distinction between this method and thick description: thick description is about describing a context in enough detail to offer analytic insights into what it was like to be there; Storytelling, on the other hand, offers personal perspectives (maybe several) to show the value of an event or context in a participant’s own words.


During the course, the collecting stage was framed as a conversation rather than an interview. But conversation entails exchange, not collection, and it’s not clear to me what the interviewer offers in terms of content; they are listening (an important and difficult skill). I wondered how often describing one’s experiences is coherent, and whether there is something problematic in requiring a coherent story, or in editing a coherent one from incoherent material. Would presenting slightly incoherent stories to funders ever be an option? Interestingly, we were told that finding clarity can be a way to identify what’s significant in a story, a way to frame the value of the experience; when collecting a story, the emphasis was on significance and clarity. But how do we know what’s significant in the moment without preconceived ideas about what we want from the story in the first place? This poses the risk of directing the conversation - there’s a real difficulty in not steering towards what you already find significant.


Perhaps to counter people bringing preconceived ideas of what’s important, we were told that it’s good if the story collector and editor (not the same person) are slightly removed from the process. Some familiarity is helpful, but distance means that, as in many conversations, we ask real questions - ones we don’t already know the answer to. In this sense, the method is less about interpreting an experience and more about drawing out meaning from it. That is to say, the story is partly constructed through the conversation, rather than being a predefined narrative someone recounts. This leads to my central concern, one that is recognised by those behind Storytelling. When telling and/or editing a story, there is an implicit emphasis on the positive, successful aspects of an experience. It is a well-recognised feature of cultural evaluation that failure is difficult to talk about (see Failspace). When funding decisions are based on things like excellence and value for money (in ACE’s case), then admitting failure, indifference, or nuance is difficult; there isn’t space, and there isn’t impetus. J. K. Gibson-Graham say that thick description can be a “method that directs interpretive attention not only to material practices but to the nuances, affects, multiple codes of meaning, silences, jokes, parodies, and so on, that accompany them” (Gibson-Graham, 2014, p.148). Because of its potential skew towards the positive, Storytelling will not always deliver this, but it gets me thinking about how boredom, indifference or messy incoherence could be part of a narrative that supports a programme in receiving funding. Those features of experience don’t mean something isn’t worthwhile.


The changes captured through this method offer a person-centred approach to understanding what happened in a project, and the set-up encourages feedback within a group to develop “learnings” from stories. This refines and amplifies the voices and perspectives of those participating in and working on these projects. As the creators say,


It helps the people closest to the work understand and improve their experience. Second, it helps those who lead, manage, and fund the work understand the processes that create positive impact and more effectively support those processes. (Spafford et al., 2025, p.5)


It is this second point, around advocacy and how it is enacted, that is actually closer to thick description. The method is used by the staff at OFS to describe what it’s like to work in this field - producing culturally meaningful and impactful contexts for people to participate in.


OFS staff use SEM to critique the cultural policy and funding climate by describing what it’s like to work in it. The method tells the story of the funding and policy climate through the experiences of those who take part in cultural activities, as producers and participants. In doing so, it has the potential to create a context for funders to “expand a workable economic vocabulary” (Gibson-Graham, 2014, p.149). Spafford and others construct an ethnography of what it is like to put more nuanced, person-centred evidence in front of funders - evidence which, incidentally, funders often appreciate. This suggests that funders might be more open to hearing about the “different regime of value that people juggle” (Gibson-Graham, 2014, p.151). Spafford and colleagues thickly describe their own context by focusing both on a method based on people (participants) describing their contexts, and on the reactions of the funders, academics and policy makers who hear about their work.
