Analytic Process, Part 1

This messy part of PhD research is like putting together a 1000-piece puzzle without the picture, where all the pieces appear to be different sizes, shapes, colours and patterns. It’s a process of looking at each of these pieces from arm’s length, then reexamining them with a magnifying glass to find salient features, trying to figure out how they fit together. I’m in the process of holding several pieces at the same time, putting them down, and picking up new ones that appear to fit. I frequently need to stand up to look at the whole mess of pieces, then back away to see if something catches my eye. My biases, the limitations of my eyesight, the lighting that illuminates or shadows the pieces, and my preference for a particular piece that appears to hold promise all interfere with, or in some ways aid, the process. This puzzle is nowhere near completion at this point in time.

Now that I’ve drawn a picture of this part of the process, I turn to the guidance of Braun and Clarke (2022), who remind me to tell my analytic story. I know that this moment in the analysis phase is fluid and that this story will shift and change as I write and progress into the thematic and data analysis. I acknowledge that this is the first time I am doing this type of work alone, without consequential others to guide my thinking. I recognize that my thinking and seeing may be flawed in what I pay attention to and what I notice (Lather, 1993). I rely on my experiences and ground my thinking in the research done by others. The story is emergent, yet needs to be captured in the here-and-now. This is the work done in my research journal and “a gift to others, so they have the opportunity to learn from how you did your TA” (Braun & Clarke, 2022, p. 126).

Braun and Clarke present six phases of thematic analysis, which I will illuminate in another post and link back here once it is written. Below, I describe my actions and thinking as I move through some of these phases, working with the research data I’ve collected from the fourteen participants in my research.

Phase 1: Familiarization with the dataset

The first phase is familiarizing myself with the dataset, which includes the sum total of all the information I have collected from each participant. This dataset includes the video recording of the interview, the transcription from the interview, the edited and notated version of the transcript, the word cloud image and screen recording of the transcript that I created following the interview, the digital artifacts prepared by the participants, my notations for artifacts that required ‘translation’ (e.g. Twine creation, sketchnote, graphic, audio recording), and my journaling notes as I prepared for and completed each interview, which includes links to web productions the participants mentioned or shared during the interview. This body of work is diverse and multimodal. While the individual elements in the dataset comprise the pieces to this puzzle I’m trying to construct, the picture is framed by my research questions.

I’ve been doing this familiarization since beginning the interview process, as I have described in previous blog posts. Before each interview, I would familiarize myself with the current work of the participant, as posted or shared on social media (e.g. tweets, blog posts, images), and any recent research they may have published. In this way, I was able to bring these actions and activities into the conversations to focus on their critical media and digital literacies (CMDL) and open educational practices (OEPr). Immediately following the interviews, I listened to the video recording while reading the transcript, to ensure that the Otter.AI software had accurately captured the conversation. At this point, I did not make extensive notes or annotations. I immediately created the word cloud of the transcript and captured the key words from each interview, as determined by the Otter.AI algorithms. As soon as possible following this initial work, I would import the transcript into NVivo software and begin coding the interview. While coding as I completed the interviews helped me make sense of what I was seeing early in the process, the disadvantage is that the earlier interviews may not have captured the depth of codes that emerged with later interviews. At this point, as I write up the codes and descriptions, I am identifying the code frequency and the number of files in which each code is found, which is not necessarily an accurate description of what is happening in each individual interview. I recognize that my coding collection is a flawed document, not to be relied on for accuracy or an exact representation of the dataset.
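The code-frequency and file-count tallies described above can be illustrated with a small sketch. This is not the NVivo export itself; the coding records and code labels here are hypothetical stand-ins, assuming a simple list of (transcript, code) pairs:

```python
from collections import Counter, defaultdict

# Hypothetical coding records: (transcript_file, code_label) pairs,
# standing in for an exported NVivo coding summary.
codings = [
    ("interview_01.txt", "open sharing"),
    ("interview_01.txt", "digital identity"),
    ("interview_02.txt", "open sharing"),
    ("interview_02.txt", "open sharing"),
    ("interview_03.txt", "critical media literacy"),
]

# Overall frequency of each code across the whole dataset.
code_frequency = Counter(code for _, code in codings)

# Number of distinct files (interviews) in which each code appears.
files_per_code = defaultdict(set)
for transcript, code in codings:
    files_per_code[code].add(transcript)
file_counts = {code: len(files) for code, files in files_per_code.items()}

print(code_frequency["open sharing"])  # 3 occurrences overall
print(file_counts["open sharing"])     # found in 2 files
```

As the paragraph above notes, a raw frequency like this says nothing about how a code functions within any single interview; it is only a rough map of where to look.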

Once all the interviews were completed, I gathered the word clouds from each transcript into one curated collection [Spiders and Clouds] and created another word cloud from the keywords. In this way, I can revisit and refine my awareness of the content of the interviews with these renderings from the text and video formats. I also blocked time to go through a second viewing of the video recordings. Seeing these interviews in a temporally tight schedule allowed me to make connections between and among the conversations, which helped clarify issues with the semantic coding (Braun & Clarke, 2022) I had already completed. Semantic codes “capture explicitly-expressed meaning; they often stay close to the language of the participants or the overt meanings of data” (Braun & Clarke, 2022, p. 57). This coding was largely inductive in nature, grounded in the data itself (Braun & Clarke, 2022). As I reviewed the transcripts for the second time, I made notes and annotations, while asking myself questions about what the conversations mean in relation to the research questions and how these codes reflect CMDL and OEPr. At this point, I created a one-page poster with my research questions that I keep at my side for quick reference.
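The step of folding per-interview keywords into one combined word cloud amounts to merging frequency tables. The keyword lists below are hypothetical stand-ins for the Otter.AI key words, and the merge is a minimal sketch of the kind of aggregate frequency table a word-cloud generator sizes words from:

```python
from collections import Counter

# Hypothetical keyword lists, standing in for the key words
# extracted from each interview transcript.
keywords_by_interview = {
    "interview_01": ["openness", "sharing", "literacy", "sharing"],
    "interview_02": ["literacy", "media", "openness"],
    "interview_03": ["sharing", "media", "openness"],
}

# Merge the per-interview counts into one aggregate frequency table.
aggregate = Counter()
for words in keywords_by_interview.values():
    aggregate.update(words)

# The most common keywords across the whole dataset drive the
# largest words in the combined cloud.
print(aggregate.most_common(3))
```

A frequency table like this can be passed directly to a renderer (for example, the `wordcloud` library’s `generate_from_frequencies`), though any word-cloud tool performs an equivalent tally internally.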

Phase 2: Developing codes, code labels, and the process of coding

It was also at this point in the process that I felt a need to go back to the research on critical media and digital literacies and open educational practice frameworks, to see if I could gain understanding and bring latent coding (Braun & Clarke, 2022) into the collection of code labels. In this way I might be better positioned to understand what the data was sharing about the lived experiences of the participants. Latent codes “focus on a deeper, more implicit or conceptual level of meaning” (Braun & Clarke, 2022, p. 57). This round of coding became more deductive in nature, informed by theory, since I found and read several published articles that enhanced my thinking and revised my views of critical media and digital literacies frameworks:

  1. Mirra, N., Morrell, E., & Filipiak, D. (2018). From Digital Consumption to Digital Invention: Toward a New Critical Theory and Practice of Multiliteracies. Theory Into Practice, 57(1), 12–19.
  2. Mirra, N. (2019). From connected learning to connected teaching: Reimagining digital literacy pedagogy in English teacher education. English Education, 51(3), 261–291.
  3. Mirra, N., & Garcia, A. (2021). In search of the meaning and purpose of 21st‐century literacy learning: A critical review of research and practice. Reading Research Quarterly, 56(3), 463–496.
  4. Martínez-Bravo, M. C., Sádaba Chalezquer, C., & Serrano-Puche, J. (2022). Dimensions of digital literacy in the 21st century competency frameworks. Sustainability, 14(3), 1867.
  5. Morren Lopez, M. (2020). Linking community literacies to critical literacies through community language and literacy mapping. Teaching and Teacher Education, 87, 1–9.
  6. Falloon, G. (2020). From digital literacy to digital competence: the teacher digital competency (TDC) framework. Educational Technology Research and Development, 68(5), 2449–2472.

In my thinking, these readings became juxtaposed against the frameworks I had shared with the participants as part of my post-interview feedback.

Also coming into play were the 10 Dimensions of Open Education framework, the revisions to the DigCompEdu framework, and the MDL/CDL frameworks I personally use in my teaching (the AML media triangle as linked above and the 5 resources model by Hinrichsen & Coombs, 2013).

To address the incompleteness I had identified in the first round of semantic coding, where earlier interviews had not been coded to reflect codes developed from later interview transcripts, I reviewed and recoded the first five interview transcripts against the final set of codes captured in NVivo. In this process, I revised and updated the word cloud images of all the codes and key words to see what this rendering would catalyze in my thinking. I continued to recode and make notes on all of the transcripts, focusing on the underlying meaning I could derive from the telling of these lived experiences with CMDL and OEPr shared by the participants. Thus, I began to craft initial themes and sub-themes from the dataset, transitioning into the third phase of thematic analysis (Braun & Clarke, 2022).

Phase 3: Initial and Conditional Themes, Crystallizing in Process

As I’ve written before about crystallization as part of my research methodology, I’m bringing this back into focus for this part of the analytic process. Crystallization, in my thinking, is a process of liquid and fluid elements slowly hardening and building, one idea on another, until the final breathtaking moment when the magnificent beauty of the whole crystallized creation is revealed. This can be a very granular and slow process or, at times, happen with a rapidity that defies understanding. This part of the TA process, for my research dataset, is a fast, fast, slow process. There are moments of explicit clarity, rapidly followed by ‘what was I thinking’ puzzlement.

The evolution of the conditional and emergent thematic maps I’ve drawn demonstrates this uncertain and iterative building of bits and pieces, fragments of the crystals yet to come. Reading through Braun and Clarke (2022) assures me that I am not wrong in my frustrations, and re-emphasizes that I’m not right in thinking that I may already have the answers to the research questions I’ve posed. Braun and Clarke liken this to the creative process of sculpting, whereby the artist applies “creative thinking and craft skills, and engages with the potential of the ‘raw’ materials (data), making choices and working to shape the final product” (p. 78). It is at this point in the process that I heed Braun and Clarke’s cautions about early theme development:

  1. not everything needs to be captured – not every minor character has a speaking part in a theatric production
  2. find an organizing concept for themes – this helps determine the essence of the theme; these build the compelling narrative
  3. don’t get attached to themes or codes – let go of those that don’t fit the story you need to tell
  4. there’s no right or wrong theme, no perfect number of themes or sub-themes
  5. avoid Q & A orientations to the codes and themes – remember this is a story to be told, not a panel discussion.

“Your role as analyst is to tell the reader what the data and your themes mean and why they matter. A key mantra for analysis is ‘data do not speak for themselves’ – alongside ‘themes don’t emerge’ of course!”

Braun & Clarke, 2022, p. 91

This reflection will continue in the next post – Analytic Process, Part 2

References

Braun, V., & Clarke, V. (2022). Thematic analysis: A practical guide (1st ed.). Sage.

Hinrichsen, J., & Coombs, A. (2013). The five resources of critical digital literacy: a framework for curriculum integration. Research in Learning Technology, 21. https://doi.org/10.3402/rlt.v21.21334

Lather, P. (1993). Fertile obsession: Validity after poststructuralism. The Sociological Quarterly, 34(4), 673–693.