Frameworks have been a recurring theme for me over the last few months. In my current research practice, I’ve been building on the conceptual and methodological frameworks that I established while working with the late Ciaran Trace on preserving AI and other complex algorithmic systems. I’ve also been thrilled to see some other researchers picking up and building on the framework I introduced for studying online disinformation and conspiracy movements last year.
My most recent article, “Algorithmic futures: the intersection of algorithms and evidentiary work,” cowritten with Ciaran, came out a few months ago in Information, Communication, and Society. It’s the second part of a longer series of publications exploring the ways that archival knowledge can help make automated systems more transparent and accountable. The series will conclude with our next publication, “The Role of Paradata in Algorithmic Accountability,” in a forthcoming book from Springer.
When Ciaran asked me to start this project, I was initially skeptical because I had such distaste for the current wave of AI hype. Just a few weeks ago, the RAND Corporation published a report saying that 80% of commercial AI projects are basically failures, wasting billions of dollars in the process. That, combined with the ongoing environmental impact of computing-intensive AI systems, leaves plenty to be skeptical about. But Ciaran believed in the value of archivists’ knowledge when it comes to making sense of these systems. Whether we want to shape their impact now or look back on it accurately decades from now, archivists are uniquely positioned to help.
Our intervention in the AI hype was one of grounded practicality, which I try to bring to all my research: identify the material traces of an object or process and highlight often-overlooked ways to analyze them. That can be tough in a field like AI, which is simultaneously technically complicated and willfully obscured by many of its most prominent proponents.
Ciaran passed away in the spring, and it’s been sad, to say the least, working to finalize these publications without her. I owe her a debt of gratitude for bringing me onto the project and for acting as a mentor more broadly during my postdoc at UT Austin. She was extremely generous with her advice but always careful to let me chart my own course. It was an excellent balance that I hope to maintain in my own mentorship as I work with more and more grad students myself.
The AI research I did with Ciaran also had downstream effects on my other research projects, namely the work I started on environmental data curation last year. I didn’t realize it when I started, but that project has come to involve a bit of reverse-engineering of the predictive systems that major cities use to alert the public to adverse events like flash flooding and sewage overflow. I presented the second installment of that research at 4S/EASST in Amsterdam this summer, and I think we may finally see some of it in print this academic year.
Building a framework and applying it to new cases is a recurring theme for me at the moment. In addition to the algorithmic systems work that Ciaran and I did, last year also saw the publication of my enumerative-bibliography-informed approach to studying online disinformation movements. That study used early QAnon threads from 4chan as a case study, but this summer at SHARP, I presented newer work applying the same method to a much earlier online conspiracy movement: the Ong’s Hat urban legend/alternate reality game/conspiracy narrative.
More recently, Katie Greer and Stephanie Beene published an article in Frontiers in Communication that touches on this framework while analyzing the social media content generated by QAnon participants later in the movement’s development. It’s wonderful to see that my methods and findings were of value to researchers who built on them to generate new insights from a new case study.
Over the next few months, I’ll be working to get the newest installments of these three projects out as journal articles, but I’ll also be on the lookout for what comes next. I started all three of these research streams during my postdoc, which happened to coincide with the pandemic. Having finished my years-long dissertation work just as the pandemic began, I had to launch new projects during a time of intense social upheaval, restricted access to traditional archives, and multiple cross-country moves. Now that I’m a bit more settled, and now that I’ve built methodological frameworks that have proven applicable in a variety of settings, I’m eager to relate all this research back to the computer-history work that defined my dissertation. Stay tuned for more updates on how that goes.