I spent the first two weeks of 2026 visiting the University of Alberta (UofA) Digital Scholarship Centre (DSC) for an informal exploratory period focused on knowledge exchange and relationship-building. The goal was to share Kula & UVic Libraries’ work integrating and teaching GenAI in academic library/archive contexts, learn from UofA’s staff, researchers, and students, and identify opportunities for future collaboration.
A big part of my role with Kula is moving from “what’s possible” with AI to prototyping, evaluation, and pipeline integration. At UofA my work was more about outreach and agenda discovery. I shared current OCR developments (including multimodal/vision-language approaches) as a prompt for conversations about what questions people would pursue if these tools were usable and reliable for academic libraries.
Events
Two main activities shaped the visit. Both were collaborative and conversational by design, and both were meant to build shared vocabulary across disciplines and roles.
1) A seminar on OCR: from the early 1900s to multimodal OCR tools
Working closely with Peter Binkley, Digital Scholarship Technologies Librarian, I helped develop a seminar that traced the progression of OCR across roughly a century. It covered early mechanized and statistical approaches, then moved through rule-based/classical OCR, and finished with contemporary deep-learning OCR and vision large language model (VLLM) tools. We will be presenting a refined version of this seminar in February as part of the DSC's seminar series on AI (along with UVic's Corey Davis).
2) An introductory GenAI prompting workshop
The UofA DSC requested that I deliver the UVic Digital Scholarship Commons' introductory GenAI prompting workshop to complement their more technical sessions on open-source local LLM implementations. This was a very direct knowledge-sharing opportunity: taking something we already do well at UVic Libraries and making it useful in another setting.
Outcomes
The most meaningful outcome was collaborative inquiry with librarians, archivists, and faculty. Conversations surfaced common areas of concern where technical AI work could help: evaluation, provenance, and workflow fit.
One example is how to evaluate VLLM OCR beyond “word error rate,” and how to represent provenance and uncertainty in outputs. Given the short timeframe of my visit, the value wasn’t in deliverables so much as building shared language, comparing assumptions, and identifying collaboration pathways grounded in real needs of libraries and archives.
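For context, word error rate is the baseline metric those conversations aimed to move beyond. A minimal sketch of how it is conventionally computed (word-level edit distance normalized by reference length; the function name is illustrative, not from any pipeline discussed during the visit):

```python
# Minimal word error rate (WER) sketch: Levenshtein distance over word
# tokens, normalized by the reference length. Illustrative only.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            cost = 0 if r == h else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1] / max(len(ref), 1)

# One substituted word out of four -> WER of 0.25.
print(word_error_rate("the quick brown fox", "the quik brown fox"))
```

A single number like this says nothing about *which* errors matter (proper names vs. stop words), where they occur on the page, or how confident the model was, which is precisely why richer provenance- and uncertainty-aware evaluation came up in these conversations.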
The visit reinforced that bottlenecks in extracting content from digitized materials, such as OCR, are often not a matter of model capability alone, but of evaluation, framing, and workflow design, especially where provenance, trust, and accountability matter.
Next Steps
The immediate next step is synthesis. I will be turning the strongest emergent themes into concrete research questions that can be explored jointly, potentially with shared datasets, evaluation frameworks, and follow-on conversations between institutions. We will be making future posts about these.
More broadly, this visit helped clarify an approach I want to carry forward: when working with libraries and archives, the most productive starting point is not “what can AI do?” but “what do you need to know, and what obligations shape that need?” Tools come later, if at all.
A Closing Note on Collaboration
I’m grateful to the UofA DSC’s Peter Binkley and Harvey Quamen for their time, generosity, and willingness to engage deeply with the questions we at Kula are grappling with. I also appreciate everyone who made time to attend the seminars, participate in the workshop, or meet with me individually.
The visit made clear that what memory institutions can now accomplish is restricted less by the technical capacities we’ve grown accustomed to than by how well emerging technology aligns with those institutions’ values, and by how far institutional and personal resources can stretch toward making the once impossible possible. Those problems can only be addressed well through collaborative, cross-context work.
If you’re working in a library/archive setting and navigating OCR or GenAI questions, I’d be glad to compare notes. The most useful thing about these conversations is that they keep the work grounded in what access actually requires, not just what models can produce.

