Monday June 8th
Michaela Mahlberg (FAU Erlangen-Nürnberg & University of Birmingham)
Lecture: TBA
Workshop: TBA
Tuesday June 9th – Morning Session
Lynne Bowker (Université Laval)
Lecture: Exploring the potential and pitfalls of AI for multilingual scholarly publishing
Since the mid-twentieth century, English has become entrenched as the central language of scholarly communication, creating inequities for non-Anglophone scholars, as well as for science and society more broadly. AI tools such as neural machine translation and large language models have the potential to foster a more multilingual scholarly communication ecosystem, but they are not a panacea. This lecture weighs some of the gains and losses, as well as the challenges and opportunities, that arise when we apply AI to multilingual scholarly publishing.
Suggested pre-reading:
(1) Amano, T. et al. 2023. The manifold costs of being a non-native English speaker in science. PLoS Biology 21(7): e3002184. https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3002184
(2) Bowker, L., M. Laakso, and J. Pölönen. 2025. Making the case for multilingual scholarly communication. Canadian Journal of Information and Library Science 48(1): 112-116. https://doi.org/10.5206/cjils-rcsib.v48i1.22292
Workshop: Developing a (mini) business case for multilingual scholarly events
This workshop issues a challenge to participants. Imagine that you are going to organize a multilingual one-day student event in Digital Humanities. Aspects you need to consider include deciding how many (and which) languages to include, managing a multilingual website and call for proposals, conducting multilingual peer review of submissions, supporting the delivery of posters and/or presentations in different languages, and publishing multilingual proceedings. Working in groups, participants will consider which types of AI tools could be used to support these tasks, where AI may be less desirable, and what other solutions could be used instead. How or where might AI tools be integrated into the workflow? What AI-related policies might need to be developed or implemented? What justifications can you provide for your decisions?
Suggested pre-reading:
(1) Burton-Jones, Andrew, et al. 2025. This article is not just in English: Making science more inclusive and impactful with artificial intelligence translation. Australasian Journal of Information Systems 29. https://doi.org/10.3127/ajis.v29.5875
(2) Warburton, Kara. 2024. Developing a business case for managing terminology. http://termologic.com/wp-content/uploads/2024/05/roi-article-warburton.pdf
Tuesday June 9th – Afternoon Session
Marcus Müller (TU Darmstadt)
Lecture: Natural Meaning and Artificial Intelligence. A View from Corpus Linguistics
Workshop: Natural Meaning and Artificial Intelligence. A View from Corpus Linguistics
Wednesday June 10th – Morning Session
Josef Schmied (TU Chemnitz)
Lecture: AI Ethics for Scholarly Journals: Dos and don’ts in detail and explicitly
This presentation looks at Artificial Intelligence (AI) applications from a publishing perspective. It shows concrete examples and answers ethical questions related to all steps of scholarly research, from brainstorming and literature research to data publishing and proofreading, always asking: What is not permitted? What must be declared? What is even expected? The focus is on concrete AI uses from the perspective of the International Association of Scientific, Technical & Medical Publishers, which is particularly relevant for PhD students who want to publish in scholarly journals. New conventions demand explicit indications, which contribute to a professional impression of journal articles and a successful submission.
Workshop: Practising AI declarations: authors’ responsibilities and choices
The related practical part asks students to write the explicit declarative statements that must accompany modern journal articles to signal the permitted uses of AI tools. It also discusses concrete examples of academic writing trends influenced by current AI usage and emphasises authors’ responsibility to adjust their style to readers’ perspectives rather than technical conventions.
Thursday June 11th – Morning Session
Nataliia Laba (University of Groningen)
Lecture: Text-image collapse: the challenge of multimodal generative AI
One of the challenges of multimodal generative AI concerns how we create and make sense of images. Although AI-generated images have attracted widespread public attention only relatively recently, decades of theoretical and artistic work have already laid the groundwork for understanding the image as a computational object – technical (Flusser), algorithmic (Somaini), networked (Dewdney & Sluis), or operational (Parikka). Building on these genealogies, this lecture addresses the new conceptual and methodological challenge of text-image collapse introduced by multimodal generative AI. Our focus is on three questions:
- What can platform affordances teach us about multimodal generative AI?
- What can user prompting practices reveal about how people engage with this technology?
- How can AI-generated images be analyzed as part of visual generative communication?
Two kinds of examples are used: aesthetic remediation, exploring how artistic styles are absorbed and repurposed by generative AI models, and political communication, examining how war and conflict are represented and mediated through multimodal generative AI.
Workshop: Text-image collapse: resolving some of the challenges of multimodal generative AI
In this hands-on workshop, we will examine how text-image collapse in multimodal generative AI applies to actual data, and what kinds of insights can be found through the study of generative platform affordances, prompts, and AI-generated images themselves. We will build system networks to map the low-level affordances of selected image generators and will distinguish between different kinds of prompting strategies. We will also discuss the relationships between prompt specificity and generative renderings, considering how different levels of textual detail shape visual outputs.
Suggested pre-reading:
(1) Bajohr, H. (2024). Operative ekphrasis: The collapse of the text/image distinction in multimodal AI. Word & Image, 40(2), 77–90. https://doi.org/10.1080/02666286.2024.2330335
(2) Laba, N., & Bouko, C. (2026). Introduction: Making sense of AI-generated images. In C. Bouko & N. Laba (Eds.), Six critical lenses on AI-generated images (pp. 1–24). CRC Press. https://doi.org/10.1201/9781003740261-1
(3) Weatherby, L., & Justie, B. (2022). Indexical AI. Critical Inquiry, 48(2), 381–415. https://doi.org/10.1086/717312
Thursday June 11th – Afternoon Session
Wu Ping (Beijing Language and Culture University)
Lecture: Large Language Models for Emotion Analysis in Literary Translation: A Case Study of Yu Hua’s “To Live”
This study investigates how large language models (LLMs) can support theory-guided emotion analysis in literary translation. Drawing on Appraisal Theory, we develop an annotation workflow that operationalises evaluative meaning across the Affect, Engagement, and Graduation subsystems and apply it to Yu Hua’s novel To Live. The analysis compares the Chinese source text, Michael Berry’s English translation, and the translations generated by contemporary LLMs. We adopt a theory-guided prompting strategy to produce structured evaluative annotations and calibrate the procedure against a human-annotated gold-standard subset to assess annotation reliability and the effects of prompt design. The results show that theory-informed prompting improves the stability of appraisal-based annotation and enables systematic comparison of evaluative patterns across translation conditions.
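The calibration step described in the abstract, comparing LLM annotations against a human-annotated gold-standard subset, is typically quantified with a chance-corrected agreement measure such as Cohen’s kappa. The sketch below is a minimal illustration of that idea; the labels and example data are hypothetical and not drawn from the study itself.

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa: chance-corrected agreement between two annotators."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[label] * cb[label] for label in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical per-sentence Appraisal subsystem labels on a small gold subset
gold = ["Affect", "Engagement", "Affect", "Graduation", "Affect", "Engagement"]
llm  = ["Affect", "Engagement", "Affect", "Affect",     "Affect", "Graduation"]

print(round(cohens_kappa(gold, llm), 3))  # → 0.429
```

In practice, kappa scores computed under different prompt designs could be compared to assess which theory-guided prompt yields the most reliable annotations.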
Suggested pre-reading:
(1) Rebora, S. (2023). Sentiment analysis in literary studies: A critical survey. Digital Humanities Quarterly, 17(2), 1-17.
(2) Martin, J. R., & White, P. R. (2005). The Language of Evaluation: Appraisal in English. London & New York: Palgrave Macmillan.
Panel:
Sun Hongbo, Mitigating AI’s Pseudo-Understanding: A UG-Based Approach to Sinitic Comparative Syntax
Abstract: This paper is anchored in Noam Chomsky’s Universal Grammar (UG) framework and Ian Roberts’ parametric theory of comparative syntax, while engaging their joint critique of AI’s “pseudo-understanding”. Drawing on the diachronic syntactic corpus of 7–14th century Classical and Vernacular Chinese texts (e.g., Song Dynasty colloquial records, Yuan Dynasty zaju scripts) and a synchronic corpus of 8 Sinitic dialects (Mandarin, Wu, Yue, Min, etc.), I demonstrate that while AI tools efficiently annotate large-scale syntactic features (e.g., topic-fronting alternations, classifier system evolution, and resultative compound formation), their statistical pattern-matching fails to capture the parametric variation and path-dependent logic of Sinitic language evolution. I propose an explanation-first collaborative framework: AI automates corpus annotation, but human scholars retain authority in interpreting why parametric shifts (e.g., the grammaticalization of classifiers or divergence in topic prominence across dialects) occurred, rather than merely identifying correlations. This approach mitigates AI’s risks of reducing comparative syntax to superficial pattern recognition, safeguards the explanatory depth of UG-based Sinitic linguistics, and advances the goal of aligning AI with humanistic scholarly rigor.
Qian Li, From Passive Retrieval to Critical Curation: Redefining the Literature Review in the Age of AI
Abstract: The rapid integration of Generative AI (GenAI) into academic workflows has transformed the literature review from a labor-intensive process of manual discovery into a high-speed exercise in automated synthesis. However, this shift presents a “double-edged sword” for Digital Humanities: while AI offers unprecedented efficiency in mapping vast scholarly corpora, it risks fostering intellectual laziness by replacing deep cognitive engagement with passive retrieval. As scholars, we face a critical juncture: is AI expanding our research horizons, or is it hollowing out the analytical foundations of our work?
The presenter observes a unique cultural and linguistic dimension to this debate. For non-native English-speaking (L2) researchers, AI serves as a powerful bridge, lowering the linguistic barriers to international publication and academic discourse. Yet, this reliance introduces the risk of “Linguistic Homogenization.” Because most Large Language Models are trained on Western-centric datasets, they tend to standardize the unique rhetorical voices and cultural nuances of Chinese scholarship into global academic tropes. When the AI “refines” the language, it may inadvertently erase the original perspective of the researcher, leading to a loss of epistemic diversity in the global humanities.
To manage these risks, this presentation proposes a transition from “Passive Retrieval” toward a “Critical Curation” framework. This model moves away from treating AI as an interpretive replacement and instead positions it as a structural assistant within a “human-in-the-loop” system. I introduce practical management strategies, including “Triangulated Verification”—cross-referencing AI outputs against multiple models and primary sources—and the “Audit Trail” method for transparent AI disclosure. By redefining the scholar’s role as a curator of synthesized knowledge rather than a consumer of automated summaries, we ensure that digital communication remains a rigorous, human-centered endeavor.
Cao Yanli, The Limits of Mechanical Reasoning in the Generation of Embodied Metaphors by Large Language Models
Abstract: Embodied cognition theory posits that abstract concepts are rooted in sensorimotor experience. This study investigates whether disembodied Large Language Models (LLMs) can authentically simulate such embodied metaphors. Utilizing GPT-5.4, we conducted generative probe tasks covering seven core body parts: heart, head, eye, mouth, hand, feet, and body. These body parts cover multiple phenomenological dimensions, including emotional experience, proprioception, and sensorimotor processes, which comprehensively reflect the scope of embodied experience. For the analytical approach, this study conducts a qualitative analysis of the generated results along dimensions including embodiment intensity, richness of perceptual detail, dynamicity of experience, and compatibility with cultural contexts. Results indicate that the model demonstrates a sophisticated command of the functional mappings between physical attributes and abstract domains. However, qualitative analysis suggests a potential disconnect between structural logic and visceral intuition. Specifically, while metaphors involving muscle tension appear natural, those describing cognitive states often exhibit traits of “mechanical reasoning.” These findings imply that GPT-5.4 has likely acquired the syntactic logic of embodied language but may not yet fully possess the experiential semantics of sensation. The model functions more as a rational observer of physical rules than as an authentic embodied experiencer.
Friday June 12th – Morning Session
Niall Curry (Manchester Metropolitan University)
Lecture: A critical reflection on GenAI use in applied linguistics research
The role of Generative AI (GenAI) in the research process has emerged as a key topic of critical debate in corpus linguistics. For every proposed boon that the use of GenAI heralds, there is a complementary bane, and while both the body of research on GenAI use and research in corpus linguistics using GenAI continue to grow, we do not appear to have arrived at any clear consensus surrounding its affordances and limitations. In this talk, I draw on some recent work that addresses the issue of GenAI use in corpus linguistics research. The talk spotlights some of the key ideas emerging from this work, addressing questions of GenAI literacy, ethics, knowledge-making, and the relevance of large language models for corpus linguistics research. Through this exploration of emergent key issues, I reflect on the ‘goodness of fit’ of GenAI for our research activity and consider the research areas in which the application of GenAI may a) be ineffective, b) be antithetical to our research agenda, or c) pose some opportunity for research and knowledge-making.
Workshop: Applying a human perspective to GenAI use in the research process
Building on the lecture, I will present a case study demonstrating the application of the established framework to language teaching and learning. I will then ask participants to reflect on the application of this framework to research of their own design. This will afford participants an opportunity to localise this critical reflection within their own research paradigms and determine the affordances and limitations of GenAI therein. Participants will have the opportunity to present their reflections and receive feedback on their proposed research and engagement with GenAI.
Suggested pre-reading:
(1) Curry, N., McEnery, T., & Brookes, G. (2025). A question of alignment – AI, GenAI and applied linguistics. Annual Review of Applied Linguistics, 45, 315-336. https://doi.org/10.1017/S0267190525000017
(2) Pérez-Paredes, P., Curry N., & Aguado Jiménez, P. (2025). Integrating critical corpus and AI literacies in applied linguistics: A mixed-methods study. Computer Assisted Language Learning, 1-27. https://doi.org/10.1080/09588221.2025.2569351
(3) Pérez-Paredes, P., Curry N., & Ordoñana-Guillamón, C. (2025). Critical AI literacy for applied linguistics and language education students. Journal of China Computer-Assisted Language Learning, 5(1), 1-40. https://doi.org/10.1515/jccall-2025-0005
(4) Curry, N., Baker, P., & Brookes, G. (2024). Generative AI for corpus approaches to discourse studies: A critical evaluation of ChatGPT. Applied Corpus Linguistics, 4(1), 1-9. https://doi.org/10.1016/j.acorp.2023.100082

