Michaela Mahlberg (FAU Erlangen-Nürnberg & University of Birmingham)
Lecture: TBA

Workshop: TBA


Lynne Bowker (Université Laval)
Lecture: TBA

Workshop: TBA


Marcus Müller (TU Darmstadt) 
Lecture: TBA

Workshop: TBA


Josef Schmied (TU Chemnitz)
Lecture: AI Ethics for Scholarly Journals: Dos and don’ts in detail and explicitly
This presentation looks at Artificial Intelligence (AI) applications from a publishing perspective. It shows concrete examples and answers ethical questions related to all steps of scholarly research, from brainstorming and literature research to data publishing and proofreading, always asking: What is not permitted? What must be declared? What is even expected? The focus is on concrete AI uses from the perspective of the International Association of Scientific, Technical & Medical Publishers, which is particularly relevant for PhD students who want to publish in scholarly journals. New conventions demand explicit declarations, which contribute to a professional impression of journal articles and to a successful submission.

Workshop: Practising AI declarations: authors’ responsibilities and choices
The related practical part asks students to write the explicit declarative statements that must accompany modern journal articles to signal the permitted uses of AI tools. It also discusses concrete examples of academic writing trends influenced by current AI usage and emphasises authors’ responsibility to adjust their style to readers’ perspectives rather than to technical conventions.


Nataliia Laba (University of Groningen)
Lecture: Text-image collapse: the challenge of multimodal generative AI
Multimodal generative AI is challenging how we create and make sense of images. While AI-generated images have captured widespread public attention relatively recently, decades of theoretical and artistic work have already laid the groundwork for understanding the image as a computational object – technical (Flusser), algorithmic (Somaini), networked (Dewdney & Sluis), or operational (Parikka). Building on these genealogies, this lecture addresses conceptual and methodological challenges brought about by multimodal generative AI. Concretely, I focus on two shifts:
– AI image production, representation, and circulation;
– collapse of the text-image distinction in multimodal generative AI.
Two kinds of examples are used: political communication, focusing on how war and conflict are visually represented and mediated through AI, and aesthetic remediation, exploring how traditional artistic styles are absorbed and repurposed by generative AI models.

Suggested pre-reading: Laba, N., & Bouko, C. (2026). Introduction: Making sense of AI-generated images. In C. Bouko & N. Laba (Eds.), Six critical lenses on AI-generated images (pp. 1–24). CRC Press. https://doi.org/10.1201/9781003740261-1 (to be published in May)

Workshop: Text-image collapse: resolving some of the challenges of multimodal generative AI
In this hands-on workshop, we will focus on how text-image collapse in multimodal generative AI applies to actual data, and what kinds of answers can be found through a systematic study of image and prompt corpora. For image data, I demonstrate how to develop an annotation pipeline with attribute-value pairs to document and interpret representational patterns in AI-generated images.
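The annotation pipeline described above can be sketched as a small, schema-driven workflow. The attributes and values below are purely illustrative placeholders (the workshop will use its own coding scheme); the sketch only shows the general pattern of validating attribute-value annotations against a closed schema and tallying representational patterns across an image corpus.

```python
from collections import Counter

# Hypothetical annotation schema: each attribute maps to a closed set of values.
# The actual attributes used in the workshop may differ.
SCHEMA = {
    "setting": {"urban", "rural", "interior", "unclear"},
    "human_presence": {"individual", "group", "none"},
    "style": {"photorealistic", "painterly", "graphic"},
}

def validate(annotation: dict) -> dict:
    """Check one image annotation against the schema; reject unknown attributes or values."""
    for attr, value in annotation.items():
        if attr not in SCHEMA:
            raise ValueError(f"unknown attribute: {attr}")
        if value not in SCHEMA[attr]:
            raise ValueError(f"invalid value {value!r} for attribute {attr!r}")
    return annotation

def pattern_counts(annotations: list[dict], attr: str) -> Counter:
    """Tally value frequencies for one attribute across an annotated corpus."""
    return Counter(a[attr] for a in annotations if attr in a)

# Toy corpus of two annotated AI-generated images.
corpus = [
    validate({"setting": "urban", "human_presence": "group", "style": "photorealistic"}),
    validate({"setting": "urban", "human_presence": "none", "style": "graphic"}),
]
print(pattern_counts(corpus, "setting"))  # → Counter({'urban': 2})
```

In practice each annotation record would also carry the image identifier and the generating prompt, so that representational patterns can be cross-tabulated against prompt features.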

Suggested pre-reading: Laba, N., Roman, N., & Parmelee, J. H. (2025). Memory of the multitude and representation in AI-generated images of war. Memory, Mind & Media, 4(e14). https://doi.org/10.1017/mem.2025.10011


Wu Ping (Beijing Language and Culture University)
Lecture: Large Language Models for Emotion Analysis in Literary Translation: A Case Study of Yu Hua’s “To Live”
This study investigates how large language models (LLMs) can support theory-guided emotion analysis in literary translation. Drawing on Appraisal Theory, we develop an annotation workflow that operationalises evaluative meaning across the Affect, Engagement, and Graduation subsystems and apply it to Yu Hua’s novel To Live. The analysis compares the Chinese source text, Michael Berry’s English translation, and the translations generated by contemporary LLMs. We adopt a theory-guided prompting strategy to produce structured evaluative annotations and calibrate the procedure against a human-annotated gold-standard subset to assess annotation reliability and the effects of prompt design. The results show that theory-informed prompting improves the stability of appraisal-based annotation and enables systematic comparison of evaluative patterns across translation conditions.
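The calibration step against a human-annotated gold standard typically reduces to an agreement statistic between LLM-produced labels and human labels. Below is a minimal sketch of Cohen’s kappa over appraisal-subsystem labels; the label values and toy data are illustrative, not the study’s actual annotation scheme or results.

```python
from collections import Counter

def cohens_kappa(labels_a: list[str], labels_b: list[str]) -> float:
    """Cohen's kappa: chance-corrected agreement between two label sequences
    over the same items (e.g., LLM annotations vs. a human gold standard)."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: proportion of items with identical labels.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement under chance, from each annotator's label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Toy example with Appraisal-subsystem labels (illustrative data only).
gold = ["Affect", "Engagement", "Affect", "Graduation"]
model = ["Affect", "Engagement", "Graduation", "Graduation"]
print(round(cohens_kappa(gold, model), 3))  # → 0.636
```

Comparing kappa across prompt variants is one straightforward way to quantify the effect of prompt design on annotation reliability.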

Suggested pre-reading:
(1) Rebora, S. (2023). Sentiment analysis in literary studies: A critical survey. Digital Humanities Quarterly, 17(2), 1–17.
(2) Martin, J. R., & White, P. R. R. (2005). The Language of Evaluation: Appraisal in English. London & New York: Palgrave Macmillan.

Panel:

Sun Hongbo, Mitigating AI’s Pseudo-Understanding: A UG-Based Approach to Sinitic Comparative Syntax
Abstract: This paper is anchored in Noam Chomsky’s Universal Grammar (UG) framework and Ian Roberts’ parametric theory of comparative syntax, while engaging their joint critique of AI’s “pseudo-understanding”. Drawing on the diachronic syntactic corpus of 7–14th century Classical and Vernacular Chinese texts (e.g., Song Dynasty colloquial records, Yuan Dynasty zaju scripts) and a synchronic corpus of 8 Sinitic dialects (Mandarin, Wu, Yue, Min, etc.), I demonstrate that while AI tools efficiently annotate large-scale syntactic features (e.g., topic-fronting alternations, classifier system evolution, and resultative compound formation), their statistical pattern-matching fails to capture the parametric variation and path-dependent logic of Sinitic language evolution. I propose an explanation-first collaborative framework: AI automates corpus annotation, but human scholars retain authority in interpreting why parametric shifts (e.g., the grammaticalization of classifiers or divergence in topic prominence across dialects) occurred, rather than merely identifying correlations. This approach mitigates AI’s risks of reducing comparative syntax to superficial pattern recognition, safeguards the explanatory depth of UG-based Sinitic linguistics, and advances the goal of aligning AI with humanistic scholarly rigor.

Qian Li, From Passive Retrieval to Critical Curation: Redefining the Literature Review in the Age of AI
Abstract: The rapid integration of Generative AI (GenAI) into academic workflows has transformed the literature review from a labor-intensive process of manual discovery into a high-speed exercise in automated synthesis. However, this shift presents a “double-edged sword” for Digital Humanities: while AI offers unprecedented efficiency in mapping vast scholarly corpora, it risks fostering intellectual laziness by replacing deep cognitive engagement with passive retrieval. As scholars, we face a critical juncture: is AI expanding our research horizons, or is it hollowing out the analytical foundations of our work?
The presenter observes a unique cultural and linguistic dimension to this debate. For non-native English-speaking (L2) researchers, AI serves as a powerful bridge, lowering the linguistic barriers to international publication and academic discourse. Yet, this reliance introduces the risk of “Linguistic Homogenization.” Because most Large Language Models are trained on Western-centric datasets, they tend to standardize the unique rhetorical voices and cultural nuances of Chinese scholarship into global academic tropes. When the AI “refines” the language, it may inadvertently erase the original perspective of the researcher, leading to a loss of epistemic diversity in the global humanities.
To manage these risks, this presentation proposes a transition from “Passive Retrieval” toward a “Critical Curation” framework. This model moves away from treating AI as an interpretive replacement and instead positions it as a structural assistant within a “human-in-the-loop” system. I introduce practical management strategies, including “Triangulated Verification”—cross-referencing AI outputs against multiple models and primary sources—and the “Audit Trail” method for transparent AI disclosure. By redefining the scholar’s role as a curator of synthesized knowledge rather than a consumer of automated summaries, we ensure that digital communication remains a rigorous, human-centered endeavor.

Cao Yanli, The Limits of Mechanical Reasoning in the Generation of Embodied Metaphors by Large Language Models
Abstract: Embodied cognition theory posits that abstract concepts are rooted in sensorimotor experience. This study investigates whether disembodied Large Language Models (LLMs) can authentically simulate such embodied metaphors. Utilizing GPT-5.4, we conducted generative probe tasks covering seven core body parts: heart, head, eye, mouth, hand, feet, and body. These body parts span multiple phenomenological dimensions, including emotional experience, proprioception, and sensorimotor processes, and thus comprehensively reflect the scope of embodied experience. Analytically, the study conducts a qualitative analysis of the generated results along the dimensions of embodiment intensity, richness of perceptual detail, dynamicity of experience, and compatibility with cultural contexts. Results indicate that the model demonstrates a sophisticated command of the functional mappings between physical attributes and abstract domains. However, qualitative analysis suggests a potential disconnect between structural logic and visceral intuition. Specifically, while metaphors involving muscle tension appear natural, those describing cognitive states often exhibit traits of “mechanical reasoning.” These findings imply that GPT-5.4 has likely acquired the syntactic logic of embodied language but may not yet fully possess the experiential semantics of sensation. The model functions more as a rational observer of physical rules than as an authentic embodied experiencer.


Niall Curry (Manchester Metropolitan University) 
Lecture: TBA

Workshop: TBA