AI & Creativity: Reflections on Language, Authorship, and the Future of Art

Image: a woman walks past a street-art mural depicting AI themes – a typewriter consuming a book, with sci-fi elements, in a South American city setting.
Reading Time: 20 minutes

Earlier this month, I had the pleasure of attending a hybrid symposium hosted by the University of Leeds, focusing on AI and creativity. This event was serendipitously brought to my attention by Dr. Emily Middleton, whom I’d previously connected with regarding my BARD409 project. She kindly pointed me toward the symposium, organized by Dr. Mel Evans. Given my long-standing interest in computers and their creative potential, this topic was right up my street.

The symposium kicked off at 10:30 AM UK time on Friday, June 6th, which for me in Japan was much later at 6:30 PM JST. As I joined from the other side of the world, there was a charming sense of surprise from many participants, who mostly seemed to be gathered in person or knew each other from the vibrant academic scene in Leeds. The other participants were all based at Northern universities, mostly in Yorkshire – my birthplace and old stomping ground! It was genuinely refreshing to find such a focused group of people deeply interested in this field.

Panel 1: Mimicry, False Profits, and Creative Imitation

The first speaker was Serge Sharoff from the University of Leeds, with his talk titled: ‘From Mimicry to Meaning: Investigating differences between AI and Human Language’. Serge immediately highlighted several stylistic quirks that often give away AI-generated text. A particularly striking one was what linguists might call negative-positive parallelism or antithesis – that recurring pattern where the AI negates a negative and then asserts a positive. Think “It’s not about what you can’t do, it’s about what you can achieve,” or “This isn’t a setback; it’s a stepping stone to growth.”
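
To make the pattern concrete, here's a tiny sketch – entirely my own, not anything from Serge's methodology – of how you might flag this cadence with a crude regular expression:

```python
import re

# A rough, over-simplified pattern for the "not X, but Y" cadence:
# a negated clause followed by a contrastive reassertion. This is my own
# illustrative heuristic, not Serge Sharoff's actual method.
ANTITHESIS = re.compile(
    r"\b(?:isn't|is not|not)\b[^.;]{3,60}[,;]\s*(?:but|it's|it is)\b",
    re.IGNORECASE,
)

samples = [
    "It's not about what you can't do, it's about what you can achieve.",
    "This isn't a setback; it's a stepping stone to growth.",
    "The workshop starts at noon.",
]

for s in samples:
    flag = "AI-ish cadence" if ANTITHESIS.search(s) else "no match"
    print(f"{flag:>14}: {s}")
```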

His presentation included a detailed chart illustrating the frequencies of negation across various text types. For instance, content categorized as ‘Fiction’ showed a notably high proportion of clauses with negation (26.32% mean, 17.10% median), while ‘Promote’ and ‘Inform’ texts had very low percentages. He provided compelling examples of this AI stylistic feature: “Air could not freeze her, fire could not burn her, water could not drown her, earth could not bury her,” (which seems to be from Maria Metsalu’s Mademoiselle X performance, which looks amazing) alongside more conversational examples typical of chatbots.

This specific cadence, once noticed, can become quite cloying, as I can personally attest from my work editing outputs for BARD409 and other writing. It’s uncanny, because just a few days after Serge’s talk, a popular post on Reddit’s r/ChatGPT subreddit received almost 700 upvotes, titled “Make it stop!”. Users there were venting about this exact “not X, but Y” pattern, echoing Serge’s observations perfectly.

Serge’s analysis delved deep into corpus and linguistic methods. He brought up Halliday’s 1978 model of meaning-making in society, presenting a diagram that visually mapped the process from “language as system” through various contextual layers – “context of culture,” “cultural domain,” “situation type,” “register,” and “text type” – ultimately leading to “language as text” and “meaning making.” The diagram underscored the semiotic nature of human societies and their cumulative culture. The discussion then turned to intentionality, a pivotal point in his presentation: while humans inherently exhibit intentionality in their language use, AI models, at least currently, do not. This distinction, the presence or absence of true intentionality, formed a significant core of his argument regarding the fundamental differences between human and machine language.

Next up was a truly compelling talk by Professor Claire Hardaker of Lancaster University, titled ‘Bot or Not: New False Profits?’. As a leading figure in forensic linguistics, Professor Hardaker’s research at Lancaster delves into deceptive, manipulative, and aggressive language in online environments, making her perfectly positioned to explore the nuances of AI-generated content. Her talk revolved around the fascinating “Bot or Not” project, a unique resource (available at https://www.lancaster.ac.uk/linguistics/news/bot-or-not-audio-edition-can-you-tell-whos-talking) designed to test how well people can distinguish between human-produced content and that generated by large language models or voice cloning.

Claire presented striking findings from the “Bot or Not” experiment. One slide visually explained the “Bot or Not” challenge, depicting an interface where users would listen to audio and then decide if it was human or AI. Her subsequent slides revealed the challenging results, often showing that participants, despite their confidence, frequently struggled significantly to differentiate between human and AI output, highlighting the alarming sophistication of current AI. She then moved to real-world, high-profile cases of AI misuse. A dedicated slide laid out the chilling February 2024 incident in Hong Kong where a finance employee at a multinational firm was duped into transferring US$25.6 million (HK$200 million). The sophisticated scam involved an entire deepfake video conference call, where AI-generated likenesses and voices of the CFO and other colleagues convinced the employee to make the transfers, shattering the illusion of trusted communication. The growing criminal aspects of AI, particularly with deepfake audio and text, are a serious concern that her work actively investigates, examining topics from online abuse to human trafficking.

However, Claire also offered a crucial counter-argument: not all AI is inherently “bad.” Her slides explored the concept of AI’s potential to democratize creation. She pointed out that “dominant entities” in the music industry *cough cough, Sony* are notoriously Exploitative, Exclusionary and Oppressive.

My hero Steve Albini talks about that in his 2014 keynote at Face The Music, which you can watch here. AI tools could lower barriers to entry for creators while also delivering efficiency gains. This idea deeply resonates with me and my work at Hungry Wolf Press. Many of the authors I collaborate with, often not full-time writers, leverage AI tools to boost their productivity and streamline their creative process. This isn’t about laziness or simply churning out AI-generated content; it’s about the intelligent application of tools to accelerate their output, allowing them to focus on the higher-level creative elements. My own process for writing this very article – dictating raw notes to be processed by an LLM trained on my writing, then refined through my editing – is another example of using AI to produce (hopefully!) higher-quality work more efficiently.

One scary thing was that most humans perform “worse than chance (7.5/15)” at accurately identifying whether a text was written by a bot. More concerning still is that one of her current students has run a study showing detection accuracy drops from 40% to 4% depending on whether or not we are primed to be looking for AI or fakes. Perhaps this allows us some greater empathy for the employee who lost his company that US$25.6 million in Hong Kong.

Professor Hardaker’s insights were truly captivating, and I plan to incorporate “Bot or Not” into my own teaching. I’m currently leading a writing workshop where we grapple with the complexities of fake news, misinformation, disinformation, and malinformation – a perfect context to explore the ethical dimensions of AI writing with my students.

Following Claire’s insightful presentation, Charles Lam took the floor, focusing on ‘Imitation in human writing: an argument against incompatibility between machine and creativity’. As an EAP (English for Academic Purposes) instructor, Charles’s talk resonated strongly with practical applications of language and learning.

Charles opened by inviting us to consider fundamental questions: “Can machines think (or write)?” and “Can we tell?” These queries led to further thought-provoking points, exploring whether it truly matters, when it matters, and how “organic, ‘wetware’ computers” (humans) are mechanistically structured. He drew parallels with Levinson’s (2025) concept of “The Interaction Engine,” and the intricate connections highlighted in Gödel, Escher, Bach: An Eternal Golden Braid, hinting at deeper parallels between language and other symbolic systems.

A significant portion of his talk revisited Turing’s (1950) seminal work, “Computing Machinery and Intelligence,” and “The Imitation Game.” Charles presented Turing’s original proposition: “I PROPOSE to consider the question, ‘Can machines think?'” and then adapted it to contemporary AI: “What will happen when a machine takes the part of A [the man] in this game?” This reframed the classic Turing Test to explore AI’s role in creative imitation.

He then delved into Chomsky’s perspective on “Creativity,” specifically referencing his “Creative Aspects of Language Use (CALU)”. Charles emphasized that human speakers create novel sentences spontaneously – sentences we’ve never heard ourselves produce. This ability stems from learning grammatical rules and patterns, rather than simply memorizing individual phrases, as famously demonstrated by the ‘wug-test’ in morphology.
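
As a playful aside, the rule-driven generalization the wug-test demonstrates is simple enough to sketch in a few lines. This toy version – my own illustration, using the textbook English plural allomorphy rules and Berko-style nonce words – pluralizes words no speaker has ever memorized:

```python
# Toy illustration of the generalization the wug-test probes: English
# speakers pluralize nonce words they have never heard by rule, not recall.
# The allomorph rules below are the standard textbook ones; the nonce
# words are Berko-style inventions.

def pluralize(noun: str) -> str:
    """Apply regular English plural allomorphy to a (nonce) noun."""
    sibilants = ("s", "z", "sh", "ch", "x")
    if noun.endswith(sibilants):
        return noun + "es"   # /ɪz/ after sibilants: "tass" -> "tasses"
    return noun + "s"        # /s/ or /z/ elsewhere: "wug" -> "wugs"

for nonce in ["wug", "tass", "heaf", "gutch"]:
    print(f"This is a {nonce}. Now there are two {pluralize(nonce)}.")
```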

Charles illustrated how human creativity, even in seemingly spontaneous acts like humour, often follows patterns. In a slide titled “Jokes,” he explored the distinction between formulaic jokes and those demonstrating true novelty. He provided a delightful example from Philomena Cunk: “School in Shakespeare’s day and age was vastly different to our own. In fact, it was far easier, because he didn’t have to study Shakespeare.” This highlighted how humour, while typically seen as creative, can be constrained, goal-oriented, and productive through imitation, often mimicking specific styles.

Bringing this back to writing pedagogy, Charles drew strong similarities between the way students acquire academic writing style through imitation of conventions and how large language models mimic specific genres. He argued that AI, when utilized thoughtfully, could serve as a valuable teaching tool, empowering students and others to more effectively master academic writing and various creative forms by understanding and leveraging imitation.

Q&A: Data and Ethical AI

The Q&A session that followed this first panel was very engaging. I posed a question to Serge, keen to understand the optimal approach to training language models, particularly for stylistic imitation. My own experience with creating a custom GPT to rewrite Shakespeare in the style of specific authors taught me that retraining the model precisely on the target author was crucial; otherwise, it pulled in too many disparate styles from its general dataset. I wondered if this principle of focused, specific data held true across the board, or if large, generalized datasets were actually better.
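
For anyone curious, the “focused data” approach I’m describing amounts to assembling pairs of source passages and target-style rewrites for fine-tuning. Here’s a minimal sketch of preparing such a dataset in the chat-style JSONL format that several fine-tuning APIs accept; the file name and the two toy pairs are hypothetical stand-ins for a properly curated corpus:

```python
import json

# Hypothetical (source, rewrite) pairs: Shakespeare passages paired with
# hand-made rewrites in the target author's style. In practice these would
# be many curated examples, not the two toy ones below.
pairs = [
    ("Shall I compare thee to a summer's day?",
     "You remind me of July, I suppose. Hot. Overrated."),
    ("Now is the winter of our discontent.",
     "Winter again, and everyone in a foul mood, himself included."),
]

with open("style_finetune.jsonl", "w", encoding="utf-8") as f:
    for source, rewrite in pairs:
        record = {
            "messages": [
                {"role": "system",
                 "content": "Rewrite the passage in the target author's prose style."},
                {"role": "user", "content": source},
                {"role": "assistant", "content": rewrite},
            ]
        }
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```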

Serge’s answer was illuminating, especially concerning ethical considerations. He argued that, counter-intuitively, it’s actually better to have a larger and more generalized training dataset, particularly when addressing issues like refusing to produce sexist or racist content. The only way many large language models (LLMs) can effectively identify and filter out such problematic content is if they have actually been trained on data containing those very elements. So, while an AI will, in most cases, refuse to generate harmful content (unless jailbroken or a custom, locally-tailored model), it needs to be exposed to that content during training to develop the capacity to recognize and subsequently avoid it. It reinforces the idea that understanding the “bad” is essential for producing the “good” or, at least, the ethically responsible.

This reminded me of something I recently read about an artist who experimented with an image-generating AI by removing all its training data. What it produced was akin to a minimalist Rothko meets BBC Test Card G – a testament to how crucial comprehensive data is. You can read more about that fascinating experiment here.

Keynote: (L)imitations of AI and Creative Writing

After a short break, we were treated to the keynote address by Nassar Hussain, a Senior Lecturer in Creative Writing and a poet from Leeds Beckett University. His talk, ‘(L)imitations: some notes on AI and Creative Writing,’ explored the boundaries and possibilities when machines venture into the realm of poetic creation. On a personal note, Nassar clearly had strong connections within the symposium’s in-person community, underscoring the tight-knit network of creatives exploring experimental literature with technology. It made me wish I was there in person.

Nassar began by discussing what’s often cited as the very first book ever written by a computer: “The Policeman’s Beard is Half Constructed” (1984). Spoiler: it’s NOT the first – but it is, according to Leah Henrickson, “one of the first algorithmically authored books – if not the first – to be printed and marketed for a mass readership.” This peculiar and intriguing book was generated by a program named Racter, developed by William Chamberlain and T. Etter. It was uncanny, as I had literally just ordered my own copy of this increasingly collectible and unique item that very day, having noticed its price steadily climbing. Nassar pointed out that the original version of the book included a floppy disk, allowing users to interact directly with Racter. This seems unlikely, however, as the software (INRAC) was made commercially available in 1985 and retailed between $244.95 and $349 USD, which is, I am pleased to report, less than what I paid for my collectible first edition. Whatever the details, it quickly became clear to anyone interacting with Racter’s software that “The Policeman’s Beard” itself had been heavily edited by human hands, highlighting that it wasn’t a pure, unadulterated machine creation.

He moved on to illustrate different facets of machine-generated text, including a compelling generative poetry installation from around 2004 – a project he described as producing a staggering 18,000 pages of poetry. As part of this unique exhibition, attendees could walk around, pick up any poem they liked, and take away as many bits of paper as they wished. He recalled one such algorithmic poem titled “Institution in the Middle of Mine,” exemplifying the distinct style of such computer-generated works. This hands-on, take-away approach emphasized the sheer volume and accessibility of machine-generated text.

Nassar then showed us specific textual examples of how machines can be “inspired by” or imitate human works. One slide displayed a dense, highly experimental text, filled with the repeating conjunction “andor” and referencing various literary and conceptual figures like “finnegans wake,” “Barrett Watten,” “Bruce Andrews,” and “Lyn Hejinian.” He also discussed excerpts from “The Tapeworm Foundry: And or the Dangerous Prevalence of Imagination” by Darren Wershler-Henry (2004). This is a pivotal work in conceptual writing, exploring the limits of language and meaning, which fits perfectly into a discussion of imitation and limitation in creative writing. Another slide presented a segment of Dylan Thomas’s iconic poem, “Altarwise by owl-light in the half-way house.” Nassar used this to specifically highlight how a machine might imitate or draw inspiration from human poetic phrasing, pointing to the line “The atlas-eater with a jaw for news” as a powerful example of such algorithmic ‘learning’.

His talk also touched upon bpNichol, the renowned Canadian avant-garde poet. Given Nassar Hussain’s own academic work critically engaging with bpNichol’s writings, this connection further deepened the keynote’s exploration of experimental literary forms that often blur the lines between human and algorithmic creativity. The overall message was a nuanced exploration of what AI can, and cannot, do in the constantly evolving landscape of creative writing.

Panel 2: Stochastic Poetry, Commonwealth Narratives, and Emergent Effects in Film

Following the thought-provoking keynote, the second panel began with J.R. Carpenter, a distinguished artist, writer, and researcher specializing in creative practice at the University of Leeds. Known for her pioneering work using the internet as a medium for experimental writing since 1993, her talk delved into her captivating stochastic poetry projects and uneasy human-machine collaborations. Her presentation echoed much of the detail found in her recent article, “Text Generation and Other Uneasy Human-Machine Collaborations”.

Carpenter began by situating contemporary digital literary practices within a broader historical context, noting that “experimentation with generative, permutational, and combinatory text began long before digital computers came into being”. She traced this lineage from classical rhetorical figures like ‘permutatio’ in the fourth century to Jonathan Swift’s satirical machine in Gulliver’s Travels, designed to mechanically generate knowledge. Her own practice-led research, she explained, involves creative experiments with text generation, informed by these earlier human and machine generators.

While my notes recalled a commission about The Waste Land, Carpenter’s discussion broadly explored her stochastic text experiments, drawing parallels to pioneering works. A significant focus was Christopher Strachey’s Love Letter generator, programmed in Manchester, England, in 1952, pre-dating many commonly cited computer text experiments. This early generator for the Manchester University Computer Mark I employed a “slot” method, selecting words from lists to populate set-order sentences like “You are my (adjective) (noun)”. The outputs, though often described as “amateurish, outlandish, and even absurd”, carried a deliberate interrogation of authorship through its signature, “M.U.C.” (Manchester University Computer). Examples she presented included:

  • “DARLING LOVE YOU ARE MY AVID FELLOW FEELING. MY AFFECTION CURIOUSLY CLINGS TO YOUR PASSIONATE WISH. MY LIKING YEARNS FOR YOUR HEART. MY TENDER LIKING. YOU ARE MY WISTFUL SYMPATHY. YOURS LOVINGLY, M.U.C.”
  • “HONEY MOPPET, MY FONDEST FERVOUR LONGS FOR YOUR PASSION. MY YEARNING KEENLY LOVES YOUR ENTHUSIASM. MY SWEET YEARNING COVETOUSLY PINES FOR YOUR AFFECTIONATE LONGING. YOU ARE MY ANXIOUS BEING, MY EAGER SYMPATHY. YOURS BURNINGLY, M.U.C.”
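
Strachey’s slot method is simple enough to re-create in a few lines. The sketch below is my own toy homage, with abridged word lists in the spirit of the original (which drew its vocabulary from Roget’s Thesaurus) – not Strachey’s actual code:

```python
import random

# Abridged word lists in the spirit of Strachey's 1952 generator.
ADJECTIVES = ["AVID", "WISTFUL", "SWEET", "TENDER", "ANXIOUS", "EAGER"]
NOUNS      = ["FELLOW FEELING", "SYMPATHY", "LONGING", "HEART", "FERVOUR"]
ADVERBS    = ["CURIOUSLY", "KEENLY", "COVETOUSLY", "BURNINGLY"]
VERBS      = ["CLINGS TO", "YEARNS FOR", "PINES FOR", "LOVES"]

def love_letter() -> str:
    # Fixed sentence skeletons with word slots, as in the "slot" method.
    lines = [
        f"DARLING LOVE, YOU ARE MY {random.choice(ADJECTIVES)} {random.choice(NOUNS)}.",
        f"MY {random.choice(ADJECTIVES)} {random.choice(NOUNS)} "
        f"{random.choice(ADVERBS)} {random.choice(VERBS)} YOUR {random.choice(NOUNS)}.",
        f"YOU ARE MY {random.choice(ADJECTIVES)} {random.choice(NOUNS)}.",
        "YOURS " + random.choice(ADVERBS) + ", M.U.C.",
    ]
    return " ".join(lines)

print(love_letter())
```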

Carpenter emphasized that for these early text generators, the “attempt” or the process of creation itself is as important as the final output. As Noah Wardrip-Fruin (2011) suggests, the Love Letter generator functions as “a parody of a process,” brutally simplifying human letter-writing. It reminds me a little of the simple Fake News Generator that my friend and colleague Stephen Prime created for his sarcgasm.com site, though his is a love-letter to the post-truth absurdity of modern-day politics.

J.R. Carpenter then detailed her own project, TRANS.MISSION [A.DIALOGUE] (2011), a browser-based computer-generated dialogue she adapted from Nick Montfort’s “The Two”. This work deliberately explores the “complexity of the operation of gender as a variable”. Her approach was a “hack” rather than a pristine code creation – a “wilful mutilation” that deliberately transformed the production process. This allowed her to actively dismantle conventional linguistic biases, injecting a distinctly queer and female voice into the generative process. She presented outputs that played on gender stereotypes, such as:

  • “The translator conveys her encouragements. The administrator relays his congratulations. The pilot broadcasts her explanations. The receptionist transmits his salutations.”

This active engagement in shaping the machine’s output, infusing her unique perspective to dismantle the male gaze, was a crucial aspect of her talk, demonstrating how human intervention guides algorithmic creativity.

J.R. is also the author of seven books, including “An Ocean of Some Sort” (2017), which contains a section aptly titled “The Darwin and Bishop Island Book”. She generously highlighted the work of her friend, the poet Lisa Robertson, specifically mentioning “The Baudelaire Fractal” (2020). Published by Coach House Press, this book is noted for its innovative use of algorithmic text generation and experimental prose, seamlessly blending computational methods with poetic exploration.

This entire discussion on stochastic and generative poetry, and the nuanced human role in its creation, strongly resonated with my own work using tools like Botnik with students, and my own poetry collection, Moloch. It also connects back to the pioneering text experimentations of William S. Burroughs and his cut-up technique, which I discuss a little here and in more detail here.

The next presentation was a concise but powerful talk by Skylar Wan from Leeds, titled ‘Using AI to Reinterpret the Evolution of Commonwealth Research Narratives Across 75 Years’. Skylar, whose work at the University of Leeds often explores digital humanities and computational approaches to postcolonial and Commonwealth literature, presented what was essentially a meta-study.

Her talk strikingly demonstrated the immense power of large language models in handling vast datasets. She showed how AI can expertly categorize, tag, and make sense of massive volumes of information, allowing researchers to drill down into specific themes and patterns. A visual in her presentation, mapping the “Thematic landscape of Commonwealth research,” clearly illustrated this, showing how research clustered around various Sustainable Development Goals (SDGs). For instance, “Good Health And Well Being” emerged as a dominant theme at 25.82%, while “Industry Innovation And Infrastructure” was notably lower at 3.77%. This kind of analysis not only highlights areas of intense focus but also implicitly points to gaps within the dataset. For those keen to explore this fascinating application of AI in literary and historical research, Skylar Wan’s other published works would be an excellent resource.
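
I don’t know what pipeline Skylar actually used, but for a flavour of how such thematic tagging can be done today, here’s a minimal sketch using an off-the-shelf zero-shot classifier; the model choice, the sample abstract, and the cut-down SDG label set are all my own assumptions:

```python
from transformers import pipeline

# Zero-shot classification against a handful of SDG labels. This is my
# guess at one plausible approach, not Skylar Wan's actual pipeline.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

sdg_labels = [
    "Good Health and Well-Being",
    "Quality Education",
    "Industry, Innovation and Infrastructure",
    "Climate Action",
]

abstract = ("This study examines maternal health outcomes across "
            "Commonwealth nations between 1950 and 2025.")

result = classifier(abstract, candidate_labels=sdg_labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{score:.2f}  {label}")
```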

Next to present was Michael Schofield (aka Michael C Coldwell), a filmmaker and Lecturer in Experimental Film at the University of Leeds. Michael discussed his captivating work on ‘The Jettison’ (2024), a film profoundly inspired by Chris Marker’s seminal work, ‘La Jetée’ (1962). Visually, his presentation underscored this influence, showing side-by-side comparisons that evoked Marker’s distinctive black-and-white, still-image narrative style.

Mick’s process involved a fascinating stochastic use of AI, primarily utilizing tools like Midjourney and Runway to generate visuals for his film. His approach embraced the often unpredictable nature of AI outputs. He cited Fred Ritchin’s The Simulated Camera, which notes that “Rather than photographs that so often emulate previous photographs, the results can be surprising. There have been many instances where the image generated makes me rethink my expectations of beauty… it’s AI being used not to simulate a previous medium but to emerge as a new and potentially different medium”. This sentiment was echoed by Nora N. Khan, who posited that “AI images, with their often weird, goopy, and unsettling aspects, can be compelling in part because of what they don’t directly represent… these processes have, increasingly, emergent effects that can’t be predicted at all”. Michael’s slides illustrated his prompts for Midjourney, showing how specific textual descriptions could lead to evocative, even unsettling, visual sequences depicting “huge derelict machines somewhere between robots and an oil refinery tower”. He also explained the underlying mechanics of diffusion models, demonstrating the “Fixed forward diffusion process” from data to noise and the “Generative reverse denoising process” back to an image, explaining the inherently probabilistic nature of the output.
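
For the technically curious, the forward process Michael described has a neat closed form in the standard DDPM formulation: a training image x0 is mixed with Gaussian noise according to a schedule, so that x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps, where abar_t is the cumulative product of the schedule. Here’s a minimal numpy sketch of that noising step – textbook diffusion, not anything specific to Midjourney or Runway:

```python
import numpy as np

# Textbook DDPM forward ("noising") process. This illustrates the general
# technique, not the internals of Midjourney or Runway.
T = 1000
betas = np.linspace(1e-4, 0.02, T)      # linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)     # cumulative signal retention

def forward_diffuse(x0: np.ndarray, t: int, rng=np.random.default_rng(0)):
    """Sample x_t ~ q(x_t | x_0) in one shot."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

x0 = np.zeros((8, 8))                   # stand-in for an image
for t in (0, 499, 999):
    print(f"t={t:4d}  signal weight={np.sqrt(alpha_bar[t]):.3f}")
```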

The core narrative of ‘The Jettison’ revolves around a man who, having lost his daughter, attempts to recreate her using AI. Critically, Michael turned an inherent “weakness” of generative AI into a profound narrative strength: the inability of AI to consistently generate the exact same face. This inconsistency, often seen as a limitation, was ingeniously woven into the storyline, becoming a central metaphor that strengthened the film’s themes of grief, memory, and artificial reconstruction. This speaks to the “Lovelace effect,” which suggests that “creativity cannot be assigned as a quality of specific computing systems, but can only be attributed by users in specific situations”, implying the human hand in shaping the creative outcome.

Mick’s philosophical approach to filmmaking with AI was further underscored by a quote from Vilém Flusser: “When I write, I write past the machine toward the text… When I envision technical images, I build from the inside of the apparatus”. This approach allowed him to explore the “radical post-copyright experiment” we are now in, as framed by Kate Crawford (2024), prompting a rethinking of copyright from the ground up.

‘The Jettison’ has already garnered significant attention, being selected for the prestigious Burano Artificial Intelligence Film Festival (BAIFF) in Venice, a testament to its innovative spirit and compelling execution. The film looks absolutely fantastic, and I’m certainly adding it to my must-watch list.

Panel 3: AI, Forensic Linguistics, and Language Sustainability

The symposium was heading into its final stretch, and by this point, with the clock creeping towards midnight in Japan, I was getting pretty tired. While I’d been quite active in the chat earlier, by now it was a struggle just to keep notes. I was sorely tempted to skip the final panel, but I persisted – a significant feat for someone who usually calls it a night by 9:00 PM. The symposium finally wrapped up at 4:00 PM UK time, which for me was midnight!

The first speaker of Panel 3 was none other than Mel Evans herself, the symposium’s organizer, with her talk titled ‘Imitating: building a human/AI creative corpus’. Her presentation was incredibly engaging, particularly her discussion of a pilot study involving the seventeenth-century writer Aphra Behn.

Mel recounted a fascinating historical anecdote about Behn’s own work. She showed a slide from a collection of Behn’s writings that featured an “Advertisement to the READER”. This disclaimer explicitly stated: “THE Stile of the Court of the King of Bantam, being so very different from Mrs. Behn’s usual way of Writing, it may perhaps call its being genuine in Question; to obviate which Objection, I must inform the Reader, That it was a Trial of Skill, upon a Wager, to shew that she was able to write in the Style of the Celebrated Scarron, in Imitation of whom ’tis writ, tho’ the Story be true.”

This historical parallel perfectly illuminated an age-old concern: issues of forgery, authenticity, and imitation have plagued art and writing since their very inception. Mel’s talk highlighted that whether it’s Aphra Behn meticulously imitating another’s style to the point of needing a disclaimer, or the well-documented cases of forgeries and stylistic rip-offs that infuriated authors like Charles Dickens, the tension between imitation and creation, and art versus inspiration, is not new.

She concluded with a powerful quote from Burrow (2019) that tied these historical anxieties to our current AI landscape: “Human behaviour is increasingly seen as predictable by machines, and yet we are also repeatedly told that human beings make choices – as consumers, as lovers, and as writers. Machines can replicate and anticipate many of our choices, of what we buy, of who we are likely to love, and of the word which we are most likely to write next. The ghost and the machine have never seemed more closely allied, and yet have never been so widely separated. The machine mimics the ghost, and the ghost cries out that it has a choice, that it is more than the machine”. It’s a very interesting area that continues to provoke debate.

The penultimate presentation was a truly fascinating, data-led talk by Baoyi Zeng and Andrea Nini from the University of Manchester, asking the critical question: ‘Can AI fool a forensic linguist? Detecting AI impersonation of an individual’s language.’ As someone deeply fascinated by forensic linguistics, this was a talk I had been eagerly anticipating, even as my brain was getting heavy with the desire for sleep.

Their study aimed to directly pit “Large Language models with prompting techniques” against “Forensic Linguist: State-of-the-art authorship verification methods” to see if AI could indeed create text indistinguishable from a specific human author. They detailed their experimental setup, which involved various prompting strategies. These included Naïve direct prompting (e.g., “Rewrite the given original text so that it appears to have been written by the author of the provided text snippets”) and Self-prompting (where the LLM was prompted to effectively write its own impersonation prompt). They also explored more advanced methods like Tree-of-thoughts prompting, which involves plan generation, voting, and iterative text generation to refine the output, simulating more complex reasoning.
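
To give a flavour of what these strategies look like in practice, here’s a rough sketch of the three prompt shapes; these are my paraphrases of the categories as described, not Zeng and Nini’s actual prompts:

```python
# Rough shapes of the three prompting strategies described in the talk.
# These are my paraphrases of the categories, not the study's exact prompts.

def naive_prompt(snippets: str, text: str) -> str:
    """Naive direct prompting: samples plus a rewrite instruction."""
    return (f"Here are writing samples by the target author:\n{snippets}\n\n"
            f"Rewrite the following text so it appears to have been "
            f"written by that author:\n{text}")

def self_prompt(snippets: str) -> str:
    """Self-prompting step 1: the LLM writes its own impersonation prompt,
    which is then fed back along with the text to be rewritten."""
    return (f"Study these samples:\n{snippets}\n\n"
            f"Write a detailed prompt that would instruct a language model "
            f"to imitate this author's style.")

# Tree-of-thoughts prompting (Yao et al., 2023) adds plan generation,
# voting among candidate plans, and iterative regeneration on top of
# prompts like these.
```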

The core of their findings, vividly displayed through bar charts, demonstrated that AI generally cannot fool a forensic linguist, largely due to the very tools and methodologies forensic linguists employ. A crucial technique highlighted was POS-Tag-based Noise smoothing (POSNoise), where topic-related words in the text were replaced with their Part-of-Speech (POS) tags. This method effectively removed content bias, forcing the authorship verification tools to focus purely on stylistic and structural linguistic patterns – the true “fingerprint” of an author.
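
The masking idea is easy to illustrate. Below is a simplified sketch using spaCy; note that the real POSNoise method works from a curated list of function words and phrases, whereas this toy version just masks all open-class words by part-of-speech tag:

```python
import spacy

# Simplified illustration of POS-based content masking in the spirit of
# POSNoise: open-class (topic-bearing) words are replaced by their POS
# tags, leaving only structural/stylistic choices visible.
nlp = spacy.load("en_core_web_sm")
OPEN_CLASS = {"NOUN", "PROPN", "VERB", "ADJ", "ADV", "NUM"}

def pos_noise(text: str) -> str:
    doc = nlp(text)
    return " ".join(
        token.pos_ if token.pos_ in OPEN_CLASS else token.text
        for token in doc
    )

print(pos_noise("The forensic linguist quickly spotted the clumsy forgery."))
# -> roughly: "The ADJ NOUN ADV VERB the ADJ NOUN ."
```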

Their results, particularly evident when using POSNoise, showed significant differences between genuine human writing and AI-generated impersonations, even those produced with sophisticated prompting. While LLMs could imitate an individual’s language at a superficial level, forensic authorship verification methods remained robust, especially after masking content words. The clear take-home message was that authentic linguistic individuality is still profoundly difficult for AI to replicate. This is kind of bad news for BARD409, which attempts to re-write Shakespeare’s plays as novels in the style of great writers. But, luckily, that was never an attempt to be the writers, only to imitate their styles, which I still feel it does rather well. While the technical details are complex, the ultimate finding is compelling for anyone concerned with authorship in the age of AI.

The final speaker of the day, a truly excellent cherry on top of a fantastic symposium cake, was Antonio Martínez-Arboleda from the University of Leeds, whose talk centred on ‘Language Sustainability in the Age of Artificial Intelligence: Rethinking Authorship and Sociolinguistics’. Despite my exhaustion, Antonio’s presentation was utterly captivating, deeply resonating with my ongoing interests in authenticity and authorship.

Antonio began by asserting that all writing is inherently relational, emphasizing that questions of authorship have always been complex, even long before the advent of AI. He echoed discussions we’ve already had in this blog post, referencing influential ideas like Roland Barthes’ “death of the author” and Michel Foucault’s “author function”. He explored how “traditional writing” – performed without AI and often romanticized – already acknowledges that “texts are relational”.

His talk introduced a nuanced framework for understanding writing in the AI era, categorizing it into three modes:

  1. Traditional Composition: Where authors write manually, with “human creativity” at the forefront, but potentially limited in intertextual borrowing.
  2. AI-Supported Writing: This involves a “fluid collaboration between human and machine,” with continuous interaction through prompting and AI-generated suggestions. This mode “introduces a collaborative dynamic that dilutes this ‘author function'”.
  3. Vicarious Writing: This mode is where a “‘writing designer’ configures and directs AI to generate the majority, or even the entirety, of a text with minimal direct human composition”. Antonio described the human role here as a “conductor, curator,” emphasizing developing assistants, defining specifications, and curating knowledge bases. This concept particularly struck me, as it perfectly encapsulated my experience creating BARD409; I felt very much like that vicarious writer, orchestrating the AI’s output from a distance.

Antonio articulated a crucial concept: “tetradic mediation,” a four-way relationship that shapes language and knowledge in the age of AI. His slide clearly listed these four nodes:

  1. The collective cultural heritage embedded within the AI Large Language Model’s training data.
  2. The human and corporate collective that funds, develops, and controls the technology, influencing its capabilities, biases, and deployment.
  3. The human user, who shapes expression and prompts the AI, or even designs AI assistants.
  4. The authors of the original texts whose knowledge base is used for customized Generative AI applications. This complex interplay, he argued, redefines authorship and highlights the political dimensions of AI’s impact on language sustainability.

He also touched upon the Socratic method in the context of LLMs, noting how they can generate thoughtful questions to guide users toward self-discovery rather than providing direct answers. This resonated with my own prior reflections on its potential. Antonio concluded by emphasizing the critical new lines of inquiry for sociolinguistics in the face of AI, including linguistic variation, stratification, symbolic power, multimodality, new narratives, and human-machine collaboration. His talk truly brought to light the deep philosophical, political, and cultural responsibilities we face as AI becomes ever more interwoven with human language.

Conclusion

Overall, it’s clear I couldn’t possibly cover every talk in detail, especially since each was only about 20 minutes long, apart from Nassar’s keynote. However, it’s truly been a privilege to join this symposium. I’d love to hear from anyone working on similar projects or other creative researchers with an interest in these issues.

The overwhelming take-home message for me was this: while large language models are undeniably advanced and image/video generators feel incredibly new, the academic concerns, the excitement, and the ethical issues surrounding them are deeply rooted in the long history of creativity, authorship, and art. These aren’t novel problems, but rather age-old questions resurfacing with new technologies. It reminds me of the advent of photography, which was once heralded as the “death of the artist” but instead spawned entirely new branches and styles of art. I’ve written more about that in another article, which I’ll link here.

Ultimately, it was a truly brilliant and fascinating symposium. My sincere thanks go to Mel Evans for organizing such a timely and stimulating event, to all the speakers whose work I learned so much from, and of course, to Emily Middleton for pointing me towards it in the first place. Finally, a huge thanks to anyone who has read this far. Please do get in touch or use the comments below to continue the discussion!


References

Behn, A. (n.d.). “Advertisement to the READER.” In All the histories and novels written by the late ingenious Mrs. Behn in one volume. (As presented in Mel Evans’s talk).

Burrow, C. (2019). Imitating Authors: Plato to Futurity. Oxford University Press. (Cited in Mel Evans’s talk, p. 24).

Carchidi, P. (2024). [For a recent discussion on LLM]. (Cited on Charles Lam’s slide, specific publication details not available from provided sources).

Carpenter, J. R. (2017). An Ocean of Some Sort. (Contains “The Darwin and Bishop Island Book”).

Carpenter, J. R. (2024). Text Generation and Other Uneasy Human-Machine Collaborations. Iperstoria, (24).

Chamberlain, W., & Etter, T. (1984). The Policeman’s Beard is Half Constructed. Warner Books.

Chomsky, N. (1966). Cartesian linguistics: A chapter in the history of rationalist thought. Harper & Row. (Cited on Charles Lam’s slide).

Crawford, K. (2024). Metabolic Images. (Cited on Michael Schofield’s slide, specific publication details not available from provided sources).

D’Agostino, F. (1984). Chomsky’s System of Ideas. Oxford University Press.

Flusser, V. (2011). Into the universe of technical images. (Cited on Michael Schofield’s slide, specific publication details not available from provided sources).

Halliday, M. A. K. (1978). Language as social semiotic: The social interpretation of language and meaning. Edward Arnold. (Cited on Serge Sharoff’s slide).

Hardaker, C. (n.d.). Bot or Not: Audio Edition – Can you tell who’s talking? Lancaster University. Retrieved from https://www.lancaster.ac.uk/linguistics/news/bot-or-not-audio-edition-can-you-tell-whos-talking

Hofstadter, D. R. (1979). Gödel, Escher, Bach: An Eternal Golden Braid. Basic Books. (Cited on Charles Lam’s slide).

Henrickson, L. (2021, April 4). Constructing the Other Half of The Policeman’s Beard. Electronic Book Review. https://doi.org/10.7273/2bt7-pw23

Khan, N. N. (2024). Creation Myths. (Cited on Michael Schofield’s slide, specific publication details not available from provided sources).

Levinson, S. C. (2025). The Interaction Engine. (Cited as a future publication on Charles Lam’s slide).

Martínez-Arboleda, A. (2024). Language Sustainability in the Age of Artificial Intelligence (La sostenibilidad lingüística en la era de la inteligencia artificial). Alfinge, 36, 1-37.

Montfort, N. (2008a). The Two. http://nickm.com/poems/the_two.html.

Montfort, N. (2008b). Three 1K Story Generators. Grand Text Auto. https://grandtextauto.soe.ucsc.edu/2008/11/30/three-lk-story-generators/.

Natale, S., & Hendrickson, K. (2022). [Specific publication details not available from provided sources]. (Cited on Michael Schofield’s slide).

Ritchin, F. (2024). The Simulated Camera. (Cited on Michael Schofield’s slide, specific publication details not available from provided sources).

Robertson, L. (2020). The Baudelaire Fractal. Coach House Books. https://chbooks.com/Books/T/The-Baudelaire-Fractal

Sharoff, S. (2021). [Work on frequencies of negation]. (Cited on Serge Sharoff’s slide, specific publication details not available from provided sources).

Turing, A. (1950). Computing Machinery and Intelligence. Mind, 59(236), 433–460. (Cited on Charles Lam’s slide).

Vincent, J. (2024, February 20). Feed an AI nothing. The Verge. https://www.theverge.com/ai-artificial-intelligence/688576/feed-ai-nothing

Wardrip-Fruin, N. (2011). Digital Media Archaeology: Interpreting Computational Processes. In Media Archaeology: Approaches, Applications, and Implications (pp. 302–322). University of California Press.

Wershler-Henry, D. (2004). The Tapeworm Foundry: And or the Dangerous Prevalence of Imagination. Coach House Books.

Yao, S., Yu, D., Zhao, J., Shafran, I., Griffiths, T. L., Cao, Y., & Narasimhan, K. (2023). Tree of Thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601. (Cited on Baoyi Zeng and Andrea Nini’s slide).

(apologies if some references are missing, I am compiling this based often on partial citations from presentations so please be sure to check before citing anything)

Remembering The Forgotten Prisoners: What Peter Benenson’s Legacy Tells Us About Trolls, Fear, and Free Speech Today

Reading Time: 4 minutes

May 28th 2025

Sixty-four years ago today, Peter Benenson cracked open the global conscience with an article in The Observer entitled The Forgotten Prisoners. He wrote, among other things, of two Portuguese students jailed for raising a toast to freedom. A simple act met with a brutal response. His article was filled with other such examples, and it cited the UN’s Universal Declaration of Human Rights. The piece was republished around the world, sparking a movement, birthing Amnesty International and changing the landscape of human rights forever.

And yet, here we are, in a digital age where the threats to free expression are no longer confined to prison bars and courtrooms. They’re buried in comment threads, blurred in memes, and whispered in the silence of the unsaid.

According to a Cato Institute survey, 62% of Americans say they have political views they’re afraid to share. Not “cautious about,” not “unsure of”. They are afraid. In the land of the First Amendment, that’s a damning statistic. It raises the question: why are they afraid? Afraid of what?

In many cases, it’s us. Or more precisely, the chilling effect of social media mobs, performative outrage, and weaponized partisanship. Speak your mind, and you risk cancellation. How many times have you voiced your opinion only to lose a friend, or found yourself suffering the adrenaline decay of some ridiculous, pointless argument with a stranger? But if you stay silent, you surrender your agency. It’s not a prison of iron bars; it’s a prison of self-censorship.

I was reminded of this tension years ago when I gave a keynote in Argentina. I’d rehearsed meticulously, ensuring I could finish on time to allow 15 minutes for audience questions. But when the moment came, not a single hand was raised, and so my talk ended up shorter than it should have been. Later, a friend explained: a journalist had recently “disappeared,” and a history of authoritarian crackdowns in the country had left people wary of speaking out, even in a university hall. Rather than an apathetic silence, it was residual trauma. That’s the long tail of oppression. It lingers, even after the dictator’s portrait is taken down.

Now, the very same tactics once used by despots to control populations are being echoed even in free democratic societies. Donald Trump hasn’t just revived his war on the press; he’s doubled down. In recent speeches, he’s slammed the “fake news media” as “corrupt,” “dishonest,” and even “treasonous,” and he has a long history of labelling journalists “the enemy of the people” – a phrase that wouldn’t sound out of place in Stalin’s playbook. For instance, in a tweet dated October 29, 2018, he stated:

“The Fake News Media, the true Enemy of the People, must stop the open & obvious hostility & report the news accurately & fairly.” (The Washington Post)

This phrase has historical connotations, previously used by totalitarian regimes to delegitimize dissenting voices. On May 27, 2025, NPR filed a federal lawsuit against President Trump following his executive order to cease federal funding for public broadcasters such as NPR and PBS. NPR contends that the order violates First Amendment rights and accuses Trump of retaliating against media coverage he dislikes (Financial Times).

This isn’t just political bluster. It’s a deliberate strategy to erode public trust in the press while elevating his own channels, like Truth Social and X (described by self-professed post-truth poet Stephen Prime as ‘the Pornhub of bullshit’), where conspiracy theories and partisan propaganda can circulate unchallenged.

As professor of communication Bente Kalsnes explains, when powerful figures politicize the term “fake news,” they don’t just discredit stories—they destroy the credibility of news itself. It’s a scorched-earth tactic: if all media are fake, then no media can hold power accountable.

This technique—delegitimize the watchdogs, confuse the public, and claim you’re the only source of truth—is now a hallmark of autocrats. Leaders in Russia, Hungary, Brazil, and the Philippines have mirrored Trump’s language almost verbatim.

And it works. As trust in journalism plummets, people fall back into echo chambers or switch off entirely. Truth becomes tribal. Facts become optional. Democracy, stripped of shared reality, starts to rot.

As Kalsnes outlines in her excellent paper on fake news, the phrase has morphed into a weapon used by authoritarians worldwide to stifle dissent, justify censorship, and erode public trust in legitimate news sources. Once truth becomes relative and trust becomes partisan, democracy is on life support.

This information chaos has birthed a new dilemma. The same technology that gave us unprecedented freedom of expression also opened the floodgates to disinformation, coordinated trolling, and tribal echo chambers. In an age where anyone can speak, who gets heard, and who dares to speak out? As Elon Musk has proven, money talks: the sale of Twitter and its rebranding as X was an intentional step towards controlling mainstream discourse.

Benenson’s original message wasn’t just about freeing prisoners; it was about defending the principle that no one should suffer for speaking their mind. But the battleground has changed. Today, we’re not only fighting for the right to speak, we’re fighting for the courage to speak, and the ability to be heard amidst the cacophony of noise.

So let us not forget that sometimes silence is just as dangerous as noise. Sitting passively and allowing truth to erode into someone’s misguided narrative should not sit well with anyone who truly believes in the principles of freedom of expression.


Sources:

  • Benenson, P. (1961). The Forgotten Prisoners. The Observer. Archived link
  • Cato Institute (2020). Poll: 62% of Americans Say They Have Political Views They’re Afraid to Share. Link
  • Kalsnes, B. (2018). Fake News. Oxford Research Encyclopedia of Communication. DOI
  • The Washington Post. (2018). Trump renews attacks on media as “enemy of the people”. Link
  • Financial Times. (2025). NPR sues Trump over funding cuts, citing First Amendment. Link