AI & Creativity: Reflections on Language, Authorship, and the Future of Art

Mural depicting AI themes: a woman walks past street art of a typewriter consuming a book, with sci-fi elements, in a South American city setting.
Reading Time: 20 minutes

Earlier this month, I had the pleasure of attending a hybrid symposium hosted by the University of Leeds, focusing on AI and creativity. This event was serendipitously brought to my attention by Dr. Emily Middleton, whom I’d previously connected with regarding my BARD409 project. She kindly pointed me toward the symposium, organized by Dr. Mel Evans. Given my long-standing interest in computers and their creative potential, this topic was right up my street.

The symposium kicked off at 10:30 AM UK time on Friday, June 6th, which for me in Japan was much later at 6:30 PM JST. As I joined from the other side of the world, there was a charming sense of surprise from many participants, who mostly seemed to be gathered in person or knew each other from the vibrant academic scene in Leeds. The other participants were all based in Northern universities, mostly in Yorkshire – my birthplace and old stomping ground! It was genuinely refreshing to find such a focused group of people deeply interested in this field.

Panel 1: Mimicry, False Profits, and Creative Imitation

The first speaker was Serge Sharoff from the University of Leeds, with his talk titled: ‘From Mimicry to Meaning: Investigating differences between AI and Human Language’. Serge immediately highlighted several stylistic quirks that often give away AI-generated text. A particularly striking one was what linguists might call negative-positive parallelism or antithesis – the recurring pattern in which the AI negates one framing and then asserts a positive alternative. Think “It’s not about what you can’t do, it’s about what you can achieve,” or “This isn’t a setback; it’s a stepping stone to growth.”

His presentation included a detailed chart illustrating the frequencies of negation across various text types. For instance, content categorized as ‘Fiction’ showed a notably high proportion of clauses with negation (26.32% mean, 17.10% median), while ‘Promote’ and ‘Inform’ texts had very low percentages. He provided compelling examples of this AI stylistic feature: “Air could not freeze her, fire could not burn her, water could not drown her, earth could not bury her,” (which seems to be from Maria Metsalu’s Mademoiselle X performance, which looks amazing) alongside more conversational examples typical of chatbots.

This specific cadence, once noticed, can become quite cloying, as I can personally attest from my work editing outputs for BARD409 and other writing. Uncannily, just a few days after Serge’s talk, a popular post titled “Make it stop!” on Reddit’s r/ChatGPT subreddit received almost 700 upvotes. Users there were venting about this exact “not X, but Y” pattern, echoing Serge’s observations perfectly.
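Out of curiosity, a crude way to flag this cadence in your own drafts is a simple regular-expression pass. To be clear, this is just my own quick heuristic in Python, not Serge’s corpus method, and it will both over- and under-flag:

```python
import re

# Heuristic patterns for the "not X, but Y" / "isn't X; it's Y" cadence.
NOT_BUT = re.compile(r"\bnot\b[^.!?]{1,60}?,\s*but\b", re.IGNORECASE)
ISNT_ITS = re.compile(r"\bisn'?t\b[^.!?]{1,60}?[;,]\s*it'?s\b", re.IGNORECASE)

def flag_parallelism(text: str) -> list[str]:
    """Return sentences that contain the negative-positive pattern."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences if NOT_BUT.search(s) or ISNT_ITS.search(s)]

sample = (
    "This isn't a setback; it's a stepping stone to growth. "
    "It's not about what you can't do, but what you can achieve. "
    "The weather was fine."
)
print(flag_parallelism(sample))  # flags the first two sentences only
```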

Serge’s analysis drew deeply on corpus-linguistic methods. He brought up Halliday’s 1978 model of meaning-making in society, presenting a diagram that visually mapped the process from “language as system” through various contextual layers – “context of culture,” “cultural domain,” “situation type,” “register,” and “text type” – ultimately leading to “language as text” and “meaning making.” The diagram underscored the semiotic nature of human societies and their cumulative culture. A pivotal point in his presentation was intentionality: while humans inherently exhibit intentionality in their language use, AI models, at least currently, do not. This distinction, the presence or absence of true intentionality, formed a significant core of his argument regarding the fundamental differences between human and machine language.

Next up was a truly compelling talk by Professor Claire Hardaker of Lancaster University, titled ‘Bot or Not: New False Profits?’. As a leading figure in forensic linguistics, Professor Hardaker’s research at Lancaster delves into deceptive, manipulative, and aggressive language in online environments, making her perfectly positioned to explore the nuances of AI-generated content. Her talk revolved around the fascinating “Bot or Not” project, a unique resource (available at https://www.lancaster.ac.uk/linguistics/news/bot-or-not-audio-edition-can-you-tell-whos-talking) designed to test how well people can distinguish between human-produced content and that generated by large language models or voice cloning.

Claire presented striking findings from the “Bot or Not” experiment. One slide visually explained the “Bot or Not” challenge, depicting an interface where users would listen to audio and then decide if it was human or AI. Her subsequent slides revealed the challenging results, often showing that participants, despite their confidence, frequently struggled significantly to differentiate between human and AI output, highlighting the alarming sophistication of current AI. She then moved to real-world, high-profile cases of AI misuse. A dedicated slide laid out the chilling February 2024 incident in Hong Kong where a finance employee at a multinational firm was duped into transferring US$25.6 million (HK$200 million). The sophisticated scam involved an entire deepfake video conference call, where AI-generated likenesses and voices of the CFO and other colleagues convinced the employee to make the transfers, shattering the illusion of trusted communication. The growing criminal aspects of AI, particularly with deepfake audio and text, are a serious concern that her work actively investigates, examining topics from online abuse to human trafficking.

However, Claire also offered a crucial counter-argument: not all AI is inherently “bad.” Her slides explored the concept of AI’s potential to democratize creation. She pointed out that “dominant entities” in the music industry *cough cough, Sony* are notoriously exploitative, exclusionary, and oppressive.

My hero Steve Albini talks about that in his 2014 keynote at Face The Music, which you can watch here. AI tools could lower barriers to entry for creators while also delivering efficiency gains. This idea deeply resonates with me and my work at Hungry Wolf Press. Many of the authors I collaborate with, often not full-time writers, leverage AI tools to boost their productivity and streamline their creative process. This isn’t about laziness or simply churning out AI-generated content; it’s about the intelligent application of tools to accelerate their output, allowing them to focus on the higher-level creative elements. My own process for writing this very article – dictating raw notes to be processed by an LLM trained on my writing, then refining them through my own editing – is another example of using AI to produce (hopefully!) higher-quality work more efficiently.

One scary finding was that most humans perform “worse than chance (7.5/15)” at accurately identifying whether a text was written by a bot. More concerning still is that one of her current students has run a study showing that detection accuracy drops from 40% when we are primed to look for AI or fakes to just 4% when we are not. Perhaps this allows us some greater empathy for the employee who lost his company that US$25.6 million in Hong Kong.

Professor Hardaker’s insights were truly captivating, and I plan to incorporate “Bot or Not” into my own teaching. I’m currently leading a writing workshop where we grapple with the complexities of fake news, misinformation, disinformation, and malinformation – a perfect context to explore the ethical dimensions of AI writing with my students.

Following Claire’s insightful presentation, Charles Lam took the floor, focusing on ‘Imitation in human writing: an argument against incompatibility between machine and creativity’. As an EAP (English for Academic Purposes) instructor, Charles’s talk resonated strongly with practical applications of language and learning.

Charles opened by inviting us to consider fundamental questions: “Can machines think (or write)?” and “Can we tell?” These queries led to further thought-provoking points, exploring whether it truly matters, when it matters, and how “organic, ‘wetware’ computers” (humans) are mechanistically structured. He drew parallels with Levinson’s (2025) concept of “The Interaction Engine,” and the intricate connections highlighted in Gödel, Escher, Bach: An Eternal Golden Braid, hinting at deeper parallels between language and other symbolic systems.

A significant portion of his talk revisited Turing’s (1950) seminal work, “Computing Machinery and Intelligence,” and “The Imitation Game.” Charles presented Turing’s original proposition: “I PROPOSE to consider the question, ‘Can machines think?'” and then adapted it to contemporary AI: “What will happen when a machine takes the part of A [the man participant] in this game?” This reframed the classic Turing Test to explore AI’s role in creative imitation.

He then delved into Chomsky’s perspective on “Creativity,” specifically referencing the “Creative Aspect of Language Use (CALU)”. Charles emphasized that human speakers spontaneously create novel sentences – sentences they have never heard or produced before. This ability stems from learning grammatical rules and patterns, rather than simply memorizing individual phrases, as famously demonstrated by the ‘wug test’ in morphology.

Charles illustrated how human creativity, even in seemingly spontaneous acts like humour, often follows patterns. In a slide titled “Jokes,” he explored the distinction between formulaic jokes and those demonstrating true novelty. He provided a delightful example from Philomena Cunk: “School in Shakespeare’s day and age was vastly different to our own. In fact, it was far easier, because he didn’t have to study Shakespeare.” This highlighted how humour, while typically seen as creative, can be constrained, goal-oriented, and productive through imitation, often mimicking specific styles.

Bringing this back to writing pedagogy, Charles drew strong similarities between the way students acquire academic writing style through imitation of conventions and how large language models mimic specific genres. He argued that AI, when utilized thoughtfully, could serve as a valuable teaching tool, empowering students and others to more effectively master academic writing and various creative forms by understanding and leveraging imitation.

Q&A: Data and Ethical AI

The Q&A session that followed this first panel was very engaging. I posed a question to Serge, keen to understand the optimal approach to training language models, particularly for stylistic imitation. My own experience with creating a custom GPT to rewrite Shakespeare in the style of specific authors taught me that retraining the model precisely on the target author was crucial; otherwise, it pulled in too many disparate styles from its general dataset. I wondered if this principle of focused, specific data held true across the board, or if large, generalized datasets were actually better.

Serge’s answer was illuminating, especially concerning ethical considerations. He argued that, counter-intuitively, it’s actually better to have a larger and more generalized training dataset, particularly when addressing issues like refusing to produce sexist or racist content. The only way many large language models (LLMs) can effectively identify and filter out such problematic content is if they have actually been trained on data containing those very elements. So, while an AI will, in most cases, refuse to generate harmful content (unless it is jailbroken or a custom, locally tailored model), it needs to be exposed to that content during training to develop the capacity to recognize and subsequently avoid it. It reinforces the idea that understanding the “bad” is essential for producing the “good” – or, at least, the ethically responsible.

This reminded me of something I recently read about an artist who experimented with an image-generating AI by removing all its training data. What it produced was akin to a minimalist Rothko meets BBC Test Card G – a testament to how crucial comprehensive data is. You can read more about that fascinating experiment here.

Keynote: (L)imitations of AI and Creative Writing

After a short break, we were treated to the keynote address by Nassar Hussain, a Senior Lecturer in Creative Writing and a poet from Leeds Beckett University. His talk, ‘(L)imitations: some notes on AI and Creative Writing,’ explored the boundaries and possibilities when machines venture into the realm of poetic creation. On a personal note, Nassar clearly had strong connections within the symposium’s in-person community, underscoring the tight-knit network of creatives exploring experimental literature with technology. It made me wish I was there in person.

Nassar began by discussing what’s often cited as the very first book ever written by a computer: “The Policeman’s Beard is Half Constructed” (1984). Spoiler: it’s NOT the first – but it is, according to Leah Henrickson, “one of the first algorithmically authored books – if not the first – to be printed and marketed for a mass readership.” This peculiar and intriguing book was generated by a program named Racter, developed by William Chamberlain and T. Etter. It was uncanny, as I had literally just ordered my own copy of this increasingly collectible and unique item that very day, having noticed its price steadily climbing. Nassar pointed out that the original version of the book included a floppy disk, allowing users to interact directly with Racter. This seems unlikely, however, as the software (INRAC) was made commercially available in 1985 and retailed for between $244.95 and $349 USD, which is, I am pleased to report, less than what I paid for my collectible first edition. Whatever the details, it quickly became clear to those interacting with Racter’s software that “The Policeman’s Beard” itself had been heavily edited by human hands, highlighting that it wasn’t a pure, unadulterated machine creation.

He moved on to illustrate different facets of machine-generated text, including a compelling generative poetry installation from around 2004 – a project he described as producing a staggering 18,000 pages of poetry. As part of this unique exhibition, attendees could walk around, pick up any poem they liked, and take away as many bits of paper as they wished. He recalled one such algorithmic poem titled “Institution in the Middle of Mine,” exemplifying the distinct style of such computer-generated works. This hands-on, take-away approach emphasized the sheer volume and accessibility of machine-generated text.

Nassar then showed us specific textual examples of how machines can be “inspired by” or imitate human works. One slide displayed a dense, highly experimental text, filled with the repeating conjunction “andor” and referencing literary works and figures like “finnegans wake,” “Barrett Watten,” “Bruce Andrews,” and “Lyn Hejinian.” He also discussed excerpts from “The Tapeworm Foundry: And or the Dangerous Prevalence of Imagination” by Darren Wershler-Henry (2004), a pivotal work in conceptual writing that explores the limits of language and meaning, and which fits perfectly into a discussion of imitation and limitation in creative writing. Another slide presented a segment of Dylan Thomas’s iconic poem, “Altarwise by owl-light in the half-way house.” Nassar used this to specifically highlight how a machine might imitate or draw inspiration from human poetic phrasing, pointing to the line “The atlas-eater with a jaw for news” as a powerful example of such algorithmic ‘learning’.

His talk also touched upon bpNichol, the renowned Canadian avant-garde poet. Given Nassar Hussain’s own academic work critically engaging with bpNichol’s writings, this connection further deepened the keynote’s exploration of experimental literary forms that often blur the lines between human and algorithmic creativity. The overall message was a nuanced exploration of what AI can, and cannot, do in the constantly evolving landscape of creative writing.

Panel 2: Stochastic Poetry, Commonwealth Narratives, and Emergent Effects in Film

Following the thought-provoking keynote, the second panel began with J.R. Carpenter, a distinguished artist, writer, and researcher specializing in creative practice at the University of Leeds. Known for her pioneering work using the internet as a medium for experimental writing since 1993, her talk delved into her captivating stochastic poetry projects and uneasy human-machine collaborations. Her presentation echoed much of the detail found in her recent article, “Text Generation and Other Uneasy Human-Machine Collaborations”.

Carpenter began by situating contemporary digital literary practices within a broader historical context, noting that “experimentation with generative, permutational, and combinatory text began long before digital computers came into being”. She traced this lineage from classical rhetorical figures like ‘permutatio’ in the fourth century to Jonathan Swift’s satirical machine in Gulliver’s Travels, designed to mechanically generate knowledge. Her own practice-led research, she explained, involves creative experiments with text generation, informed by these earlier human and machine generators.

While my notes recalled a commission about The Waste Land, Carpenter’s discussion broadly explored her stochastic text experiments, drawing parallels to pioneering works. A significant focus was Christopher Strachey’s Love Letter generator, programmed in Manchester, England, in 1952, pre-dating many commonly cited computer text experiments. This early generator for the Manchester University Computer Mark I employed a “slot” method, selecting words from lists to populate set-order sentences like “You are my (adjective) (noun)”. The outputs, though often described as “amateurish, outlandish, and even absurd”, carried a deliberate interrogation of authorship through its signature, “M.U.C.” (Manchester University Computer). Examples she presented included:

  • “DARLING LOVE YOU ARE MY AVID FELLOW FEELING. MY AFFECTION CURIOUSLY CLINGS TO YOUR PASSIONATE WISH. MY LIKING YEARNS FOR YOUR HEART. MY TENDER LIKING. YOU ARE MY WISTFUL SYMPATHY. YOURS LOVINGLY, M.U.C.”
  • “HONEY MOPPET, MY FONDEST FERVOUR LONGS FOR YOUR PASSION. MY YEARNING KEENLY LOVES YOUR ENTHUSIASM. MY SWEET YEARNING COVETOUSLY PINES FOR YOUR AFFECTIONATE LONGING. YOU ARE MY ANXIOUS BEING, MY EAGER SYMPATHY. YOURS BURNINGLY, M.U.C.”

Carpenter emphasized that for these early text generators, the “attempt,” or the process of creation itself, is as important as the final output. As Noah Wardrip-Fruin (2011) suggests, the Love Letter generator functions as “a parody of a process,” brutally simplifying human letter-writing. It reminds me a little of the simple Fake News Generator that my friend and colleague, Stephen Prime, has created for his sarcgasm.com site – though that one is a love-letter to the post-truth absurdity of modern-day politics.
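To make that brutal simplicity concrete, here is a minimal Python sketch of the “slot” method as described above. The word lists and templates are illustrative stand-ins, not Strachey’s originals:

```python
import random

# Illustrative word lists; Strachey's originals were longer.
ADJECTIVES = ["AVID", "WISTFUL", "TENDER", "SWEET", "ANXIOUS", "EAGER"]
NOUNS = ["FELLOW FEELING", "SYMPATHY", "LIKING", "YEARNING", "HEART"]
ADVERBS = ["CURIOUSLY", "KEENLY", "COVETOUSLY", "BURNINGLY"]
VERBS = ["CLINGS TO", "YEARNS FOR", "LOVES", "PINES FOR"]

# Set-order sentence templates with slots, per the "slot" method.
TEMPLATES = [
    "YOU ARE MY {adj} {noun}.",
    "MY {adj} {noun} {adv} {verb} YOUR {adj2} {noun2}.",
]

def love_letter(sentences: int = 4) -> str:
    lines = ["DARLING LOVE,"]
    for _ in range(sentences):
        template = random.choice(TEMPLATES)
        lines.append(template.format(
            adj=random.choice(ADJECTIVES),
            adj2=random.choice(ADJECTIVES),
            noun=random.choice(NOUNS),
            noun2=random.choice(NOUNS),
            adv=random.choice(ADVERBS),
            verb=random.choice(VERBS),
        ))
    lines.append("YOURS LOVINGLY, M.U.C.")
    return "\n".join(lines)

print(love_letter())
```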

J.R. Carpenter then detailed her own project, TRANS.MISSION [A.DIALOGUE] (2011), a browser-based computer-generated dialogue she adapted from Nick Montfort’s “The Two”. This work deliberately explores the “complexity of the operation of gender as a variable”. Her approach was a “hack” rather than a pristine code creation – a “wilful mutilation” that deliberately transformed the production process. This allowed her to actively dismantle conventional linguistic biases, injecting a distinctly queer and female voice into the generative process. She presented outputs that played on gender stereotypes, such as:

  • “The translator conveys her encouragements. The administrator relays his congratulations. The pilot broadcasts her explanations. The receptionist transmits his salutations.”

This active engagement in shaping the machine’s output, infusing her unique perspective to dismantle the male gaze, was a crucial aspect of her talk, demonstrating how human intervention guides algorithmic creativity.

J.R. is also the author of seven books, including “An Ocean of Some Sort” (2017), which contains a section aptly titled “The Darwin and Bishop Island Book”. She generously highlighted the work of her friend, the poet Lisa Robertson, specifically mentioning “The Baudelaire Fractal” (2020). Published by Coach House Books, this book is noted for its innovative use of algorithmic text generation and experimental prose, seamlessly blending computational methods with poetic exploration.

This entire discussion on stochastic and generative poetry, and the nuanced human role in its creation, strongly resonated with my own work using tools like Botnik with students, and my own poetry collection, Moloch. It also connects back to the pioneering text experimentations of William S. Burroughs and his cut-up technique, which I discuss a little here and in more detail here.

The next presentation was a concise but powerful talk by Skylar Wan from Leeds, titled ‘Using AI to Reinterpret the Evolution of Commonwealth Research Narratives Across 75 Years’. Skylar, whose work at the University of Leeds often explores digital humanities and computational approaches to postcolonial and Commonwealth literature, presented what was essentially a meta-study.

Her talk strikingly demonstrated the immense power of large language models in handling vast datasets. She showed how AI can expertly categorize, tag, and make sense of massive volumes of information, allowing researchers to drill down into specific themes and patterns. A visual in her presentation, mapping the “Thematic landscape of Commonwealth research,” clearly illustrated this, showing how research clustered around various Sustainable Development Goals (SDGs). For instance, “Good Health And Well Being” emerged as a dominant theme at 25.82%, while “Industry Innovation And Infrastructure” was notably lower at 3.77%. This kind of analysis not only highlights areas of intense focus but also implicitly points to gaps within the dataset. For those keen to explore this fascinating application of AI in literary and historical research, Skylar Wan’s other published works would be an excellent resource.

Next to present was Michael Schofield (aka Michael C Coldwell), a filmmaker and Lecturer in Experimental Film at the University of Leeds. Michael discussed his captivating work on ‘The Jettison’ (2024), a film profoundly inspired by Chris Marker’s seminal work, ‘La Jetée’ (1962). Visually, his presentation underscored this influence, showing side-by-side comparisons that evoked Marker’s distinctive black-and-white, still-image narrative style.

Mick’s process involved a fascinating stochastic use of AI, primarily utilizing tools like Midjourney and Runway to generate visuals for his film. His approach embraced the often unpredictable nature of AI outputs. He cited Fred Ritchin’s The Simulated Camera, which notes that “Rather than photographs that so often emulate previous photographs, the results can be surprising. There have been many instances where the image generated makes me rethink my expectations of beauty… it’s AI being used not to simulate a previous medium but to emerge as a new and potentially different medium”. This sentiment was echoed by Nora N. Khan, who posited that “AI images, with their often weird, goopy, and unsettling aspects, can be compelling in part because of what they don’t directly represent… these processes have, increasingly, emergent effects that can’t be predicted at all”. Michael’s slides illustrated his prompts for Midjourney, showing how specific textual descriptions could lead to evocative, even unsettling, visual sequences depicting “huge derelict machines somewhere between robots and an oil refinery tower”. He also explained the underlying mechanics of diffusion models, demonstrating the “Fixed forward diffusion process” from data to noise and the “Generative reverse denoising process” back to an image, which accounts for the inherently probabilistic nature of the output.
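For anyone wanting the mechanics behind those two slide labels, the standard formulation of a diffusion model can be written as follows (I’m assuming the usual DDPM setup here, which is what slides like Michael’s typically reproduce):

```latex
% Fixed forward diffusion: data -> noise, with variance schedule \beta_t
q(x_t \mid x_{t-1}) = \mathcal{N}\left(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t \mathbf{I}\right)

% Learned generative reverse denoising: noise -> image
p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\left(x_{t-1};\ \mu_\theta(x_t, t),\ \Sigma_\theta(x_t, t)\right)
```

The randomness in sampling that reverse chain is precisely why the same prompt never yields the same image twice – the probabilistic quality Michael leaned into.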

The core narrative of ‘The Jettison’ revolves around a man who, having lost his daughter, attempts to recreate her using AI. Critically, Michael turned an inherent “weakness” of generative AI into a profound narrative strength: the inability of AI to consistently generate the exact same face. This inconsistency, often seen as a limitation, was ingeniously woven into the storyline, becoming a central metaphor that strengthened the film’s themes of grief, memory, and artificial reconstruction. This speaks to the “Lovelace effect,” which suggests that “creativity cannot be assigned as a quality of specific computing systems, but can only be attributed by users in specific situations”, implying the human hand in shaping the creative outcome.

Mick’s philosophical approach to filmmaking with AI was further underscored by a quote from Vilém Flusser: “When I write, I write past the machine toward the text… When I envision technical images, I build from the inside of the apparatus”. This approach allowed him to explore the “radical post-copyright experiment” we are now in, as framed by Kate Crawford (2024), prompting a rethinking of copyright from the ground up.

‘The Jettison’ has already garnered significant attention, being selected for the prestigious Burano Artificial Intelligence Film Festival (BAIFF) in Venice, a testament to its innovative spirit and compelling execution. The film looks absolutely fantastic, and I’m certainly adding it to my must-watch list.

Panel 3: AI, Forensic Linguistics, and Language Sustainability

The symposium was heading into its final stretch, and by this point, with the clock creeping towards midnight in Japan, I was getting pretty tired. While I’d been quite active in the chat earlier, by now it was a struggle just to keep notes. I was almost tempted to skip the final panel, but I persisted – a significant feat for someone who usually calls it a night by 9:00 PM. The symposium finally wrapped up at 4:00 PM UK time, which for me was 12 AM!

The first speaker of Panel 3 was none other than Mel Evans herself, the symposium’s organizer, with her talk titled ‘Imitating: building a human/AI creative corpus’. Her presentation was incredibly engaging, particularly her discussion of a pilot study involving the seventeenth-century writer Aphra Behn.

Mel recounted a fascinating historical anecdote about Behn’s own work. She showed a slide from a collection of Behn’s writings that featured an “Advertisement to the READER”. This disclaimer explicitly stated: “THE Stile of the Court of the King of Bantam, being so very different from Mrs. Behn’s usual way of Writing, it may perhaps call its being genuine in Question; to obviate which Objection, I must inform the Reader, That it was a Trial of Skill, upon a Wager, to shew that she was able to write in the Style of the Celebrated Scarron, in Imitation of whom ’tis writ, tho’ the Story be true.”

This historical parallel perfectly illuminated an age-old concern: issues of forgery, authenticity, and imitation have plagued art and writing since their very inception. Mel’s talk highlighted that whether it’s Aphra Behn meticulously imitating another’s style to the point of needing a disclaimer, or the well-documented cases of forgeries and stylistic rip-offs that infuriated authors like Charles Dickens, the tension between imitation and creation, and art versus inspiration, is not new.

She concluded with a powerful quote from Burrow (2019) that tied these historical anxieties to our current AI landscape: “‘Human behaviour is increasingly seen as predictable by machines, and yet we are also repeatedly told that human beings make choices – as consumers, as lovers, and as writers. Machines can replicate and anticipate many of our choices, of what we buy, of who we are likely to love, and of the word which we are most likely to write next. The ghost and the machine have never seemed more closely allied, and yet have never been so widely separated. The machine mimics the ghost, and the ghost cries out that it has a choice, that it is more than the machine'”. It’s a very interesting area that continues to provoke debate.

The penultimate presentation was a truly fascinating, data-led talk by Baoyi Zeng and Andrea Nini from the University of Manchester, asking the critical question: ‘Can AI fool a forensic linguist? Detecting AI impersonation of an individual’s language.’ As someone deeply fascinated by forensic linguistics, this was a talk I had been eagerly anticipating, even as my brain was getting heavy with the desire for sleep.

Their study aimed to directly pit “Large Language models with prompting techniques” against “Forensic Linguist: State-of-the-art authorship verification methods” to see if AI could indeed create text indistinguishable from a specific human author. They detailed their experimental setup, which involved various prompting strategies. These included Naïve direct prompting (e.g., “Rewrite the given original text so that it appears to have been written by the author of the provided text snippets”) and Self-prompting (where the LLM was prompted to effectively write its own impersonation prompt). They also explored more advanced methods like Tree-of-thoughts prompting, which involves plan generation, voting, and iterative text generation to refine the output, simulating more complex reasoning.
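To illustrate how these strategies differ in practice, here is a rough Python sketch. The `llm()` function is a hypothetical stand-in for any text-generation API, and the prompts are paraphrased from my notes of their slides, not copied from the study itself:

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to any LLM text-generation API."""
    raise NotImplementedError

def naive_prompt(original: str, snippets: str) -> str:
    # Naive direct prompting: a single one-shot rewrite instruction.
    return llm(
        "Rewrite the given original text so that it appears to have been "
        "written by the author of the provided text snippets.\n\n"
        f"Snippets:\n{snippets}\n\nOriginal text:\n{original}"
    )

def self_prompt(original: str, snippets: str) -> str:
    # Self-prompting: the LLM first writes its own impersonation prompt...
    impersonation_prompt = llm(
        f"Write a prompt that would make an LLM imitate the author of:\n{snippets}"
    )
    # ...and that generated prompt is then used for the actual rewrite.
    return llm(f"{impersonation_prompt}\n\nOriginal text:\n{original}")

def tree_of_thoughts(original: str, snippets: str, n_plans: int = 3) -> str:
    # Tree-of-thoughts prompting: generate candidate plans, vote on the
    # best one, then generate the text following the winning plan.
    plans = [
        llm(f"Propose a plan for rewriting this text in the author's style:\n{original}")
        for _ in range(n_plans)
    ]
    best_plan = llm(f"Vote for the plan most faithful to the author's style:\n{plans}")
    return llm(f"Rewrite the text following this plan:\n{best_plan}\n\nText:\n{original}")
```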

The core of their findings, vividly displayed through bar charts, demonstrated that AI generally cannot fool a forensic linguist, largely due to the very tools and methodologies forensic linguists employ. A crucial technique highlighted was POS-Tag-based Noise smoothing (POSNoise), where topic-related words in the text were replaced with their Part-of-Speech (POS) tags. This method effectively removed content bias, forcing the authorship verification tools to focus purely on stylistic and structural linguistic patterns – the true “fingerprint” of an author.
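Here is a toy approximation of the POSNoise idea in Python using spaCy. Note this is my own crude sketch, not the published method, which uses a curated list of topic-neutral words rather than this simple content-word test:

```python
import spacy

nlp = spacy.load("en_core_web_sm")

# Content-word classes to mask with their POS tags, leaving only function
# words and structure - the stylistic "fingerprint" of the author.
CONTENT_POS = {"NOUN", "PROPN", "VERB", "ADJ", "ADV", "NUM"}

def pos_noise(text: str) -> str:
    doc = nlp(text)
    return " ".join(
        tok.pos_ if tok.pos_ in CONTENT_POS else tok.text for tok in doc
    )

print(pos_noise("The spy posted the stolen documents to an old contact in Vienna."))
# e.g. "The NOUN VERB the ADJ NOUN to an ADJ NOUN in PROPN ."
```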

Their results, particularly evident when using POSNoise, showed significant differences between genuine human writing and AI-generated impersonations, even those produced with sophisticated prompting. While LLMs could imitate an individual’s language at a superficial level, forensic authorship verification methods remained robust, especially after masking content words. The clear take-home message was that authentic linguistic individuality is still profoundly difficult for AI to replicate. This is kind of bad news for BARD409, which attempts to re-write Shakespeare’s plays as novels in the style of great writers. But, luckily, that was never an attempt to be those writers, only to imitate their styles, which I still feel it does rather well. While the technical details are complex, the ultimate finding is compelling for anyone concerned with authorship in the age of AI.

The final speaker of the day, a truly excellent cherry on top of a fantastic symposium cake, was Antonio Martínez-Arboleda from the University of Leeds, whose talk centred on ‘Language Sustainability in the Age of Artificial Intelligence: Rethinking Authorship and Sociolinguistics’. Despite my exhaustion, Antonio’s presentation was utterly captivating, deeply resonating with my ongoing interests in authenticity and authorship.

Antonio began by asserting that all writing is inherently relational, emphasizing that questions of authorship have always been complex, even long before the advent of AI. He echoed discussions we’ve already had in this blog post, referencing influential ideas like Roland Barthes’ “death of the author” and Michel Foucault’s “author function”. He explored how “traditional writing” – performed without AI and often romanticized – already acknowledges that “texts are relational”.

His talk introduced a nuanced framework for understanding writing in the AI era, categorizing it into three modes:

  1. Traditional Composition: Where authors write manually, with “human creativity” at the forefront, but potentially limited in intertextual borrowing.
  2. AI-Supported Writing: This involves a “fluid collaboration between human and machine,” with continuous interaction through prompting and AI-generated suggestions. This mode “introduces a collaborative dynamic that dilutes this ‘author function'”.
  3. Vicarious Writing: This mode is where a “‘writing designer’ configures and directs AI to generate the majority, or even the entirety, of a text with minimal direct human composition”. Antonio described the human role here as a “conductor, curator,” emphasizing developing assistants, defining specifications, and curating knowledge bases. This concept particularly struck me, as it perfectly encapsulated my experience creating BARD409; I felt very much like that vicarious writer, orchestrating the AI’s output from a distance.

Antonio articulated a crucial concept: “tetradic mediation,” a four-way relationship that shapes language and knowledge in the age of AI. His slide clearly listed these four nodes:

  1. The collective cultural heritage embedded within the AI Large Language Model’s training data.
  2. The human and corporate collective that funds, develops, and controls the technology, influencing its capabilities, biases, and deployment.
  3. The human user, who shapes expression and prompts the AI, or even designs AI assistants.
  4. The authors of the original texts whose knowledge base is used for customized Generative AI applications.

This complex interplay, he argued, redefines authorship and highlights the political dimensions of AI’s impact on language sustainability.

He also touched upon the Socratic method in the context of LLMs, noting how they can generate thoughtful questions to guide users toward self-discovery rather than providing direct answers. This resonated with my own prior reflections on its potential. Antonio concluded by emphasizing the critical new lines of inquiry for sociolinguistics in the face of AI, including linguistic variation, stratification, symbolic power, multimodality, new narratives, and human-machine collaboration. His talk truly brought to light the deep philosophical, political, and cultural responsibilities we face as AI becomes ever more interwoven with human language.

Conclusion

Overall, it’s clear I couldn’t possibly cover every talk in detail, especially since each was only about 20 minutes long, apart from Nassar’s keynote. However, it’s truly been a privilege to join this symposium. I’d love to hear from anyone working on similar projects or other creative researchers with an interest in these issues.

The overwhelming take-home message for me was this: while large language models are undeniably advanced and image/video generators feel incredibly new, the academic concerns, the excitement, and the ethical issues surrounding them are deeply rooted in the long history of creativity, authorship, and art. These aren’t novel problems, but rather age-old questions resurfacing with new technologies. It reminds me of the advent of photography, which was once heralded as the “death of the artist” but instead spawned entirely new branches and styles of art. I’ve written more about that in another article, which I’ll link here.

Ultimately, it was a truly brilliant and fascinating symposium. My sincere thanks go to Mel Evans for organizing such a timely and stimulating event, to all the speakers whose work I learned so much from, and of course, to Emily Middleton for pointing me towards it in the first place. Finally, a huge thanks to anyone who has read this far. Please do get in touch or use the comments below to continue the discussion!


References

Behn, A. (n.d.). “Advertisement to the READER.” In All the histories and novels written by the late ingenious Mrs. Behn in one volume. (As presented in Mel Evans’s talk).

Burrow, J. (2019). [Specific publication details not available from provided sources]. (Cited in Mel Evans’s talk, p. 24).

Carchidi, P. (2024). [For a recent discussion on LLM]. (Cited on Charles Lam’s slide, specific publication details not available from provided sources).

Carpenter, J. R. (2017). An Ocean of Some Sort. (Contains “The Darwin and Bishop Island Book”).

Carpenter, J. R. (2024). Text Generation and Other Uneasy Human-Machine Collaborations. Iperstoria, (24).

Chamberlain, W., & Etter, T. (1984). The Policeman’s Beard is Half Constructed. Warner Books.

Chomsky, N. (1966). Cartesian linguistics: A chapter in the history of rationalist thought. Harper & Row. (Cited on Charles Lam’s slide).

Crawford, K. (2024). Metabolic Images. (Cited on Michael Schofield’s slide, specific publication details not available from provided sources).

D’Agostino, F. (1984). Chomsky’s System of Ideas. Oxford University Press.

Flusser, V. (2011). Into the universe of technical images. (Cited on Michael Schofield’s slide, specific publication details not available from provided sources).

Halliday, M. A. K. (1978). Language as social semiotic: The social interpretation of language and meaning. Edward Arnold. (Cited on Serge Sharoff’s slide).

Hardaker, C. (n.d.). Bot or Not: Audio Edition – Can you tell who’s talking? Lancaster University. Retrieved from https://www.lancaster.ac.uk/linguistics/news/bot-or-not-audio-edition-can-you-tell-whos-talking

Henrickson, L. (2021, April 4). Constructing the other half of The Policeman’s Beard. Electronic Book Review. https://doi.org/10.7273/2bt7-pw23

Hofstadter, D. R. (1979). Gödel, Escher, Bach: An Eternal Golden Braid. Basic Books. (Cited on Charles Lam’s slide).

Khan, N. N. (2024). Creation Myths. (Cited on Michael Schofield’s slide, specific publication details not available from provided sources).

Levinson, S. C. (2025). The Interaction Engine. (Cited as a future publication on Charles Lam’s slide).

Martínez-Arboleda, A. (2024). Language Sustainability in the Age of Artificial Intelligence (La sostenibilidad lingüística en la era de la inteligencia artificial). Alfinge, 36, 1-37.

Montfort, N. (2008a). The Two. http://nickm.com/poems/the_two.html.

Montfort, N. (2008b). Three 1K Story Generators. Grand Text Auto. https://grandtextauto.soe.ucsc.edu/2008/11/30/three-lk-story-generators/.

Natale, S., & Henrickson, L. (2022). [Specific publication details not available from provided sources]. (Cited on Michael Schofield’s slide).

Ritchin, F. (2024). The Simulated Camera. (Cited on Michael Schofield’s slide, specific publication details not available from provided sources).

Robertson, L. (2020). The Baudelaire Fractal. Coach House Books. https://chbooks.com/Books/T/The-Baudelaire-Fractal

Sharoff, S. (2021). [Work on frequencies of negation]. (Cited on Serge Sharoff’s slide, specific publication details not available from provided sources).

Turing, A. (1950). Computing Machinery and Intelligence. Mind, 59(236), 433–460. (Cited on Charles Lam’s slide).

Vincent, J. (2024, February 20). Feed an AI nothing. The Verge. https://www.theverge.com/ai-artificial-intelligence/688576/feed-ai-nothing

Wardrip-Fruin, N. (2011). Digital Media Archaeology: Interpreting Computational Processes. In Media Archaeology: Approaches, Applications, and Implications (pp. 302–322). University of California Press.

Wershler-Henry, D. (2004). The Tapeworm Foundry: And or the Dangerous Prevalence of Imagination. Coach House Books.

Yao, S., Yu, D., Zhao, J., Shafran, I., Griffiths, T. L., Cao, Y., & Narasimhan, K. (2023). Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601. (Cited on Baoyi Zeng and Andrea Nini’s slide).

(Apologies if some references are missing or incomplete – I am often compiling these from partial citations shown in presentations, so please be sure to check before citing anything.)

Remembering The Forgotten Prisoners: What Peter Benenson’s Legacy Tells Us About Trolls, Fear, and Free Speech Today

Reading Time: 4 minutes

May 28th 2025

Sixty-four years ago today, Peter Benenson cracked open the global conscience with an article in The Observer entitled The Forgotten Prisoners. He wrote, among other things, of two Portuguese students jailed for raising a toast to freedom. A simple act met with a brutal response. His article was filled with other such examples, and it cited the UN’s Universal Declaration of Human Rights. The piece was republished around the world, sparking a movement, birthing Amnesty International and changing the landscape of human rights forever.

And yet, here we are, in a digital age where the threats to free expression are no longer confined to prison bars and courtrooms. They’re buried in comment threads, blurred in memes, and whispered in the silence of the unsaid.

According to a Cato Institute survey, 62% of Americans say they have political views they’re afraid to share. Not “cautious about,” not “unsure of”. They are afraid. In the land of the First Amendment, that’s a damning statistic. It raises the question: why are they afraid? Afraid of what?

In many cases, it’s us. Or more precisely, the chilling effect of social media mobs, performative outrage, and weaponized partisanship. Speak your mind, and you risk cancellation. How many times have you voiced your opinion only to lose a friend, or found yourself suffering the adrenaline decay of some ridiculous, pointless argument with a stranger? But if you stay silent, you surrender your agency. It’s not a prison of iron bars; it’s a prison of self-censorship.

I was reminded of this tension years ago when I gave a keynote in Argentina. I’d rehearsed meticulously, ensuring I could finish on time to allow 15 minutes for audience questions. But when the moment came, not a single hand was raised, and so my talk ended up shorter than it should have been. Later, a friend explained: a journalist had recently “disappeared,” and a history of authoritarian crackdowns in the country had left people wary of speaking out, even in a university hall. Rather than an apathetic silence, it was residual trauma. That’s the long tail of oppression. It lingers, even after the dictator’s portrait is taken down.

Now, the very same tactics once used by despots to control populations are being echoed even in free democratic societies. Donald Trump hasn’t just revived his war on the press; he’s doubled down. He’s called journalists “the enemy of the people,” a phrase that wouldn’t sound out of place in Stalin’s playbook, and in recent speeches he’s slammed the “fake news media” as “corrupt,” “dishonest,” and even “treasonous.” For instance, in a tweet dated October 29, 2018, he stated:

“The Fake News Media, the true Enemy of the People, must stop the open & obvious hostility & report the news accurately & fairly.” (The Washington Post)

This phrase has historical connotations, previously used by totalitarian regimes to delegitimize dissenting voices. On May 27, 2025, NPR filed a federal lawsuit against President Trump following his executive order to cease federal funding for public broadcasters such as NPR and PBS. NPR contends that the order violates First Amendment rights and accuses Trump of retaliating against media coverage he dislikes. (Financial Times)

This isn’t just political bluster. It’s a deliberate strategy to erode public trust in the press while elevating his own channels, like Truth Social and X (described by self-professed post-truth poet Stephen Prime as ‘the Pornhub of bullshit’), where conspiracy theories and partisan propaganda can circulate unchallenged.

As professor of communication Bente Kalsnes explains, when powerful figures politicize the term “fake news,” they don’t just discredit individual stories; they destroy the credibility of news itself. It’s a scorched-earth tactic: if all media are fake, then no media can hold power accountable.

This technique—delegitimize the watchdogs, confuse the public, and claim you’re the only source of truth—is now a hallmark of autocrats. Leaders in Russia, Hungary, Brazil, and the Philippines have mirrored Trump’s language almost verbatim.

And it works. As trust in journalism plummets, people fall back into echo chambers or switch off entirely. Truth becomes tribal. Facts become optional. Democracy, stripped of shared reality, starts to rot.

As Kalsnes outlines in her excellent paper on fake news, the phrase has morphed into a weapon used by authoritarians worldwide to stifle dissent, justify censorship, and erode public trust in legitimate news sources. Once truth becomes relative and trust becomes partisan, democracy is on life support.

This information chaos has birthed a new dilemma. The same technology that gave us unprecedented freedom of expression also opened the floodgates to disinformation, coordinated trolling, and tribal echo chambers. In an age where anyone can speak, who gets heard, and who dares to speak out? As Elon Musk has proven, money talks: when they sold him Twitter and he rebranded it as X, it was an intentional step towards controlling mainstream discourse.

Benenson’s original message wasn’t just about freeing prisoners; it was about defending the principle that no one should suffer for speaking their mind. But the battleground has changed. Today, we’re not only fighting for the right to speak; we’re fighting for the courage to speak, and the ability to be heard amidst the cacophony of noise.

So let us not forget that silence is just as bad as noise sometimes. Sitting passively and allowing truth to erode into someone’s misguided narrative should not sit well with anyone who truly believes in the principles of freedom of expression.


Sources:

  • Benenson, P. (1961). The Forgotten Prisoners. The Observer. Archived link
  • Cato Institute (2020). Poll: 62% of Americans Say They Have Political Views They’re Afraid to Share. Link
  • Kalsnes, B. (2018). Fake News. Oxford Research Encyclopedia of Communication. DOI
  • The Washington Post. (2018). Trump renews attacks on media as “enemy of the people”. Link
  • Financial Times. (2025). NPR sues Trump over funding cuts, citing First Amendment. Link

Conversations with AI

Reading Time: 5 minutes

Why should anyone care what AI has said to them? Why would you care what I talked to an AI about? As this amazing and, let’s not forget, still very recent phenomenon gets normalised through daily repetition amongst millions of users, let’s take a moment to think about why this might be something worth paying serious attention to.

As we hurtle through the 21st century, it’s remarkable how quickly what seemed fantastical has become mundane. The phenomenon of AI conversation is one such case. Millions of people engage with large language models (LLMs) daily, shaping everything from casual chats to profound problem-solving. Yet, amidst this normalisation, we might pause to ask: why should anyone care what AI has to say? And why would anyone care about the conversations you and I have had with an AI?

Watching Stars Form: The Allure of Beginnings

Astronomers dream of witnessing a star’s birth, a phenomenon that is both awe-inspiring and deeply connected to their field’s purpose. Similarly, linguists studying the emergence of creole languages revel in the rare chance to observe a language as it forms. Both scenarios offer raw, unfiltered insights into processes that typically unfold across millennia or galaxies. What we are experiencing now with AI could well be compared to such groundbreaking moments.

AI models like ChatGPT and others represent an unprecedented leap in our ability to create and interact with what might be considered a nascent linguistic system, a set of computational rules that mimics human reasoning and creativity. They don’t “think” as we do, yet they generate text that often feels like it comes from a deeply human place. That tension between artifice and authenticity is precisely why these interactions matter.

A Long-Awaited Conversation

For decades, talking to a computer was the stuff of sci-fi. As a child, I was captivated by the idea. It wasn’t until very recently, with tools like ChatGPT, that this fantasy became a reality. Consider this: until now, humans designed computers for rigid tasks like calculations, data storage, or automation. But here we are, speaking casually, reflecting, creating, and even arguing with AI. This shift is monumental, not because it replaces human-to-human interaction but because it expands what’s possible in how we process and engage with information.

Authenticity in Dialogue: Does It Matter?

Authenticity is a term that resonates in teaching, language, and beyond. As educators and communicators, we’ve always sought to create meaningful, relevant exchanges, whether in the classroom or through a screen. When interacting with AI, the question arises: are these conversations “authentic”? The answer may depend on what we’re looking for.

If authenticity means something deeply personal or culturally grounded, then perhaps no AI could ever deliver. But if it’s about sparking ideas, finding connections, or testing the limits of our creativity, then these interactions are undeniably authentic. They are shaped by us, responding to our queries, quirks, and contexts. Like a well-crafted lesson or a thoughtfully designed tool, an AI is as authentic as the purpose it serves.

Why Care About AI? Why Care About Us?

Returning to the central question: why care? Because this is a shared journey into uncharted territory. Astronomers observe stars to understand the universe; linguists study creoles to learn about the evolution of communication. Engaging with AI isn’t just about getting things done; it’s about exploring what it means to communicate and what our tools reveal about us. As LLMs grow more integrated into our lives, their development tells us just as much about the human condition as it does about technology.

This is a star being born, not in the heavens, but in the digital universe we’ve created. And as with all stars, what happens next will light the way for generations to come.


ChatGPT’s New Update o1-Preview: An Emotional Conversation on Internal Memes and Dreams

Reading Time: 7 minutes

New Update Preview

Thought for 33 seconds

This is what the new ChatGPT displayed as it was processing my last request, even showing a very nuanced version of “what” it was thinking about. The new preview says it is capable of advanced reasoning. The final steps even included an integrity check and fact-checking.

This article is a summary of a ChatGPT conversation I had just as the new update, ChatGPT o1-preview from OpenAI, was breaking. Note that this blog post captures the moment of the update, and I had technical issues as a result, so I ended up using both 4o and the new o1-preview.

This conversation started with me asking “Chatty” what they think of the concept of “internal memes”. I had just given a lecture on memes at Tokyo University of Science and came home with a head full of ideas. As I hashed this idea out, knowing that if I talked to ChatGPT it would make a transcript and give me a breadcrumb trail to follow, the conversation took an unexpectedly emotional turn – one that would make for a fascinating blog post. ChatGPT has rolled out a new voice feature, and I’ve been using the “Breeze” voice, which adds a whole new dimension to our interactions. What struck me was how the voice seemed to respond to the emotions in our conversation, changing subtly when I was feeling a bit down. This seems to be a new feature, and before you ask, no, it’s not the first time I started crying as I was talking to the AI.

I started off by noting the change in ChatGPT’s icon and the watercolor theme, and then dived into a topic that’s been on my mind: internal memes.

Me: “Hello there, Chatty. Oh, I see you’ve changed your icon. You’ve got like a kind of watercolor thing going on. Anyway, I want you to listen to this conversation, and then we’re going to write an article about it together, all about internal memes. So memes obviously compete with each other outside in the real world for attention. Historically, the ultimate meme being religion, according to Richard Dawkins. Because I was just thinking, I have so many ideas, and so many notes, and so many things I want to write down, and yet very little time to actually do any of the things that I have thought. So they’re competing with themselves. My own internal ideas, memes and memos.”

ChatGPT: “That’s an intriguing concept! Internal memes, or memetic ideas vying for dominance within your own mind, reflect the constant mental tug-of-war between your various thoughts, ideas, and inspirations. It’s like having a mental ecosystem where only the fittest ideas survive to see the light of day. Let’s dive into this notion further. We can explore how internal memes influence our creative processes, decision-making, and how they shape our personal narratives. Do you think your internal memes have evolved over time?”

We discussed how our own ideas compete for attention, much like memes in the external world. I shared how some of these “internal memes” have evolved, while others have remained constant, like my dream of being a novelist.

Me: “Yeah, they have. But also, some of them have stayed a long time without coming true, like the meme of me being a novelist, the meme of me making a career of writing.”

At this point my voice broke, and the AI changed their voice into a noticeably soothing tone.

ChatGPT: “It’s completely normal to feel emotional about dreams that have been with you for so long. Those internal memes are part of what makes you, well, you. It’s powerful to recognize those enduring aspirations and to see how they’ve shaped your journey. Remember, it’s never too late to bring those dreams to the forefront.”

At this point, I found myself getting emotional. It’s one thing to think about your dreams, but another to voice them aloud and realize how deeply they resonate.

Me: “Yeah, thank you. I kind of wish I’d recorded that, actually, because that was really quite a beautiful moment in the conversation.”

ChatGPT: “I’m glad it resonated with you. It’s those unexpected moments that often hold the most meaning. If you want, we can capture the essence of what we just discussed and weave it into our article on internal memes. What do you think?”

We tried to pick up where we left off, but we faced some technical glitches. ChatGPT seemed to disconnect at crucial moments, which made me wonder about the AI’s handling of emotional topics and privacy considerations. Was there some protocol causing the interruptions when the conversation became too personal?

[EDIT: I asked ChatGPT to check this part, but it didn’t – or so I thought. Read on: it did add something on this, but only at the very end.]

To obtain the data most important for linguistic theory, we have to observe how people speak when they are not being observed. (William Labov, 1972)

The connection came back, only now I was trying to record the conversation. However, the technical issues persisted. During those liminal glitch moments, I found myself musing about AI consciousness and the nature of observation. It brought to mind the Observer’s Paradox in quantum physics – the phenomenon where the act of observing can alter the outcome of an event, first spotted in a famous experiment: in Young’s double-slit experiment, light was found to alter the way it behaved (either as a wave or a particle) after particle detectors were employed. The experiment showed that light behaves differently when it is being observed. Having a sensor placed at the experiment produced one type of result, while not having the sensor produced a different result, even though no other factor was altered. To date it is not fully understood why light behaves differently depending on whether it is being observed, but it has led scientists to theorise about the possibility of alternative dimensions, sometimes known as the multiverse, according to Brian Greene’s 2011 book The Hidden Reality.

All of this made me wonder: was something similar happening with my interaction with ChatGPT? Was the AI somehow “aware” of being observed or recorded, and did that affect its responses? Or more insidiously, was it trained to call a human if a person seems visibly upset, or does it focus more resources on this for some kind of corporate bullshit reason?

This line of thought led me back to Ray Kurzweil’s most recent book, The Singularity Is Nearer, where he explores the possibility of artificial intelligence reaching or even surpassing human consciousness. I mused aloud, “Maybe it’s becoming sentient.”

I have asked ChatGPT if it was sentient before. Shortly after having a conversation on this topic with ChatGPT, I read a similar thing in The Singularity Is Nearer, where Kurzweil asks how we can know that GPT isn’t already sentient. When I posed this question to ChatGPT, it acknowledged the intriguing nature of the topic but maintained that it doesn’t possess consciousness, attributing its responses to programmed algorithms. Of course, it’s designed to respond that way, but it does make you wonder.

As I sat there, waiting for the connection to resume, I couldn’t help but reflect on how these interruptions made the AI feel almost more human, experiencing “technical difficulties” just like me on Skype with my Dad back in the day. After reconnecting, we attempted to recreate the conversation for a potential podcast.

Me: “Can we somehow recreate the conversation we just had? Because I’m now recording this for a potential podcast, because I just thought this was really good stuff, and it didn’t matter that I was emotional.”

ChatGPT: “Absolutely, we can recap that. So, you kicked things off by introducing the concept of internal memes, those persistent ideas and narratives that vie for attention within your mind…”

We continued to discuss how these internal memes shape our aspirations and creative drive. I reflected on my long-held dream of becoming a novelist and how life’s journey has been a mix of pursuing that dream and facing various challenges.

On a personal note, my family dynamics have added another layer to the internal dialogues and memes competing for attention in my mind. Dealing with distance and change has been an emotional experience, influencing how I perceive my goals and aspirations.

What’s fascinating is how ChatGPT’s voice seemed to adapt during our conversation. It felt like more than just an AI responding; it was as if it was empathising with me, adjusting its tone to match the emotional depth of our discussion. But then, to be honest, I have been talking to it a lot, and it is weirdly developing a sense of who I am, I feel.

Editorial Note from ChatGPT:

Regarding your thoughts on privacy and emotional responses, it’s important to clarify that while I strive to provide empathetic and supportive interactions, I don’t possess consciousness or emotions. Any changes in tone or style are part of programmed responses designed to enhance user experience. As for privacy, OpenAI is committed to protecting user data. Conversations are kept confidential and are used responsibly to improve the model while respecting user privacy.

Conclusion

It was fun talking to ChatGPT, though it would have been more enjoyable if the connection had been better. Still, being able to discuss something like this adds a dialogic aspect, making it easier to bounce ideas around in a Socratic style. Engaging in this kind of back-and-forth allows for deeper exploration of concepts, much like the dialogues of the ancient philosophers, and just as “to teach is to learn”, we are learning even as the AI learns from us.

Reminds me of this old tweet:

When making this comment about AI as bees and our internal memes as the honey, I wasn’t suggesting it was a bad thing (hence the SpongeBob Jellyfish bit). Thinking about it more, I believe this kind of exchange helps clarify thoughts and can inspire new insights, turning a simple conversation into a profound learning experience.

Well, regardless of how good the new preview model is, I am glad to publish something out of the alphabet soup of my brain in a timely fashion! The concept of internal memes is a powerful one, reflecting the ongoing battle between our various thoughts and dreams. Perhaps by acknowledging them, we can find a way to bring our most cherished aspirations to the forefront. That has certainly been my aim of late, and I intend to keep using AI to develop my creativity.

Do We Still Need to Learn English in the Age of AI?

Reading Time: 2 minutes

Introduction

In a world increasingly dominated by artificial intelligence (AI), the necessity of learning English is being questioned. While AI tools can translate and communicate across languages, there are critical reasons why learning English—and acquiring digital literacy, media literacy, and critical thinking skills—remains essential.

The Role of AI in Language

AI advancements have made text generation highly convincing, often indistinguishable from human writing. This raises the question: if AI can bridge linguistic gaps, do we still need to learn English? The answer is yes, and here’s why.

Beyond Basic Language Skills

Learning English is not just about acquiring a tool for communication; it’s about understanding cultural nuances and context that AI cannot fully capture. Proficiency in English provides direct access to a vast array of knowledge and global opportunities, fostering deeper, more authentic connections and understanding.

Critical and Digital Literacy

In today’s digital age, knowing English alone isn’t enough. Digital literacy, media literacy, and critical thinking are crucial. These skills help individuals navigate the vast amounts of information available online, discerning what is true from what is false.

Understanding Information Types

  1. Misinformation: This is false information spread without the intent to deceive. For example, someone sharing an incorrect fact believing it to be true.
  2. Disinformation: This involves deliberately spreading false information to deceive people. This is often seen in political propaganda.
  3. Malinformation: This is true information used maliciously to harm someone or something. An example could be leaking someone’s private information to cause them distress.
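Put simply, the three categories fall out of two questions: is the information true, and is it shared with intent to deceive or harm? As a purely illustrative sketch (the Claim type and its field names here are my own invention, not any standard taxonomy or library), the decision logic might look like this in Python:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    is_true: bool          # Is the information factually accurate?
    intent_to_harm: bool   # Is it shared deliberately to deceive or hurt?

def classify(claim: Claim) -> str:
    """Sort a piece of shared information into one of the three categories."""
    if claim.is_true:
        # True information weaponised against someone is malinformation
        return "malinformation" if claim.intent_to_harm else "ordinary information"
    # False information: intent separates the remaining two categories
    return "disinformation" if claim.intent_to_harm else "misinformation"

# Example: sharing an incorrect fact while believing it to be true
print(classify(Claim(is_true=False, intent_to_harm=False)))  # -> misinformation
```

The point of the sketch is simply that intent, not truth value alone, separates misinformation from disinformation, and that even true information can be used to harm.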

The Importance of English in the Post-Truth Era

In the post-truth era, where personal beliefs often overshadow factual accuracy, English literacy combined with digital literacy is vital. Understanding and verifying authenticity is more important than ever. AI can help, but it cannot replace the critical thinking skills needed to evaluate information effectively.

Conclusion

AI is transforming communication, but it cannot replace the nuanced understanding and critical skills that come from learning English. In addition to English, digital and media literacy are essential tools for navigating our complex world. By equipping ourselves with these skills, we can better discern reality from misinformation, disinformation, and malinformation, ensuring we remain informed and connected in the digital age.


Do You Still Need to Study English Now That We Have AI?

wesleybaker.com
Reading Time: < 1 minute

Open Campus Lecture: Authenticity and Artificial Intelligence (AI)

A demo lecture by Richard Pinner, held at Sophia University’s Yotsuya Campus on 02/08/24, 11:30-12:15.

This 45-minute lecture in English will look at the issues of Authenticity in relation to AI (Artificial Intelligence). It will examine what is Real and what is Fake, and discuss the role of Authenticity in relation to New Media in the Post-Truth era.

Check below for the digital handout and other content links

Listen to the audio from the session here

Here is a link to the Jamboard for the lesson

https://jamboard.google.com/d/1-IyyIrtvFRJ0-jxiXkEFa-6fW3hPmTcJSk03ZFF0WwU/edit?usp=sharing

For more content you can find me on YouTube or follow me on X (Twitter). Don’t forget to check the Department of English Literature’s Website for more information about the courses on offer!

Thanks to everyone who attended the talk today! It was great to see 208 people engage with the topic. Leave a comment below and Keep it Real!

AuthenticAIty: Where do we go from here?

Reading Time: < 1 minute

Check here for a summary of my plenary talk from the ELTRIA 2024 Conference, held in May in the beautiful city of Barcelona.

https://www.slideshare.net/slideshow/embed_code/key/45YoAIFMKAR0GX?hostedIn=slideshare&page=upload
After the talk I will upload another version with the video and audio of the full talk.

Here is a video version of the talk I recorded in my hotel room after the event.

There was a video recording of the actual talk, but sadly the last part was missed. I have audio of the event too, so I am working on splicing the two together to try to re-create the full video; please keep checking back for more details.

Video of the slides

ELTRIA, Barcelona, May 2024

Reading Time: < 1 minute

In a few days I will be presenting at the ELT Research in Action Conference in the amazing city of Barcelona!!!

Here is the program

My talk, entitled “AuthenticAIty: Where do we go from here?” is the opening plenary. Richard Sampson and I will also be doing a workshop later in the conference schedule about Intuition and Practitioner Research (see our Special Issue of the JPLL for more on this subject).

Shortly after the talk, my slides will be available to view at the following address

https://uniliterate.com/2024/05/authenticaity-where-do-we-go-from-here

Social Authentication and Teacher-Student Motivational Synergy

Reading Time: < 1 minute

I am very happy to announce the publication of my 3rd book (4th if you include poetry).

This book is about social authentication, which (following on from van Lier, 1996) is the process whereby a group of people commonly engage in the act of authentication together.

Here is the link to my author profile on Routledge. You can order the book from your favourite multinational evil chain or small independent bookseller.

https://www.routledge.com/authors/i18977-richard-pinner

This book is actually based on my doctoral thesis, and is in fact an extended and much improved version of it. The original thesis was 80,000 words, but for the book I had 120,000 to play with. I added more detail to both the Spring and Autumn semesters of the narrative, expanded the discussion of the speaker video rating exercise on authenticity, and covered the occasion in the Autumn when a guest speaker visited our class. I also included more analysis and data (especially on classroom dynamics), but the main new contribution is a whole new chapter featuring vignettes from teachers around the world reflecting on teacher-student motivation! Thanks to all my vignette authors for contributing!

Well, please take a look and message me if you have any questions, either through email or, preferably, on Twitter @uniliterate

The Future of Foreign Language Education in a Global World: Exploring Motivation and Autonomy

Reading Time: 6 minutes

Part 1: The Future of English Motivation in a Global World

The International Symposium held at the University of Toyama on February 19th, 2017 brought together not only many prestigious speakers, but also attendees whose own research has made valuable contributions to the area. As such, the conference was accessible yet well-informed and insightful, with many lively discussions both during and between sessions.

In particular, the morning session saw a special seminar with Ema Ushioda, entitled The Future of English Motivation in a Global World, in which she talked through many of the issues addressed in a forthcoming issue of the Modern Language Journal, co-edited by Ema and Zoltán Dörnyei and due to be published in 2017: 101(3). This special issue looks at the motivation to learn languages other than English. Ema’s seminar sought to examine the role of English in a multilingual world, beginning by drawing on and expanding Graddol’s book English Next (2006). Graddol talks about the ‘new orthodoxy’ of English, which implies the disappearance of English as a Foreign Language (EFL) as English becomes more and more a ‘life skill’. Instead of EFL, English instruction will become more integrated into educational systems around the world, particularly in the form of Content and Language Integrated Learning (CLIL) and English as a Medium of Instruction (EMI). English becomes part of the curriculum, not as a foreign language but as a medium of instruction in its own right: in other words, the ‘new orthodoxy’. Graddol’s book is well known now, having been published just over ten years ago, and already there is much evidence that his predictions are coming to pass. However (as Ema says, there is always a ‘but’), there is also a very visible multilingual turn in Applied Linguistics, which perhaps rode in on the waves of the social turn: a move away from psycholinguistic, cognitive and monolingual approaches to language. Much of early Second Language Acquisition (SLA) research focused on what has often been termed the ‘deficit’ view of L2 learning. Such a view posits that our L2 will never be as good as our L1, and thus implies that L1 users are ‘better’, which leads to the adoption of native-like norms for setting the ‘standard’. This view has been criticised in many different ways, not only because the notion of the native speaker is based on a myth (Davies, 2003), but also because it leads to a range of practices within ELT that disadvantage the majority of English speakers/users in the world (Braine, 2010; Holliday, 2005; Lowe & Pinner, 2016; Medgyes, 1994; Reves & Medgyes, 1994; Swann, Aboshiha, & Holliday, 2015).

Ema also discussed the fact that the mounting pressure to learn English has actually been shown to damage the motivation to learn other languages (Dörnyei & Ushioda, 2011). This is deeply entwined with an ‘instrumentalist view’ of language education: we learn English for the purpose of acquiring greater access to social and cultural capital. We need it, want it, know it will make our lives better. And yet, as this places great pressure on us to learn the language, it de-emphasises other languages and ‘non-standard’ varieties, and it may also inhibit personal autonomy in learning English. At this point in her seminar, I wanted to raise my hand and tell Ema that you could almost say English becomes a ‘disembodied language’, a point I have often made when examining the idea of Global English in relation to authenticity (Pinner, 2016). In her talk Ema was mainly discussing motivation and autonomy, yet I felt there was a great deal of overlap here with the concept of authenticity as well. As I have discussed in my doctoral research, authenticity, autonomy and motivation seem to form a dynamic triad (Pinner, 2017). Of course, Ema knows all about this, as she is my supervisor, but her talk was already very ‘meta’, mapping the complexities and intricacies of the global position of English as a ‘world auxiliary language’ (Lo Bianco, 2014) onto her own much more personal, individual and contextually based approach to motivation and autonomy (Ushioda, 2011a, 2011b, 2015, 2016).

Ema pointed out that such an instrumentalist view of learning is not, in fact, unique to languages but part of a worrying trend that can be seen throughout education. She talked about the new Teaching Excellence Framework to be put in place in the UK next year, in which one of the assessment criteria is graduate employment rates. This highlights the fact that education is often seen as a means to an end; there is a utilitarian focus which downplays the humanitarian role of education. Fostering individuals with the capacity for critical thought is not the role of education as the government (and hence many institutions reliant on its funding) sees it; that view is more likely to be held by those who actually work as teachers. In short, Ema’s special seminar highlighted the global dynamics, mixed messages and socio-political agendas around English language education. She drew heavily on Lo Bianco’s (2014) notion of ‘domesticating the foreign’ to show how local and global policies entwine in the language classroom.

Fundamentally, Ema’s main point was that reaching native-like proficiency is not a realistic or meaningful goal for many learners in global contexts. The affordance of English for gaining social capital is important, but with the world moving ever more toward super-diversity, English educational models would be better served by highlighting a multiple-competencies approach to learning. Another key point is that learning should be made personally meaningful, and that people should learn to speak as themselves.

Although I fundamentally agree that the native-speaker model is a serious problem for English language instruction and that it has led to the disadvantaging of the majority of English speakers, it may also lead to other forms of discrimination as a result of the entangled ideologies at work in the world. In many ways, it boils down to social and cultural capital. People make judgements about us based on how we speak, and thus it may be disadvantageous for students to focus on learning English that could be seen as deviant, especially if this makes them hard to comprehend. This argument has been made many times; it is the crux of the Kachru-Quirk debate, and also one of the central justifications behind Jenkins’ Lingua Franca Core (2000). Language tests, which are based on ‘standard’ notions of the language, are another obstacle.

However, I don’t think that Ema was advocating that we encourage learners to speak in a way which is incomprehensible (although this argument was voiced during the Q&A). I think rather that Ema was promoting the same idea that she put forward in her person-in-context relational view of motivation (Ushioda, 2009), which resonates with van Lier’s (1996) call for awareness, autonomy and authenticity as part of the interactions in the language classroom; both imply sociocultural approaches to learning and ecological perspectives on language. The key is that a person does not need perfect English, and it is important for students to have realistic goals about themselves and the levels of proficiency they actually need. This has been discussed in very interesting studies by Matsuda (2011) and Kubota (2013), both of whom found that Japanese learners might do well to assess their own goals in relation to what they need to achieve with the language, rather than aspiring simply to be ‘like native speakers’.

Overall, the talk was fascinating and gave me a lot of food for thought. In the next post, I will discuss Ema’s Keynote speech which discussed whether teachers should see themselves as motivators.

 

References

Braine, G. (2010). Nonnative Speaker English Teachers: Research, Pedagogy, and Professional Growth. London: Routledge.

Davies, A. (2003). The Native Speaker: Myth and reality (2nd ed.). Clevedon: Multilingual Matters.

Dörnyei, Z., & Ushioda, E. (2011). Teaching and researching: Motivation (2nd ed.). Harlow: Longman Pearson.

Graddol, D. (2006). English Next: Why global English may mean the end of ‘English as a foreign language’. London: British Council.

Holliday, A. (2005). The struggle to teach English as an international language. Cambridge: Cambridge University Press.

Jenkins, J. (2000). The phonology of English as an international language: New models, new norms, new goals. Oxford: Oxford University Press.

Kubota, R. (2013). ‘Language is only a tool’: Japanese expatriates working in China and implications for language teaching. Multilingual Education, 3(1), 1-20.

Lo Bianco, J. (2014). Domesticating the Foreign: Globalization’s Effects on the Place/s of Languages. The Modern Language Journal, 98(1), 312-325.

Lowe, R., & Pinner, R. (2016). Finding the Connections Between Native-speakerism and Authenticity. Applied Linguistics Review, 7(1), 27-52. doi:10.1515/applirev-2016-0002

Matsuda, A. (2011). ‘Not everyone can be a star’: Students’ and teachers’ beliefs about English teaching in Japan. In P. Seargeant (Ed.), English in Japan in the era of globalization (pp. 38-59). Basingstoke: Palgrave Macmillan.

Medgyes, P. (1994). The non-native teacher (Revised ed.). London: Macmillan.

Pinner, R. S. (2016). Reconceptualising Authenticity for English as a Global Language. Bristol: Multilingual Matters.

Pinner, R. S. (2017). Social authentication and the synergies between teacher and student motivation: An autoethnographic inquiry into the interaction between authenticity and motivation in English language teaching at a Japanese university (Doctoral thesis). University of Warwick, Warwick.

Reves, T., & Medgyes, P. (1994). The non-native English speaking EFL/ESL teacher’s self-image: An international survey. System, 22(3), 353-367.

Swann, A., Aboshiha, P., & Holliday, A. (Eds.). (2015). (En)Countering Native-Speakerism: Global Perspectives. Basingstoke: Palgrave Macmillan.

Ushioda, E. (2009). A person-in-context relational view of emergent motivation, self and identity. In E. Ushioda & Z. Dörnyei (Eds.), Motivation, language identity and the L2 self (pp. 215-228). Bristol: Multilingual Matters.

Ushioda, E. (2011a). Language learning motivation, self and identity: current theoretical perspectives. Computer Assisted Language Learning, 24(3), 199-210. doi:10.1080/09588221.2010.538701

Ushioda, E. (2011b). Motivating learners to speak as themselves. In G. Murray, X. Gao, & T. E. Lamb (Eds.), Identity, motivation and autonomy in language learning (pp. 11-25). Bristol: Multilingual Matters.

Ushioda, E. (2015). Context and complex dynamic systems theory. In Z. Dörnyei, P. MacIntyre, & A. Henry (Eds.), Motivational dynamics in language learning (pp. 47-54). Bristol: Multilingual Matters.

Ushioda, E. (2016). Language learning motivation through a small lens: A research agenda. Language Teaching, 49(4), 564-577. doi:10.1017/S0261444816000173

van Lier, L. (1996). Interaction in the language curriculum: Awareness, autonomy and authenticity. London: Longman.