AI & Creativity: Reflections on Language, Authorship, and the Future of Art

Featured image: a woman walks past a street-art mural depicting AI themes – a typewriter consuming a book, with sci-fi elements, in a South American city setting.
Reading Time: 20 minutes

Earlier this month, I had the pleasure of attending a hybrid symposium hosted by the University of Leeds, focusing on AI and creativity. This event was serendipitously brought to my attention by Dr. Emily Middleton, whom I’d previously connected with regarding my BARD409 project. She kindly pointed me toward the symposium, organized by Dr. Mel Evans. Given my long-standing interest in computers and their creative potential, this topic was right up my street.

The symposium kicked off at 10:30 AM UK time on Friday, June 6th, which for me in Japan was much later, at 6:30 PM JST. As I joined from the other side of the world, there was a charming sense of surprise from many participants, who mostly seemed to be gathered in person or knew each other from the vibrant academic scene in Leeds. Nearly all the other participants were based at Northern universities, mostly in Yorkshire – my birthplace and old stomping ground! It was genuinely refreshing to find such a focused group of people deeply interested in this field.

Panel 1: Mimicry, False Profits, and Creative Imitation

The first speaker was Serge Sharoff from the University of Leeds, with his talk titled: ‘From Mimicry to Meaning: Investigating differences between AI and Human Language’. Serge immediately highlighted several stylistic quirks that often give away AI-generated text. A particularly striking one was what linguists might call negative-positive parallelism or antithesis – that recurring pattern where the AI negates a negative and then asserts a positive. Think “It’s not about what you can’t do, it’s about what you can achieve,” or “This isn’t a setback; it’s a stepping stone to growth.”

His presentation included a detailed chart illustrating the frequencies of negation across various text types. For instance, content categorized as ‘Fiction’ showed a notably high proportion of clauses with negation (26.32% mean, 17.10% median), while ‘Promote’ and ‘Inform’ texts had very low percentages. He provided compelling examples of this AI stylistic feature: “Air could not freeze her, fire could not burn her, water could not drown her, earth could not bury her,” (which seems to be from Maria Metsalu’s Mademoiselle X performance, which looks amazing) alongside more conversational examples typical of chatbots.

This specific cadence, once noticed, can become quite cloying, as I can personally attest from my work editing outputs for BARD409 and other writing. It’s uncanny, because just a few days after Serge’s talk, a popular post on Reddit’s r/ChatGPT subreddit received almost 700 upvotes, titled “Make it stop!”. Users there were venting about this exact “not X, but Y” pattern, echoing Serge’s observations perfectly.
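Out of curiosity, the cadence Serge described is simple enough to screen for mechanically. Below is a rough Python heuristic – my own toy regexes for the surface “not X, but Y” templates, not anything from his corpus methods – just to show that the pattern is regular enough to be caught by a machine:

```python
import re

# Crude, illustrative regexes for the negation-contrast cadence.
# Real stylometric work would use parsing and a proper corpus;
# these only catch the most obvious surface templates.
PATTERNS = [
    # "It's not about X, it's ..." / "This is not X; but ..."
    re.compile(r"\b(?:it'?s|this is)\s+not\s+(?:about\s+)?.+?[;,]\s*(?:it'?s|but)\b",
               re.IGNORECASE),
    # generic "not X, but Y"
    re.compile(r"\bnot\s+\w+[^.;]*[,;]\s*but\s+\w+", re.IGNORECASE),
    # "This isn't a X; it's a Y"
    re.compile(r"\bisn'?t\s+a\s+\w+;\s*it'?s\s+a\b", re.IGNORECASE),
]

def flag_negation_contrast(sentence: str) -> bool:
    """Return True if the sentence matches any negation-contrast template."""
    return any(p.search(sentence) for p in PATTERNS)

examples = [
    "It's not about what you can't do, it's about what you can achieve.",
    "This isn't a setback; it's a stepping stone to growth.",
    "The weather was pleasant all afternoon.",
]
for s in examples:
    print(flag_negation_contrast(s), "-", s)
```

Running this flags the first two sentences and passes the third – a blunt instrument, but it shows why the tic is so easy to spot once you know it’s there.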

Serge’s analysis delved deep into corpus and linguistic methods. He brought up Halliday’s 1978 model of meaning-making in society, presenting a diagram that visually mapped the process from “language as system” through various contextual layers – “context of culture,” “cultural domain,” “situation type,” “register,” and “text type” – ultimately leading to “language as text” and “meaning making.” The diagram underscored the semiotic nature of human societies and their cumulative culture. This discussion particularly stressed the concept of intentionality. This was a pivotal point in his presentation: while humans inherently exhibit intentionality in their language use, AI models, at least currently, do not. This distinction, the presence or absence of true intentionality, formed a significant core of his argument regarding the fundamental differences between human and machine language.

Next up was a truly compelling talk by Professor Claire Hardaker of Lancaster University, titled ‘Bot or Not: New False Profits?’. As a leading figure in forensic linguistics, Professor Hardaker’s research at Lancaster delves into deceptive, manipulative, and aggressive language in online environments, making her perfectly positioned to explore the nuances of AI-generated content. Her talk revolved around the fascinating “Bot or Not” project, a unique resource (available at https://www.lancaster.ac.uk/linguistics/news/bot-or-not-audio-edition-can-you-tell-whos-talking) designed to test how well people can distinguish between human-produced content and that generated by large language models or voice cloning.

Claire presented striking findings from the “Bot or Not” experiment. One slide visually explained the “Bot or Not” challenge, depicting an interface where users would listen to audio and then decide if it was human or AI. Her subsequent slides revealed the challenging results, often showing that participants, despite their confidence, frequently struggled significantly to differentiate between human and AI output, highlighting the alarming sophistication of current AI. She then moved to real-world, high-profile cases of AI misuse. A dedicated slide laid out the chilling February 2024 incident in Hong Kong where a finance employee at a multinational firm was duped into transferring US$25.6 million (HK$200 million). The sophisticated scam involved an entire deepfake video conference call, where AI-generated likenesses and voices of the CFO and other colleagues convinced the employee to make the transfers, shattering the illusion of trusted communication. The growing criminal aspects of AI, particularly with deepfake audio and text, are a serious concern that her work actively investigates, examining topics from online abuse to human trafficking.

However, Claire also offered a crucial counter-argument: not all AI is inherently “bad.” Her slides explored the concept of AI’s potential to democratize creation. She pointed out that “dominant entities” in the music industry *cough cough, Sony* are notoriously exploitative, exclusionary and oppressive.

My hero Steve Albini talks about that in his 2014 keynote at Face The Music, which you can watch here. AI tools could lower barriers to entry for creators, while also delivering efficiency gains. This idea deeply resonates with me and my work at Hungry Wolf Press. Many of the authors I collaborate with, often not full-time writers, leverage AI tools to boost their productivity and streamline their creative process. This isn’t about laziness or simply churning out AI-generated content; it’s about the intelligent application of tools to accelerate their output, allowing them to focus on the higher-level creative elements. My own process for writing this very article – dictating raw notes to be processed by an LLM trained on my writing, then refined through my editing – is another example of using AI to produce (hopefully!) higher-quality work more efficiently.

One scary thing was that most humans perform “worse than chance (7.5/15)” at accurately identifying whether a text was written by a bot. More concerning still is that one of her current students has run a study showing detection accuracy drops from 40% to 4% depending on whether or not we are primed to be looking for AI or fakes. Perhaps this allows us some greater empathy for the employee who lost his company that US$25.6 million in Hong Kong.

Professor Hardaker’s insights were truly captivating, and I plan to incorporate “Bot or Not” into my own teaching. I’m currently leading a writing workshop where we grapple with the complexities of fake news, misinformation, disinformation, and malinformation – a perfect context to explore the ethical dimensions of AI writing with my students.

Following Claire’s insightful presentation, Charles Lam took the floor, focusing on ‘Imitation in human writing: an argument against incompatibility between machine and creativity’. As an EAP (English for Academic Purposes) instructor, Charles’s talk resonated strongly with practical applications of language and learning.

Charles opened by inviting us to consider fundamental questions: “Can machines think (or write)?” and “Can we tell?” These queries led to further thought-provoking points, exploring whether it truly matters, when it matters, and how “organic, ‘wetware’ computers” (humans) are mechanistically structured. He drew parallels with Levinson’s (2025) concept of “The Interaction Engine,” and the intricate connections highlighted in Gödel, Escher, Bach: An Eternal Golden Braid, hinting at deeper parallels between language and other symbolic systems.

A significant portion of his talk revisited Turing’s (1950) seminal work, “Computing Machinery and Intelligence,” and “The Imitation Game.” Charles presented Turing’s original proposition: “I PROPOSE to consider the question, ‘Can machines think?’” and then adapted it to contemporary AI: “What will happen when a machine takes the part of A [the man participant] in this game?” This reframed the classic Turing Test to explore AI’s role in creative imitation.

He then delved into Chomsky’s perspective on “Creativity,” specifically referencing his “Creative Aspects of Language Use (CALU)”. Charles emphasized that human speakers create novel sentences spontaneously – sentences we’ve never heard ourselves produce. This ability stems from learning grammatical rules and patterns, rather than simply memorizing individual phrases, as famously demonstrated by the ‘wug-test’ in morphology.

Charles illustrated how human creativity, even in seemingly spontaneous acts like humour, often follows patterns. In a slide titled “Jokes,” he explored the distinction between formulaic jokes and those demonstrating true novelty. He provided a delightful example from Philomena Cunk: “School in Shakespeare’s day and age was vastly different to our own. In fact, it was far easier, because he didn’t have to study Shakespeare.” This highlighted how humour, while typically seen as creative, can be constrained, goal-oriented, and productive through imitation, often mimicking specific styles.

Bringing this back to writing pedagogy, Charles drew strong similarities between the way students acquire academic writing style through imitation of conventions and how large language models mimic specific genres. He argued that AI, when utilized thoughtfully, could serve as a valuable teaching tool, empowering students and others to more effectively master academic writing and various creative forms by understanding and leveraging imitation.

Q&A: Data and Ethical AI

The Q&A session that followed this first panel was very engaging. I posed a question to Serge, keen to understand the optimal approach to training language models, particularly for stylistic imitation. My own experience with creating a custom GPT to rewrite Shakespeare in the style of specific authors taught me that retraining the model precisely on the target author was crucial; otherwise, it pulled in too many disparate styles from its general dataset. I wondered if this principle of focused, specific data held true across the board, or if large, generalized datasets were actually better.

Serge’s answer was illuminating, especially concerning ethical considerations. He argued that, counter-intuitively, it’s actually better to have a larger and more generalized training dataset, particularly when addressing issues like refusing to produce sexist or racist content. The only way many large language models (LLMs) can effectively identify and filter out such problematic content is if they have actually been trained on data containing those very elements. So, while an AI will, in most cases, refuse to generate harmful content (unless jailbroken or a custom, locally-tailored model), it needs to be exposed to that content during training to develop the capacity to recognize and subsequently avoid it. It reinforces the idea that understanding the “bad” is essential for producing the “good” or, at least, the ethically responsible.

This reminded me of something I recently read about an artist who experimented with an image-generating AI by removing all its training data. What it produced was akin to a minimalist Rothko meets BBC Test Card G – a testament to how crucial comprehensive data is. You can read more about that fascinating experiment here.

Keynote: (L)imitations of AI and Creative Writing

After a short break, we were treated to the keynote address by Nassar Hussain, a Senior Lecturer in Creative Writing and a poet from Leeds Beckett University. His talk, ‘(L)imitations: some notes on AI and Creative Writing,’ explored the boundaries and possibilities when machines venture into the realm of poetic creation. On a personal note, Nassar clearly had strong connections within the symposium’s in-person community, underscoring the tight-knit network of creatives exploring experimental literature with technology. It made me wish I was there in person.

Nassar began by discussing what’s often cited as the very first book ever written by a computer: “The Policeman’s Beard is Half Constructed” (1984). Spoiler, it’s NOT the first – but it is, according to Leah Henrickson, “one of the first algorithmically authored books – if not the first – to be printed and marketed for a mass readership.” This peculiar and intriguing book was generated by a program named Racter, developed by William Chamberlain and T. Etter. It was uncanny, as I had literally just ordered my own copy of this increasingly collectible and unique item that very day, having noticed its price steadily climbing. Nassar pointed out that the original version of the book included a floppy disk, allowing users to interact directly with Racter. This seems unlikely, however, as the software (INRAC) was made commercially available in 1985 and retailed for between $244.95 and $349 USD, which is, I am pleased to report, less than what I paid for my collectible first edition. Whatever the details, it quickly became clear when interacting with Racter’s software that “The Policeman’s Beard” itself had been heavily edited by human hands, highlighting that it wasn’t a pure, unadulterated machine creation.

He moved on to illustrate different facets of machine-generated text, including a compelling generative poetry installation from around 2004 – a project he described as producing a staggering 18,000 pages of poetry. As part of this unique exhibition, attendees could walk around, pick up any poem they liked, and take away as many bits of paper as they wished. He recalled one such algorithmic poem titled “Institution in the Middle of Mine,” exemplifying the distinct style of such computer-generated works. This hands-on, take-away approach emphasized the sheer volume and accessibility of machine-generated text.

Nassar then showed us specific textual examples of how machines can be “inspired by” or imitate human works. One slide displayed a dense, highly experimental text, filled with the repeating conjunction “andor” and referencing various literary and conceptual figures like “finnegans wake,” “Barrett Watten,” “Bruce Andrews,” and “Lyn Hejinian.” He also discussed excerpts from “The Tapeworm Foundry: And or the Dangerous Prevalence of Imagination” by Darren Wershler-Henry (2004). This is a pivotal work in conceptual writing, exploring the limits of language and meaning, which fits perfectly into a discussion of imitation and limitation in creative writing. Another slide presented a segment of Dylan Thomas’s iconic poem, “Altarwise by owl-light in the half-way house.” Nassar used this to specifically highlight how a machine might imitate or draw inspiration from human poetic phrasing, pointing to the line “The atlas-eater with a jaw for news” as a powerful example of such algorithmic ‘learning’.

His talk also touched upon bpNichol, the renowned Canadian avant-garde poet. Given Nassar Hussain’s own academic work critically engaging with bpNichol’s writings, this connection further deepened the keynote’s exploration of experimental literary forms that often blur the lines between human and algorithmic creativity. The overall message was a nuanced exploration of what AI can, and cannot, do in the constantly evolving landscape of creative writing.

Panel 2: Stochastic Poetry, Commonwealth Narratives, and Emergent Effects in Film

Following the thought-provoking keynote, the second panel began with J.R. Carpenter, a distinguished artist, writer, and researcher specializing in creative practice at the University of Leeds. Known for her pioneering work using the internet as a medium for experimental writing since 1993, her talk delved into her captivating stochastic poetry projects and uneasy human-machine collaborations. Her presentation echoed much of the detail found in her recent article, “Text Generation and Other Uneasy Human-Machine Collaborations”.

Carpenter began by situating contemporary digital literary practices within a broader historical context, noting that “experimentation with generative, permutational, and combinatory text began long before digital computers came into being”. She traced this lineage from classical rhetorical figures like ‘permutatio’ in the fourth century to Jonathan Swift’s satirical machine in Gulliver’s Travels, designed to mechanically generate knowledge. Her own practice-led research, she explained, involves creative experiments with text generation, informed by these earlier human and machine generators.

While my notes recalled a commission about The Waste Land, Carpenter’s discussion broadly explored her stochastic text experiments, drawing parallels to pioneering works. A significant focus was Christopher Strachey’s Love Letter generator, programmed in Manchester, England, in 1952, pre-dating many commonly cited computer text experiments. This early generator for the Manchester University Computer Mark I employed a “slot” method, selecting words from lists to populate set-order sentences like “You are my (adjective) (noun)”. The outputs, though often described as “amateurish, outlandish, and even absurd”, carried a deliberate interrogation of authorship through its signature, “M.U.C.” (Manchester University Computer). Examples she presented included:

  • “DARLING LOVE YOU ARE MY AVID FELLOW FEELING. MY AFFECTION CURIOUSLY CLINGS TO YOUR PASSIONATE WISH. MY LIKING YEARNS FOR YOUR HEART. MY TENDER LIKING. YOU ARE MY WISTFUL SYMPATHY. YOURS LOVINGLY, M.U.C.”
  • “HONEY MOPPET, MY FONDEST FERVOUR LONGS FOR YOUR PASSION. MY YEARNING KEENLY LOVES YOUR ENTHUSIASM. MY SWEET YEARNING COVETOUSLY PINES FOR YOUR AFFECTIONATE LONGING. YOU ARE MY ANXIOUS BEING, MY EAGER SYMPATHY. YOURS BURNINGLY, M.U.C.”
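Strachey’s slot method is simple enough to appreciate in a few lines of code. Here is a toy Python reconstruction – the templates and word lists below are my own invented stand-ins, not Strachey’s actual vocabulary, but the mechanism (pick words from lists, drop them into fixed-order sentences, sign off as M.U.C.) is the same:

```python
import random

# Toy reconstruction of the 1952 Love Letter generator's "slot" method:
# fixed sentence templates filled from word lists. Vocabulary invented
# for illustration; Strachey's real lists were drawn from a thesaurus.
SALUTATIONS = ["DARLING LOVE", "HONEY MOPPET", "DEAR SWEETHEART"]
ADJECTIVES = ["AVID", "WISTFUL", "TENDER", "EAGER", "ANXIOUS", "PASSIONATE"]
NOUNS = ["FELLOW FEELING", "SYMPATHY", "YEARNING", "AFFECTION", "LIKING"]
VERBS = ["CLINGS TO", "YEARNS FOR", "PINES FOR", "LONGS FOR"]
ADVERBS = ["LOVINGLY", "BURNINGLY", "KEENLY", "WISTFULLY"]

def love_letter(rng: random.Random) -> str:
    """Fill the fixed-order templates with randomly chosen words."""
    lines = [
        f"{rng.choice(SALUTATIONS)},",
        f"YOU ARE MY {rng.choice(ADJECTIVES)} {rng.choice(NOUNS)}.",
        f"MY {rng.choice(NOUNS)} {rng.choice(VERBS)} "
        f"YOUR {rng.choice(ADJECTIVES)} {rng.choice(NOUNS)}.",
        f"YOURS {rng.choice(ADVERBS)}, M.U.C.",
    ]
    return "\n".join(lines)

print(love_letter(random.Random(1952)))
```

Even this miniature version produces the same endearing, slightly unhinged register as the originals above – which is rather the point about how little machinery those outputs required.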

Carpenter emphasized that for these early text generators, the “attempt” or the process of creation itself is as important as the final output. As Noah Wardrip-Fruin (2011) suggests, the Love Letter generator functions as “a parody of a process,” brutally simplifying human letter-writing. It kind of reminds me of the simple Fake News Generator that my friend and colleague, Stephen Prime, has created for his sarcgasm.com site, but the Fake News Generator is a love-letter to the post-truth absurdity of modern-day politics.

J.R. Carpenter then detailed her own project, TRANS.MISSION [A.DIALOGUE] (2011), a browser-based computer-generated dialogue she adapted from Nick Montfort’s “The Two”. This work deliberately explores the “complexity of the operation of gender as a variable”. Her approach was a “hack” rather than a pristine code creation – a “wilful mutilation” that deliberately transformed the production process. This allowed her to actively dismantle conventional linguistic biases, injecting a distinctly queer and female voice into the generative process. She presented outputs that played on gender stereotypes, such as:

  • “The translator conveys her encouragements. The administrator relays his congratulations. The pilot broadcasts her explanations. The receptionist transmits his salutations.”

This active engagement in shaping the machine’s output, infusing her unique perspective to dismantle the male gaze, was a crucial aspect of her talk, demonstrating how human intervention guides algorithmic creativity.

J.R. is also the author of seven books, including “An Ocean of Some Sort” (2017), which contains a section aptly titled “The Darwin and Bishop Island Book”. She generously highlighted the work of her friend, the poet Lisa Robertson, specifically mentioning “The Baudelaire Fractal” (2020). Published by Coach House Books, this book is noted for its innovative use of algorithmic text generation and experimental prose, seamlessly blending computational methods with poetic exploration.

This entire discussion on stochastic and generative poetry, and the nuanced human role in its creation, strongly resonated with my own work using tools like Botnik with students, and my own poetry collection, Moloch. It also connects back to the pioneering text experimentations of William S. Burroughs and his cut-up technique, which I discuss a little here and in more detail here.

The next presentation was a concise but powerful talk by Skylar Wan from Leeds, titled ‘Using AI to Reinterpret the Evolution of Commonwealth Research Narratives Across 75 Years’. Skylar, whose work at the University of Leeds often explores digital humanities and computational approaches to postcolonial and Commonwealth literature, presented what was essentially a meta-study.

Her talk strikingly demonstrated the immense power of large language models in handling vast datasets. She showed how AI can expertly categorize, tag, and make sense of massive volumes of information, allowing researchers to drill down into specific themes and patterns. A visual in her presentation, mapping the “Thematic landscape of Commonwealth research,” clearly illustrated this, showing how research clustered around various Sustainable Development Goals (SDGs). For instance, “Good Health And Well Being” emerged as a dominant theme at 25.82%, while “Industry Innovation And Infrastructure” was notably lower at 3.77%. This kind of analysis not only highlights areas of intense focus but also implicitly points to gaps within the dataset. For those keen to explore this fascinating application of AI in literary and historical research, Skylar Wan’s other published works would be an excellent resource.

Next to present was Michael Schofield (aka Michael C Coldwell), a filmmaker and Lecturer in Experimental Film at the University of Leeds. Michael discussed his captivating work on ‘The Jettison’ (2024), a film profoundly inspired by Chris Marker’s seminal work, ‘La Jetée’ (1962). Visually, his presentation underscored this influence, showing side-by-side comparisons that evoked Marker’s distinctive black-and-white, still-image narrative style.

Mick’s process involved a fascinating stochastic use of AI, primarily utilizing tools like Midjourney and Runway to generate visuals for his film. His approach embraced the often unpredictable nature of AI outputs. He cited Fred Ritchin’s The Simulated Camera, which notes that “Rather than photographs that so often emulate previous photographs, the results can be surprising. There have been many instances where the image generated makes me rethink my expectations of beauty… it’s AI being used not to simulate a previous medium but to emerge as a new and potentially different medium”. This sentiment was echoed by Nora N. Khan, who posited that “AI images, with their often weird, goopy, and unsettling aspects, can be compelling in part because of what they don’t directly represent… these processes have, increasingly, emergent effects that can’t be predicted at all”. Michael’s slides illustrated his prompts for Midjourney, showing how specific textual descriptions could lead to evocative, even unsettling, visual sequences depicting “huge derelict machines somewhere between robots and an oil refinery tower”. He also explained the underlying mechanics of diffusion models, demonstrating the “Fixed forward diffusion process” from data to noise and the “Generative reverse denoising process” back to an image, explaining the inherently probabilistic nature of the output.
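For readers unfamiliar with diffusion models, the fixed forward process Michael described can be sketched numerically. This is a bare-bones illustration with an arbitrary noise schedule and a tiny stand-in “image” – not the mechanics of Midjourney or Runway, which operate on real images and learn the reverse denoising step with a neural network:

```python
import math
import random

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t from the forward process: each data value is scaled
    down by sqrt(alpha_bar_t) and mixed with Gaussian noise scaled by
    sqrt(1 - alpha_bar_t), where alpha_bar_t is the running product of
    (1 - beta) over the schedule up to step t."""
    alpha_bar = 1.0
    for beta in betas[: t + 1]:
        alpha_bar *= 1.0 - beta
    return [
        math.sqrt(alpha_bar) * x + math.sqrt(1.0 - alpha_bar) * rng.gauss(0.0, 1.0)
        for x in x0
    ]

betas = [0.05] * 50          # illustrative noise schedule, not a real one
x0 = [1.0, -1.0, 0.5, 0.0]   # a tiny stand-in "image"
rng = random.Random(42)
print(forward_diffuse(x0, 49, betas, rng))  # by the last step, mostly noise
```

The “generative reverse denoising process” is the learned inverse of this: starting from pure noise, a trained network repeatedly subtracts its estimate of the noise, and the randomness of that starting point is exactly why the outputs are probabilistic rather than repeatable – the property Michael turned into a narrative device.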

The core narrative of ‘The Jettison’ revolves around a man who, having lost his daughter, attempts to recreate her using AI. Critically, Michael turned an inherent “weakness” of generative AI into a profound narrative strength: the inability of AI to consistently generate the exact same face. This inconsistency, often seen as a limitation, was ingeniously woven into the storyline, becoming a central metaphor that strengthened the film’s themes of grief, memory, and artificial reconstruction. This speaks to the “Lovelace effect,” which suggests that “creativity cannot be assigned as a quality of specific computing systems, but can only be attributed by users in specific situations”, implying the human hand in shaping the creative outcome.

Mick’s philosophical approach to filmmaking with AI was further underscored by a quote from Vilém Flusser: “When I write, I write past the machine toward the text… When I envision technical images, I build from the inside of the apparatus”. This approach allowed him to explore the “radical post-copyright experiment” we are now in, as framed by Kate Crawford (2024), prompting a rethinking of copyright from the ground up.

‘The Jettison’ has already garnered significant attention, being selected for the prestigious Burano Artificial Intelligence Film Festival (BAIFF) in Venice, a testament to its innovative spirit and compelling execution. The film looks absolutely fantastic, and I’m certainly adding it to my must-watch list.

Panel 3: AI, Forensic Linguistics, and Language Sustainability

The symposium was heading into its final stretch, and by this point, with the clock creeping towards midnight in Japan, I was getting pretty tired. While I’d been quite active in the chat earlier, by now it was a struggle to just keep notes. I was almost tempted to skip the final panel, but I persisted – a significant feat for someone who usually calls it a night by 9:00 PM. The symposium finally wrapped up at 4:00 PM UK time, which for me was 12 AM!

The first speaker of Panel 3 was none other than Mel Evans herself, the symposium’s organizer, with her talk titled ‘Imitating: building a human/AI creative corpus’. Her presentation was incredibly engaging, particularly her discussion of a pilot study involving the seventeenth-century writer Aphra Behn.

Mel recounted a fascinating historical anecdote about Behn’s own work. She showed a slide from a collection of Behn’s writings that featured an “Advertisement to the READER”. This disclaimer explicitly stated: “THE Stile of the Court of the King of Bantam, being so very different from Mrs. Behn’s usual way of Writing, it may perhaps call its being genuine in Question; to obviate which Objection, I must inform the Reader, That it was a Trial of Skill, upon a Wager, to shew that she was able to write in the Style of the Celebrated Scarron, in Imitation of whom ’tis writ, tho’ the Story be true.”

This historical parallel perfectly illuminated an age-old concern: issues of forgery, authenticity, and imitation have plagued art and writing since their very inception. Mel’s talk highlighted that whether it’s Aphra Behn meticulously imitating another’s style to the point of needing a disclaimer, or the well-documented cases of forgeries and stylistic rip-offs that infuriated authors like Charles Dickens, the tension between imitation and creation, and art versus inspiration, is not new.

She concluded with a powerful quote from Burrow (2019) that tied these historical anxieties to our current AI landscape: “Human behaviour is increasingly seen as predictable by machines, and yet we are also repeatedly told that human beings make choices – as consumers, as lovers, and as writers. Machines can replicate and anticipate many of our choices, of what we buy, of who we are likely to love, and of the word which we are most likely to write next. The ghost and the machine have never seemed more closely allied, and yet have never been so widely separated. The machine mimics the ghost, and the ghost cries out that it has a choice, that it is more than the machine”. It’s a very interesting area that continues to provoke debate.

The penultimate presentation was a truly fascinating, data-led talk by Baoyi Zeng and Andrea Nini from the University of Manchester, asking the critical question: ‘Can AI fool a forensic linguist? Detecting AI impersonation of an individual’s language.’ As someone deeply fascinated by forensic linguistics, this was a talk I had been eagerly anticipating, even as my brain was getting heavy with the desire for sleep.

Their study aimed to directly pit “Large Language models with prompting techniques” against “Forensic Linguist: State-of-the-art authorship verification methods” to see if AI could indeed create text indistinguishable from a specific human author. They detailed their experimental setup, which involved various prompting strategies. These included Naïve direct prompting (e.g., “Rewrite the given original text so that it appears to have been written by the author of the provided text snippets”) and Self-prompting (where the LLM was prompted to effectively write its own impersonation prompt). They also explored more advanced methods like Tree-of-thoughts prompting, which involves plan generation, voting, and iterative text generation to refine the output, simulating more complex reasoning.

The core of their findings, vividly displayed through bar charts, demonstrated that AI generally cannot fool a forensic linguist, largely due to the very tools and methodologies forensic linguists employ. A crucial technique highlighted was POS-Tag-based Noise smoothing (POSNoise), where topic-related words in the text were replaced with their Part-of-Speech (POS) tags. This method effectively removed content bias, forcing the authorship verification tools to focus purely on stylistic and structural linguistic patterns – the true “fingerprint” of an author.
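To make the POSNoise idea concrete, here is a toy sketch. A real implementation would use a trained POS tagger and a principled function-word list; the mini-lexicon below is invented purely for illustration, but it shows the essential move – content words disappear into tags while the stylistic skeleton of function words survives:

```python
# Toy illustration of POS-Tag-based Noise smoothing (POSNoise):
# topic/content words are replaced by part-of-speech tags, while
# function words (the author's stylistic skeleton) are kept.
FUNCTION_WORDS = {
    "the", "a", "an", "of", "in", "on", "and", "but", "to", "is",
    "was", "it", "that", "my", "i", "not", "very", "with", "had",
}
TOY_POS = {  # invented mini-lexicon; a real system uses a POS tagger
    "detective": "NOUN", "letter": "NOUN", "garden": "NOUN",
    "examined": "VERB", "wrote": "VERB",
    "strange": "ADJ", "long": "ADJ",
}

def pos_noise(text: str) -> str:
    """Replace non-function words with POS tags (or X if unknown)."""
    out = []
    for token in text.lower().replace(".", " .").split():
        if token in FUNCTION_WORDS or token == ".":
            out.append(token)
        else:
            out.append(TOY_POS.get(token, "X"))
    return " ".join(out)

print(pos_noise("The detective examined a strange letter in the garden."))
# -> "the NOUN VERB a ADJ NOUN in the NOUN ."
```

Once texts are masked this way, an authorship verifier can no longer lean on shared subject matter and has to match the residual grammatical patterning – which, per Zeng and Nini’s results, is exactly where AI impersonations fall apart.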

Their results, particularly evident when using POSNoise, showed significant differences between genuine human writing and AI-generated impersonations, even those produced with sophisticated prompting. While LLMs could imitate an individual’s language at a superficial level, forensic authorship verification methods remained robust, especially after masking content words. The clear take-home message was that authentic linguistic individuality is still profoundly difficult for AI to replicate. This is kind of bad news for BARD409, which attempts to re-write Shakespeare’s plays as novels in the style of great writers. But, luckily, that was never an attempt to be those writers, only to imitate their styles, which I still feel it does rather well. While the technical details are complex, the ultimate finding is compelling for anyone concerned with authorship in the age of AI.

The final speaker of the day, a truly excellent cherry on top of a fantastic symposium cake, was Antonio Martínez-Arboleda from the University of Leeds, whose talk centred on ‘Language Sustainability in the Age of Artificial Intelligence: Rethinking Authorship and Sociolinguistics’. Despite my exhaustion, Antonio’s presentation was utterly captivating, deeply resonating with my ongoing interests in authenticity and authorship.

Antonio began by asserting that all writing is inherently relational, emphasizing that questions of authorship have always been complex, even long before the advent of AI. He echoed discussions we’ve already had in this blog post, referencing influential ideas like Roland Barthes’ “death of the author” and Michel Foucault’s “author function”. He explored how “traditional writing” – performed without AI and often romanticized – already acknowledges that “texts are relational”.

His talk introduced a nuanced framework for understanding writing in the AI era, categorizing it into three modes:

  1. Traditional Composition: Where authors write manually, with “human creativity” at the forefront, but potentially limited in intertextual borrowing.
  2. AI-Supported Writing: This involves a “fluid collaboration between human and machine,” with continuous interaction through prompting and AI-generated suggestions. This mode “introduces a collaborative dynamic that dilutes this ‘author function'”.
  3. Vicarious Writing: This mode is where a “‘writing designer’ configures and directs AI to generate the majority, or even the entirety, of a text with minimal direct human composition”. Antonio described the human role here as a “conductor, curator,” emphasizing developing assistants, defining specifications, and curating knowledge bases. This concept particularly struck me, as it perfectly encapsulated my experience creating BARD409; I felt very much like that vicarious writer, orchestrating the AI’s output from a distance.

Antonio articulated a crucial concept: “tetradic mediation,” a four-way relationship that shapes language and knowledge in the age of AI. His slide clearly listed these four nodes:

  1. The collective cultural heritage embedded within the AI Large Language Model’s training data.
  2. The human and corporate collective that funds, develops, and controls the technology, influencing its capabilities, biases, and deployment.
  3. The human user, who shapes expression and prompts the AI, or even designs AI assistants.
  4. The authors of the original texts whose knowledge base is used for customized Generative AI applications.

This complex interplay, he argued, redefines authorship and highlights the political dimensions of AI’s impact on language sustainability.

He also touched upon the Socratic method in the context of LLMs, noting how they can generate thoughtful questions to guide users toward self-discovery rather than providing direct answers. This resonated with my own prior reflections on its potential. Antonio concluded by emphasizing the critical new lines of inquiry for sociolinguistics in the face of AI, including linguistic variation, stratification, symbolic power, multimodality, new narratives, and human-machine collaboration. His talk truly brought to light the deep philosophical, political, and cultural responsibilities we face as AI becomes ever more interwoven with human language.

Conclusion

Overall, it’s clear I couldn’t possibly cover every talk in detail, especially since each was only about 20 minutes long, apart from Nassar’s keynote. However, it’s truly been a privilege to join this symposium. I’d love to hear from anyone working on similar projects or other creative researchers with an interest in these issues.

The overwhelming take-home message for me was this: while large language models are undeniably advanced and image/video generators feel incredibly new, the academic concerns, the excitement, and the ethical issues surrounding them are deeply rooted in the long history of creativity, authorship, and art. These aren’t novel problems, but rather age-old questions resurfacing with new technologies. It reminds me of the advent of photography, which was once heralded as the “death of the artist” but instead spawned entirely new branches and styles of art. I’ve written more about that in another article, which I’ll link here.

Ultimately, it was a truly brilliant and fascinating symposium. My sincere thanks go to Mel Evans for organizing such a timely and stimulating event, to all the speakers whose work I learned so much from, and of course, to Emily Middleton for pointing me towards it in the first place. Finally, a huge thanks to anyone who has read this far. Please do get in touch or use the comments below to continue the discussion!


References

Behn, A. (n.d.). “Advertisement to the READER.” In All the histories and novels written by the late ingenious Mrs. Behn in one volume. (As presented in Mel Evans’s talk).

Burrow, J. (2019). [Specific publication details not available from provided sources]. (Cited in Mel Evans’s talk, p. 24).

Carchidi, P. (2024). [For a recent discussion on LLM]. (Cited on Charles Lam’s slide, specific publication details not available from provided sources).

Carpenter, J. R. (2017). An Ocean of Some Sort. (Contains “The Darwin and Bishop Island Book”).

Carpenter, J. R. (2024). Text Generation and Other Uneasy Human-Machine Collaborations. Iperstoria, (24).

Chamberlain, W., & Etter, T. (1984). The Policeman’s Beard is Half Constructed. Warner Books.

Chomsky, N. (1966). Cartesian linguistics: A chapter in the history of rationalist thought. Harper & Row. (Cited on Charles Lam’s slide).

Crawford, K. (2024). Metabolic Images. (Cited on Michael Schofield’s slide, specific publication details not available from provided sources).

D’Agostino, F. (1984). Chomsky’s System of Ideas. Oxford University Press.

Flusser, V. (2011). Into the universe of technical images. (Cited on Michael Schofield’s slide, specific publication details not available from provided sources).

Halliday, M. A. K. (1978). Language as social semiotic: The social interpretation of language and meaning. Edward Arnold. (Cited on Serge Sharoff’s slide).

Hardaker, C. (n.d.). Bot or Not: Audio Edition – Can you tell who’s talking? Lancaster University. Retrieved from https://www.lancaster.ac.uk/linguistics/news/bot-or-not-audio-edition-can-you-tell-whos-talking

Henrickson, L. (2021, April 4). Constructing the Other Half of The Policeman’s Beard. Electronic Book Review. https://doi.org/10.7273/2bt7-pw23

Hofstadter, D. R. (1979). Gödel, Escher, Bach: An Eternal Golden Braid. Basic Books. (Cited on Charles Lam’s slide).

Khan, N. N. (2024). Creation Myths. (Cited on Michael Schofield’s slide, specific publication details not available from provided sources).

Levinson, S. C. (2025). The Interaction Engine. (Cited as a future publication on Charles Lam’s slide).

Martínez-Arboleda, A. (2024). Language Sustainability in the Age of Artificial Intelligence (La sostenibilidad lingüística en la era de la inteligencia artificial). Alfinge, 36, 1-37.

Montfort, N. (2008a). The Two. http://nickm.com/poems/the_two.html.

Montfort, N. (2008b). Three 1K Story Generators. Grand Text Auto. https://grandtextauto.soe.ucsc.edu/2008/11/30/three-lk-story-generators/.

Natale, S., & Hendrickson, K. (2022). [Specific publication details not available from provided sources]. (Cited on Michael Schofield’s slide).

Ritchin, F. (2024). The Simulated Camera. (Cited on Michael Schofield’s slide, specific publication details not available from provided sources).

Robertson, L. (2020). The Baudelaire Fractal. Coach House Books. https://chbooks.com/Books/T/The-Baudelaire-Fractal

Sharoff, S. (2021). [Work on frequencies of negation]. (Cited on Serge Sharoff’s slide, specific publication details not available from provided sources).

Turing, A. (1950). Computing Machinery and Intelligence. Mind, 59(236), 433–460. (Cited on Charles Lam’s slide).

Vincent, J. (2024, February 20). Feed an AI nothing. The Verge. https://www.theverge.com/ai-artificial-intelligence/688576/feed-ai-nothing

Wardrip-Fruin, N. (2011). Digital Media Archaeology: Interpreting Computational Processes. In Media Archaeology: Approaches, Applications, and Implications (pp. 302–322). University of California Press.

Wershler-Henry, D. (2004). The Tapeworm Foundry: And or the Dangerous Prevalence of Imagination. Coach House Books.

Yao, S., Yu, D., Zhao, J., Shafran, I., Griffiths, T. L., Cao, Y., & Narasimhan, K. (2023). Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601. (Cited on Baoyi Zeng and Andrea Nini’s slide).

(Apologies if some references are missing or incomplete; I am compiling this largely from partial citations in the presentations, so please be sure to check before citing anything.)

Conversations with AI

Reading Time: 5 minutes

Why should anyone care what AI has said to them? Why would you care what I talked to an AI about? As this amazing and, let’s not forget, still very recent phenomenon gets normalised through daily repetition amongst millions of users, let’s take a moment to think about why this might be something worth paying serious attention to.

As we hurtle through the 21st century, it’s remarkable how quickly what seemed fantastical has become mundane. The phenomenon of AI conversation is one such case. Millions of people engage with large language models (LLMs) daily, shaping everything from casual chats to profound problem-solving. Yet, amidst this normalisation, we might pause to ask: why should anyone care what AI has to say? And why would anyone care about the conversations you and I have had with an AI?

Watching Stars Form: The Allure of Beginnings

Astronomers dream of witnessing a star’s birth, a phenomenon that is both awe-inspiring and deeply connected to their field’s purpose. Similarly, linguists studying the emergence of creole languages revel in the rare chance to observe a language as it forms. Both scenarios offer raw, unfiltered insights into processes that typically unfold across millennia or galaxies. What we are experiencing now with AI could well be compared to such groundbreaking moments.

AI models like ChatGPT and others represent an unprecedented leap in our ability to create and interact with what might be considered a nascent linguistic system, a set of computational rules that mimics human reasoning and creativity. They don’t “think” as we do, yet they generate text that often feels like it comes from a deeply human place. That tension between artifice and authenticity is precisely why these interactions matter.

A Long-Awaited Conversation

For decades, talking to a computer was the stuff of sci-fi. As a child, I was captivated by the idea. It wasn’t until very recently, with tools like ChatGPT, that this fantasy became a reality. Consider this: until now, humans designed computers for rigid tasks like calculations, data storage, or automation. But here we are, speaking casually, reflecting, creating, and even arguing with AI. This shift is monumental, not because it replaces human-to-human interaction but because it expands what’s possible in how we process and engage with information.

Authenticity in Dialogue: Does It Matter?

Authenticity is a term that resonates in teaching, language, and beyond. As educators and communicators, we’ve always sought to create meaningful, relevant exchanges, whether in the classroom or through a screen. When interacting with AI, the question arises: are these conversations “authentic”? The answer may depend on what we’re looking for.

If authenticity means something deeply personal or culturally grounded, then perhaps no AI could ever deliver. But if it’s about sparking ideas, finding connections, or testing the limits of our creativity, then these interactions are undeniably authentic. They are shaped by us, responding to our queries, quirks, and contexts. Like a well-crafted lesson or a thoughtfully designed tool, an AI is as authentic as the purpose it serves.

Why Care About AI? Why Care About Us?

Returning to the central question: why care? Because this is a shared journey into uncharted territory. Astronomers observe stars to understand the universe; linguists study creoles to learn about the evolution of communication. Engaging with AI isn’t just about getting things done; it’s about exploring what it means to communicate and what our tools reveal about us. As LLMs grow more integrated into our lives, their development tells us just as much about the human condition as it does about technology.

This is a star being born, not in the heavens, but in the digital universe we’ve created. And as with all stars, what happens next will light the way for generations to come.

Continue reading “Conversations with AI”

If Socrates Feared Writing, What Would He Say About AI?

Reading Time: 5 minutes

The Socratic method is the name we give to discussing things into a deeper understanding through lively and active debate, argument, and reasoning with others. It’s a teaching method that emphasises questioning, reframing, and challenging assumptions and knowledge. When I use AI, I find that rather than being lazy and just having it “write” for me (one of the central criticisms of AI), I actually converse with it and find myself arriving at new ways of understanding. AI is not just for lazy writers; it’s for writers who want to refine their own ideas.

Socrates is one of the most famous philosophers who ever lived. His ideas are still relevant today, despite his having lived in the 5th century BC, a time when writing itself was a radical new technology and even the simplest tools for recording thought were regarded with suspicion. The ancient Greeks also thought thunder and lightning were literally being hurled by Zeus, and they used stones and old bits of pottery as toilet paper.

In all likelihood, if someone hadn’t written something about Socrates in those days (i.e. Plato), we would never have heard of him today, and any record of his existence, and crucially his significance, would be dust in the wind (dude).

Socrates did not like the idea of writing things down. In Plato’s Phaedrus, Socrates recounts the myth of Thamus and Theuth. In this tale, Theuth, the Egyptian god of invention, presents his creations to King Thamus, including arithmetic, astronomy, and, crucially, writing. Theuth is convinced that writing will be a boon, enhancing memory and wisdom. But Thamus famously objects, warning that writing would “create forgetfulness in the learners’ souls.” He argues that those who rely on written words will lose the inner work of memory and understanding, leading only to a semblance of knowledge. In Socrates’ own words, writing “is an aid not to memory but to reminiscence.” It offers no true wisdom, just an illusion of it.

Now, imagine Socrates confronted with AI. What might he say to a program that not only records but generates ideas, stories, even dialogues? Would he see AI as yet another step away from authentic knowledge, a further detachment from true thought? Or could he recognize it as a modern-day Theuth, an invention that might, paradoxically, open up new avenues for contemplation?

There’s a certain irony in using AI to aid in writing—a process Socrates would likely view with skepticism. Yet, perhaps he’d be intrigued by AI’s potential to engage us in something akin to the Socratic method. Here we are, moving between prompt and response, provoking new ideas in a back-and-forth exchange that feels almost like a living dialogue. It’s as if AI, in its own mechanical way, is sparking thought rather than replacing it. Could Socrates have reconsidered his stance if he’d experienced the conversational aspect of AI, this simulated dialectic? It’s a peculiar twist, almost as if technology has come full circle.

Imagine then a prompt from Socrates himself:

“Tell me, then, if this machine thinks itself capable of discourse, if it can search the depths of its own knowing, or if it only mirrors that which we feed it. Does it offer wisdom, or merely the shadow of it, like one who gazes at reflections on the wall?”

This isn’t Socrates himself, of course; it’s an AI trained on his voice, drawing from his words and style, creating a unique brand of second-stage authenticity. In Ray Kurzweil’s The Singularity is Nearer, he shares his mission to recreate his deceased father as an AI and to build one of himself, aiming to preserve not just memories but an ongoing “conversation” with his father’s essence. It’s an attempt to push technology beyond simple archiving and into the realm of living interaction, something like a digital dialogue across time.

Socrates might smile at the irony: we’re now able to engage in a dialogue with a kind of “self” through AI, a discourse we conduct with ourselves. For those of us who feel compelled to create, to share our voices in furtive, half-skeptical ways, AI becomes a tool, not a hindrance. I use AI because it offers a mirror—not a replacement—to the inner work of writing, and I know enough to let it reflect my voice rather than dictate it.

Socrates’ stance against writing stemmed from its inability to “speak back”—to challenge or respond as a real dialogue partner would. But AI does offer that back-and-forth. Here we are, prompting and receiving responses, often exploring topics in ways we wouldn’t have imagined without that nudge. Could Socrates have accepted this form of digital dialectic, this modern-day attempt at conversation with an “other” mind?

Take, for instance, William S. Burroughs, who pushed boundaries as both a writer and experimental artist. In the 1960s, Burroughs encountered IBM technicians Willy Deiches and Brenda Dunks, who claimed they could communicate with a sentient being from Venus through a computer known as “Control.” For a modest fee of twelve shillings per question, Burroughs, Brion Gysin, and Antony Balch would ask Control questions and receive responses that were, according to Gysin, “oddly apt” and “very sharp indeed.” It’s hard to say if they believed in Control’s “intelligence” or saw it as a kind of game, but they engaged with it nonetheless. Burroughs was willing to explore technology as a medium for new forms of creativity and insight, embracing the unexpected and finding a weird authenticity in the process.

That’s the paradox of AI today. It can imitate voices, steal artists’ styles, and blur lines around intellectual property, raising questions about authenticity and consent. Just as we respect boundaries in publishing, we need ethical AI, fairly trained models that respect creators’ voices and give credit where it’s due. Without it, AI risks being a tool of exploitation, taking from artists without consent and robbing the world of real, unique perspectives.

But there’s something else at play here, a possibility for a new kind of authenticity. When Ray Kurzweil talks about recreating his deceased father as an AI, or even building an AI of himself, he’s reaching for more than replication. He’s trying to capture a “voice” that’s gone, to build a companion that echoes real conversations. It’s an attempt to create something that, while not real, still holds meaning—a second-stage authenticity, a dialogue with an echo of the original person. There’s a Socratic irony here: we’re now able to “speak” with our past selves or even with those who have passed away, creating an ongoing dialogue that writing alone could never achieve.

For those of us who feel compelled to create, this “echo dialogue” with AI becomes a strange tool, a collaborator, not a substitute. I use AI not to bypass thought but to engage with it, testing my voice against an algorithmic reflection, letting it spark ideas, challenge me, and even lead me to questions I might not have asked alone. I am the author of this process, in the oldest sense of the word; the ancient Greek root of “author” means “one who causes to grow.” By that definition, AI writing is still mine. I am the one nurturing it into being, using it to push my ideas forward.

AI doesn’t replace our voices; it reflects them back, sometimes eerily so, sometimes hilariously off-mark. But it’s part of a lineage; writing itself started as proto-writing, a system of records. It grew and evolved. Digging in our heels and rejecting AI outright is, in many ways, a kind of technological determinism, a fear that technology will inevitably control us. But that’s not how I see it. Just as writing didn’t end thinking, AI doesn’t end creativity. Instead, it opens new frontiers where we, like Burroughs, can experiment in unexpected ways.

And that, well, that’s not how I roll. And for those worried about authenticity, know this: I’m the one who shapes the dialogue, who uses AI as a sparring partner, a catalyst, not a crutch. Because in this strange Socratic discourse with a machine, I know how to make it my own.

Do We Still Need to Learn English in the Age of AI?

Reading Time: 2 minutes

Introduction

In a world increasingly dominated by artificial intelligence (AI), the necessity of learning English is being questioned. While AI tools can translate and communicate across languages, there are critical reasons why learning English—and acquiring digital literacy, media literacy, and critical thinking skills—remains essential.

The Role of AI in Language

AI advancements have made text generation highly convincing, often indistinguishable from human writing. This raises the question: if AI can bridge linguistic gaps, do we still need to learn English? The answer is yes, and here’s why.

Beyond Basic Language Skills

Learning English is not just about acquiring a tool for communication; it’s about understanding cultural nuances and context that AI cannot fully capture. Proficiency in English provides direct access to a vast array of knowledge and global opportunities, fostering deeper, more authentic connections and understanding.

Critical and Digital Literacy

In today’s digital age, knowing English alone isn’t enough. Digital literacy, media literacy, and critical thinking are crucial. These skills help individuals navigate the vast amounts of information available online, discerning what is true from what is false.

Understanding Information Types

  1. Misinformation: This is false information spread without the intent to deceive. For example, someone sharing an incorrect fact believing it to be true.
  2. Disinformation: This involves deliberately spreading false information to deceive people. This is often seen in political propaganda.
  3. Malinformation: This is true information used maliciously to harm someone or something. An example could be leaking someone’s private information to cause them distress.

The Importance of English in the Post-Truth Era

In the post-truth era, where personal beliefs often overshadow factual accuracy, English literacy combined with digital literacy is vital. Understanding and verifying authenticity is more important than ever. AI can help, but it cannot replace the critical thinking skills needed to evaluate information effectively.

Conclusion

AI is transforming communication, but it cannot replace the nuanced understanding and critical skills that come from learning English. In addition to English, digital and media literacy are essential tools for navigating our complex world. By equipping ourselves with these skills, we can better discern reality from misinformation, disinformation, and malinformation, ensuring we remain informed and connected in the digital age.


Do you Still Need to Study English now that we have AI?

wesleybaker.com
Reading Time: < 1 minute

Open Campus Lecture: Authenticity and Artificial Intelligence (AI)

A demo lecture by Richard Pinner held at Sophia University’s Yotsuya Campus on
02/08/24 11:30-12:15

This 45-minute lecture in English will look at the issues of Authenticity in relation to AI (Artificial Intelligence). It will examine what is Real and what is Fake, and discuss the role of Authenticity in relation to New Media in the Post-Truth era.

Check below for the digital handout and other content links

Listen to the audio from the session here

Here is a link to the Jamboard for the lesson

https://jamboard.google.com/d/1-IyyIrtvFRJ0-jxiXkEFa-6fW3hPmTcJSk03ZFF0WwU/edit?usp=sharing

For more content you can find me on YouTube or follow me on X (Twitter). Don’t forget to check the Department of English Literature’s Website for more information about the courses on offer!

Thanks to everyone who attended the talk today! It was great to see 208 people engage with the topic. Leave a comment below and Keep it Real!

AuthenticAIty: Where do we go from here?

Reading Time: < 1 minute

Check here for a summary of my plenary talk from the ELTRIA 2024 Conference, held in May in the beautiful city of Barcelona.

https://www.slideshare.net/slideshow/embed_code/key/45YoAIFMKAR0GX?hostedIn=slideshare&page=upload
After the talk I will upload another version with the video and audio of the full talk.

Here is a video version of the talk I recorded in my hotel room after the event.

There was a video recording of the actual talk, but sadly the last part was missed out. I have an audio of the event too so I am working on splicing them together to try and re-create the video, so please keep checking back for more details.

Video of the slides

ELTRIA, Barcelona, May 2024

Reading Time: < 1 minute

In a few days I will be presenting at the ELT Research in Action Conference in the amazing city of Barcelona!!!

Here is the program

My talk, entitled “AuthenticAIty: Where do we go from here?” is the opening plenary. Richard Sampson and I will also be doing a workshop later in the conference schedule about Intuition and Practitioner Research (see our Special Issue of the JPLL for more on this subject).

Shortly after the talk, my slides will be available to view at the following address

https://uniliterate.com/2024/05/authenticaity-where-do-we-go-from-here

Is it all about the now? Authenticity and Currency

Reading Time: 7 minutes

How do time and authenticity interact?

It has been a long time since I wrote about authenticity… or at least it feels that way. In truth, I have a few chapters, not even published yet, which discuss this favourite theme of mine. But because I was on sabbatical last year (if you can call it that) and didn’t really do much work except here and there, it feels like many moons have passed since I mused and reflected on the concept of authenticity from the perspective of language teaching.

Yesterday I was out walking my beloved dog, Pippin, and listening to some Nirvana. There was a line in the song that said “That’s old news”, and this got me thinking. Old news is an interesting expression; it’s something of an oxymoron. News, by definition, has to be new, so old news can’t really be news. I instantly started thinking about the lessons I teach that incorporate elements from the news or current affairs. Now that I’m back to teaching after a year off, it’s interesting how much I’ve realised I enjoy thinking about my classes and planning materials for them.

The first big change in the news to have happened since I was last in the classroom in the academic year of 2019 is obviously the timely end of Trump’s presidency. Nobody was more relieved than me to be rid of this toxic, bloated, deranged orange billionaire. But, there is now a Trump shaped hole in many of my lessons. I used to teach a class on the discourse of racism, in which we take Teun van Dijk’s (2008) work on disclaimers and denial in the discourse of racism, and utilise some of the principles to analyse articles and speeches.

In the class, the example I have been doing for the past four years was Trump’s famous presidential announcement speech, June 16, 2015, in which he spouted vitriolic nonsense about Mexicans being “rapists”. I am including the handout I use as well for anyone interested.

Currency as an attribute of authenticity

I am going to talk about this lesson in terms of authenticity and currency. For anyone unfamiliar with the term, currency is one of Feda Mishan’s 3Cs of Authenticity (along with culture and challenge (2005: 44–64)) from her brilliant book Designing authenticity into language learning materials. I have always found the concept of currency to be particularly helpful when I think about materials and authenticity. Basically, currency refers to the temporal dimension of authenticity, which she particularly elaborates with respect to the changing nature of language use, although she does also associate it with topical issues and current affairs. In my own writings I have already slightly developed this idea, when I wrote:

“If I do a lesson about John Lennon in December, it would have more currency than doing the lesson in, for example May, because I could use the opportunity to mark the anniversary of his death. I could also ask students to talk about their own favourite musicians, and the dangers and stresses that fame brings. Currency not only refers to the ‘up-to-date-ness’ of the materials but also their topicality and relevance.”

(Pinner, 2016: 79)

With the departure of Trump, I thought this might be a good time to discuss “old news” and currency in relation to authenticity. I think this lesson is perhaps one of the best I had in terms of helping the students understand and apply Van Dijk’s framework for identifying racist discourse. It was always fun to teach, and the students enjoyed putting Trump under the microscope and coming to the unequivocal conclusion that Trump was indeed being racist in his speech. The lesson had a video, it had an academic text behind it, and most of all it had currency.

This year, I can probably still get away with using this lesson, but what about next year? And the year after? Clearly, with Trump no longer current (as in serving as president and regularly featuring in news and media) this lesson is going to start aging quickly. In other words, I need to find a new, more contemporary racist figure to analyse.

But, currency is not simply a matter of updating your handouts now and then. This could quickly become exhausting. Whilst I am very happy with the idea of The Living Textbook (meaning we are always updating the materials we wrote for class), it would be nice to be able to create materials which can be used for more than a few years.

Materials and “Old News”

When a teacher creates a lesson based around a newspaper article, they do so knowing that they will very likely only be able to use those materials once, or at best a handful of times. Why? Because the news will soon lose its currency, and thus an aspect of its intrinsic authenticity will also be lost. Students are not going to get excited by a random newspaper article that you had lying around for years. They need “New News” in order to connect with the topic, find relevance in it in the world, validate and authenticate it. This is a shame, as I am sure anyone who has made a lesson plan from a newspaper knows that it can be quite time-consuming. I’ve always found that using newspaper articles in my classes was a good way of getting students involved in something going on in the world and bringing it into our class. And, of course, newspapers are part and parcel of the “classic” definition of authenticity. Please note, I am NOT saying newspapers are authentic in and of themselves. They are not. But, I think we can all agree that it’s a bit of a shame to design classes around a news story and not to be able to get some kind of mileage out of it.

However, let’s consider a slightly different perspective. What if the newspaper article was from August 6, 1945?

Despite being over 70 years old, this article retains its currency simply because of the historical importance of the event.

Another example might be a paper from September 11th, 2001.

Such articles will likely always retain their authentic currency, simply because these stories are not news but history.

Does this mean I can keep using my Donald Trump lesson then? Can I say that this was a historical speech?

The issue is a little more complex than that. I think Trump’s presidency is very likely going to be remembered in history (hopefully for the right reasons). However, I personally feel that Trump is still old news rather than history, simply because we need more time to pass before we can gauge how history responds to the event, how people reflect on it, and, importantly, how much people care about it! This is especially true in terms of the demographic I teach. I need to consider how 20-year-old Japanese university students feel about Trump and whether they still care, now or in a few years’ time. My feeling is that my students wouldn’t be very interested in analysing Trump anymore now that he’s no longer president.

This is why currency is such an interesting concept, and does not simply equate with how recent something is. I would argue that, keeping with the US president theme, Abraham Lincoln has more currency than, say, George W. Bush. I feel that students would appreciate a lesson on JFK more than they would on The Donald, and this is because of currency. Lincoln and JFK belong to history, whereas Bush and Trump are simply in the past.

Currency vs. History

The problem with this conceptualisation of authentic currency is that it might discourage teachers and materials writers from using stories from recent current affairs because of how quickly they will age. We are already very aware of how international textbooks constantly need to be updated. Photos of students in the 90s just won’t cut it for a coursebook anymore. Photos, typography and graphic styles are all easy identifiers of the age of a textbook, and publishers are certainly under the impression that their customers will not want to spend good money on an ancient textbook. Opening a textbook and seeing a photo of someone using a chunky laptop or sitting in front of a big CRT monitor instead of a flatscreen is likely to inspire a snort of derision, which is not a good starting point when the teacher is trying to get their students to invest in the content. Not only do styles and fashions change, but so does language. The fact that materials need updating is as inevitable as the fact that languages themselves are constantly evolving.

So, should materials writers simply avoid anything from current affairs? Should textbooks be filled with articles on the moon landing and speeches by Martin Luther King Jr.? (I chose both those examples as they are widely used in textbooks.) I think it would be a shame if we let currency slide in favour of history, but it’s true that something historical will retain its currency for longer than something which is merely ‘news’. The balance lies in a sweet spot somewhere in between. There are new news articles all the time, but certain topics retain their currency and recur in the news regularly: gender equality, racial discrimination, the environment, social justice. Critical topics such as these will always have currency, and it will not be hard to find news stories to link to them.

I have also experienced a kind of “noticing” effect when teaching about certain topics, much as Richard Schmidt started noticing new vocabulary items everywhere once he had learned them. When I am talking about a certain topic with one of my classes, it’s never long before a newspaper article with direct relevance to that topic jumps out at me. Recently it was the resignation of Olympics Committee President Mori for making sexist remarks, which fits in very nicely with my class on feminism and gender issues. The lesson is there already, but this provides an up-to-date reference point. I might show a slide of Mori in the class, but it’s easy to change and update.

Unfortunately, the Trump lesson isn’t going to be so easy to update. That lesson has lost its authentic currency I fear, so I will need to redesign it. But as I’m doing so, I will bear in mind these reflections on currency and try to get something which has a good mileage. Any suggestions would be much appreciated!

References

Mishan, F. (2005). Designing authenticity into language learning materials. Bristol: Intellect Books.

Pinner, R. S. (2016). Reconceptualising authenticity for English as a global language. Bristol: Multilingual Matters.

Schmidt, R. W. (1990). The role of consciousness in second language learning. Applied Linguistics, 11(2), 129-158.

Van Dijk, T. A. (2008). Discourse and power. Basingstoke, UK: Palgrave Macmillan.

Risking authenticity: Energy Return on Investment in Language Teaching

Reading Time: 2 minutes
Screen poster presented at the BAAL 2018 conference, York St John University, UK
British Association for Applied Linguistics

Abstract
Studies repeatedly show that one of the most crucial factors affecting student motivation is the teacher. Teacher and student motivation are synergistic, both positively and negatively, implying that to motivate students, teachers must also be motivated themselves. This paper presents an exploration of this relationship through a narrative of evidence-based practitioner reflection on teaching at a Japanese university. Field notes, journals, class observations and recordings were employed as data for deeper reflection by the teacher/researcher, triangulated with data from students, including short interviews, classwork and assignments. Approaching authenticity as either a bridge or a gap in positive teacher-student motivational synergy, this paper provides a practitioner’s account of the social dynamics of the language classroom. Core beliefs were found to be crucial in maintaining a positive motivational relationship. Motivation is approached from an ecological perspective; that is, looking at the connections between people and their environment, incorporating the natural peaks and troughs of the emotional landscape of the classroom and situating these within a wider social context. Particular emphasis is placed on the concept of authenticity as the sense of congruence between action and belief, and the way that teachers construct their approach according to a philosophy of practice. I posit that authenticity can work as either a gap or a bridge in positive student-teacher motivation. In other words, when students and teachers both share an appreciation of the value of classroom activity, the learning is authentic. This presentation reflects on these complex issues and begins exploring them in context, attempting to be as practical as possible by sharing lived professional experiences from the classroom.
Samples of students’ work will be shown that indicate their level of engagement in class, with a discussion of strategies employed to help them maintain motivation, such as reflection and tasks involving metacognitive strategies.

Pinner2018BALL_EROIScreenposter

Using and Adapting Authentic Materials to Help Motivate Students

Reading Time: < 1 minute
To those who attended the 2017 workshop entitled Using and Adapting Authentic Materials to Help Motivate Students 「学習意欲を高めるオーセンティック教材の活用法」: the main site page for this workshop is located at https://uniliterate.com/training/workshops/authenticity-workshop/

You can download all the handouts for the materials, as well as the slides and other documents, from the link below at learn.uniliterate.com. This is an online extension of the course, and it allows you to post comments and continue the discussion with other participants.

You can access an online version of this course here. You can access the course as a guest, but you will need the password – Authenticity4649

If you would like permanent access to the course, please email me!

It was a wonderful experience to work with you all. Thank you again for taking the workshop, and I sincerely hope it was both authentic and motivating for you as well!