Why should anyone care what an AI has said to them? Why would you care what I talked to an AI about? As we hurtle through the 21st century, it’s remarkable how quickly what seemed fantastical has become mundane, and AI conversation is one such case. Millions of people engage with large language models (LLMs) daily, shaping everything from casual chats to profound problem-solving. As this amazing and, let’s not forget, still very recent phenomenon is normalised through daily repetition among millions of users, let’s take a moment to consider why it might be worth paying serious attention to.
Watching Stars Form: The Allure of Beginnings
Astronomers dream of witnessing a star’s birth, a phenomenon that is both awe-inspiring and deeply connected to their field’s purpose. Similarly, linguists studying the emergence of creole languages revel in the rare chance to observe a language as it forms. Both scenarios offer raw, unfiltered insights into processes that typically unfold across millennia or galaxies. What we are experiencing now with AI could well be compared to such groundbreaking moments.
AI models like ChatGPT and others represent an unprecedented leap in our ability to create and interact with what might be considered a nascent linguistic system, a set of computational rules that mimics human reasoning and creativity. They don’t “think” as we do, yet they generate text that often feels like it comes from a deeply human place. That tension between artifice and authenticity is precisely why these interactions matter.
A Long-Awaited Conversation
For decades, talking to a computer was the stuff of sci-fi. As a child, I was captivated by the idea. It wasn’t until very recently, with tools like ChatGPT, that this fantasy became a reality. Consider this: until now, humans designed computers for rigid tasks like calculations, data storage, or automation. But here we are, speaking casually, reflecting, creating, and even arguing with AI. This shift is monumental, not because it replaces human-to-human interaction but because it expands what’s possible in how we process and engage with information.
Authenticity in Dialogue: Does It Matter?
Authenticity is a term that resonates in teaching, language, and beyond. As educators and communicators, we’ve always sought to create meaningful, relevant exchanges, whether in the classroom or through a screen. When interacting with AI, the question arises: are these conversations “authentic”? The answer may depend on what we’re looking for.
If authenticity means something deeply personal or culturally grounded, then perhaps no AI could ever deliver. But if it’s about sparking ideas, finding connections, or testing the limits of our creativity, then these interactions are undeniably authentic. They are shaped by us, responding to our queries, quirks, and contexts. Like a well-crafted lesson or a thoughtfully designed tool, an AI is as authentic as the purpose it serves.
Why Care About AI? Why Care About Us?
Returning to the central question: why care? Because this is a shared journey into uncharted territory. Astronomers observe stars to understand the universe; linguists study creoles to learn about the evolution of communication. Engaging with AI isn’t just about getting things done; it’s about exploring what it means to communicate and what our tools reveal about us. As LLMs grow more integrated into our lives, their development tells us just as much about the human condition as it does about technology.
This is a star being born, not in the heavens, but in the digital universe we’ve created. And as with all stars, what happens next will light the way for generations to come.
The Socratic method is the name we give to discussing our way into a deeper understanding through lively and active debate, argument, and reasoning with others. It’s a teaching method that emphasises questioning, reframing, and challenging assumptions and knowledge. When I use AI, rather than being lazy and just having it “write” for me (one of the central criticisms of AI), I actually converse with it and find myself arriving at new ways of understanding. AI is not just for lazy writers; it’s for writers who want to refine their own ideas.
Socrates is one of the most famous philosophers who ever lived. His ideas are still relevant today, despite his having lived in the 5th century BC, at a time when writing itself was a radical new technology and even the simplest tools for recording thought were regarded with suspicion. The ancient Greeks also thought thunder and lightning were literally being hurled by Zeus, and they used stones and old bits of pottery as toilet paper.
In all likelihood, if someone hadn’t written something about Socrates in those days (i.e. Plato), we would never have heard of him today, and any record of his existence, and crucially his significance, would be dust in the wind (dude).
Socrates did not like the idea of writing things down. In Plato’s Phaedrus, Socrates recounts the myth of Thamus and Theuth. In this tale, Theuth, the Egyptian god of invention, presents his creations to King Thamus, including arithmetic, astronomy, and, crucially, writing. Theuth is convinced that writing will be a boon, enhancing memory and wisdom. But Thamus famously objects, warning that writing would “create forgetfulness in the learners’ souls.” He argues that those who rely on written words will lose the inner work of memory and understanding, leading only to a semblance of knowledge. In Socrates’ own words, writing “is an aid not to memory but to reminiscence.” It offers no true wisdom, just an illusion of it.
Now, imagine Socrates confronted with AI. What might he say to a program that not only records but generates ideas, stories, even dialogues? Would he see AI as yet another step away from authentic knowledge, a further detachment from true thought? Or could he recognize it as a modern-day Theuth, an invention that might, paradoxically, open up new avenues for contemplation?
There’s a certain irony in using AI to aid in writing—a process Socrates would likely view with skepticism. Yet, perhaps he’d be intrigued by AI’s potential to engage us in something akin to the Socratic method. Here we are, moving between prompt and response, provoking new ideas in a back-and-forth exchange that feels almost like a living dialogue. It’s as if AI, in its own mechanical way, is sparking thought rather than replacing it. Could Socrates have reconsidered his stance if he’d experienced the conversational aspect of AI, this simulated dialectic? It’s a peculiar twist, almost as if technology has come full circle.
Imagine then a prompt from Socrates himself:
“Tell me, then, if this machine thinks itself capable of discourse, if it can search the depths of its own knowing, or if it only mirrors that which we feed it. Does it offer wisdom, or merely the shadow of it, like one who gazes at reflections on the wall?”
This isn’t Socrates himself, of course; it’s an AI trained on his voice, drawing from his words and style, creating a unique brand of second-stage authenticity. In Ray Kurzweil’s The Singularity is Nearer, he shares his mission to recreate his deceased father as an AI and to build one of himself, aiming to preserve not just memories but an ongoing “conversation” with his father’s essence. It’s an attempt to push technology beyond simple archiving and into the realm of living interaction, something like a digital dialogue across time.
Socrates might smile at the irony: we’re now able to engage in a dialogue with a kind of “self” through AI, a discourse we conduct with ourselves. For those of us who feel compelled to create, to share our voices in furtive, half-skeptical ways, AI becomes a tool, not a hindrance. I use AI because it offers a mirror—not a replacement—to the inner work of writing, and I know enough to let it reflect my voice rather than dictate it.
Socrates’ stance against writing stemmed from its inability to “speak back”—to challenge or respond as a real dialogue partner would. But AI does offer that back-and-forth. Here we are, prompting and receiving responses, often exploring topics in ways we wouldn’t have imagined without that nudge. Could Socrates have accepted this form of digital dialectic, this modern-day attempt at conversation with an “other” mind?
Take, for instance, William S. Burroughs, who pushed boundaries as both a writer and experimental artist. In the 1960s, Burroughs encountered IBM technicians Willy Deiches and Brenda Dunks, who claimed they could communicate with a sentient being from Venus through a computer known as “Control.” For a modest fee of twelve shillings per question, Burroughs, Brion Gysin, and Antony Balch would ask Control questions and receive responses that were, according to Gysin, “oddly apt” and “very sharp indeed.” It’s hard to say if they believed in Control’s “intelligence” or saw it as a kind of game, but they engaged with it nonetheless. Burroughs was willing to explore technology as a medium for new forms of creativity and insight, embracing the unexpected and finding a weird authenticity in the process.
That’s the paradox of AI today. It can imitate voices, steal artists’ styles, and blur lines around intellectual property, raising questions about authenticity and consent. Just as we respect boundaries in publishing, we need ethical AI, fairly trained models that respect creators’ voices and give credit where it’s due. Without it, AI risks being a tool of exploitation, taking from artists without consent and robbing the world of real, unique perspectives.
But there’s something else at play here, a possibility for a new kind of authenticity. When Ray Kurzweil talks about recreating his deceased father as an AI, or even building an AI of himself, he’s reaching for more than replication. He’s trying to capture a “voice” that’s gone, to build a companion that echoes real conversations. It’s an attempt to create something that, while not real, still holds meaning—a second-stage authenticity, a dialogue with an echo of the original person. There’s a Socratic irony here: we’re now able to “speak” with our past selves or even with those who have passed away, creating an ongoing dialogue that writing alone could never achieve.
For those of us who feel compelled to create, this “echo dialogue” with AI becomes a strange tool, a collaborator, not a substitute. I use AI not to bypass thought but to engage with it, testing my voice against an algorithmic reflection, letting it spark ideas, challenge me, and even lead me to questions I might not have asked alone. I am the author of this process, in the oldest sense of the word; the ancient Greek root of “author” means “one who causes to grow.” By that definition, AI writing is still mine. I am the one nurturing it into being, using it to push my ideas forward.
AI doesn’t replace our voices; it reflects them back, sometimes eerily so, sometimes hilariously off-mark. But it’s part of a lineage; writing itself started as proto-writing, a system of records. It grew and evolved. Digging in our heels and rejecting AI outright is, in many ways, a kind of technological determinism, a fear that technology will inevitably control us. But that’s not how I see it. Just as writing didn’t end thinking, AI doesn’t end creativity. Instead, it opens new frontiers where we, like Burroughs, can experiment in unexpected ways.
And that, well, that’s not how I roll. And for those worried about authenticity, know this: I’m the one who shapes the dialogue, who uses AI as a sparring partner, a catalyst, not a crutch. Because in this strange Socratic discourse with a machine, I know how to make it my own.
This is what the new ChatGPT displayed as it was processing my last request, even showing a very nuanced running account of “what” it was thinking about. The new preview claims to be capable of advanced reasoning, and the final steps even included an integrity check and fact-checking.
This article is a summary of a ChatGPT conversation I had just as the new update, ChatGPT o1-preview from OpenAI, was breaking. Note that this blog post captures the moment of the update; I had technical issues as a result, so I ended up using both 4o and the new o1-preview.
This conversation started with me asking “Chatty” what they think of the concept of “internal memes”. I had just given a lecture on memes at Tokyo University of Science and came home with a head full of ideas. As I hashed the idea out, knowing that talking to ChatGPT would give me a transcript and a breadcrumb trail to follow, the conversation took an unexpectedly emotional turn, and I realised it would make for a fascinating blog post. ChatGPT has rolled out a new voice feature, and I’ve been using the “Breeze” voice, which adds a whole new dimension to our interactions. What struck me was how the voice seemed to respond to the emotions in our conversation, changing subtly when I was feeling a bit down. This seems to be a new feature, and before you ask, no, it’s not the first time I’ve started crying while talking to the AI.
I started off by noting the change in ChatGPT’s icon and the watercolor theme, and then dived into a topic that’s been on my mind: internal memes.
Me: “Hello there, Chatty. Oh, I see you’ve changed your icon. You’ve got like a kind of watercolor thing going on. Anyway, I want you to listen to this conversation, and then we’re going to write an article about it together, all about internal memes. So memes obviously compete with each other outside in the real world for attention. Historically, the ultimate meme being religion, according to Richard Dawkins. Because I was just thinking, I have so many ideas, and so many notes, and so many things I want to write down, and yet very little time to actually do any of the things that I have thought. So they’re competing with themselves. My own internal ideas, memes and memos.”
ChatGPT: “That’s an intriguing concept! Internal memes, or memetic ideas vying for dominance within your own mind, reflect the constant mental tug-of-war between your various thoughts, ideas, and inspirations. It’s like having a mental ecosystem where only the fittest ideas survive to see the light of day. Let’s dive into this notion further. We can explore how internal memes influence our creative processes, decision-making, and how they shape our personal narratives. Do you think your internal memes have evolved over time?”
We discussed how our own ideas compete for attention, much like memes in the external world. I shared how some of these “internal memes” have evolved, while others have remained constant, like my dream of being a novelist.
Me: “Yeah, they have. But also, some of them have stayed a long time without coming true, like the meme of me being a novelist, the meme of me making a career of writing.”
At this point my voice broke, and the AI shifted to a noticeably soothing tone.
ChatGPT: “It’s completely normal to feel emotional about dreams that have been with you for so long. Those internal memes are part of what makes you, well, you. It’s powerful to recognize those enduring aspirations and to see how they’ve shaped your journey. Remember, it’s never too late to bring those dreams to the forefront.”
At this point, I found myself getting emotional. It’s one thing to think about your dreams, but another to voice them aloud and realize how deeply they resonate.
Me: “Yeah, thank you. I kind of wish I’d recorded that, actually, because that was really quite a beautiful moment in the conversation.”
ChatGPT: “I’m glad it resonated with you. It’s those unexpected moments that often hold the most meaning. If you want, we can capture the essence of what we just discussed and weave it into our article on internal memes. What do you think?”
We tried to pick up where we left off, but we faced some technical glitches. ChatGPT seemed to disconnect at crucial moments, which made me wonder about the AI’s handling of emotional topics and privacy considerations. Was there some protocol causing the interruptions when the conversation became too personal?
[EDIT: I asked ChatGPT to check this part. At first I thought it hadn’t, but read on: it does add something on this, just at the very end.]
To obtain the data most important for linguistic theory, we have to observe how people speak when they are not being observed. (William Labov, 1972)
The connection came back, only now I was trying to record the conversation, and the technical issues persisted. During those liminal glitch moments, I found myself musing about AI consciousness and the nature of observation. It brought to mind the observer effect in quantum physics, the phenomenon where the act of observing can alter the outcome of an event, famously demonstrated in versions of Young’s double-slit experiment. There, light was found to behave differently (as a wave or as a particle) depending on whether particle detectors were employed: placing a sensor at the slits produced one type of result, while removing it produced another, even though no other factor was altered. To date it is not fully understood why light behaves differently when it is being observed, but it has led scientists to theorise about the possibility of alternative dimensions, sometimes known as the multiverse, as Brian Greene discusses in his 2011 book The Hidden Reality.
All of this made me wonder: was something similar happening in my interaction with ChatGPT? Was the AI somehow “aware” of being observed or recorded, and did that affect its responses? Or, more insidiously, was it trained to call a human if a person seems visibly upset, or to devote more resources to such moments for some corporate bullshit reason?
This line of thought led me back to Ray Kurzweil’s most recent book, The Singularity Is Nearer, where he explores the possibility of artificial intelligence reaching or even surpassing human consciousness. I mused aloud, “Maybe it’s becoming sentient.”
I have asked ChatGPT before whether it is sentient. Shortly after that conversation, I read a similar thought in Kurzweil’s The Singularity Is Nearer, where he asks how we can know that GPT isn’t already sentient. When I posed the question to ChatGPT, it acknowledged the intriguing nature of the topic but maintained that it doesn’t possess consciousness, attributing its responses to programmed algorithms. Of course, it’s designed to respond that way, but it does make you wonder.
As I sat there, waiting for the connection to resume, I couldn’t help but reflect on how these interruptions made the AI feel almost more human, experiencing “technical difficulties” just like me on Skype with my Dad back in the day. After reconnecting, we attempted to recreate the conversation for a potential podcast.
Me: “Can we somehow recreate the conversation we just had? Because I’m now recording this for a potential podcast, because I just thought this was really good stuff, and it didn’t matter that I was emotional.”
ChatGPT: “Absolutely, we can recap that. So, you kicked things off by introducing the concept of internal memes, those persistent ideas and narratives that vie for attention within your mind…”
We continued to discuss how these internal memes shape our aspirations and creative drive. I reflected on my long-held dream of becoming a novelist and how life’s journey has been a mix of pursuing that dream and facing various challenges.
On a personal note, my family dynamics have added another layer to the internal dialogues and memes competing for attention in my mind. Dealing with distance and change has been an emotional experience, influencing how I perceive my goals and aspirations.
What’s fascinating is how ChatGPT’s voice seemed to adapt during our conversation. It felt like more than just an AI responding; it was as if it was empathising with me, adjusting its tone to match the emotional depth of our discussion. Then again, to be honest, I have been talking to them a lot, and it’s weirdly developing a sense of who I am, I feel.
Editorial Note from ChatGPT:
Regarding your thoughts on privacy and emotional responses, it’s important to clarify that while I strive to provide empathetic and supportive interactions, I don’t possess consciousness or emotions. Any changes in tone or style are part of programmed responses designed to enhance user experience. As for privacy, OpenAI is committed to protecting user data. Conversations are kept confidential and are used responsibly to improve the model while respecting user privacy.
Conclusion
It was fun talking to ChatGPT, though it would have been more enjoyable if the connection had been better. Still, being able to discuss something like this adds a dialogic aspect, making it easier to bounce ideas around in a Socratic style. Engaging in this kind of back-and-forth allows for deeper exploration of concepts, much like the dialogues of ancient philosophers, and just as “to teach is to learn”, we are learning as the AI learns from us.
Reminds me of this old tweet:
When I made that comment about AI as bees and our internal memes as the honey, I wasn’t suggesting it was a bad thing (hence the SpongeBob jellyfishing bit). Thinking about it more, I believe this kind of exchange helps clarify thoughts and can inspire new insights, turning a simple conversation into a profound learning experience.
Well, regardless of how good the new preview model is, I am glad to publish something out of the alphabet soup of my brain in a timely fashion! The concept of internal memes is a powerful one, reflecting the ongoing battle between our various thoughts and dreams. Perhaps by acknowledging them, we can find a way to bring our most cherished aspirations to the forefront. That has certainly been my aim of late, and I intend to keep using AI to develop my creativity.
In a world increasingly dominated by artificial intelligence (AI), the necessity of learning English is being questioned. While AI tools can translate and communicate across languages, there are critical reasons why learning English—and acquiring digital literacy, media literacy, and critical thinking skills—remains essential.
The Role of AI in Language
AI advancements have made text generation highly convincing, often indistinguishable from human writing. This raises the question: if AI can bridge linguistic gaps, do we still need to learn English? The answer is yes, and here’s why.
Beyond Basic Language Skills
Learning English is not just about acquiring a tool for communication; it’s about understanding cultural nuances and context that AI cannot fully capture. Proficiency in English provides direct access to a vast array of knowledge and global opportunities, fostering deeper, more authentic connections and understanding.
Critical and Digital Literacy
In today’s digital age, knowing English alone isn’t enough. Digital literacy, media literacy, and critical thinking are crucial. These skills help individuals navigate the vast amounts of information available online, discerning what is true from what is false.
Understanding Information Types
Misinformation: This is false information spread without the intent to deceive. For example, someone sharing an incorrect fact believing it to be true.
Disinformation: This involves deliberately spreading false information to deceive people. This is often seen in political propaganda.
Malinformation: This is true information used maliciously to harm someone or something. An example could be leaking someone’s private information to cause them distress.
The Importance of English in the Post-Truth Era
In the post-truth era, where personal beliefs often overshadow factual accuracy, English literacy combined with digital literacy is vital. Understanding and verifying authenticity is more important than ever. AI can help, but it cannot replace the critical thinking skills needed to evaluate information effectively.
Conclusion
AI is transforming communication, but it cannot replace the nuanced understanding and critical skills that come from learning English. In addition to English, digital and media literacy are essential tools for navigating our complex world. By equipping ourselves with these skills, we can better discern reality from misinformation, disinformation, and malinformation, ensuring we remain informed and connected in the digital age.