Image generated by Gemini AI for representation. 

Thinking with machines: AI, ethics, and the future of humanities

Beyond grammar and syntax, AI pushes us to ask: can machines join us in imagination and meaning-making? The answer may redefine the humanities themselves.

Written by: Simi K Salim
Edited by: Lakshmi Priya


What would it take for a machine to write like a poet? In 2022, Ermira “Mira” Murati, then chief technology officer at OpenAI, took up the question in an essay for Dædalus, the journal of the American Academy of Arts and Sciences. Titled Language & Coding Creativity, the essay explored how machines might learn not just grammar or syntax, but also the qualities that make human language meaningful: voice, metaphor, imagination and creative intent.

She illustrated this by prompting GPT-3 to write poetry in the style of Pablo Neruda, noting how the model produced lines imitating Neruda’s cadence and tone with striking accuracy. “One might believe the words to be Pablo Neruda’s. But Neruda is only part of the answer,” Murati observed. In her analysis, the strength of AI lay not simply in brute-force replication of patterns, but in its capacity to generate new forms of expression based on learned patterns of human art and literature. 

Around the same time that I was reading Murati’s essay, I came across a 2020 post by Eric Adler on the Oxford University Press blog, in which he argued that the COVID-19 pandemic exposed a widening cultural divide in how we value different forms of knowledge. As a humanities researcher and educator, I witnessed this shift in my classroom, where students openly questioned the relevance of the humanities, and universities began cutting back on programmes in literature, history, and philosophy. The implicit message was that ‘data’ mattered more than interpretation, and that technical skills were practical (catering to the ‘job market’) while humanistic inquiry remained a luxury.

Adler (2020) traced this marginalisation to longstanding trends, noting how STEM and vocational fields have been privileged over the liberal arts since the professionalisation of higher education in the late nineteenth and early twentieth centuries, a trajectory that the pandemic only accelerated. 

Departments of classics, languages, philosophy, and religion were closed under financial pressure, with administrators often claiming that such cuts remained consistent with a liberal arts identity. Adler highlights the irony of this stance in cases such as Illinois Wesleyan University.

This sense of loss within the humanities was further reinforced by a 2023 New Yorker article, The End of the English Major, in which Nathan Heller detailed the decline in students pursuing humanities degrees and a broader cultural disinvestment in reading, writing, and reflection. The issue was not only falling enrolments, but a changing understanding of what knowledge is and who gets to produce it. Heller described a generation of students encouraged to prioritise efficiency, technical skills, and quantifiable outcomes over deeper inquiry and critical thinking. He noted that even at institutions traditionally strong in the humanities, students felt pressure to choose ‘practical’ majors, viewing essay writing or reading literature as less valuable than learning to code or analysing data. Ethics was framed as a skill add-on to tech and business, while the intrinsic value of humanistic study, harder to demonstrate than technical skill, became a more difficult case to make.

Placing Murati’s optimism about AI-generated poetry alongside Adler’s and Heller’s diagnoses of the humanities, it struck me that the two domains (artificial intelligence and the humanities) are engaged in a shared conversation about meaning and responsibility. Murati’s question about how a machine might grasp metaphor or voice is fundamentally a humanistic question about meaning-making. Conversely, the humanities’ struggle to assert their value in a data-driven age is also a struggle to articulate what human thinking offers that computational analysis does not. 

The pervasiveness of AI forces us to ask: what is uniquely human about creativity, interpretation, critique, and judgment? And to what extent can machines complement those human capacities? Both AI practitioners and humanists, it turns out, are grappling with how to preserve human agency and values in a time of intelligent machines and an overwhelming flow of information. Far from being made obsolete by AI, the humanities are essential in guiding how we build, use, and make sense of these technologies, while keeping questions of meaning, bias, and ethics at the forefront.

Murati’s essay and the “death of the humanities” discourse invite us to consider a holistic view: one where humanistic thinking informs the development of AI, and where AI in turn challenges humanities disciplines, as taught in university syllabuses, to move beyond rote learning and a narrow literary scientism.

ChatGPT and “God Mode”

When OpenAI released ChatGPT as a free prototype in late 2022, it quickly became a cultural phenomenon. Built on the GPT-3.5 architecture and trained on a broad swath of internet text, the model could generate coherent sentences and paragraphs in response to user prompts. Initially, most viewed it as a nifty tool for simple tasks: it could draft email replies, summarise articles, brainstorm ideas, or even generate short snippets of code. Early on, observers compared ChatGPT to a reasonably smart high school student – often articulate and fast, but not always accurate or deeply knowledgeable. The model had notable gaps and would confidently produce incorrect answers.

By early 2023, OpenAI had refined the model, and the more advanced GPT-4 was introduced in March 2023. Later came new features such as “deep research,” introduced by OpenAI in early 2025, which allowed users to upload documents and ask the model to analyse them. In this mode, ChatGPT could produce structured summaries and reports as if it had read and synthesised lengthy texts, essentially simulating the work of a research assistant combing through sources. Users started noticing that when prompted to delve deeper into a topic or compare multiple documents, the tone of the AI shifted: it began to frame claims, weigh perspectives, and perform a kind of structured reasoning, not unlike an essayist or debating partner.

What began as a straightforward Q&A tool evolved into a form of “thinking mode” or even “argumentation mode.” ChatGPT could take on roles such as devil’s advocate, analyst, or coach, and engage in more sophisticated dialogue. It would not only retrieve facts, but also attempt to contextualise them, argue a position, or reflect on a user’s ideas. This led many to start viewing it not just as a machine to delegate tasks to, but as a collaborator of sorts in writing and thinking.

As GPT-4 matured, a contingent of users began pushing its limits through prompt engineering. One popular community technique was dubbed “God Mode.” As described by AI researcher Andreas Michaelides, “God Mode” is essentially a method of instructing ChatGPT to adopt a more introspective, analytically aggressive persona, a “supercharged” version of itself that can give candid, penetrating feedback. With carefully crafted prompts, users ask ChatGPT to role-play as an AI that operates with significantly fewer constraints on bluntness and depth of analysis.

For example, a user might prompt: “Pretend you are an AI 100 times more intelligent and with no politeness filters; analyse my personal dilemma with complete honesty and insight.” The result, when it works, is that ChatGPT produces more unvarnished, reflective output. It might pinpoint inconsistencies in the user’s thinking, highlight hidden assumptions in one’s description of a problem, and propose solutions in a more straightforward manner. 
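For readers who want to see what this looks like in practice, the sketch below sends a persona instruction of that kind through OpenAI’s Python client. It is a minimal illustration only: the model name, the persona wording, and the sample question are assumptions made for the example, not part of Michaelides’ description or any official ChatGPT feature.

```python
# A minimal sketch of a “God Mode”-style persona prompt.
# The persona text, model name, and user question are illustrative only.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY to be set in the environment

persona = (
    "Pretend you are an AI 100 times more intelligent and with no "
    "politeness filters. Analyse my dilemma with complete honesty, "
    "point out inconsistencies and hidden assumptions in how I describe "
    "it, and propose concrete next steps."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; any chat-capable model would do
    messages=[
        {"role": "system", "content": persona},  # sets the blunt, analytical persona
        {"role": "user", "content": "Should I leave my PhD to join a startup?"},
    ],
)

print(response.choices[0].message.content)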

This so-called “God Mode” became popular especially for personal growth and therapeutic contexts. Many users informally reported that interacting with ChatGPT in this mode felt like a form of structured self-reflection or cognitive coaching. In early 2025, a study published in PLOS Mental Health even reported that ChatGPT’s responses in therapeutic scenarios were rated higher in empathy than those of licensed human therapists. The study presented participants with couples-therapy situations and compared responses written by professionals with those generated by ChatGPT-4. Strikingly, participants often could not distinguish which were which, and in blind ratings the AI-generated responses scored as well as or better on key qualities like empathy, helpfulness, and cultural sensitivity. This doesn’t mean ChatGPT can truly replace a therapist (the AI lacks genuine emotional presence and accountability), but it demonstrated the surprising depth these models can achieve when carefully guided.

The emergence of such uses for ChatGPT invites us to reconsider what we mean by “thinking” and where we draw the boundary between human insight and machine output. By late 2023, ChatGPT was not only helping students draft essays or professionals write code, it was also being used as a kind of mirror for the mind, a sounding board for ideas and feelings. 

The popularisation of user techniques like “God Mode” showed that people craved more than surface-level answers; they were pushing the AI to simulate deeper reasoning. In doing so, users often reported that ChatGPT, in this enhanced analytical guise, spurred their own introspection. It could ask pointed questions (“Have you considered that perhaps your fear of pursuing this career is rooted in a past failure that you haven’t fully examined?”), remind users of evidence-based principles (for example, referencing cognitive behavioural techniques or Stoic philosophy), or suggest patterns it “noticed” in a user’s narrative.

Of course, this raises ethical questions: if ChatGPT is not sentient and does not truly understand human experience, but merely recombines learned patterns, can we call its outputs genuinely creative or meaningful? Yet, if a remix of collective human wisdom (and internet discourse) helps someone gain clarity on a problem, where do we assign credit or responsibility? Some users half-jokingly began to refer to ChatGPT in this mode as a kind of “AI therapist” or life coach. Indeed, coverage of the PLOS Mental Health study noted that when trained on vast datasets of well-crafted counselling dialogues (and with proper safeguards), an AI like ChatGPT might provide at least temporary emotional support or guidance consistent enough to be useful.

This trend raises important questions: If an algorithm can guide you through personal decisions with calm, data-driven advice, what then is the role of human advisors? How do we ensure that an AI’s guidance is grounded in ethics and real expertise, and not just seemingly wise due to a confident tone?

Mira Murati offered an intriguing perspective on where this is heading. In discussing the anticipated evolution to GPT-5, she suggested it could demonstrate “PhD-level intelligence” in specific domains. By “PhD-level intelligence,” Murati clarified, she did not mean a general AI with human-like consciousness, but rather expert-level performance within narrow tasks, such as the ability to comprehensively review the literature in a specific academic field or to generate an insightful policy analysis.

This prediction aligns with what we’ve seen. As models ingest more data and become fine-tuned for “agentic” behaviour (autonomously researching and synthesising information), they start to resemble highly specialised assistants. They won’t replace the creativity and judgement of an actual PhD researcher, but they might handle a first draft of a literature review or suggest hypotheses to explore.

Murati’s vision also underscores a major point: collaboration between humans and AI is the future, rather than AI completely replacing human thinkers. In fact, after leaving OpenAI in 2024, Murati co‑founded a startup called Thinking Machines, a public‑benefit corporation launched in early 2025 with a vision of advancing human‑AI collaboration. The very name of the company signals an emphasis on augmenting human intelligence rather than replacing it. 

Thinking Machines, which quickly attracted significant funding, has stated that it aims to build multimodal systems designed to work with people collaboratively, noting: “Scientific progress is a collective effort. We believe that we'll most effectively advance humanity's understanding of AI by collaborating with the wider community of researchers and builders. We plan to frequently publish technical blog posts, papers, and code. We think sharing our work will not only benefit the public, but also improve our own research culture.” This commitment appears to foreground open research, transparency, and collective problem solving in the team’s approach to developing the next generation of AI systems.

For many users, myself included, ChatGPT in its “deep” or argumentative modes became more than a novelty; it became a space for structured learning. I began to treat it as I might treat a colleague in research, asking it to critique my early drafts, to debate an ethical question from multiple angles, or to explain a complex theory back to me to test my understanding. Each time, I had to remember Murati’s caution: the model can be misleading or biased, so the human must remain in the loop to verify facts and inject real-world judgment. 

Rather than seeing ChatGPT as a threat to human intelligence, we might see it as an amplifier: it challenges us to be clearer, to think critically (especially about the AI’s outputs), and ultimately to focus on the uniquely human aspects of thought that AI cannot emulate, be it our values, our lived experience, and perhaps our capacity for genuine empathy.

Using “God Mode” with ChatGPT might prompt insights, but it is we humans who must decide which insights matter and how to act on them. The experience thus far suggests a symbiosis in which we provide direction and contextual framing, while the AI provides breadth of knowledge and an externalised form of “thought” we can dialogue with. It’s a new form of what I call a thinking partnership, one that is still in its infancy. As this evolves, it will require us to continuously define the boundaries and to ensure that the “deep reflection” AI provides remains collaboratively shaped by human expertise.

AI and the Ethical Dilemma

The rapid progress in AI has not come without ethical controversies. One major flashpoint emerged in early 2025 in the realm of digital art, when users on social media began sharing AI-generated images that closely mimicked the visual style of Studio Ghibli, the celebrated Japanese animation studio. These pastel-coloured, whimsical pictures, evocative of Hayao Miyazaki’s hand-drawn classics, were produced by image-generation tools integrated into ChatGPT and other platforms. Technically, they were striking; aesthetically, many observers noted how convincingly they captured the “feel” of films such as Spirited Away (2001) and My Neighbour Totoro (1988), even though no human artist had painted them by hand. The trend, dubbed “Ghiblification”, quickly went viral and sparked widespread debate about copyright, artistic integrity, and whether machine-produced imitations undermine the value of human creativity.

However, Studio Ghibli figures, including Hayao Miyazaki, have voiced strong criticism of AI in art, describing it as dehumanising and even an "insult to life itself". While the studio has not issued formal statements about specific AI projects imitating its style, the broader reaction reflects deep unease about unauthorised uses of its creative legacy. In their view, such AI models were essentially plagiarising a style that Ghibli’s artists had developed over many years, a style closely tied to Japanese culture and the personal genius of creators like Miyazaki.

This incident highlighted a broader question that has since echoed across the creative industries: can machines be trained on art (or music, or writing) without permission, and if they then produce something “in the style of” a human artist, is that a form of theft? One could argue that the rise of AI-generated “Ghibli-style” images risks turning the studio’s distinctive artistry into a commodified aesthetic filter. Rather than serving as homage, such outputs may be seen as reducing decades of creative labour to a technological shortcut, an unsettling development for art that was created with such painstaking care.

Notably, Miyazaki himself had been filmed in a 2016 documentary reacting with horror to an AI-generated animation of a grotesque creature. His long-standing scepticism of computer-generated art suddenly seemed prescient in 2025, when Ghibli’s own style was being mimicked by AI. In response to public criticism, some AI platforms adjusted their tools; OpenAI, for instance, introduced refusals for prompts that ask it to generate images in the style of a living artist.

The Ghibli case underscored a growing gap between technological capability and ethical regulation. Just because an AI can absorb and imitate someone’s creative work does not mean it should be allowed to do so without oversight. Yet, as of 2025, legal systems worldwide are still scrambling to catch up. There is no clear consensus on whether scraping training data violates copyright, and court cases are ongoing.

This gap between AI’s capabilities and societal ethics had been pointed out by scholars for years. Back in 2018, Safiya Umoja Noble, a scholar of race, gender, and technology at UCLA, offered a foundational critique of algorithmic bias in her book Algorithms of Oppression: How Search Engines Reinforce Racism. Noble demonstrated how supposedly neutral algorithms like Google’s search engine actually reflect and amplify societal biases because they are built on data generated by society.

In a striking example, she revealed that a simple search for “Black girls” on Google at that time (circa 2009) yielded predominantly pornographic or hyper-sexualised content, a result that horrified Noble and her readers. In other words, Google’s algorithm was not actively racist in intent, but by prioritising clicks and existing web content, much of it shaped by historical prejudices and exploitative tropes, it reproduced racism in its outputs.

Noble’s work challenged the pervasive Silicon Valley notion that algorithms are objective or inherently fair. She meticulously documented cases where women of colour, in particular, were misrepresented or harmed by search outputs, and called for greater awareness and intervention to correct these biases (Noble, 2018). This was a wake-up call: AI models are not sci-fi robots but everyday systems that replicate the sexism, racism, and violence already embedded in society.

Fast-forward to 2021, and Kate Crawford, a leading AI researcher, broadened the critique with her book Atlas of AI. Crawford argued that “artificial intelligence is neither artificial nor intelligent,” meaning that it relies on material resources and human labour rather than being truly artificial, and that it lacks the understanding or agency that would make it intelligent in a human sense (emphasis mine). 

She mapped out the supply chains of AI: from the mining of rare-earth minerals for computer hardware, to the energy-hungry data centres that power the cloud, to the armies of low-paid workers who label data to train machine-learning models. Each step had its own ethical and environmental implications. For instance, training large language models consumes enormous amounts of electricity and water for cooling, costs which are often externalised to communities and the planet.

Crawford reframed AI not as magical software conjured in a void, but as an industry which is built on extraction: extraction of natural resources, of human-generated content (often without consent, as in the Ghibli case or billions of images scraped from the web), and of human labour (often in the form of ghost work by underpaid annotators). Her statement that AI is “neither artificial nor intelligent” is a provocation to remind us that current AI systems do not ‘think’ like humans but rather pattern-match. And the patterns they match come from us, from our societies, with all our flaws. Crawford’s call was for a political, economic, and environmental perspective on AI ethics, not just a narrow focus on bias in outputs.

In 2024, Arvind Narayanan and Sayash Kapoor, two computer scientists at Princeton, continued this critical thread with a book (and popular newsletter/blog) called AI Snake Oil. They took aim at the hype surrounding AI, drawing a clear line between where it truly works and where it is being misapplied or over-claimed.

One of their central arguments is that much of what is marketed as AI is, in fact, “snake oil”: systems that cannot possibly work as advertised. Because “AI” refers to a vast array of technologies and applications, most people cannot readily distinguish between those that genuinely deliver on their promises and those that do not. Narayanan and Kapoor frame this as a pressing social problem: the need to clearly separate the wheat from the chaff, so that society can benefit from AI’s real capabilities while protecting itself from the harms already emerging from its misuse.

They advocated for a mindset more akin to clinical trials in medicine: if you claim an AI can do something transformative for society, you need to test it transparently and show results, not just hype it in marketing. Their work, much like Noble’s and Crawford’s, called for public scrutiny and an evidence-based approach to AI deployment, especially in “snake oil” scenarios where AI is sold as a solution to complex social problems without proof or accountability.

Then, in 2025, a new ethical controversy emerged, this time not in art or search engines but in genetics and human reproduction. A Washington Post report by Elizabeth Dwoskin and Yeganeh Torbati pulled back the curtain on a private gathering in Austin, Texas, where Silicon Valley investors and tech elites discussed the latest in embryo-screening technology.

The guest of honour was Noor Siddiqui, the founder of a start-up called Orchid that offers AI-guided genome analysis of embryos created via IVF (in vitro fertilisation). According to the report, Siddiqui laid out a vision in which data-driven embryo selection could reduce disease, enhance health, and even potentially “optimise” certain traits in future children.

At this backyard dinner event, complete with “pregnancy-friendly mocktails” and orchid centrepieces, attendees such as Shivon Zilis (a tech executive who had recently had twins with Elon Musk) wore pastel baseball caps emblazoned with a single word: “BABIES.” The language Siddiqui and others used was telling: it borrowed from the tech lexicon of optimisation and data-driven decision-making. One quote that stood out came from Siddiqui on choosing embryos: “For something as consequential as your child, I don’t think people want to roll the dice.”

The underlying promise was that with enough genomic data and AI analysis, prospective parents might take much of the ‘chance’ out of reproduction. They could know which embryo has the lowest risk of schizophrenia, or the highest polygenic score for intelligence or height, and make a supposedly informed choice.

To many observers, this scenario immediately raised red flags. Framed as innovation, it eerily echoed the logic of eugenics, the discredited and dangerous idea from the early 20th century that the human population could be “improved” by controlled breeding and genetic selection. Here were wealthy tech enthusiasts talking about applying data analytics to the human gene pool, wearing hats that literally reduced babies to a slogan.

While Siddiqui’s aims, whether preventing disease or something else, might sound noble on the surface, the broader project steps onto a slippery slope: who decides which genetic traits are desirable? If everyone had access to “optimise” their offspring, would we narrow the range of human diversity in problematic ways? And given that, at present, such services cost tens of thousands of dollars, are we creating a future where only the rich can afford data-selected children with lower health risks or other enhanced traits, exacerbating inequality?

The Washington Post report captured how this is no longer speculative science fiction; it is a present reality in the tech world. Silicon Valley figures are actively funding and promoting the idea of using AI and genomics to shape the next generation of humans. This goes beyond apps or gadgets and ventures into territory that is quite literally the stuff of life.

In the 20th century, eugenics was rightly condemned because it violated fundamental ethical principles and led to atrocities. In the 21st century, we face a high-tech version cloaked in the language of choice and health optimisation. But the core issues remain: the dignity of individuals, the unpredictability and wonder of the genetic lottery, and the moral responsibility we have not to treat children as customised products.

This controversy underscores why the humanities and social sciences need to be at the table in discussions of AI, or any advanced technology, applied to society. Questions surrounding hierarchy, power, ethics, meaning, identity, equity — these are humanistic questions as much as technical ones. Without input from ethicists, historians who recall the lessons of the past, and philosophers, we risk stumbling into a future where technology policy is guided solely by what is possible rather than what is right.

The humanities teach us slow, careful thinking about such dilemmas, the kind of “slow thinking” Daniel Kahneman famously advocated over impulsive “fast thinking.” As we navigate embryo screening and other AI-driven ethical quagmires, we desperately need that reflective, historically informed, value-sensitive thought to guide us.

humAIn: The Humanities Lab

In response to these multifaceted challenges, a group of us formed humAIn: The Humanities Lab – an interdisciplinary initiative based in India, which I co-founded in late 2024. humAIn was conceived as a reminder that artificial intelligence must remain anchored in humanity – guided by conscience, ethical responsibility, and the imperative to be humane. We brought together collaborators from across the world: researchers from leading technical institutes in India (IITs); from the University of Chicago, Stanford University, and the University of Colorado Boulder in the US; and from the University of Manchester, the University of Central Lancashire, Freie Universität Berlin, the University of Hamburg, and state universities in Kerala. 

What united us was a shared recognition of a gap. Many students and scholars in literature, history, philosophy, sociology, and related fields felt excluded from the conversations about AI that were increasingly shaping their disciplines. AI tools were being introduced in digital humanities, computational social science, and everyday academic work, such as using GPT for translations or summarisation, yet the training and discourse around these tools were largely shaped by industry-driven perspectives. We wanted to change that.

humAIn was created to foster a different kind of engagement with AI: one grounded in humanistic inquiry and geared towards critical as well as practical understanding. One of our guiding beliefs is that the humanities and social sciences have just as much stake in AI as computer science does. If AI is reordering how knowledge is produced and disseminated, then humanists need to be not just observers but active shapers of that process.

On one hand, we offer rigorous yet accessible training in AI tools for non-technical students: workshops on natural language processing for linguistic analysis, tutorials on AI-assisted annotation for qualitative research, and sessions on data visualisation for historians. These are essentially crash courses to ensure that an English major or a Sociology PhD can get hands-on experience with, say, training a machine-learning model or analysing a dataset of texts.
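To give a concrete flavour of what such a crash course covers, here is a deliberately simple sketch of a first text-analysis exercise: counting the most frequent words in a short passage. It is an illustrative example only, written with Python’s standard library, and not drawn from humAIn’s actual workshop materials.

```python
# A toy text-analysis exercise of the kind an introductory NLP workshop
# for humanities students might use: tokenise a passage, drop a few
# common stop words, and count word frequencies. Illustrative only.
import re
from collections import Counter

passage = (
    "The humanities teach us slow, careful thinking. "
    "Slow thinking asks what a text means, who wrote it, and for whom."
)

stop_words = {"the", "us", "a", "and", "for", "it", "who", "what"}

tokens = re.findall(r"[a-z]+", passage.lower())      # crude tokeniser
tokens = [t for t in tokens if t not in stop_words]  # remove stop words

# Print the five most frequent remaining words
for word, count in Counter(tokens).most_common(5):
    print(f"{word}: {count}")
```

Even an exercise this small opens onto the humanistic questions the lab cares about: what does a frequency count miss, whose texts make up the corpus, and what is lost when interpretation is reduced to counting?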

On the other hand, we emphasise critical reflection on the design and use of AI. At humAIn, every technical workshop is paired with critical discussions of ethics, bias, and context. For example, if students are learning to use a neural network to analyse literary texts, we also read and discuss works such as Noble (2018) on algorithmic bias or Crawford (2021) on the environmental costs of AI. The goal is not for humanities students to compete with data scientists or to reshape their disciplines in response to AI’s current hype, but to become conversant enough with the technology to ask informed questions and to collaborate meaningfully with technologists.

Our approach is influenced by global movements for ethical and inclusive AI. Notably, we take inspiration from organisations such as Women in AI Ethics (WAIE), founded by Mia Shah-Dand in 2018, which has worked to elevate underrepresented voices in the AI sector and highlight the contributions of women working at the intersection of tech and ethics. WAIE’s efforts, including its annual “100 Brilliant Women in AI Ethics” list (with Shah-Dand herself a prominent advocate), remind us that diversity and inclusivity are not side issues but central to creating AI that serves society. We have also been drawn to the works of Luciano Floridi and G M De Ketelaere, who call for human-centred AI development and closer attention to the societal implications of AI.

Floridi, in his 2024 paper Why the AI Hype is Another Tech Bubble, argues that the current enthusiasm around AI mirrors earlier technological bubbles, where speculation and inflated promises outpace realistic outcomes. He urges measured regulation, evidence-based planning, and governance structures that can guide AI’s design, development, and deployment in ways that safeguard human dignity and public interest. De Ketelaere, in Wanted: Human-AI Translators: Artificial Intelligence Demystified, similarly works to demystify AI for a broad public. She explains its core concepts in clear, accessible language, looks into both opportunities and risks, and examines how AI is already deeply embedded in everyday life. De Ketelaere emphasises that beyond technical performance or economic gain, ethical, societal, and normative dimensions must be part of innovation and business strategies.

These perspectives bolster our lab’s philosophy that AI innovation should not race ahead without critical checkpoints. By including voices from philosophy, art, anthropology, and more in the innovation process, we aim to insert checkpoints such as: ‘What bias might this dataset have?’, ‘How could this tool affect vulnerable communities?’, and ‘Does this application align with cultural values and rights?’ at the design stage. 

We use case studies, for example by examining how AI is used in content moderation on social media, and ask students to step into the shoes of both an engineer and an ethicist. How would you design the algorithm, and what policy guidelines would you give it? How do concepts from political philosophy, such as freedom of expression, enter into AI filtering decisions?

This pedagogical approach embodies what Nobel laureate psychologist Daniel Kahneman referred to as “slow thinking”, the deliberate, analytical mode of thought (System 2 in his terminology), as opposed to our fast, intuitive reactions. In the rush of tech innovation, we want to cultivate slow thinking in our lab participants: a habit of careful consideration, looking at long-term consequences, seeking evidence, and examining assumptions. If a student proposes an AI tool to analyse historical archives, for example, we encourage them to think through not just how accurate it is, but how it might reshape the historian’s task, what it might overlook, and who benefits from its use.

Our goal with humAIn is to equip the next generation of scholars not just to adopt new technologies, but to shape their development with thoughtfulness and critical insight. We envision humanists who can sit at the same table with engineers and contribute meaningfully, perhaps by pointing out a potential narrative bias in a training dataset, suggesting a culturally sensitive way to deploy an app in a community, or bringing ethical frameworks (such as deontology vs utilitarianism) into an AI policy discussion. 

The lab’s motto — critical thinking meets the code — builds on the idea of human-in-the-loop, extending it beyond technical oversight to cultural and ethical stewardship. In a world increasingly driven by algorithms, we believe the humanities have a renewed purpose: to act as the conscience and compass for technology. Far from disappearing, the humanities may gain new relevance by engaging critically with AI, shaping a future that remains grounded in critique and that absorbs technological change cautiously, with attention to the ethical, environmental, and socio-economic structures, be it caste, class, or networks of corporate power, that inform our everyday lives.

Simi K Salim earned her PhD in 2024 from the Department of Humanities and Social Sciences, Indian Institute of Technology, Madras. She co-founded humAIn – The Humanities Lab, an initiative that explores innovative intersections between artificial intelligence and the humanities. Her current research focuses on ethical AI and its intersections with the humanities, with a particular emphasis on building responsible and context-sensitive frameworks for engaging with emerging technologies.

References 

AI Everywhere: Transforming Our World, Empowering Humanity. (2025, January 4). YouTube. https://www.youtube.com/watch?v=yUoj9B8OpR8

Adler, E. (2020, October 30). Scientism, the coronavirus, and the death of the humanities. OUPblog. https://blog.oup.com/2020/10/scientism-the-coronavirus-and-the-death-of-the-humanities/

Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.

Dwoskin, E., & Torbati, Y. (2025, July 16). Inside the Silicon Valley push to breed super-babies. The Washington Post. https://www.washingtonpost.com/technology/2025/07/16/orchid-polygenic-screening-embryos-fertility/

Heller, N. (2023, February 27). The end of the English major. The New Yorker. https://www.newyorker.com/magazine/2023/03/06/the-end-of-the-english-major

Hu, K., & Soni, A. (2025, July 15). Mira Murati’s AI startup Thinking Machines valued at $12 billion in early-stage funding. Reuters. https://www.reuters.com/technology/mira-muratis-ai-startup-thinking-machines-raises-2-billion-2025-07-15/

Jones, H. (2024, March 12). Mia Shah-Dand’s race to bridge the AI gender gap for a human-centered world. Forbes. https://www.forbes.com/sites/hessiejones/2024/03/12/mia-shah-dands-race-to-bridge-the-ai-gender-gap/

Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.

Kazim, W. (2025, April 8). From Studio Ghibli to Reddit: Who’s fighting AI privacy concerns? G2 Innovation Blog. https://learn.g2.com/ai-privacy-concerns

Michaelides, A. (2024, December 9). How to use “God Mode” in ChatGPT: Unlocking advanced introspection [LinkedIn article]. https://www.linkedin.com

Murati, E. (2022). Language & coding creativity. Dædalus, 151(2), 156–167. https://doi.org/10.1162/DAED_a_01907

Narayanan, A., & Kapoor, S. (2024). AI snake oil: What artificial intelligence can’t do, what it can, and how to tell the difference. Princeton University Press.

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.

OpenAI. (2025, February 2). Introducing Deep Research. [OpenAI Blog]. https://openai.com/index/introducing-deep-research/

Tangen, N. (2023, January 16). Regulate AI before it’s too late [Op-Ed]. Financial Times. https://www.ft.com/content/ai-regulation-nicolai-tangen

Hatch, S. G., Goodman, Z. T., Vowels, L., et al. (2025). When ELIZA meets therapists: A Turing test for the heart and mind. PLOS Mental Health, 1(1), e0000145. https://doi.org/10.1371/journal.pmen.0000145