There is growing unease about artificial intelligence among those who worry that computers are invading human territory: fine art, music, literature. Alongside it runs a dissatisfaction with the language we use to discuss these systems – some object to repurposing old words or to attributing human traits to machines. Both practices, however, long predate electronic circuits that compose poetry, which suggests much of the distress stems from fear rather than reason. Fittingly so, since machines don’t experience fear and human logic often falls short. Even the phrase ‘artificial intelligence’ strikes some as an insult to living beings when applied to inanimate objects. Whether that usage is appropriate remains unsettled, and philosophers have been trying to pin down ‘intelligence’ for centuries.
Computer scientists have weighed in on this quest too. Alan Turing, famous for helping crack German codes during the Second World War, considered how we might assess whether a machine can convincingly impersonate a human, and conceived the Imitation Game as a way to evaluate it. In a 1950 paper he wrote: ‘The original question, “Can machines think?” I believe to be too meaningless to deserve discussion.’ Turing predicted that by the end of the twentieth century people would speak of machines thinking without expecting to be contradicted. On that point he was mistaken: the dispute remains unresolved, and many still insist that it is a contradiction to associate machines with thought.
Adapting the meanings of words to reflect our advancements, it turns out, is something humans find hard. After ‘intelligence’ and ‘thinking’, the next linguistic target is ‘hallucination’. When an AI tool such as ChatGPT confidently asserts a falsehood, it is said to be hallucinating. Critics counter that it isn’t hallucinating at all, merely fabricating. They are right, but the objection is hasty. Chatbots are not search engines – they are designed to emulate human writing, not to retrieve accurate facts – so a wrong answer is not, by itself, a failure of the design.
The second problem lies in the definition of hallucination itself – ‘an unfounded or mistaken impression or notion’ – which, on that reading, fits a chatbot’s confident falsehoods rather well. More broadly, the uproar over applying existing terms to new situations is unwarranted. Human language is dynamic and ever-evolving, and we have long practised anthropomorphism, attributing human qualities to non-human things. We name our pets, even when they are the only animal in the house and cannot answer to the name. We see dolphins as happy because of the shape of their faces. We ascribe human judgments to animals: cunning to cats, loyalty to dogs, bravery to lions.
Inanimate objects get the same treatment. Computer software supposedly contains bugs – a usage popularized by the American computer scientist Grace Hopper, whose team found an actual moth trapped in the Harvard Mark II, though the word ‘bug’ for a mechanical defect predates the incident (Thomas Edison used it in the 1870s). Engine power is still measured in horses. You may have a mouse on your desk without a rodent in sight. Being caught red-handed no longer requires the blood of poached game on your hands. And even today, some AI systems are described as neural networks, despite containing neither neurons nor neural pathways.
In reality, an AI neural network is a structure describing relationships between data, stored in binary form in the transistors of a chip. We accept expressions like ‘iPhone keyboard’ even though the device has no physical keys – an absence Steve Jobs announced proudly. If humanity is to withstand the dawn of the machines, we must lean on what we truly excel at: adaptation. That includes conceding that language shifts over time in response to new situations. Intriguingly, our willingness to change our language may be exactly what makes us resilient.
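The point about ‘neurons’ can be made concrete. An artificial neuron is nothing biological: it is a weighted sum of numbers passed through a simple function, and a ‘network’ is just such sums feeding into one another. A minimal sketch in Python, with arbitrary illustrative weights:

```python
def neuron(inputs, weights, bias):
    """One artificial 'neuron': multiply, add, and clip. No biology, just arithmetic."""
    total = bias + sum(x * w for x, w in zip(inputs, weights))
    return max(0.0, total)  # 'activation': pass the sum through if positive, else 0

# A tiny 'network': two neurons read the same inputs,
# and a third neuron reads their outputs.
inputs = [0.5, -1.0]
hidden = [
    neuron(inputs, [2.0, 0.5], 0.1),
    neuron(inputs, [-1.0, 1.0], 0.0),
]
output = neuron(hidden, [1.0, 1.0], 0.0)
```

Scaled up to billions of such sums, with the weights tuned automatically rather than written by hand, this is the whole trick – which is why the borrowed vocabulary does so much rhetorical work.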
Computers, it is said, are like mischievous genies: they execute precisely what is requested, even when that differs from what was intended. Ask even the most sophisticated chatbot, such as OpenAI’s ChatGPT, to define a term, and it will produce a textbook definition – one assembled from the text we supplied it. If humans are uncomfortable applying their own words to computers that mimic them, there is an alternative: let the machines coin their own lexicon.
Asked to invent its own vocabulary, ChatGPT produced results that will disappoint anyone hoping to preserve human language for humans: they were thoroughly anthropomorphic. Among its eight proposals was ‘mindset drift’: the subtle transformation in how an AI model perceives and interacts with the world over time, often through exposure to new information or changing circumstances. It is hardly surprising that many of its coinages are derivative of our own creations.
Humans have millennia of existence and evolution to draw on; machines possess only the history we give them – plus the capacity to hallucinate. Still, the bot’s offerings suggest a useful compromise. For terms such as ‘algorithm fatigue’, we can make educated guesses at the meaning, and since it is the bot rather than a human doing the defining, responsibility falls squarely on the machine.
Except that machines without sentience are inherently blameless. And if humans stand fast on their linguistic preferences as computers grow ever more pervasive, a worse outcome awaits: one in which machines define the terms entirely on their own. In a civilization interwoven with advanced AI, people will need to accept that their patterns of thought, and their language, must adapt.
Embracing these linguistic changes allows humans to remain resilient as AI becomes more deeply woven into daily life. Human language is not a rigid artifact but a living thing, one that adjusts to the progress of technology and society.
While concerns about anthropomorphic language and repurposed terms are understandable, the benefits of embracing linguistic evolution may well outweigh the drawbacks. Opening our minds to the possibility of machines taking on human-like qualities allows our understanding of AI, and of its role in our world, to grow.
The debate over the semantics of artificial intelligence may never reach a satisfying resolution, but it stimulates thought and discussion about the role of AI in our society and its impact on the linguistic landscape.
As we progress further into the cybernetic age, it’s important to maintain our adaptability in the face of shifting norms and technology. The language we use to describe artificial intelligence need not be a point of contention, but rather an opportunity for growth and understanding in a world increasingly shaped by advanced machines.