My View On AI
Can artificial intelligence truly be as intelligent as we imagine? There are only two possible answers: yes or no. My view is that regardless of the answer, the continued development of AI means humanity is gradually moving toward self-destruction.
If AI Works as Intended
If AI functions as intended, it’s undoubtedly beneficial for capitalists, as it will greatly reduce production costs, allowing them to earn more profits.
I’m not a Luddite opposing technological progress (the term refers to Ned Ludd, who reportedly smashed two stocking frames near Leicester, England in 1779, and whose name became synonymous with opposition to new technology). Until now, technological advances throughout human history have improved overall human welfare. Although these improvements have been uneven, they’re better than no improvement at all.
Why would I oppose AI when I haven’t opposed previous technological advances? Could this be another case of “crying wolf” (where the wolf eventually does come)?
The reason is that before AI, all technological advances increased human value. Looms eliminated many manual weaving jobs, but by improving efficiency, they made clothes cheaper. The money people saved on clothing stimulated consumption, indirectly creating job opportunities in other fields. Those who remained in the textile industry became more valuable, as did those who produced, maintained, and repaired looms.
The invention of airplanes allowed people to travel between continents in a short time, extending the journeys one could make in a limited lifetime, thus enhancing human value.
AI Reduces Human Participation
AI, however, becomes more valuable the less human participation it requires. With autonomous driving, for example, a self-driving program becomes more valuable the more completely it eliminates human operation. While driver assistance systems (the precursor to autonomous driving) remained human-centered and reduced driving fatigue, the autonomous driving era no longer requires human involvement.
Perhaps autonomous driving creates many “autonomous driving safety officer” positions, but this clearly falls short of the creators’ ideal state, which is to eliminate human participation entirely.
Will autonomous driving improve the welfare of all humanity? Perhaps lower taxi fares will allow people to save money for new demands, but even if new demands are created, the resulting positions will likely not belong to humans, as each field has its own version of autonomous systems.
In the era of human drivers, drivers paid social security contributions, which funded pensions for retirees. If a company “deploys” a fleet of autonomous vehicles, will it be required to pay social security on their behalf?
The Existential Threat of AI
Due to this characteristic of AI—“the more it can detach from humans, the more valuable it becomes”—it’s not an exaggeration to say that developing it is digging humanity’s grave. If a new technology cannot improve the welfare of all humanity but instead eliminates job opportunities for everyone, I cannot be optimistic about it.
Current Limitations of AI
There’s another possibility: AI cannot function as imagined. Current AI is at this stage. When Large Language Models (LLMs) were first introduced, people were amazed at how quickly they could form relatively coherent text, but later discovered that they couldn’t be trusted with any serious tasks, as they couldn’t even determine whether 3.11 or 3.8 is larger.
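One reason this particular question trips models up may be that “3.11 vs 3.8” is ambiguous in training data: as decimal numbers 3.8 is larger, but as version numbers (think Python releases) 3.11 comes after 3.8. A minimal sketch of the two readings, with a hypothetical helper name:

```python
# As decimal numbers, 3.8 is larger than 3.11.
print(3.8 > 3.11)  # True

# As version strings, the components are compared as integers,
# so "3.11" (minor version 11) is greater than "3.8" (minor version 8).
def version_greater(a: str, b: str) -> bool:
    """Return True if version string a is greater than version string b."""
    pa = [int(part) for part in a.split(".")]
    pb = [int(part) for part in b.split(".")]
    return pa > pb  # Python compares lists element by element

print(version_greater("3.11", "3.8"))  # True
```

A model that merely predicts text, without deciding which reading is meant, can easily pick the wrong one.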
LLMs are essentially an extension of auto-complete technology. Some Chinese input methods offer this, suggesting characters like “好,” “是,” or “们” after you type “你.” However, such auto-completion only works for a few characters. LLMs go much further, continuing to guess based on the prompter’s requirements until forming a coherent text.
They accomplish this feat much as input methods do, by modeling associations between characters, but while an input method only considers the association between adjacent characters, an LLM models associations across an entire lengthy context.
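The input-method version of this idea can be sketched as a toy bigram model: count which character most often follows which, then suggest the most frequent follower. The corpus here is a made-up toy string; real LLMs replace these simple pair counts with learned associations over long contexts.

```python
from collections import Counter, defaultdict

# Toy corpus (hypothetical, for illustration only).
corpus = "你好你们你好你是你好"

# Count, for each character, which characters follow it and how often.
follow = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow[prev][nxt] += 1

def suggest(char: str) -> str:
    """Suggest the character most frequently observed after `char`."""
    return follow[char].most_common(1)[0][0]

print(suggest("你"))  # "好" — it follows "你" three times in the toy corpus
```

An LLM’s “suggest” step is conceptually similar, except that each next token is conditioned on the whole preceding text rather than on a single preceding character.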
The Environmental Cost
Training a large model that can’t even determine which number is larger between 3.8 and 3.11 has already consumed enormous energy. Imagine how much energy would be required to make it work as expected.
Despite AI’s current underwhelming performance, will researchers abandon it? On the contrary, they will keep trying until they succeed. Think about it: machines need no wages or benefits, and all the profits go to you—isn’t that tempting enough? If current AI isn’t performing well enough, the remedy on offer is simply to consume even more energy.
This energy consumption also demonstrates how efficient carbon-based life forms like humans are compared to silicon-based ones. If it were technically possible, the scenario in “The Matrix,” where humans serve as energy sources for a silicon-based system, might not be far-fetched. Of course, this would entail a humanitarian disaster on an enormous scale and would essentially announce humanity’s “death.”
Climate Crisis Acceleration
Let’s optimistically assume humans won’t become energy sources for AI training. People will continue developing AI through the current energy-intensive methods.
Although some of this energy comes from clean sources like nuclear power, clean energy supply clearly cannot grow exponentially like AI demand. Energy consumption will inevitably further exacerbate the global warming crisis.
In recent years, we’ve frequently encountered extreme weather events that should occur only once in a millennium. This summer, Shanghai was unimaginably hot, and due to prolonged drought, many people noticed fewer mosquitoes.
Mosquitoes may not matter much to humans, but unless we side with the capitalists, surely people themselves do. Due to global warming, humans are gradually losing their habitats. The Maldives, a country with minimal carbon emissions, will be among the first to be submerged if global temperatures rise by another 1.5 °C.
This doesn’t mean that countries with vast territories can remain indifferent. Rising sea levels will submerge large areas of farmland, potentially leading to another food crisis for humanity.
Who would have thought that food crises or the sinking of the Maldives could be linked to some humans continuously trying to train an AI that will ultimately replace humanity?