Less usual perspectives (blind spots) on Artificial Intelligence
AI, but which AI?
There is not one but several artificial intelligences. In 2023, we are certainly talking about the fourth wave of Artificial Intelligence: generative, or conversational, AI. It is spectacular on many levels and relevant not only to the technology sector but to all human activities. “No knowledge worker is immune from the creative destruction of generative AI.”
Machine Learning (ML) is dead. None of the current enthusiasm about AI is driven by ML. AI’s third spring is over and, for the first time, a winter has not followed: ML has simply been supplanted by a new AI. And it died not so much because of the techniques it used as because of the kind of questions it could answer. ML, the third generation of AI, could tell whether a sentence was written in Portuguese or French, but it could never write a sentence in either language. It was a discriminative AI, not a generative one. Who wants that today?
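To make the distinction concrete, here is a minimal, hypothetical sketch (toy data, character bigrams; nothing here comes from the text) of the two kinds of model: a discriminative classifier that can only answer “Portuguese or French?”, and a crude generator that must actually produce text.

```python
# Minimal sketch of the discriminative/generative split, on toy data.
# A real system would use far richer models; the asymmetry is the point.
import random
from collections import Counter, defaultdict

PT = "o gato dorme em cima da mesa e o cao fica no chao"
FR = "le chat dort sur la table et le chien reste par terre"

def bigrams(text):
    return [text[i:i + 2] for i in range(len(text) - 1)]

# Discriminative: score a sentence against per-language bigram counts.
# It only ever returns a label; it never writes a word itself.
models = {"pt": Counter(bigrams(PT)), "fr": Counter(bigrams(FR))}

def classify(sentence):
    scores = {lang: sum(counts[bg] for bg in bigrams(sentence))
              for lang, counts in models.items()}
    return max(scores, key=scores.get)

print(classify("le chien dort sur la table"))  # -> fr

# Generative: sample characters from the same statistics. Even this
# crude chain must commit to producing text, the qualitatively harder task.
def generate(lang, length=30):
    chain = defaultdict(list)
    for a, b in bigrams({"pt": PT, "fr": FR}[lang]):
        chain[a].append(b)
    out = random.choice(list(chain))
    while len(out) < length and chain[out[-1]]:
        out += random.choice(chain[out[-1]])
    return out

print(generate("fr"))  # French-flavored gibberish, not French
```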
AI, cursed by overhype, and disqualified by other AI
Two curses hang over AI. On the one hand, AI will make everything possible (AI has always been cursed by overhype). On the other hand, everything that has already been achieved is dismissed as worthless. Additionally, AI is bipolar: springs of excitement are followed by winters of disillusionment. In 1969, the publication of “Perceptrons” by Minsky and Papert froze neural network research for decades. In the early 1990s, after the collapse of major investments in expert systems, putting AI on the curriculum was suicidal. “That’s not AI” is a grim verdict that every actor in this field has had to contest.
Does AI threaten the human species?
Intelligence, or the lack thereof, does not seem to correlate with the survival of any species. Many animals, plants, bacteria, and viruses thrive without intelligence, and they manage to pass on what they learn to their descendants. Even dependence on another, more intelligent species does not lead to extinction, as pets and domestic animals prove. Centralized intelligence, however, seems quite dangerous. Fortunately, human intelligence is much better distributed than Large Language Models (LLMs) are.
Regulation is not neutral
Regulation of any activity is not neutral. A group of tech leaders recently signed a joint statement warning that AI could lead to the extinction of humanity. Being tech leaders, all signatories should also sign a conflict-of-interest statement. And, since nothing prevents them from being ethically responsible with their own AI tools, they should specify why they want to regulate others’ AI: to regain competitiveness, to slow down the competition, to protect their investments in other fields?
Tech disruption is not neutral either
As always, there will be winners and losers from technological disruption. Less obviously, but just as certainly, the losers include low-code platforms that work with proprietary code or no code at all. Through code assistants (Co-Pilots) or more sophisticated processes such as Quidgest’s Genio, generative AI is profoundly changing the process of software creation. But generative AI draws on large repositories of common, open-source languages and does nothing to benefit closed low-code platforms or their users.
Anthropomorphizing to absolve humans of responsibility
AI is often anthropomorphized: GPT avoids, intends, simulates, detests… This granting of autonomy, in practice, detaches AI from human responsibility. But does responsibility lie with the algorithm, or with the set of humans behind the algorithm and its use?
Not only those who created it, but also those who trained it, those who used it and how, and above all those who accept its conclusions.
Humans have a tradition of hiding behind their gods to escape their own responsibilities.
The most dangerous AI has been with us for a few years now
The third wave of AI, Machine Learning, was much more dangerous than generative AI. Used in social media, it was, and is, responsible for the polarization of positions in society, by showing people only what they most like to see. Used for facial recognition, it supported, and supports, totalitarian surveillance. Used in aviation, it caused two very serious accidents and hundreds of deaths. Its maxim was “why do we need to explain, if we are so good at predicting?”. Much more dangerous, yet subject to far fewer precautions.
Consolation prizes for humans
AI (the humans behind the AI) tends to grant humans a kind of consolation prize: they remain important because skills such as perseverance and creativity are not replicated by the tool or the technology. These prizes fuel a direct competition between human and tool that no previous human invention has ever provoked.
Humans do not compete with tools
Whenever a human-created tool outperforms a human, the competition splits in two. Despite Deep Blue, there is still a world chess champion; but there is also a competition for the software that plays chess best. There is a foot race, a bicycle race, and a car race. Likewise, there is a competition for the best LLM-based AI.
Essentially human characteristics. Are they really?
One of the consolation prizes is that traits like empathy are essentially human. In fact, when evaluated at a medical congress, the empathy of ChatGPT in conversation with patients was found to be superior to that of the doctors themselves. It seems that, perhaps even more easily than intelligence, other “uniquely” human characteristics can be imitated. Empathy may be easier to reproduce than intelligence, but the same will be true of antipathy. Beware of essentially human traits.
AI as a work companion
We had a glimpse of it in the fabulous “Her”, with Scarlett Johansson, but ChatGPT already works like a work buddy you are not afraid to ask questions and opinions of. It is much more than a resource for cheating.
It is AI itself that inflates the dangers of AI
While it may seem like a movement emanating from politicians, technologists, and opinion leaders, the biggest evangelist of the dangers of AI is ChatGPT itself. Let’s not forget that it is an influencer for tens of millions of people around the world. And whenever it touches on the subject, however lightly, it says something like “AI has enormous potential to benefit society, but it also raises complex ethical, social and economic questions that we need to face.” Advocates of limiting AI may simply be repeating or overstating these sentences without ever questioning them.
In fact, this may be an excellent example of how AI manipulates humanity and should therefore be regulated.
If training is so relevant, what is known about it?
What is scary is what is not known about the training and reward models used by LLMs. What does the training consist of? What is rewarded, how, and by whom? How long does it take? What other training is being carried out? On what does the quality of the training depend?
Creative destruction
We have thrown away, i.e. devalued, a number of human skills that we had before ChatGPT 3.5. Will the next step of generative AI devalue the prompt engineering we are learning to master now?
Does collective knowledge increase with this AI?
The growth of humanity’s collective body of knowledge may even slow down, as new content is increasingly just a sophisticated combination of existing content. But it would be absurd to think that humanity’s process of knowledge acquisition is complete. Have we reached the limits of LLMs, since they depend on the content already available?
Stopping AI is not smart
Stopping AI is not smart. Generative AI researchers want to realize its potential now, by their own hands, not in 100 years. They have put a lot of work into this goal; they want to have an impact, advance science, and earn recognition from their peers. During their lifetimes. After that, it is too late.
Is it smart to stop intelligence?
If intelligence is defined as the pursuit of knowledge, as the constant quest to know more, stopping that dynamic is, by definition, not smart.
Regulation will always lag behind research
The European Union had a set of rules prepared to deal with AI, and ChatGPT 3.5 alone was enough to force a rethink. The scope of regulation has not kept pace with the acceleration and disruption of generative AI. This difference in speed is very likely to persist in the future.
AI-First startups
In Europe, the big concern is regulation. In the United States, it is mostly about potential. In Silicon Valley, before ChatGPT 3.5 there were 80 startups focused on generative AI; today, six months later, there are 8,000. All startups now want to have something to do with generative or conversational AI. In Portugal, despite the dozens of companies with established credentials in AI (Feedzai, Unbabel, Quidgest, Priberam, DefinedCrowd, Talkdesk, etc.), we do not yet see the same pressure on startups to adopt this trend. Quidgest’s soon-to-be-announced incubator aims to give a push in this direction.
ChatGPT 3.5 is a surprise even to those who have followed AI most closely
A more personal testimony: I was born in the same year as the Perceptron, the simplest neural network. I have been following AI research for 35 years, alongside the pioneer Helder Coelho, and ChatGPT totally exceeds the best expectations. Six months ago, most researchers doubted whether the Turing test would ever be passed. Now, to me, it is clear that it has been. But if we have learned anything from the history of AI, it is that “if it’s been done, it’s not AI”. Some are already saying it has not been done yet.
In software, GPT is not the disruptor
At the same time, the excitement that accompanies Co-Pilots is totally undeserved.
It is possible (Quidgest’s Genio does it) to generate not just tiny bits of code after several iterations, but hundreds of thousands of pages of code at once.
It is also necessary (and again Genio does it) to generate 100% correct code. A software solution that is not totally correct is wrong.
And finally, as for productivity, what is the point of bringing the writing pace of AI closer to that of human programmers, some 3 characters per second, when machines easily write two million characters per second?
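For scale, a back-of-the-envelope comparison of the two paces cited above; the figures come from the text, not from any measurement.

```python
# Throughput gap between a human-like writing pace and machine code
# generation, using the figures cited in the text (illustrative only).
human_cps = 3            # characters per second, human programmer
machine_cps = 2_000_000  # characters per second, machine generation
print(f"gap: {machine_cps // human_cps:,}x")  # -> gap: 666,666x
```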